Welcome to the January 29, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.
Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.
HEADLINES AT A GLANCE
Internet Voting Is Just Too Hackable, Say Security Experts
USA Today (01/28/16) Elizabeth Weise
Several ballot initiatives have been proposed in California to require the state to permit online voting, but security experts and voting officials speaking at the Enigma 2016 event on Wednesday warned the technology is far from hack-proof. "Imagine the incentives of a rival country to come in and change the outcome of a vote for national leadership," said University of Michigan professor J. Alex Halderman. "Elections require correct outcomes and true ballot secrecy." Experts noted there have been multiple attempts to build verifiable and secure online voting systems over the past two decades, all of which have fallen short. Halderman and his students successfully compromised some of those systems, including the iVote system used in an election last year by the state of New South Wales in Australia. Halderman said his team and experts from the University of Melbourne successfully installed vote-stealing malware in a matter of days. "Voting over the Internet is a really bad idea," said University of Melbourne professor Vanessa Teague. "We haven't yet solved important issues like authentication, dealing with malware, ensuring privacy, and allowing voters to verify their votes."
Alphabet Program Beats the European Human Go Champion
The New York Times (01/27/16) John Markoff
DeepMind researchers at Google's Alphabet subsidiary announced on Wednesday their AlphaGo program beat European Go champion Fan Hui in a series of five matches and achieved a 99.8-percent winning rate against other Go programs. Go is seen as a formidable test for artificial intelligence researchers because it is much more complex than chess, with a larger range of possible positions that require more sophisticated strategy and reasoning. "The reason games are used as a testing ground is that they're kind of like a microcosm of the real world," says DeepMind Technologies founder Demis Hassabis. AlphaGo mixes a deep-learning algorithm with a Monte Carlo algorithm designed to rigorously explore large numbers of possible combinations of moves. The DeepMind researchers say they also trained the program using input from expert human players. "The machine has continued to get better," Hassabis notes. "We haven't hit any kind of ceiling yet on performance." Alphabet has announced a March tournament between AlphaGo and current Go champion Lee Sedol, in which they will compete for a $1-million prize.
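AlphaGo's actual architecture is far more sophisticated, but the Monte Carlo half of the design can be illustrated with a toy sketch (this is an assumption-laden illustration, not DeepMind's code): estimate each legal move's value by playing many random games to completion and picking the move with the best simulated win rate. The example uses a trivially small take-away game rather than Go.

```python
import random

def legal_moves(pile):
    # Toy game: a move removes 1-3 stones from a single pile;
    # whoever takes the last stone wins.
    return [m for m in (1, 2, 3) if m <= pile]

def random_playout(pile, my_turn):
    # Play random moves until the pile is empty.
    # Returns True if "we" took the last stone.
    while pile > 0:
        pile -= random.choice(legal_moves(pile))
        my_turn = not my_turn
    # The player who just moved (before the last flip) wins.
    return not my_turn

def monte_carlo_move(pile, playouts=500):
    # Rank each legal move by its estimated win rate over many
    # random playouts, and return the best-scoring move.
    best_move, best_rate = None, -1.0
    for move in legal_moves(pile):
        wins = sum(random_playout(pile - move, my_turn=False)
                   for _ in range(playouts))
        if wins / playouts > best_rate:
            best_move, best_rate = move, wins / playouts
    return best_move
```

In AlphaGo the random playouts are replaced by a policy network that proposes promising moves and a value network that scores positions, which is what makes the search tractable on Go's enormous state space.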
Companies Find Tech Talent in Robust Freelance Market
The Wall Street Journal (01/27/16) Steven Norton
How DARPA Took on the Twitter Bot Menace With One Hand Behind Its Back
Technology Review (01/28/16)
Last year, the U.S. Defense Advanced Research Projects Agency (DARPA) set out to find a way to identify influence bots on Twitter by holding a competition in which teams were asked to spot bots in a stream of posts on the topic of vaccinations. The winning team helped demonstrate some significant new strategies for identifying bots in the real world. In the competition, the teams had to analyze the Twitter stream and guess which users were bots. Teams received bonus points for identifying the bots sooner, because DARPA is particularly interested in the early detection of influence bots. The winning team correctly identified all the bots 12 days ahead of the deadline while making only one incorrect guess. The most successful strategies involved identifying an initial set of bots in the data, but none of the teams were able to automate this step and most used significant human input. The winning team used a pretrained algorithm to search for bot-like behavior, focusing on unusual grammar, the similarity of linguistics to natural-language chatbots, and unusual behavior such as extended periods of tweeting without a break; this method revealed four accounts that were clearly bots, and the team used these to find others.
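One behavioral cue the article names is extended periods of tweeting without a break. A hypothetical detector for just that signal (the function names, thresholds, and gap sizes here are illustrative assumptions, not the winning team's method) might look like this:

```python
from datetime import datetime, timedelta

def longest_run_hours(timestamps, max_gap=timedelta(minutes=30)):
    # Length in hours of the longest stretch of posts in which no
    # two consecutive posts are separated by more than max_gap.
    timestamps = sorted(timestamps)
    best, run_start = 0.0, 0
    for i in range(1, len(timestamps)):
        if timestamps[i] - timestamps[i - 1] > max_gap:
            run_start = i
        run = (timestamps[i] - timestamps[run_start]).total_seconds() / 3600
        best = max(best, run)
    return best

def looks_like_bot(timestamps, threshold_hours=20):
    # Humans sleep; an account posting continuously for a day or
    # more without a break is a candidate for closer inspection.
    return longest_run_hours(timestamps) >= threshold_hours
```

In practice such a signal would be only one feature among many (linguistic similarity to chatbots, unusual grammar, follower patterns), combined by a classifier rather than a single threshold.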
Toward a Converged Exascale-Big Data Software Stack
HPC Wire (01/28/16) Tiffany Trader
As the leader of Argonne's Argo exascale software project and a key organizer of the workshop series on Big Data and Extreme-scale Computing, computer scientist Pete Beckman and his colleagues are pushing toward a new era in research computing, in which a single machine can fulfill the needs of the extreme-scale simulation and data analysis communities. "The big data community...has very similar needs to the [high-performance computing] community, but it's not currently exactly aligned," Beckman says. Argo is an exascale-focused framework designed from the bottom up to handle both emerging and future needs of these communities by striking a balance between reusing software stack elements where it is sensible and adding custom efforts where it matters. "We're leveraging Linux components, and then adding in those pieces of technology that high-performance computing applications need: special kinds of high-performance computing containers, special kinds of power management components that allow us to adjust the electrical power on each node so that we stay within a power budget, and ways to think about concurrency and millions and millions of lightweight threads," Beckman says. He notes this convergence is being fueled by the potential cost savings as well as speed and capability upgrades that stem from conducting large-data analysis and simulations concurrently.
NIST Looks to Strengthen Crypto Backbone
Federal Computer Week (01/28/16) Sean Lyngaas
The U.S. National Institute of Standards and Technology (NIST) is looking to make random bit generators (RBGs), which serve as the backbone of cryptography, less predictable. NIST recently released the second draft of a publication that specifies design principles for sources of entropy, which measure the randomness of generated numbers. Without a reliably random RBG, hackers can access a user's communications. "Security flaws in random number generators have been a significant source of vulnerabilities in cryptographic systems over many years, so it's crucially important to have random number generators that work well," says Paul Kocher at Rambus' Cryptography Research Division. The NIST draft specifies data cryptographers can submit for entropy testing, and describes the process of calculating initial entropy estimates. In addition, the draft details how multiple noise sources of entropy can be factored into the entropy calculation. The NIST recommendation is one of three related cryptographic documents; another document specifies random number-generation algorithms, while a third demonstrates how to pair the algorithms with entropy sources into sound random number generators. "These drafts from NIST are uncontroversial, and don't have controversial constructions of the sort found in Dual EC DRBG that can harbor backdoors," Kocher notes.
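The core quantity the NIST draft asks submitters to estimate is min-entropy: if an attacker's single best guess of a noise-source output succeeds with probability p_max, the source supplies -log2(p_max) bits per sample. A simplified version of the draft's most-common-value estimator (omitting the confidence-interval adjustment the actual recommendation applies) can be sketched as:

```python
import math
from collections import Counter

def min_entropy_per_sample(samples):
    # Most-common-value estimate: the attacker's optimal strategy is
    # to guess the likeliest output, so min-entropy is -log2(p_max),
    # where p_max is the empirical frequency of the most common value.
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -math.log2(p_max)
```

A perfectly uniform 8-bit source scores 8 bits per sample; any bias drives the estimate down, which is exactly why a skewed noise source must be run longer (or conditioned) before its output can seed a cryptographic random number generator.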
CSI: Cyberattack Scene Investigation--a Malware Whodunit
Scientific American (01/28/16) Larry Greenemeier
Forensic probes of cyberattacks can uncover their modus operandi and severity, but finding perpetrators is a difficult proposition. "Attribution is a curious beast," notes Morgan Marquis-Boire, a researcher at the University of Toronto's Citizen Lab. "There are a variety of techniques that you can use to make educated assertions about the nature of an attack." Marquis-Boire says circumstantial evidence can be furnished via an analysis of the refinement of the tools used, the methods, the type of data stolen, and where it was transmitted. A forensic investigation often starts with investigators analyzing infected computers and the malware that compromised them. Malware that uses a lot of customized code implies a skilled, well-equipped coder with considerable knowledge about the computers and network targeted, while the use of more generic or open source code makes attribution harder because such code lacks distinguishing characteristics that might be traced back to a specific programmer or organization. Marquis-Boire and colleagues are developing new malware profile-building techniques so they can identify a particular program's formatting styles, how it apportions memory, the ways it attempts to evade detection, and other traits. Other researchers are automating programmer-malware matching via machine learning.
Computer Science, Meet Humanities: In New Majors, Opposites Attract
The Chronicle of Higher Education (01/28/16) Corinne Ruff
Many institutions of higher education aim to combine computer science with humanities courses as computing skill becomes an increasingly necessary ingredient for a student's career track, according to the U.S. National Science Foundation's (NSF) Janice Cuny. One school following this strategy is Stanford University, which founded a new major called CS+Music as part of a pilot degree program called CS+X, which blends subjects such as neuroscience, art, natural language processing, and the ancient world. Ge Wang at Stanford's Center for Computer Research in Music and Acoustics says the establishment of a CS+Music degree was "a no-brainer," noting the major "offers a blank slate to reimagine what we can do with computing while using the soul of the discipline itself. It isn't to replace it, but to augment art investigation with computation." Fellow Stanford professor Giovanna Ceserani also is experimenting with a CS+X degree, teaching a class on the use of geographic information systems to enhance learning of classical history. NSF's Jim Kurose says CS+X degrees may hold more appeal for students who want to use data collection to analyze subjects such as politics, society, and the environment. "It takes the notion [of majors with a humanities component] to the next level with a much deeper study of the application of computational thinking within a discipline," he notes.
Recognizing Correct Code
MIT News (01/29/16) Larry Hardesty
Massachusetts Institute of Technology (MIT) researchers have developed Prophet, a machine-learning system that can analyze the repairs made to open source programs and learn their general properties to produce new repairs for a different set of programs. "One of the most intriguing aspects of this research is that we've found that there are indeed universal properties of correct code that you can learn from one set of applications and apply to another set of applications," says MIT professor Martin Rinard. The researchers used patches to write a computer script that automatically extracted both the uncorrected code and patches for 777 errors in eight common open source applications stored in GitHub. For the machine-learning system's feature set, the researchers focused on values stored in memory, identifying 30 prime characteristics of a given value. They wrote a program that evaluated all the possible relationships between these characteristics in successive lines of code, producing more than 3,500 such relationships in the feature set. The algorithm then tried to determine what combination of features most consistently predicted the success of a patch. Prophet works in conjunction with an earlier algorithm, ranking proposed modifications according to the probability they are correct before putting them through time-consuming tests.
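The ranking step can be sketched abstractly (the feature names and weights below are hypothetical, and this is not Prophet's actual model): each candidate patch is reduced to a set of binary features, scored against weights learned from the corpus of human patches, and the candidates are tested in descending order of estimated correctness.

```python
import math

def patch_score(features, weights):
    # Probability-like score for a candidate patch: sum the learned
    # weights of its active features, then squash with a sigmoid.
    z = sum(weights.get(f, 0.0) for f in features)
    return 1.0 / (1.0 + math.exp(-z))

def rank_patches(candidates, weights):
    # Order candidates most-likely-correct first, so the expensive
    # validation test suite is run on the best bets early.
    return sorted(candidates,
                  key=lambda c: patch_score(c["features"], weights),
                  reverse=True)
```

The payoff described in the article comes from this ordering: validating a patch means recompiling and rerunning tests, so trying probable-correct patches first shortens repair time substantially.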
How Many Ways Can You Arrange 128 Tennis Balls? Researchers Solve an Apparently Impossible Problem
University of Cambridge (01/27/16) Tom Kirk
University of Cambridge researchers have developed software that calculated the number of ways to pack 128 soft spheres, a figure that vastly exceeds the total number of particles in the universe. The method they used can help scientists calculate configurational entropy, which describes how structurally disordered the particles in a physical system are. A reliable method for calculating configurational entropy could be used to answer a range of seemingly impossible problems, especially those related to granular physics. "This research performs the sort of calculation we would need in order to be able to do that," says University of Cambridge researcher Stefano Martiniani. He notes these types of calculations are so complicated they have been dismissed as hopeless for any system involving more than about 20 particles. The Cambridge solution involved taking a small sample of all possible configurations and working out the probability of them occurring, or the number of arrangements that would lead to those particular configurations appearing. Based on these samples, the researchers were able to extrapolate how many ways the entire system could be arranged, as well as how ordered one state was compared to the next. "This methodology could be used anywhere that people are trying to work out how many possible solutions to a problem you can find," Martiniani says.
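The small-sample extrapolation idea can be illustrated on a toy counting problem (this stand-in problem and the sampling scheme are illustrative assumptions, not the Cambridge team's algorithm): rather than enumerating every valid arrangement, sample uniformly from the full space, measure the fraction that is valid, and multiply by the size of the space.

```python
import random

def no_adjacent_ones(bits):
    # A "valid arrangement" in this toy problem: a binary string
    # with no two 1s side by side.
    return all(not (a and b) for a, b in zip(bits, bits[1:]))

def estimate_count(n_bits=20, samples=100_000):
    # Sample uniformly from all 2**n_bits strings, measure the
    # fraction that is valid, and extrapolate to the whole space --
    # enumeration is replaced by a statistical estimate.
    hits = sum(
        no_adjacent_ones([random.randint(0, 1) for _ in range(n_bits)])
        for _ in range(samples)
    )
    return (hits / samples) * 2 ** n_bits
```

For soft-sphere packings the sampled objects are energy-minimized configurations and the extrapolation is over basin volumes rather than bit strings, but the statistical leap is the same: a modest sample stands in for a space far too large to enumerate.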
Prof Awarded $10M Grant for Computational Sustainability Work
Cornell Daily Sun (NY) (01/28/16) Zachary Silver
Cornell University professor Carla Gomes received a $10-million Expeditions in Computing grant from the U.S. National Science Foundation (NSF), supporting interdisciplinary, multi-investigator research teams working on transformative computing and technology. "This second NSF Expeditions Award is a validation for us of our initial, highly ambitious vision to create Computational Sustainability as a new subfield [in] computer science," Gomes says. NSF grants have been used to develop the research network called CompSustNet, led by Gomes and her team at Cornell. CompSustNet connects computer scientists and sustainability researchers on a national and international scale. "CompSustNet is a large-scale collaborative research network, consisting of 12 academic institutions and over 20 collaborating institutions," Gomes says. "[It] will further nurture and expand the horizons of the nascent field of Computational Sustainability." She says the new grant will be used to transfer computational sustainability research into policy and decision-making for sustainability with direct real-world impact. "Our effort will help further establish Cornell University as a research leader in both computing and sustainability," Gomes says.
Let Them See You Sweat: What New Wearable Sensors Can Reveal From Perspiration
Berkeley News (01/27/16) Sarah Yang
University of California, Berkeley researchers say they have developed a wearable sensor system that can measure metabolites and electrolytes in human perspiration, calibrate the data based on skin temperature, and sync the results in real time to a smartphone. The researchers describe their invention as the first fully integrated electronic system that can supply continuous, non-invasive monitoring of multiple biochemicals in sweat. The prototype bundles five sensors onto a flexible circuit board, and they measure metabolites glucose and lactate, electrolytes sodium and potassium, and skin temperature. Next to the sensor array is a wireless printed circuit board with off-the-shelf silicon elements. The researchers used more than 10 integrated circuit chips to take the measurements from the sensors, amplify the signals, adjust for temperature changes, and wirelessly transmit the data. An app syncs the data from the sensors to mobile phones, and the device was fitted onto "smart" wristbands and headbands and tested on dozens of volunteers as they performed exercises. "With this non-invasive technology, someday it may be possible to know what's going on physiologically without needle sticks or attaching little, disposable cups on you," says Berkeley professor George Brooks. He also thinks the technology could be adapted to measure other body fluids for sick or injured people.
Are We Thinking About Artificial Intelligence All Wrong?
Government Computer News (01/28/16) Troy K. Schneider
Computer scientist and author Jerry Kaplan contends a rethink of artificial intelligence (AI) may be necessary, noting public discourse has been distorted by an overemphasis on the concept of machines and software becoming so intelligent they can replace human employees in virtually any capacity. "There's very little evidence that machines are on the path to becoming thinking, sentient beings," Kaplan notes. He argues there are tasks humans use intelligence for, which machines could perform without thinking in a human manner--such as translation and text analysis. Kaplan foresees job destruction accelerated by AI advances being accompanied by job transformation, as new technologies and their attendant disruption will eventually generate new jobs. Moreover, he says federal technologists should embrace such changes, as the boom in sensor-driven data and machine learning could make many government operations and agencies dramatically more effective. Kaplan acknowledges automation leads to job loss, and says although it is not the job of innovators to take care of the people they are displacing, "somebody else has to stand up. And that somebody either is government or is facilitated by government policies."
Abstract News © Copyright 2016 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.