Association for Computing Machinery
Welcome to the December 5, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Correction: In the November 18, 2016 issue of TechNews, Vasileios Kemerlis was mistakenly identified as a Columbia University professor; he actually is a member of the faculty at Brown University. We regret the error.

ACM TechNews mobile apps are available for Android phones and tablets, as well as for iPhones and iPads.

HEADLINES AT A GLANCE


100 Million Students Worldwide Will Learn to Code This Week for Hour of Code
TechRepublic (12/05/16) Alison DeNisco

Today marks the beginning of Computer Science Education Week, and more than 100 million students worldwide will participate in Hour of Code, a 60-minute introduction to computer programming. Hour of Code, initiated by Code.org in 2013, has inspired about 400,000 classrooms to teach computer science classes, says Code.org CEO Hadi Partovi. More than 400 technology and education organizations and 200,000 teachers are Hour of Code participants. "The ultimate success of the Hour of Code is in changing the stereotype of who can learn computer science, and getting schools to make computer science part of the formal curriculum," Partovi says. He notes online image searches for coders or tech CEOs typically yield mostly white men, while a search on students coding returns a broad sample of students of all races and ages, a shift in which Hour of Code has played a significant role. The initiative could help close the wide U.S. shortfall of workers skilled in computer science, as well as the gender gap in technology, says Women Who Code CEO Alaina Percival. She notes that, in addition to giving young women an unrestricted window into technology, Hour of Code offers mentors and teachers a chance to serve as role models.


Google DeepMind Makes AI Training Platform Publicly Available
Bloomberg (12/05/16) Jeremy Kahn

Alphabet's Google DeepMind is making its game platform available to the general public on GitHub. DeepMind is releasing the entire source code for its training environment, previously called Labyrinth and now renamed DeepMind Lab, so anyone can download the code and customize it to help train their own AI systems. In addition, Google researchers will create new game levels for DeepMind Lab and upload them to GitHub. Putting the DeepMind Lab code on GitHub will let other researchers see whether DeepMind's own breakthroughs can be replicated, and enable outside scientists to measure the performance of their own AI agents, says DeepMind co-founder Shane Legg. The AI agent in DeepMind Lab controls a hovering sphere through a "first-person" point of view, enabling it to look and move in any direction. The decision to open source DeepMind Lab follows a similar move by OpenAI, which also recently announced it will release an interface called Universe that lets an AI agent use a computer the way a human does, by looking at screen pixels and operating a virtual keyboard and mouse. Universe serves as a go-between that enables an AI system to learn the skills needed to play games or operate other applications.


Phantom Movements in Augmented Reality Help Patients With Chronic Intractable Phantom Limb Pain
MyNewsdesk (12/02/16)

A new method developed by Max Ortiz Catalan at Sweden's Chalmers University of Technology can help alleviate amputees' phantom limb pain. The approach, called phantom motor execution, relies on machine learning and augmented reality. Electrodes on the skin pick up electric signals in the muscles, and artificial intelligence algorithms translate them into movements of a virtual arm in real time. Patients see themselves on a screen with the virtual arm and can control it as they would a biological arm. In testing on more than a dozen amputees, the technique reduced phantom limb pain by about 50 percent. The virtual representation enables patients to reactivate areas of the brain used to move the arm before it was amputated, which might be the reason the phantom pain decreased. "The results are very encouraging, especially considering that these patients had tried up to four different treatment methods in the past with no satisfactory results," Ortiz Catalan says. "In our study, we also saw that the pain continuously decreased all the way through to the last treatment. The fact that the pain reduction did not plateau suggests that further improvement could be achieved with more sessions."


Security Experts Warn Congress That the Internet of Things Could Kill People
Technology Review (12/05/16) Mike Orcutt

Prominent computer security experts in November warned the U.S. Congress that a poorly secured Internet of Things (IoT) could seriously threaten life and property. Harvard University fellow Bruce Schneier said the massive denial-of-service attack in October targeting Internet infrastructure provider Dyn, which relied on a botnet of hacked devices, demonstrated the "catastrophic risks" presented by the spread of insecure online appliances. Experts also said IoT insecurity is getting worse because device makers have no incentives to make security a priority, compounded by a lack of safety metrics. University of Michigan professor Kevin Fu warned the risk of serious consequences is escalating as IoT devices penetrate hospitals and other sensitive sectors, and millions of them can be easily compromised and formed into botnets that target institutions. Fu urged the government to develop an independent body tasked with testing IoT device security. There is wide agreement that the government must take action to address this threat, but several business groups have resisted the prospect of IoT regulation out of concern it could hamper innovation. Schneier recommended establishing a centralized agency to govern cybersecurity.


What Makes Bach Sound Like Bach? New Dataset Teaches Algorithms Classical Music
UW Today (11/30/16) Jennifer Langston

University of Washington (UW) researchers released MusicNet, a dataset that could teach machine-learning algorithms the basics of classical music. MusicNet has curated annotations so machine-learning scientists and algorithms can meet challenges such as note prediction, automated music transcription, and recommending music based on the structure of a song a person likes. MusicNet consists of 330 classical music recordings with annotated labels indicating the start and stop time of each individual note, what instrument plays the note, and its place in the composition's metrical structure. The dataset can train algorithms to deconstruct, understand, anticipate, and reassemble elements of classical music. "We're interested in what makes music appealing to the ears, how we can better understand composition, or the essence of what makes Bach sound like Bach," says UW professor Sham Kakade. "It can also help enable practical applications that remain challenging, like automatic transcription of a live performance into a written score." By applying dynamic time warping to classical music performances, the UW team synched a real performance to a synthesized version of the same piece that already featured the desired musical notations and scoring in digital form. Plotting that digital scoring back onto the original performance provides the exact timing and details of every note to ease algorithmic learning.
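The alignment step described above can be illustrated with a textbook dynamic time warping (DTW) routine. This is a minimal sketch, not the UW team's code; the toy sequences stand in for feature streams extracted from a live performance and a synthesized score rendering.

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping between two 1-D sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = best alignment cost of a[:i] against b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch: repeat a frame of b
                                 cost[i][j - 1],      # stretch: repeat a frame of a
                                 cost[i - 1][j - 1])  # match both frames
    return cost[n][m]

performance = [0.0, 1.0, 1.0, 2.0, 3.0]  # live rendition (one note held longer)
score       = [0.0, 1.0, 2.0, 3.0]       # synthesized reference rendering
print(dtw_distance(performance, score))  # 0.0: identical content, different timing
```

Because DTW tolerates local stretching and compression of time, the held note in the "performance" costs nothing, which is exactly the property that lets a synthesized score be mapped onto a human performance's timing.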


How the Brain Recognizes Faces
MIT News (12/01/16) Larry Hardesty

A new computational model of the human brain's face-recognition mechanism duplicates the mirror-symmetric responses in the intermediate visual-processing regions. Previous models have overlooked this aspect of human neurology, but Massachusetts Institute of Technology (MIT) researchers and colleagues designed a machine-learning system that implemented their model, and trained it to recognize particular faces by feeding it a sample of images. The researchers say the trained system included an intermediate processing step that represented a face's degree of rotation, but not the direction. The team did not build this mechanism into the system, but they say it emerged spontaneously from the training process. "This model was trying to explain invariance, and in the process, there is this other property that pops out," says MIT professor Tomaso Poggio, director of the Center for Brains, Minds, and Machines (CBMM). The researchers believe this indicates their system and the brain are doing something similar. "[This is] a nice illustration of what we want to do in [CBMM], which is this integration of machine learning and computer science on one hand, neurophysiology on the other, and aspects of human behavior," Poggio says. "That means not only what algorithms does the brain use, but what are the circuits in the brain that implement these algorithms."


Google Translate AI Invents Its Own Language to Translate With
New Scientist (11/30/16) Sam Wong

Google Translate has begun using a neural network to translate between languages. The network works on entire sentences at once, giving it more context to come up with the best translation. The system surprisingly created its own artificial language to help it translate between language pairs on which it has not been explicitly trained, a capability that could enable Google to quickly scale the system to translate between a large number of languages. Google researchers hypothesize the system achieved this breakthrough by finding a common ground in which sentences with the same meaning are represented in similar ways regardless of language. The researchers note this means the system has created a new common language specific to the task of translation and not readable or usable by humans. Although this zero-shot translation strategy does not perform as well as translating via an intermediary language, the field is progressing rapidly and Google's results can be built upon by the wider research community and industry. "I have no doubt that we will be able to train a single neural machine-translation system that works on 100-plus languages in the near future," says New York University professor Kyunghyun Cho, who has worked on a similar study with researchers at Germany's Karlsruhe Institute of Technology.
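In the published multilingual setup, a single model is trained on many language pairs at once, with an artificial token prepended to each source sentence to select the target language; zero-shot translation then means requesting a pair the model never saw in training. The sketch below shows only that data-preparation convention; the token format and example sentences are illustrative, not Google's actual training data.

```python
def make_example(src_sentence, tgt_lang):
    """Prefix the source sentence with a target-language token, e.g. <2es>."""
    return f"<2{tgt_lang}> {src_sentence}"

# Training covers English<->Japanese and English<->Korean...
train = [
    (make_example("Hello", "ja"), "こんにちは"),
    (make_example("こんにちは", "en"), "Hello"),
    (make_example("Hello", "ko"), "안녕하세요"),
    (make_example("안녕하세요", "en"), "Hello"),
]

# ...yet at inference time the same model can be asked for
# Japanese->Korean, a pair it never saw during training. The shared
# internal representation is what makes this zero-shot request viable.
zero_shot_input = make_example("こんにちは", "ko")
print(zero_shot_input)  # <2ko> こんにちは
```

The "common ground" the article describes corresponds to the model's internal sentence representations clustering by meaning rather than by language, which is why the untrained pair works at all.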


Team Finds New Method to Improve Predictions
Phys.org (11/30/16)

Researchers at Princeton, Columbia, and Harvard universities have developed a new method to analyze big data that better predicts outcomes in healthcare, politics, and other fields. In an effort to reduce the error rate in traditional methods, the researchers proposed a new measure called the influence score (I-score) to better measure a variable's ability to predict. The researchers found the I-score is effective in differentiating between noisy and predictive variables in big data and can significantly improve the prediction rate. The I-score can be applied to a variety of fields, including terrorism, civil war, elections, and financial markets. "Essentially, anytime you might be interested in predicting and identifying highly predictive variables, you might have something to gain by conducting variable selection through a statistic like the I-score, which is related to variable predictivity," says Princeton postdoctoral researcher Adeline Lo. "That the I-score fares especially well in high dimensional data and with many complex interactions between variables is an extra boon for the researcher or policy expert interested in predicting something with large dimensional data."
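The I-score's core idea can be sketched as a partition-based statistic: observations are grouped by the joint values of a candidate set of discrete variables, and cells whose mean response deviates from the overall mean raise the score. This is a hedged toy in the spirit of the influence score described above (normalization conventions vary across the literature), not the authors' implementation.

```python
from collections import defaultdict

def influence_score(X_rows, y):
    """Partition-based influence statistic.

    X_rows: list of tuples of discrete explanatory-variable values.
    y: list of numeric responses, one per row.
    Score: sum over partition cells of n_j^2 * (cell mean - overall mean)^2.
    """
    ybar = sum(y) / len(y)
    cells = defaultdict(list)
    for row, yi in zip(X_rows, y):
        cells[row].append(yi)  # group responses by joint variable values
    score = 0.0
    for members in cells.values():
        n_j = len(members)
        cell_mean = sum(members) / n_j
        score += (n_j ** 2) * (cell_mean - ybar) ** 2
    return score

# A variable that tracks y scores far higher than pure noise:
y          = [0, 0, 1, 1, 0, 0, 1, 1]
predictive = [(0,), (0,), (1,), (1,), (0,), (0,), (1,), (1,)]
noise      = [(0,), (1,), (0,), (1,), (0,), (1,), (0,), (1,)]
print(influence_score(predictive, y) > influence_score(noise, y))  # True
```

The noisy variable's cells all have means near the overall mean, so it contributes nothing, which is the sense in which the statistic "differentiates between noisy and predictive variables."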


Suggestions for You: a Better, Faster Recommendation Algorithm
SFI News Center (11/30/16) John German

Santa Fe Institute (SFI) researchers have introduced a new online recommendation system they say is faster and more accurate than existing algorithms that predict users' likes and dislikes based on items they previously rated. Current approaches use mathematical models that assume people belong to single groups with similar preferences, but SFI scientist Cristopher Moore says these algorithms are too simplistic and cannot reflect a user's unique mix of interests. Moore and his team created a system that enables individuals and the items they rate to belong to different, overlapping groups. The algorithm also does not assume ratings are only a function of similarity; instead, rating distributions are predicted based on the multiple groups to which the individual or item belongs. Unlike the models that assume a linear relationship between users and items, the new model can learn nonlinear relationships over time. Researchers tested the algorithm on five large datasets, including recommendation systems for songs, movies, and romantic partners. The model's predicted ratings were more accurate in each case than those generated by existing systems. "Our algorithm is powerful because it is mathematically clear," Moore says. "That makes it a valuable part of the portfolio of methods engineers can use."
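The mixed-membership idea can be made concrete with a small numeric sketch: each user and each item belongs to several overlapping groups, and the predicted rating distribution is a mixture over group pairs. The membership weights and rating probabilities below are made-up illustrative numbers, not fitted parameters from the SFI model.

```python
import numpy as np

# theta[k] = one user's (normalized) membership across 2 user groups
theta = np.array([0.7, 0.3])
# eta[l] = one item's membership across 2 item groups
eta = np.array([0.4, 0.6])
# p[k, l] = distribution over 3 rating values when a group-k user
# rates a group-l item (each row of probabilities sums to 1)
p = np.array([
    [[0.1, 0.2, 0.7], [0.6, 0.3, 0.1]],
    [[0.3, 0.4, 0.3], [0.2, 0.2, 0.6]],
])

# Predicted rating distribution: weighted sum over all group pairs (k, l).
pred = np.einsum("k,l,klr->r", theta, eta, p)
print(pred, pred.sum())  # a proper distribution over ratings: sums to 1
```

Because the prediction is a full distribution mixed over group pairs rather than a single similarity score, the model can express the nonlinear user-item relationships the article mentions.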


3D, High-Res Maps of the Last Frontier
Government Computer News (11/30/16) Kathleen Hickey

The U.S. National Science Foundation, the National Geospatial-Intelligence Agency, and several academic and private-sector partners have released the first unclassified three-dimensional (3D) topographic maps of Alaska. The maps, known as digital elevation models (DEMs), will aid researchers in studying a wide range of issues, including ice loss, glacial changes, and surface water flow. The maps are the first part of the ArcticDEM project, which was created after a January 2015 executive order to improve decision-making in the region. The data is fed into the University of Illinois' Blue Waters supercomputer, which uses algorithms from Ohio State University to construct 3D images based on comparisons of stereo pairs of two-meter resolution imagery. The new maps have a resolution of five meters or less, while current elevation models for the Arctic have a resolution of one kilometer. "This is quite an astounding thing to be producing topography for 30 degrees of the globe with supercomputers and one-foot resolution satellites," says Paul Morin, head of the University of Minnesota's Polar Geospatial Center. The project will support studies of microclimates, efforts to anticipate and address permafrost impacts and rising sea levels, assessments of potential storm-surge effects, wildlife and ecosystem management, and disaster management for Arctic coastal communities.


Engineering Researchers Develop a Process that Could Make Big Data and Cloud Storage More Energy Efficient
VCU News (11/29/16) Rebecca Jones

Virginia Commonwealth University (VCU) researchers have developed a process for flipping the magnetic polarity of magnetic particles, offering a significant reduction in the energy required for big data and cloud computing memory storage. The new system uses an electric field to reverse the direction of magnetic skyrmions, a type of magnetic state characterized by a core that points either upward or downward, and progressively rotates from its core to its periphery. The researchers found that an electric field can bring about a flip in core magnetization, presenting the possibility of more energy-efficient magnetic memory for computing. "The exciting thing about this kind of magnetic encoding is that it only takes a small amount of energy to flip and once you have the direction you want, it can stay there for a long time," says VCU doctoral researcher Dhritiman Bhattacharya. The researchers now want to investigate how this process works in the presence of thermal noise at room temperature. The team also will determine how controllable the process is, to assess whether the polarity can be reversed reliably in the presence of such disturbances.


Designing Agile Human-Machine Teams
U.S. Defense Advanced Research Projects Agency (11/28/16)

The U.S. Defense Advanced Research Projects Agency's (DARPA) recently announced Agile Teams (A-Teams) program is designed to discover, test, and illustrate predictive and generalizable mathematical techniques for enabling optimized design of agile hybrid teams. "A-Teams is focused not on developing new [artificial intelligence] technologies per se, but on developing a framework for optimizing the use of smart machines in various roles together with humans to ensure optimal human-machine teamwork for solving dynamic problems," says DARPA program manager John Paschkewitz. A-Teams will primarily concentrate on mathematical methods for designing optimal hybrid teams to be demonstrated and validated in dynamic and complex problem-solving contexts using experimental testbeds. Among the various forms the smart machines could adopt are machine agents that can execute peer-level interaction with humans for meeting team goals, or intelligent problem-solving workspaces that can coordinate communications and task assignment to optimize team performance. Expected products of A-Teams include abstractions, algorithms, and architectures for a machine-based "intelligent fabric" that can dynamically close gaps in ability, enhance team decision-making, and expedite realizing collective goals. Non-military fields that could benefit from the program's results include scientific and drug discovery, software engineering, logistics planning, advanced hardware engineering, and intelligence forecasting.


Abstract News © Copyright 2016 INFORMATION, INC.


To submit feedback about ACM TechNews, contact: [email protected]