Welcome to the September 13, 2017 edition of ACM TechNews, providing timely information for IT professionals three times a week.

ACM TechNews mobile apps are available for Android phones and tablets, as well as for iPhones and iPads.

China Building World's Biggest Quantum Research Facility
South China Morning Post (Hong Kong)
Stephen Chen
September 11, 2017


China plans to develop a quantum computer and other "revolutionary" technologies for military applications by building the world's largest quantum research facility, according to researchers and authorities involved in the initiative. Leading Chinese quantum scientist Pan Jianwei says the National Laboratory for Quantum Information Science will play an important role in China's ambition to achieve "quantum supremacy" by 2020, "with calculation power 1 million times that of all existing computers around the world combined." The Chinese Academy of Sciences' Guo Guoping says developing a general-purpose quantum computer could take even longer. He notes a large facility with centralized resources could expedite such breakthroughs by pooling the collective, multidisciplinary skills and expertise of researchers across China. However, Guo strongly doubts a code-breaking quantum computer can be developed by 2020, and says it is more likely that researchers worldwide will by then have created primitive quantum systems for handling specific tasks.

Full Article

How You Handle Your Phone Gives Away More Than You Think
CORDIS News
September 12, 2017


The European Union-funded Enhanced Mobile Biometrics (AMBER) project has demonstrated the ability to recognize a smartphone user's gender by segmenting gestures and analyzing how users swipe screens using multiple datasets. The AMBER team used machine-learning analysis to verify the potential of gender prediction from swipe-gesture data, obtaining a 78-percent accuracy rate using data from two distinct directions. The researchers highlighted 14 parameters in their analysis of the swipe data, including average speed, arc distances, angles to start and end, area, and length. Participants operated the smartphone one-handed in portrait orientation, using the thumb of the same hand to engage with the screen. The researchers are developing software that identifies users from swipe gestures, and accounts for the orientation in which they hold the phone and the way the phone moves when it is carried. The team says the research should lead to customized touchscreen interaction and better continuous authentication.
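As an illustration, the sketch below derives a handful of swipe parameters of the kind the summary mentions (average speed, arc length, and start-to-end angle) from timestamped touch points. The AMBER team's exact 14 parameters are not specified here, so these formulas are plausible stand-ins, not the project's actual feature set.

```python
import math

def swipe_features(points):
    """Compute illustrative swipe features from a list of (t, x, y)
    touch samples: total arc length, straight-line chord length,
    average speed, and the angle from the start point to the end point."""
    # Arc length: sum of distances between consecutive touch samples.
    arc = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (_, x1, y1), (_, x2, y2) in zip(points, points[1:])
    )
    (t0, x0, y0), (t1, x1, y1) = points[0], points[-1]
    chord = math.hypot(x1 - x0, y1 - y0)   # start-to-end distance
    duration = t1 - t0
    return {
        "arc_length": arc,
        "chord_length": chord,
        "avg_speed": arc / duration if duration else 0.0,
        "start_end_angle": math.degrees(math.atan2(y1 - y0, x1 - x0)),
    }

# A straight vertical swipe: arc length equals chord length,
# and the start-to-end angle is 90 degrees.
feats = swipe_features([(0.0, 100, 200), (0.1, 100, 300), (0.2, 100, 400)])
```

In a classifier like the one the summary describes, each swipe would be reduced to such a feature vector before being fed to a machine-learning model.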

Full Article
New Research May Improve Communications During Natural Disasters
Georgia Tech News Center
Ben Snedeker
September 8, 2017


Researchers at the Georgia Institute of Technology (Georgia Tech) have proposed a new way of gathering and sharing information during natural disasters that does not rely on the Internet. Their method uses the computational power built into mobile phones, routers, and other hardware to create a network that emergency managers and first responders can use to share and act on information collected from people affected by disasters. The researchers say services that are normally centralized could instead be provided by a decentralized network that leverages the growing computational power available at the network edge. The Georgia Tech team demonstrated that by harnessing edge computing resources, sensing devices can identify and communicate with other sensors in the area. "This capability will provide first responders and others with the level of situational awareness they need to make effective decisions in emergency situations," says Georgia Tech professor Kishore Ramachandran.

Full Article

Pacemaker Recall Exposes National Need for Research and Education in Embedded Security
CCC Blog
Helen Wright
September 8, 2017


The U.S. Food and Drug Administration's first major recall of pacemakers due to a cybersecurity risk highlights a national need for computing research and education on embedded security--an essential pillar in the Internet of Things, according to Computing Community Consortium Cybersecurity Task Force chair Kevin Fu. "As devices and systems are increasingly interconnected, security has become a critical property of their embedded hardware and software," says the National Academy of Engineering's Sam H. Fuller. Fu notes colleagues from various universities are exploring constructive strategies to enhance embedded security for healthcare as part of the U.S. National Science Foundation's Trustworthy Health and Wellness Frontiers Project. "Academic partnerships with industry are key to improving embedded security," Fu says. He cites Cornell University's Greg Morrisett, who believes "wherever computation can have a direct effect on human safety, and especially where that computation is connected to the broader Internet, we desperately need new research ideas and new practical methodologies for gaining assurance."

Full Article

In the Future, Warehouse Robots Will Learn on Their Own
The New York Times
Cade Metz
September 10, 2017


Researchers at the University of California, Berkeley and elsewhere are teaching robots to learn to perform tasks on their own, which could have wide-ranging applications in warehousing and other industries. "We're learning from simulated models and then applying that to real work," says Berkeley professor Ken Goldberg. The Berkeley team's experiments, in which a two-armed robot learns to pick up and grip different objects, use software that demonstrates a new application for neural networks. The researchers first mined the Internet for computer-aided design models for use in generating various digital objects for a massive database, simulated each item's physics, and then fed the data to a neural network plugged into the robot. The robot was able to identify points where the arms should pick up each object. When the team fed simulated piles of random objects to the network, the robot also could learn to lift objects from physical piles.

Full Article
*May Require Free Registration
How Neural Networks Think
MIT News
Larry Hardesty
September 8, 2017


Researchers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory have developed a new general-purpose technique for making sense of neural networks that perform natural language-processing tasks. The researchers train a network to compress and decompress natural sentences, generating an intermediate digital representation that is re-expanded into its original form. The encoder and decoder are evaluated concurrently based on the fidelity of the decoder's output to the encoder's input. The researchers say the network naturally employs the co-occurrence of words to boost decoding accuracy, and its output probabilities define a cluster of semantically related sentences. The system can produce a list of closely related sentences, which the researchers feed to a black-box natural-language processor, yielding a long list of input-output pairs that algorithms can analyze to ascertain which changes to which inputs cause which changes to which outputs.
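The compress-and-decompress objective described above can be illustrated with a deliberately tiny linear autoencoder trained by gradient descent. This is a toy stand-in for CSAIL's encoder/decoder pair, which operates on full sentences; the dimensions, data, and training details below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Eight one-hot "tokens" squeezed through a 3-dimensional bottleneck:
# the encoder compresses, the decoder re-expands, and both are trained
# jointly on how faithfully the output matches the input.
X = np.eye(8)                        # toy dataset: eight one-hot inputs
W_enc = rng.normal(0, 0.1, (8, 3))   # encoder weights
W_dec = rng.normal(0, 0.1, (3, 8))   # decoder weights

def loss(X, W_enc, W_dec):
    recon = X @ W_enc @ W_dec        # compress, then decompress
    return float(np.mean((recon - X) ** 2))

before = loss(X, W_enc, W_dec)
lr = 0.5
for _ in range(2000):                # plain gradient descent on MSE
    code = X @ W_enc
    err = code @ W_dec - X           # reconstruction error
    W_dec -= lr * code.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)
after = loss(X, W_enc, W_dec)
```

The reconstruction error falls as training proceeds but cannot reach zero: a 3-dimensional code cannot perfectly represent eight independent inputs, which is exactly the pressure that forces a real encoder to exploit regularities (such as word co-occurrence) in its data.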

Full Article
Julia Joins Petaflop Club
HPCwire
September 12, 2017


The open source Julia programming language has been admitted into the "Petaflop Club" by virtue of its Celeste application's peak performance topping one petaflop (a quadrillion floating-point operations per second). Developed by researchers at the University of California, Berkeley and their collaborators, Celeste processed the entire Sloan Digital Sky Survey astronomical image dataset via a new parallel computing technique. The Celeste team loaded an accumulated 178 terabytes of image data to generate the most precise catalog of 188 million astronomical objects in 14.6 minutes, with state-of-the-art point and uncertainty estimates. Celeste realized a peak performance of 1.54 petaflops using 1.3 million threads on 9,300 Knights Landing nodes of the National Energy Research Scientific Computing Center's Cori supercomputer, a 1,000-fold performance improvement over single-threaded execution. Celeste's developers also are working to boost the precision of point and uncertainty estimates and to enhance the quality of native code for high-performance computing.
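A quick back-of-envelope check puts the reported figures in perspective: 178 terabytes in 14.6 minutes is roughly 200 GB/s of sustained data movement, and 1.54 petaflops spread across 1.3 million threads averages just over a gigaflop per thread.

```python
# Sanity-checking the Celeste run's reported numbers.
data_bytes = 178e12                  # 178 terabytes of image data
runtime_s = 14.6 * 60                # 14.6 minutes in seconds
throughput_gb_s = data_bytes / runtime_s / 1e9   # sustained GB/s

flops = 1.54e15                      # 1.54 petaflops peak
threads = 1.3e6                      # 1.3 million threads
flops_per_thread = flops / threads   # average flops per thread
```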

Full Article

High-Speed Quantum Memory for Photons
University of Basel
September 7, 2017


Researchers at the University of Basel in Switzerland have developed a memory technology that can store photons in a rubidium atomic vapor and read them out again later without significantly altering their quantum mechanical properties. The researchers say this new memory technology is simple and fast and could be used in a future quantum Internet. A laser controls the storage and retrieval processes, and the system does not require cooling devices or complicated vacuum equipment, which enables it to be implemented in a highly compact setup. In addition, the researchers say they were able to verify the memory has an extremely low noise level and is suitable for single photons. "The combination of a simple setup, high bandwidth, and low noise level is very promising for future application in quantum networks," says Basel researcher Janik Wolters. The development of such networks is a goal of the National Center of Competence in Quantum Science and Technology.

Full Article
Algorithm Reconstructs Processes From Individual Images
Phys.org
September 7, 2017


Researchers at the Helmholtz Zentrum München's Institute of Computational Biology (ICB) in Germany have developed an algorithmic technique for reconstructing continuous biological processes from individual image data. "In the current study, we dealt with the problem that software cannot assign image data to continuous processes," says ICB's Alexander Wolf. Former ICB researchers Philipp Eulenberg and Niklas Köhler note their team used deep-learning artificial neural networks to integrate individual pictures into processes and display them in a way comprehensible to humans. In one demonstration, the algorithm reconstructed the continuous cell cycle of white blood cells using images from an imaging flow cytometer. "A further advantage of this approach is that our software is so fast that it is possible to extract the cell development on the fly, meaning while the analysis in the cytometer is still running," Wolf says. "In addition, our software makes six times fewer errors than previous approaches."
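The core idea of ordering unordered snapshots along a recovered progression can be illustrated with a toy example: generate feature vectors along a hidden trajectory, shuffle them, and re-order them by a one-dimensional principal-component embedding. The ICB team's method uses deep neural networks on real cell images; everything below is a simplified stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fifty "snapshots" sampled along a hidden progression t in [0, 1],
# each reduced to a 2-D feature vector, with a little noise.
t_true = np.linspace(0, 1, 50)
X = np.c_[t_true, t_true ** 2] + rng.normal(0, 0.002, (50, 2))

perm = rng.permutation(50)            # forget the acquisition order
X_shuffled, t_shuffled = X[perm], t_true[perm]

# PCA via SVD of the centered data; project onto the first component
# to get a 1-D embedding, then sort snapshots along it.
Xc = X_shuffled - X_shuffled.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
score = Xc @ Vt[0]
order = np.argsort(score)
t_recovered = t_shuffled[order]       # hidden times, in recovered order
```

If the embedding captures the progression, `t_recovered` is monotone (up to an arbitrary global flip of the component's sign), meaning the shuffled snapshots have been laid back out along the continuous process.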

Full Article
We're About to Cross the 'Quantum Supremacy' Limit in Computing
Futurism
Mike McRae
September 10, 2017


Researchers at Harvard University and Google are racing to achieve quantum supremacy by building a quantum computer that harnesses about 50 quantum bits (qubits) to perform complex calculations that rival those conducted by the top classical supercomputers. Harvard engineers developed a 51-qubit device from an array of super-cooled rubidium atoms held in a corral of magnets and laser "tweezers" that were then excited so their quantum states could be employed as a single system. The team was able to model significantly complex quantum mechanics with this setup. Google's project for a 49-qubit device relies on multiple-qubit quantum chips that use a solid-state superconducting structure known as a Josephson junction. The Google researchers have demonstrated the method with a 9-qubit version, and plan to gradually ramp up to their goal. Making a quantum computer maximally reliable and error-proof is a key challenge, as is connecting several units together into ever-larger processors.
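The roughly 50-qubit mark is not arbitrary: simulating n qubits classically by brute force requires storing 2^n complex amplitudes, which around 50 qubits exceeds the memory of any existing supercomputer. A short calculation makes the wall concrete:

```python
# Memory needed for a brute-force state-vector simulation of n qubits,
# assuming 16 bytes per complex double-precision amplitude.
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

PIB = 2 ** 50                         # one pebibyte in bytes
mem_49 = statevector_bytes(49) / PIB  # Google's 49-qubit target: 8 PiB
mem_50 = statevector_bytes(50) / PIB  # 50 qubits: 16 PiB
```

Each added qubit doubles the requirement, which is why devices in the 49-to-51-qubit range are taken as the point where classical simulation becomes impractical.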

Full Article

Brain Composer: 'Thinking' Melodies Onto a Musical Score
Graz University of Technology (Austria)
Susanne Eigner
September 6, 2017


Researchers at the Graz University of Technology (TU Graz) in Austria have developed a brain-computer interface (BCI) application, based on the established P300 method used for writing, that enables music to be composed by thought and transferred onto a musical score. The system requires a cap that measures brain waves, the adapted BCI, software for composing music, and some musical knowledge. The system presents various options, such as letters or notes, pauses, and chords, which flash one by one in a table. The user focuses on the desired option while it illuminates, causing a small change in brain waves; the BCI recognizes this change and infers the chosen option. The researchers tested the system with volunteers possessing basic musical and compositional knowledge. "After a short training session, all of them could start composing and seeing their melodies on the score and then play them," says TU Graz's Gernot Müller-Putz.
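The selection mechanism can be sketched with synthetic data: epochs time-locked to the attended flashing option contain a small positive deflection about 300 ms after the flash (the P300), and averaging over repetitions lets a simple amplitude score pick it out. All signal shapes, rates, and amplitudes below are invented for illustration; real systems classify multichannel EEG with trained models.

```python
import numpy as np

rng = np.random.default_rng(2)

fs = 100                              # assumed sampling rate, samples/s
n_samples = 60                        # 600 ms epoch after each flash
n_reps = 30                           # flashes per option
p300 = np.zeros(n_samples)
p300[28:36] = 2.0                     # deflection around 280-360 ms
attended = 3                          # the option the user focuses on

def avg_epoch(option):
    """Average the noisy epochs recorded after each flash of `option`.
    Only the attended option's epochs contain the P300 deflection."""
    noise = rng.normal(0, 5, (n_reps, n_samples))
    signal = p300 if option == attended else 0.0
    return (noise + signal).mean(axis=0)

# Score each of 8 flashing options by its mean amplitude in the
# P300 window; the attended option should stand out after averaging.
scores = [avg_epoch(opt)[28:36].mean() for opt in range(8)]
chosen = int(np.argmax(scores))
```

Averaging is the key step: a deflection buried in single-trial noise becomes detectable once the noise is averaged down across repetitions.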

Full Article
Voting-Roll Vulnerability
Harvard Gazette
Peter Reuell
September 6, 2017


It is relatively easy and inexpensive for hackers to purchase sufficient personal information, via both legal and illegal means, to potentially rig online voter registration information in as many as 35 states and Washington, D.C., according to a new study from Harvard University. The study authors say "voter identity theft" could be perpetrated by attackers attempting to disenfranchise voters where registration information can be altered online. The researchers note datasets of voter names and demographic information such as addresses and party affiliations can be bought or downloaded, often from government websites, at reasonable prices. Commercial data brokers on the dark Web also can sell more personal information to hackers at low costs. "If the goal is to undermine any belief in the electoral system, then [attackers] might very well want to target a particular community at large...[because] that could cause a kind of hysteria," warns Harvard professor Latanya Sweeney.

Full Article
Bristol Professor Wants to Take UK Robotics to the Next Level
The Engineer (United Kingdom)
Jon Excell
September 12, 2017


In an interview, University of Bristol professor Chris Melhuish discusses his plan to incubate more advanced U.K. robotics as director of the Bristol Robotics Laboratory (BRL). "We tend to think of [robots] as aluminum and plastic, but they could well be biological," Melhuish says. He notes BRL's commitment to using robotics as enhancements for assisted-living technology, which are undergoing development and testing at the Anchor Robotics Personalized Assisted Living Studio. Soft robotics projects also are being pursued at BRL, with Melhuish saying, "the potential is that you can conflate sensing and actuation--and even carry out processing and communications in the materials themselves." Melhuish also notes much of the lab's research crosses disciplines and themes, with a major concentration on connectivity, emphasizing driverless cars as well as swarming robotics designed to emulate decentralized intelligence. Melhuish cites artificial intelligence to facilitate meaningful robot-human communication as a particularly vital area of cross-disciplinary research.

Full Article
ACM Career & Job Center
 
ACM Discounts
 

Association for Computing Machinery

2 Penn Plaza, Suite 701
New York, NY 10121-0701
1-800-342-6626
(U.S./Canada)



ACM Media Sales

If you are interested in advertising in ACM TechNews or other ACM publications, please contact ACM Media Sales at (212) 626-0686, or visit ACM Media for more information.

To submit feedback about ACM TechNews, contact: [email protected]