Association for Computing Machinery
Welcome to the January 27, 2017 edition of ACM TechNews, providing timely information for IT professionals three times a week.

ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.

HEADLINES AT A GLANCE


China Can Help Drive Global Progress in Quantum Computing
ZDNet (01/26/17) Eileen Yu

Although China's progress in quantum computing is on par with other economic powers pursuing the same vision, local organizations' research expertise lags behind that of their western peers. Andrew Chi-Chih Yao, a professor at Tsinghua University's Institute for Interdisciplinary Information Sciences in China who received the 2000 ACM A.M. Turing Award, says China could potentially take a global lead in innovation, a vital move if the country wants to sustain its economic growth. However, Yao also notes the Chinese market lacks scale and depth in corporate research. He says China's quantum computing projects are still mostly driven by the public sector, possibly because Chinese companies lack the kind of funding their western counterparts have for research and development. In addition, larger U.S. companies have accumulated years of experience in fostering research talent, and they may also have stronger research capabilities than their Chinese counterparts. Consequently, these organizations are better positioned to think ahead, identify technology drivers 20 years out, and commit their research teams to developing capabilities in those fields. Yao says Chinese companies may lack access to the right research talent and be unable to follow through on initiatives for the future. Nevertheless, he says quantum computing will play a vital role in improving lives in the future.


Research Takes a Step Towards Photonic Quantum Network
R&D Magazine (01/25/17) Kenny Walter

Researchers at the Niels Bohr Institute in Denmark are developing advanced photonic nanostructures they say could help revolutionize quantum photonics and quantum networks. Their new photonic chip consists of a small crystal and a light source, called a quantum dot. Illuminating the quantum dot with a laser excites an electron, which then jumps from one orbit to another, emitting a single photon at a time. Photons are typically emitted in both directions in the photonic waveguide, but the custom-made chip breaks the symmetry and enables the quantum dot to emit a photon in a single direction. Unlike electrons, photons interact very little with the environment, meaning photons do not lose much energy during transit. A quantum network based on photons also would be able to encode much more information than is possible with current computer technology, and the information could not be intercepted. "The photons can be sent over long distances via optical fibers, where they whiz through the fibers with very little loss," says Peter Lodahl, head of the Quantum Photonics research group at the Niels Bohr Institute. "You could potentially build a network where the photons connect small quantum systems, which are then linked together into a quantum network--a quantum Internet."


New Project to Boost Sat Nav Positioning Accuracy Anywhere in the World
University of Nottingham (United Kingdom) (01/24/17) Emma Lowry

The University of Nottingham in the U.K. will participate in a four-year project using Global Navigation Satellite Systems (GNSS) to establish an architecture for the world's most accurate real-time positioning service. The TREASURE initiative will integrate different satellite systems into a combined operation to deliver instant, centimeter-level positioning anywhere on Earth. Mitigating atmospheric effects will be a key aspect of the project, which plans to develop new error models, positioning algorithms, and data assimilation methods to monitor, anticipate, and correct not only these effects but also signal degradation caused by man-made interference. Reliable positioning solutions will be produced via signal-processing techniques that enhance the quality of the measurements. The researchers also will devise new multi-GNSS orbit and clock products for use with Galileo, the new European GNSS. The team will concentrate on two existing GNSS methods: Precise Point Positioning (PPP) and Network Real Time Kinematic (NRTK). NRTK employs fixed reference stations running GNSS receivers at carefully surveyed sites to obtain accurate positioning data; transmitting corrections from those sites to users is essential to the method. PPP instead relies on external data in the form of precise satellite clock and orbit products, and one of TREASURE's goals is providing accurate predictions of atmospheric conditions.
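For readers unfamiliar with PPP, its core step is replacing broadcast satellite clock values with precise external products before solving for position. The short Python sketch below illustrates only that step with hypothetical numbers; it is not drawn from TREASURE's models.

C = 299_792_458.0  # speed of light, m/s

def clock_corrected_range(pseudorange_m, satellite_clock_bias_s):
    # Remove the satellite clock error (converted to meters) from a raw pseudorange.
    return pseudorange_m + C * satellite_clock_bias_s

raw = 22_300_000.0              # meters, hypothetical pseudorange measurement
broadcast_bias = 1.20e-4        # seconds, clock bias from the navigation message
precise_bias = 1.20e-4 + 2e-8   # seconds, clock bias from an external PPP product

improvement = clock_corrected_range(raw, precise_bias) - clock_corrected_range(raw, broadcast_bias)
print(f"precise clock product shifts the modeled range by {improvement:.2f} m")  # ~6 m

A 20-nanosecond refinement of the satellite clock already corresponds to roughly six meters of range, which is why precise products matter for centimeter-level positioning.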


Supercool Electrons
Okinawa Institute of Science and Technology (01/24/17) Sarah Wong

The Quantum Dynamics Unit at the Okinawa Institute of Science and Technology Graduate University (OIST) in Japan is researching a potential system for quantum computing that uses electrons floating on liquid helium. "It is intrinsically pure and free of defects, which theoretically allows for the creation of perfectly identical qubits (quantum bits)," says Quantum Dynamics Unit director Denis Konstantinov. "Additionally, we can move electrons in this liquid helium system, which is difficult or nearly impossible in other quantum systems." The researchers built a microscopic channel device containing an electron trap to confine a crystal of a relatively small number of electrons. The crystal is then moved across the liquid helium surface by altering the electrostatic potential of an electrode, with the movement detected by measuring image charges flowing through another electrode. "We saw no difference between the movement of large electron crystals, on the scale of millions to billions of electrons, and crystals as small as a few thousands of electrons, when theoretically, differences should exist," says OIST's Alexander Badrutdinov. Konstantinov says the next challenge is to isolate a smaller electron crystal and then a single electron, creating a system that "has the potential to be a pure, scalable system with mobile qubits."


Artificial Intelligence Uncovers New Insight Into Biophysics of Cancer
Tufts Now (01/27/17) Patrick Collins

Researchers at Tufts University and the University of Maryland, Baltimore County have used a machine-learning artificial intelligence (AI) platform to predict three reagents that produced a previously unseen cancer-like phenotype in tadpoles. They employed a computer model to explain how to achieve partial conversion of melanocytes to a metastatic form using one or more interventions. The model predicted that an exact mixture of altanserin, reserpine, and VP16-XlCreb1 would yield such results. The model predicted the percentage of tadpoles that would retain normal melanocytes to within 1 percent of the in vivo outcomes, and similarly matched the aggregated percentage of tadpoles that exhibited partial or full conversion in vivo. The model ran 576 virtual experiments, all but one of which were unsuccessful. "Such approaches are a key step for regenerative medicine, where a major obstacle is the fact that it is usually very hard to know how to manipulate the complex networks discovered by bioinformatics and wet lab experiments in such a way as to reach a desired therapeutic outcome," says Tufts professor Michael Levin. "For almost any problem where a lot of data are available, we can use this model-discovery platform to find a model and then interrogate it to see what we have to do to achieve result X."
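As a rough illustration of what interrogating a discovered model can look like, the Python toy below exhaustively searches reagent-dose combinations against a stand-in response function; the dose levels, response function, and target value are all hypothetical and do not represent the Tufts/UMBC platform.

from itertools import product

REAGENTS = ["altanserin", "reserpine", "VP16-XlCreb1"]  # the reagents named above
DOSES = [0.0, 0.5, 1.0, 2.0]                            # hypothetical dose levels

def simulated_conversion_fraction(doses):
    # Stand-in for a discovered model: predicted fraction of tadpoles with
    # partial or full melanocyte conversion for a given dose combination.
    a, r, v = doses
    return min(1.0, 0.1 * a + 0.2 * r + 0.3 * v * (a > 0))  # made-up response

target = 0.4  # hypothetical desired conversion fraction
best = min(product(DOSES, repeat=len(REAGENTS)),
           key=lambda d: abs(simulated_conversion_fraction(d) - target))
print(dict(zip(REAGENTS, best)), simulated_conversion_fraction(best))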


Efficient Time Synchronization of Sensor Networks by Means of Time Series Analysis
University of Klagenfurt (Austria) (01/24/17)

Researchers at the University of Klagenfurt in Austria have developed a new synchronization technique aimed at shrinking the energy consumption of sensor nodes caused by their radio activity time. The researchers note one key is ensuring the method is not too greedy in its own consumption of resources, which would cancel out the advantages of the synchronization. The new technique reduces the additional effort of synchronizing the oscillators of the individual sensors. Using time series analysis, the method learns the behavior of the sensor clocks and can anticipate and correct future deviations before asynchronicities begin to develop. "While the idea of learning behaviors to predict future corrections is not new, we have shown that the behavior models extracted from our time series analysis work very well with commonly employed wireless sensor devices," says the University of Klagenfurt's Jorge Schmidt. The researchers tested the new technique both in the lab and outdoors under varying conditions using commercially available sensor devices.
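As a simple illustration of the prediction idea (not the Klagenfurt implementation), the Python sketch below fits a linear drift model to hypothetical clock-offset measurements and applies the predicted correction without waiting for another synchronization message.

import numpy as np

# Hypothetical history: local time in seconds vs. measured offset to a reference clock in ms.
history_t = np.array([0.0, 60.0, 120.0, 180.0, 240.0])
history_offset_ms = np.array([0.0, 1.9, 4.1, 6.0, 8.1])

# Fit a linear drift model: offset ~= drift_rate * t + bias.
drift_rate, bias = np.polyfit(history_t, history_offset_ms, deg=1)

def predicted_offset_ms(t):
    # Predicted clock offset at local time t, to be corrected proactively.
    return drift_rate * t + bias

# Apply the predicted correction at t = 300 s without an extra radio exchange.
print(f"apply a correction of {predicted_offset_ms(300.0):.2f} ms at t = 300 s")

Because the radio can stay off while the predictions hold, a node avoids the energy that frequent resynchronization messages would otherwise cost.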


Better Wisdom From Crowds
MIT News (01/25/17) Peter Dizikes

Researchers at the Massachusetts Institute of Technology's (MIT) Sloan Neuroeconomics Lab and Princeton University say they have developed a "surprisingly popular" algorithm for better extracting correct answers from large crowds. The method involves asking people what they think the correct answer is, and then what they think popular opinion will be; the gap between the two aggregate responses signals the right answer. Using surveys covering a wide range of topics, the researchers found the algorithm lowered errors by 21.3 percent versus simple majority votes, and by 24.2 percent compared to basic confidence-weighted votes. The "surprisingly popular" principle treats the knowledge of a well-informed subset of people within the larger crowd as a diagnostically powerful signal of the right answer. These more heavily weighted crowd members with specialized knowledge, including both correct information and a correct sense of public perception, make a major difference, says MIT's John McCoy. The researchers note the "surprisingly popular" principle's power resides in its lack of reliance on an absolute majority of expert opinion. "The argument...in a very rough sense, is that people who expect to be in the minority deserve some extra attention," says MIT professor Drazen Prelec.
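A minimal Python sketch of the "surprisingly popular" rule as described above, on a made-up survey: the selected answer is the one whose actual vote share most exceeds the vote share the crowd predicted for it.

from collections import Counter

def surprisingly_popular(votes, predictions):
    # votes: each respondent's own answer.
    # predictions: per respondent, a dict mapping answer -> predicted share of votes.
    n = len(votes)
    actual = {a: c / n for a, c in Counter(votes).items()}
    answers = set(actual) | {a for p in predictions for a in p}
    predicted = {a: sum(p.get(a, 0.0) for p in predictions) / len(predictions)
                 for a in answers}
    # The correct answer tends to be more popular than people expect it to be.
    return max(actual, key=lambda a: actual[a] - predicted.get(a, 0.0))

# Toy example: the majority answers "yes," but nearly everyone expects "yes" to win,
# so "yes" is less popular than predicted while "no" is surprisingly popular.
votes = ["yes"] * 60 + ["no"] * 40
predictions = [{"yes": 0.8, "no": 0.2}] * 100
print(surprisingly_popular(votes, predictions))  # -> "no"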


D-Wave Upgrade: How Scientists Are Using the World's Most Controversial Quantum Computer
Nature (01/24/17) Elizabeth Gibney

Many scientists are eager to use D-Wave quantum computers, despite skepticism about their ultimate potential. Researchers have demonstrated that, for an optimization operation designed to suit the machine's abilities, the D-Wave processes it vastly faster than a classical version of the algorithm. However, the machines do not beat every classical algorithm, and no one has found a problem for which they overtake all classical rivals. D-Wave's biggest machine thus far, the 2000Q, incorporates improvements over earlier iterations derived mainly from researcher feedback. D-Wave says its next upgrade should mollify critics by further boosting connectivity and capacity, with another expected doubling of quantum bits (qubits) as well as more sophisticated connections between them. Although D-Wave's qubits are easier to construct than those of more conventional quantum systems, their quantum states are more fragile and their manipulation less refined. Nevertheless, about 100 scientists attended D-Wave's first users' conference last September in Santa Fe, NM. "Quantum computing is a new tool," says Mark Novotny at Charles University in the Czech Republic. "So, part of what we're doing is just trying to figure out how we can use it."
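For context, the optimization problems suited to quantum annealers such as D-Wave's are commonly cast as quadratic unconstrained binary optimization (QUBO): minimize x^T Q x over binary vectors x. The brute-force Python toy below only illustrates that problem class with made-up coefficients; it does not use D-Wave hardware or its software stack.

from itertools import product
import numpy as np

Q = np.array([[-1.0, 2.0, 0.0],
              [ 0.0, -1.0, 2.0],
              [ 0.0,  0.0, -1.0]])  # hypothetical QUBO coefficients

# Enumerate all binary vectors of length 3 and keep the lowest-energy one.
best_x, best_energy = min(
    ((np.array(x), np.array(x) @ Q @ np.array(x)) for x in product((0, 1), repeat=3)),
    key=lambda pair: pair[1])
print(best_x, best_energy)  # an annealer searches for such minima on much larger problems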


A Computer Program That Learns How to Save Fuel
Economist (01/28/17)

Artificial intelligence (AI) may one day be able to help cars use energy more efficiently, according to researchers at the University of California, Riverside. Cars have computerized engine-management systems that respond to changes in driving conditions, but the introduction of hybrid vehicles has complicated energy management. Because hybrid cars use both a gas engine and an electric motor that is recharged by capturing kinetic energy, these vehicles need more sophisticated energy management than a traditional gas engine. To address this problem, the researchers are developing an energy-management system that relies on AI to learn from past experiences and account for expected energy consumption. The algorithm divides the trip into small segments as the journey progresses. During each segment, the system determines whether the vehicle has encountered the same driving conditions before, drawing on traffic information, the vehicle's speed and location, time of day, and other factors. In simulations using live traffic information from southern California, the researchers compared their algorithm with a basic energy-management system for plug-in hybrids that switches to combustion power when the battery is depleted. The new system was 10.7 percent more efficient than the conventional method. The next step for the team will be to work with carmakers to test the algorithm on public roadways.
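As a hedged sketch of the segment-matching idea (not the UC Riverside system), the Python toy below reuses the battery/engine power split that worked for the most similar previously seen segment; the experience table and power-split values are hypothetical.

past_segments = [  # hypothetical experience: average speed, traffic level, battery fraction used
    {"speed": 30, "traffic": "heavy", "battery_fraction": 0.9},
    {"speed": 70, "traffic": "light", "battery_fraction": 0.4},
    {"speed": 100, "traffic": "light", "battery_fraction": 0.2},
]

def plan_segment(speed_kmh, traffic):
    # Pick the power split from the most similar past segment (same traffic
    # level, nearest speed); fall back to an even split for unseen conditions.
    candidates = [s for s in past_segments if s["traffic"] == traffic]
    if not candidates:
        return 0.5
    best = min(candidates, key=lambda s: abs(s["speed"] - speed_kmh))
    return best["battery_fraction"]

# Upcoming segment: 65 km/h in light traffic -> reuse the 70 km/h experience.
print(plan_segment(65, "light"))  # 0.4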


Machine-Learning Boffins 'Summon Demons' in AI to Find Exploitable Bugs
The Register (UK) (01/24/17) Katyanna Quach

Researchers at the University of Maryland (UMD) are identifying and studying exploitable bugs in machine-learning implementations. Machine-learning programs contain bugs that attackers could exploit by corrupting the program's input to manipulate its output. For example, attackers could alter cost-function algorithms to change the price of insurance bonuses, and criminals who know a program's source code could evade facial recognition on security cameras. To comb through machine-learning programs for bugs, the UMD researchers used a semi-automated technique known as steered fuzzing, which mutates program inputs. They applied American Fuzzy Lop, a widely used fuzzing tool, to generate mutated inputs that crash the system, enabling the researchers to find where the system could be manipulated. Within three programs, the researchers found seven exploitable bugs, three of which have been filed as new entries in the Common Vulnerabilities and Exposures dictionary. "Once you start looking for bugs, you find more and more of them," says UMD professor Tudor Dumitras. "And we didn't have to look very hard. The community [of software developers] isn't really aware of this problem--it doesn't seem to be a high priority."
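The general idea of mutation-based fuzzing can be shown in a few lines; the toy Python loop below is purely conceptual (the UMD work used the real AFL tool on compiled machine-learning code), and the target function and its bug are made up.

import random

def target(data: bytes):
    # Stand-in for a parser inside an ML library; the 'bug' is a crash on certain headers.
    if len(data) > 4 and data[0] > 0x7F:
        raise ValueError("malformed header")
    return sum(data)

def mutate(seed: bytes) -> bytes:
    out = bytearray(seed)
    for _ in range(random.randint(1, 4)):   # flip a few random bytes
        out[random.randrange(len(out))] = random.randrange(256)
    return bytes(out)

random.seed(0)
seed = b"\x00" * 16
crashes = []
for _ in range(1000):
    candidate = mutate(seed)
    try:
        target(candidate)
    except Exception as exc:                # a crash: record the offending input
        crashes.append((candidate, exc))
print(f"found {len(crashes)} crashing inputs")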


Deep Learning Algorithm Does as Well as Dermatologists in Identifying Skin Cancer
Stanford News (01/25/17) Taylor Kubota

Researchers at Stanford University say they have created a deep-learning algorithm that can diagnose skin cancer as accurately as board-certified dermatologists. To teach their algorithm to identify skin cancer, the Stanford team started with an existing deep-learning algorithm built by Google for image classification. The program was fed 130,000 images of skin lesions representing more than 2,000 different diseases, and the researchers then asked 21 dermatologists whether, based on each image, they would recommend the patient proceed with biopsy and treatment. Success was evaluated by how well the dermatologists were able to correctly diagnose cancerous and non-cancerous lesions in more than 370 images. The algorithm also was assessed on its ability to classify benign lesions, melanoma, and melanoma viewed using dermoscopy (skin surface microscopy). The algorithm's performance was measured by constructing a sensitivity-specificity curve. In all three tasks, the algorithm matched the dermatologists' performance. The researchers are encouraged by the algorithm's success and would like to make it smartphone-compatible, enabling people to potentially catch melanoma in its earliest stages without needing to see a specialist.
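The sensitivity-specificity curve mentioned above can be computed from predicted malignancy scores with standard tooling; the Python sketch below uses scikit-learn on toy labels and scores, not the Stanford model or its data.

from sklearn.metrics import roc_curve, auc

y_true = [0, 0, 1, 1, 0, 1, 0, 1]                      # toy labels, 1 = malignant
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.3, 0.7]    # toy predicted malignancy probabilities

fpr, tpr, thresholds = roc_curve(y_true, y_score)
sensitivity, specificity = tpr, 1 - fpr                # sensitivity = TPR, specificity = 1 - FPR
print(f"area under the curve = {auc(fpr, tpr):.2f}")
for s, sp, t in zip(sensitivity, specificity, thresholds):
    print(f"threshold {t:.2f}: sensitivity {s:.2f}, specificity {sp:.2f}")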


Making Distributed Storage Highly Consistent
Phys.org (01/25/17)

Ensuring the availability, reliability, and consistency of distributed digitally stored data is a key goal of researchers at the IMDEA Networks research institute in Spain. Because data is disseminated across multiple hosts, a major problem in distributed storage is maintaining consistency when data is accessed simultaneously by multiple operations, and doing so imposes communication and computation costs. The algorithms developed by the ATOMICDFS project at IMDEA offer a way to minimize that cost, demonstrating the practicality of consistent storage systems and proposing solutions that enable manipulation of large shared objects. A central notion of the project is "coverability," which defines the precise characteristics that files and similar version-dependent objects must have in a concurrent setting. To speed up operations on the storage, the IMDEA team focused on improving both the communication and the computation costs imposed by each operation. The new algorithms match the optimal communication performance while reducing the computation cost exponentially. In addition, the researchers cut message costs by introducing two file-manipulation methods: a simple division of the file into data blocks, and a log of file operations.
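A small Python sketch of the two file-manipulation ideas mentioned above, block-level updates and an operation log; the block size, file contents, and log format are hypothetical, and this is not the ATOMICDFS algorithms themselves.

BLOCK_SIZE = 4  # bytes; unrealistically small, for illustration only

def split_blocks(data: bytes):
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old: bytes, new: bytes):
    # Indices of blocks that differ and therefore must be sent to replicas.
    old_b, new_b = split_blocks(old), split_blocks(new)
    return [i for i in range(len(new_b)) if i >= len(old_b) or old_b[i] != new_b[i]]

op_log = []  # each entry: (version, block_index, new_block_bytes)
old, new = b"hello world!", b"hello walrus"
for idx in changed_blocks(old, new):
    op_log.append((1, idx, split_blocks(new)[idx]))
print(op_log)  # only the differing blocks are logged and sent, not the whole file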


Scaling HPC at the Julich Supercomputing Center
Inside HPC (01/25/17) Rich Brueckner

In an interview, Norbert Attig and Thomas Eickermann discuss the Julich Supercomputing Center (JSC) in Germany, which is part of the Gauss Center for Supercomputing. They note JSC focuses on the challenge of leveraging highly scalable systems for a variety of research communities, and unlike the other two Gauss centers, JSC has no formal connections to a university and is devoted entirely to supercomputing. Attig and Eickermann say JSC's support environment also is unique among its peers. Researchers at the center found that allocating a single adviser to a project was not ideal when dealing with scalability, community codes, and other elements. The center's Simlabs are units of high-performance computing experts who each have knowledge of one specific discipline. By working in one particular field and collaborating with other high-performance computing experts, they are able to support their specific community while conducting their own research. JSC started with investigations into plasma physics and has expanded to include Simlabs in fluid and solid engineering, biology, molecular systems, and quantum chromodynamics. The center has been using IBM Blue Gene technology since 2005. Although Blue Gene's processors were slower, requiring more of them to be installed on a single rack, users adapted by rewriting their code so it could function with less memory per application.


Abstract News © Copyright 2017 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]