Association for Computing Machinery
Welcome to the September 30, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.


Quantum Computing Breakthrough: Israeli Scientists Invent Cannon for Entangled Photon Clusters
International Business Times (09/28/16) Mary-Ann Russon

Scientists from the Technion-Israel Institute of Technology have created a device that can produce large clusters of entangled photons on demand and repeatedly generate the same result indefinitely. Terry Rudolph, a professor at the U.K.'s Imperial College London, and Technion professor Netanel Lindner first proposed the concept in 2009: a kind of cannon or machine gun for the on-demand production of entangled photons. In 2014, researchers from Germany's University of Stuttgart, using a "quantum dot," showed it was possible to obtain entangled pairs on demand. The Technion team built on this work to show that an entire cluster of entangled photons can be created on demand using the quantum dot, while guaranteeing each cluster will always contain a predicted number of entangled photon pairs. "This has never been demonstrated before, being able to achieve an infinite amount of entangled photons," says Technion professor David Gershoni. He notes using particles of light is a better way to build a quantum computer than using ions, atoms, or superconducting circuits that act as quantum bits.

Building a Sustainable, Intelligent, and Power-Efficient Cloud
CORDIS News (09/28/16)

The European Union-funded CLOUDLIGHTNING project seeks to integrate high-performance computing (HPC) with heterogeneous hardware and servers to construct a more efficient, flexible, and user-centric cloud. CLOUDLIGHTNING estimates it can boost cloud resource utilization from 20 percent to 80 percent, and make the cloud a viable tool for HPC by overcoming an inability to configure available resources for the novel needs of the technical and scientific community. A recent CLOUDLIGHTNING report noted a general lack of trust in the cloud by HPC users due to this inflexibility, with particular concerns about data management and a dearth of cloud infrastructure. The project plans to address these and other obstacles by developing a power-efficient infrastructure to streamline access to cloud resources. CLOUDLIGHTNING's proposal involves creating a heterogeneous cloud system merging HPC power with the energy-efficient use of different types of hardware and servers that can interoperate. The researchers note CLOUDLIGHTNING entails a unique cloud management and delivery architecture based on the precepts of self-organization and self-management, which shifts deployment and optimization from the consumer to the software stack operating on the cloud infrastructure. The next phase of the project will be the rollout of a testbed to run the CLOUDLIGHTNING software stack.

First Quantum Photonic Circuit With an Electrically Driven Light Source
Karlsruhe Institute of Technology (09/27/16) Monika Landgraf

Researchers from the University of Münster and the Karlsruhe Institute of Technology (KIT) in Germany have installed a complete quantum optical structure on a chip. The team achieved the breakthrough using special carbon nanotubes, which emit single photons when excited by laser light and which the researchers note would be an attractive ultracompact light source for optical quantum computers. The researchers used carbon nanotubes as single-photon sources, superconducting nanowires as receivers, and nanophotonic waveguides, then connected one single-photon source and two detectors with one waveguide. The team cooled the structure with liquid helium to enable single light quanta to be counted. "As we were able to show that single photons can be emitted also by electric excitation of the carbon nanotubes, we have overcome a limiting factor so far preventing potential applicability," says KIT professor Ralph Krupke.

VTT's Encryption Method Takes Authentication to a New Level--Improves Privacy Protection
VTT Technical Research Center (09/29/16) Kimmo Halunen

Researchers at the VTT Technical Research Center in Finland say they have developed new encryption methods that will improve consumer privacy protection and enable safer, more reliable, and easier-to-use authentication tools than current systems allow. The researchers say the new method combines safety, usability, and privacy protection, which previous methods had difficulty implementing at the same time. "Our method protects, for example, the user's biometric data or typing style," says VTT senior scientist Kimmo Halunen. He notes VTT's method stores data in a database in encrypted form, and all comparisons between measurement results and the database are conducted using encrypted messages, which means the biometric data never needs to be decrypted at this stage. The researchers integrated new kinds of encryption methods, such as homomorphic cryptography and secure exchange of cryptographic keys, with known methods of measuring typing styles. They note existing authentication methods that rely on passwords create challenges for new types of user environments, such as smart devices, cars, and home appliances. The VTT researchers say the new methods could be available to consumers within two years.
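
The homomorphic property the article alludes to, computing on data without decrypting it, can be illustrated with a toy additively homomorphic Paillier scheme. The tiny primes below are for demonstration only, and this is a generic textbook construction, not VTT's actual method:

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic): multiplying two
# ciphertexts yields a ciphertext of the SUM of the plaintexts, so a server
# can combine encrypted measurements without ever seeing them in the clear.
# The primes here are illustrative; real deployments use ~1024-bit primes.

p, q = 61, 53
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)    # Carmichael function of n
g = n + 1                       # standard choice of generator
mu = pow(lam, -1, n)            # precomputed decryption constant

def encrypt(m):
    r = random.randrange(1, n)  # fresh randomness per encryption
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    x = pow(c, lam, n2)
    return ((x - 1) // n) * mu % n   # L(x) = (x - 1) / n, then scale by mu

a, b = 42, 99
c_sum = (encrypt(a) * encrypt(b)) % n2  # multiply ciphertexts ...
print(decrypt(c_sum))                    # ... to add plaintexts: 141
```

Because each encryption uses fresh randomness, two encryptions of the same value look unrelated, yet the homomorphic sum still decrypts correctly.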

Amazon, Google, Facebook, IBM, and Microsoft Form AI Nonprofit
ZDNet (09/29/16) Tas Bindi

Amazon, Facebook, Google, IBM, and Microsoft have announced the formation of a nonprofit to boost public understanding of artificial intelligence (AI), highlight areas in which AI can be used to solve real-world problems, and develop best practices. Members of the Partnership on Artificial Intelligence to Benefit People and Society (Partnership on AI) will perform and publish research under an open source license in areas such as ethics, fairness, and inclusivity, as well as transparency, privacy, interoperability, trust, reliability, and technological robustness. "As researchers in industry, we take very seriously the trust people have in us to ensure advances are made with utmost consideration for human values," says Facebook AI research director Yann LeCun. Meanwhile, Amazon's Ralf Herbrich says AI and machine learning have entered a "golden age," and the new nonprofit ensures inclusion of "the best and the brightest in this space in the conversation to improve customer trust and benefit society." The partnership dovetails with efforts by each member company to advance AI, such as Google DeepMind's WaveNet neural network experiments to make machines talk like humans, and Facebook research to augment user experience with AI. The Partnership on AI promises to equally represent corporate and non-corporate members on its board.

IARPA to Develop Early-Warning System for Cyberattacks
The Wall Street Journal (09/28/16) Steve Rosenbush

The U.S. Intelligence Advanced Research Projects Agency (IARPA) has launched Cyberattack Automated Unconventional Sensor Environment (CAUSE), a multi-year research and development initiative to create new technologies that could provide an early warning system for cyberattacks. IARPA notes CAUSE could help organizations move past the reactive approach to defending against and responding to cyberattacks. The agency says the three-and-a-half year program will develop software to sense unconventional indicators of a cyberattack, and will use the data to develop models and machine-learning systems that can create probabilistic warnings. CAUSE includes four main research partners--BAE Systems, Charles River Analytics, Leidos, and the University of Southern California. Each partner has a novel approach to addressing the challenge, says IARPA program manager Robert Rahmer. He notes the researchers apply human behavioral, cyberattack, and social theories to publicly available information with the goal of developing unconventional sensors of activities that indicate the early stages of an attack. "Signals of interest are derived from examining emotional language and sentiment-related characteristics, analyzing topics of discussion, and looking at technical communications," BAE says. "This differs from traditional cyberattack detection, which utilizes conventional sensors running with private data where the focus is on the detection of an ongoing event, rather than prediction."

How Data Can Help Change the World
MIT News (09/26/16) Abby Abazorius

The vast corpus of data generated across society on a daily basis was promoted as a major transformative force by speakers at last week's Institute for Data, Systems, and Society (IDSS) celebration. Massachusetts Institute of Technology (MIT) professor Sarah Williams noted, "big data will not change the world unless it's collected and synthesized into tools that have a public benefit." IDSS director Munther Dahleh said institute researchers are concentrating on meeting challenges with "an analytical, data-driven approach," and are developing models to extract insights, policies, and decisions. The conference featured a panel discussion on how data is influencing voting and elections, with MIT professor Charles Stewart saying those who control such data are playing an increasingly vital role. Meanwhile, MIT professor Alberto Abadie and University of Rome Tor Vergata professor Enrico Giovannini discussed the use of data to drive policy, with the latter urging attendees to wield data as a policy influencer to better people's well-being and push for sustainable development. Giovannini also warned against relying on data excessively. Greater transparency in data use for urban planning also was urged, with Williams moderating a panel that discussed applying data to relieve problems such as urban congestion and noise.

Deep Learning Boosts Google Translate Tool
Nature (09/27/16) Davide Castelvecchi

Google on Tuesday announced its Google Translate service will start using a new deep learning-based algorithm that can cut errors by approximately 60 percent. A Chinese-to-English service that employs the algorithm is currently in use in Google Translate mobile and Web-based applications, and Google says it will add other languages in the coming months. Google researcher Quoc Le says the new Neural Machine Translation System (NMTS) facilitates language translation by a single neural network, from input to output. The algorithm learns by analyzing existing translations and then modifying the links between artificial neurons to improve its performance, and it also parses sentences into "word segments." "Somehow, in some representation inside the neural network, the segments can combine to represent meaning," Le notes. The same text-analyzing neural network then generates a translation. Google runs NMTS on chips geared specifically for machine learning. Comparisons of translations of sentences from Wikipedia and news articles by Google Translate's previous algorithms with NMTS-made translations showed that in certain language pairs, NMTS' accuracy approached that of human translators. NMTS demonstrates that "with solid engineering and a well-designed architecture, neural-machine translation can far outpace classical methods for machine translation," says University of Montreal computer scientist Yoshua Bengio.
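
The "word segments" mentioned above refer to subword units: rare words are split into pieces from a fixed vocabulary so the network can handle open-vocabulary input. A minimal sketch of greedy longest-match segmentation follows; the vocabulary and the "##" continuation convention here are illustrative, not Google's actual wordpiece model:

```python
# Greedy longest-match subword segmentation (illustrative vocabulary).
# Pieces that continue a word are prefixed with "##".
VOCAB = {"trans", "##lat", "##ion", "##s", "un", "##break", "##able"}

def segment(word):
    """Split a word into the longest vocabulary pieces, left to right."""
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start:
            piece = word[start:end] if start == 0 else "##" + word[start:end]
            if piece in VOCAB:
                pieces.append(piece)
                start = end
                break
            end -= 1           # shrink the candidate until it matches
        else:
            return ["[UNK]"]   # no piece matched: unknown token
    return pieces

print(segment("translations"))  # ['trans', '##lat', '##ion', '##s']
print(segment("unbreakable"))   # ['un', '##break', '##able']
```

Because every segment comes from a small fixed vocabulary, the network never faces a word it has literally never seen, only new combinations of known pieces.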

Gov. Brown Signs Law to Plan Expansion of Computer Science Education
EdSource (09/27/16) Pat Maio

California Gov. Jerry Brown on Tuesday signed a bill into law that initiates a three-year planning process to expand computer science (CS) education for all grades in the state's public schools, beginning in kindergarten. By September 2017, state superintendent of public instruction Tom Torlakson will be required to create a 23-person advisory panel to develop a long-term plan to make CS education a top priority. The state's Instructional Quality Commission will decide by July 2019 whether to develop and recommend to the State Board of Education CS content standards for kindergarten through 12th grade. A spokesperson for the state says the law permits funding from private or public partnerships if state or federal funding is not available. Several school districts have introduced a CS curriculum without waiting for guidance from the state government. "California currently has tens of thousands of open computing jobs where salaries are significantly higher than the state average, but our education system is not aligned to meet this workforce need and economic opportunity," says Lt. Gov. Gavin Newsom. He notes only 25 percent of California's high schools offer any CS courses, and only 15 percent of the 3,525 high school graduates in 2014 who studied CS were female.

Removing Gender Bias From Algorithms
The Conversation (09/26/16) James Zou

Machine-learning systems begin as blank slates, which makes them good at learning interesting patterns from the data and documents fed into them but also carries the danger of incorporating gender stereotypes, according to Stanford University professor James Zou. He details how his group used a common type of machine-learning algorithm to produce "word embeddings." "Each English word is embedded, or assigned, to a point in space," Zou says. "Words that are semantically related are assigned to points that are close together in space. This type of embedding makes it easy for computer programs to quickly and efficiently identify word relationships." Zou says the algorithm bases its decisions on which words frequently appear in close proximity to each other, and if the source data reflects gender bias, the algorithm learns those biases as well. However, Zou's team has developed a debiasing system that uses people to identify examples of appropriate and inappropriate connections, and applies those distinctions to measure the degree to which gender is a factor in word choices. The algorithm is then instructed to eliminate the gender factor from the connections in the embedding so it no longer exhibits blatant gender stereotypes.

Hacking, Cryptography, and the Countdown to Quantum Computing
The New Yorker (09/26/16) Alex Hutchinson

As scientists get closer to creating the world's first fully functioning quantum computer, experts worldwide are deeply concerned about developing "quantum-safe cryptography." The unique property of superposition inherent in quantum computers means they can factor very large numbers very quickly, easily breaking even the most advanced forms of encryption currently in use. At a recent workshop in Toronto, professor Michele Mosca from the University of Waterloo's Institute for Quantum Computing speculated that day could arrive within 10 to 15 years, while the creation and deployment of quantum-proof security measures could take just as long. Microsoft Research's Brian LaMacchia expects a working quantum system by 2030. Unlike the Y2K bug, the "years to quantum" problem has failed to generate a commensurate degree of public interest or concern. This is partly because of uncertainty about the best methods for preventing a quantum hack. Mathematical and quantum-mechanical approaches to solving the problem are making progress, but most attendees at the Toronto workshop were interested in quantum-proof algorithms. Vadim Makarov, director of the University of Waterloo's Quantum Hacking Lab, stressed quantum systems cannot be tested against hackers until they are implemented in the real world.

New Take on an Ancient Method Improves Way to Find Prime Numbers
Scientific American (09/24/16) Matias Loewy

Mathematician Harald Helfgott at Germany's University of Göttingen has proposed improving an ancient method for finding prime numbers, reducing both the physical space it requires in computer memory and the execution time of programs designed to make that calculation. Helfgott's proposal enhances the sieve of Eratosthenes, whose memory use "grows in proportion to the size of the interval considered," notes Cornell University professor Jean Carlos Cortissoz Iriarte. "But if you look at what needs to be kept in memory for each step of the algorithm performed for large intervals [numbers], then the sieve stops being efficient." Helfgott was inspired by combined approaches to the century-old circle method to modify the sieve to function with less physical memory space. In mathematical terms, this means memory proportional to the cube root of "N" is now sufficient, instead of memory proportional to "N" itself. "To calculate all primes up to a trillion, the modified version of the sieve requires a few million bits instead of a billion bits," Helfgott says. He acknowledges this space reduction implies "minor" sacrifices in the algorithm's theoretical speed, but thinks in certain ranges the loss could be offset or outweighed by the ability to work chiefly or solely in cache memory.
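
For reference, here is the memory trade-off in code. A standard segmented sieve (the classical space reduction, already down to roughly the square root of N; Helfgott's variant goes further, to roughly the cube root) keeps only the small primes plus one fixed-size window in memory, instead of a flag for every number up to N:

```python
from math import isqrt

def small_primes(limit):
    """Plain sieve of Eratosthenes: one flag per number up to `limit`."""
    flags = [True] * (limit + 1)
    flags[0:2] = [False, False]
    for p in range(2, isqrt(limit) + 1):
        if flags[p]:
            flags[p * p :: p] = [False] * len(flags[p * p :: p])
    return [i for i, f in enumerate(flags) if f]

def primes_up_to(n, segment_size=1 << 16):
    """Segmented sieve: memory is O(sqrt(n) + segment_size), not O(n)."""
    base = small_primes(isqrt(n))           # primes up to sqrt(n)
    primes = list(base)
    lo = isqrt(n) + 1
    while lo <= n:
        hi = min(lo + segment_size - 1, n)
        seg = [True] * (hi - lo + 1)        # one small window at a time
        for p in base:
            # first multiple of p in [lo, hi], at least p*p
            start = max(p * p, ((lo + p - 1) // p) * p)
            for m in range(start, hi + 1, p):
                seg[m - lo] = False
        primes.extend(lo + i for i, f in enumerate(seg) if f)
        lo = hi + 1
    return primes

print(len(primes_up_to(1_000_000)))  # 78498 primes below one million
```

Only the base primes and the current window are ever resident, which is why segmented sieves can run largely in cache; Helfgott's modification pushes the same idea further still.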

Identifying Children and Saving Lives One Thumbprint at a Time
MSUToday (09/21/16) Jessi Adler

Michigan State University (MSU) researchers have demonstrated that digital scans of a young child's fingerprint can be correctly recognized one year later. The researchers showed they can correctly identify children six months old more than 99 percent of the time based on their two thumbprints. The same child then could be identified at each medical visit by a fingerprint scan, enabling the child to get proper medical care. "Whether in a developing nation, refugee camp, homeless shelter or, heaven forbid, a kidnapping situation, a child's identity could be verified if they had their fingerprint scanned at birth and included in a registry," says MSU professor Anil Jain. A child's fingerprint also can be used for national identification, lifetime identities, and improving nutrition. The study was based on the fingerprints of 309 children, ranging from newborns to five-year-olds, collected over the course of one year. The fingerprint data was processed to show state-of-the-art fingerprint capture and recognition technology offers a viable solution for recognizing children enrolled at age six months or older. "Given these encouraging results, we plan to continue the longitudinal study by capturing fingerprints of the same subjects annually for four more years," Jain says.

Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.

To submit feedback about ACM TechNews, contact: [email protected]