Association for Computing Machinery
Welcome to the November 28, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, iPhones, and iPads.

HEADLINES AT A GLANCE


Japan Aims for Superefficient Supercomputer by 2017
IDG News Service (11/25/16) Peter Sayer

Japan's National Institute of Advanced Industrial Science and Technology (AIST) plans to build a super-efficient supercomputer that could achieve the top ranking in the Top500 supercomputer list by the end of next year. The AI Bridging Cloud Infrastructure is intended for use by startups, existing industrial supercomputing users, and academia. The planned supercomputer would have a processing capacity of 130 petaflops and outperform the current world leader, China's Sunway TaihuLight, which delivers 93 petaflops. AIST also wants to make its new supercomputer one of the most efficient in the world, aiming for a power consumption of less than 3 megawatts. Japan's most powerful supercomputer, Oakforest-PACS, currently delivers 13.6 petaflops for the same amount of power. AIST wants its new system to have a power usage effectiveness of less than 1.1, a value attained only by the world's most efficient data centers. The AIST researchers plan to use liquid cooling to help meet their goals for the new system. Other countries have optimized their top supercomputers for calculations such as atmospheric modeling or nuclear weapon simulations, but AIST is focusing on machine-learning and deep-learning applications in artificial intelligence.
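Power usage effectiveness (PUE) is the ratio of a facility's total power draw to the power consumed by the computing equipment alone, so a PUE below 1.1 means less than 10 percent of the power goes to cooling and other overhead. A minimal illustration in Python, with the overhead figure assumed purely for the sake of the example:

    # Illustrative PUE calculation; the 0.3 MW overhead figure is hypothetical.
    it_power_mw = 3.0           # IT equipment draw, matching the article's ~3-megawatt target
    overhead_mw = 0.3           # assumed cooling and power-delivery overhead
    pue = (it_power_mw + overhead_mw) / it_power_mw
    print(f"PUE = {pue:.2f}")   # 1.10 -- right at the edge of AIST's sub-1.1 goal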


How to Protect Your Laptop--Even When It's Asleep
Concordia University (11/23/16) Clea Desjardins

Researchers at Canada's Concordia University have developed Hypnoguard, software that safeguards data even when a computer is in sleep mode. Hypnoguard encrypts the computer's random-access memory (RAM) before it enters sleep mode, and decrypts the data upon waking only after a hardware-backed user re-authentication step that cannot be circumvented. "The entire process is transparent to the user, who simply enters a regular 'unlock' password when the computer wakes up," and there is almost no impact on usability, says Concordia professor Mohammad Mannan. The researchers developed the system by integrating password-based authentication with widely available hardware security features in modern consumer-grade computers. They say the general public as well as corporate and governmental users should soon be able to deploy Hypnoguard to protect critical data. "If Hypnoguard is combined with Gracewipe [another proposed safeguard from Mannan and postdoctoral researcher Lianying Zhao], both RAM and disk data will be safe against password guessing and coercion attacks," the researchers say. The team presented its research last month at the ACM Conference on Computer and Communications Security (CCS 2016) in Vienna, Austria.
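The flow described above can be sketched as a pair of sleep/wake hooks: encrypt memory with a fresh key before suspend, protect the key behind re-authentication, and restore memory only if the user proves their identity on wake. The Python sketch below is a conceptual illustration only, not Hypnoguard's actual implementation; the function names are invented, and the use of the cryptography library's AES-GCM primitive (and of a TPM for key sealing, mentioned in the comment) are assumptions for the example.

    # Conceptual sketch of the sleep/wake flow described above -- not the real Hypnoguard code.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def on_sleep(ram_contents: bytes):
        """Encrypt memory with a fresh key before the machine suspends."""
        key = AESGCM.generate_key(bit_length=256)
        nonce = os.urandom(12)
        ciphertext = AESGCM(key).encrypt(nonce, ram_contents, None)
        # In the real system the key would be sealed to hardware (e.g., a TPM)
        # and released only after successful user re-authentication.
        return key, nonce, ciphertext

    def on_wake(key: bytes, nonce: bytes, ciphertext: bytes, password_ok: bool) -> bytes:
        """Decrypt memory only if the user re-authenticates on wake."""
        if not password_ok:
            raise PermissionError("re-authentication failed; RAM stays encrypted")
        return AESGCM(key).decrypt(nonce, ciphertext, None)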


U.K. Scientists Develop Technique to Greatly Simplify Trapped Ions
International Business Times (11/25/16) Mary-Ann Russon

Researchers from the U.K.'s University of Sussex have developed a new technique that makes it easier to build large-scale trapped-ion quantum computers, a major step toward making quantum computers a reality in the near future. Current methods for building a quantum computer from trapped ions use laser beams to implement quantum gates. Although that strategy works for a small quantum computer consisting of only a few quantum bits (qubits), quantum computers capable of tackling very large computations would need billions of qubits, which would require billions of laser beams, all accurately aligned, to implement the requisite quantum gates. The Sussex researchers say they developed a much simpler method in which voltages applied to a microchip implement the quantum gates. "We use microwave radiation, bathe the entire quantum computer in microwaves, then we have local magnetic field gradients within the actual processing zones, and by applying a voltage, we shift the position of the ion so it either interacts with the global microwaves or not," says Sussex professor Winfried Hensinger.


Universities' AI Talent Poached by Tech Giants
The Wall Street Journal (11/24/16) Daniela Hernandez; Rachael King; Deepa Seetharaman

Leading technology companies are luring artificial intelligence (AI) scientists from academia, which could cause a shortage of educators to train next-generation researchers to tackle pressing challenges. "I am concerned that it will slow down our [rate of] discovery in the university and research labs because some of the best and brightest won't be here," says the National Center for Atmospheric Research's Sue Haupt. The U.S. National Science Foundation estimates 57 percent of newly graduated U.S. computer science postdoctoral researchers are taking industry jobs, up from 38 percent 10 years ago. "People are starting to question whether we are, in some sense, jeopardizing our ability to meet industry demand in the future," says Mark Riedl, director of the Georgia Institute of Technology's Entertainment Intelligence Lab. The University of Montreal's Yoshua Bengio says the field of deep learning has been significantly hit by industry defections, while Carnegie Mellon University's Andrew Moore notes AI students are "worth somewhere between $5 million and $10 million to a company's bottom line." Tech companies are enticing students with benefits such as steady funding, massive datasets, and higher salaries. AI experts at certain tech firms are trying to cushion academic attrition by financing university departments and training students.


When AI Matures, It May Call Jurgen Schmidhuber 'Dad'
The New York Times (11/27/16) John Markoff

Jurgen Schmidhuber, co-director of Switzerland's Dalle Molle Institute for Artificial Intelligence Research, feels his and others' pioneering work in the field of artificial intelligence (AI) is often overlooked or ignored. At the core of Schmidhuber's argument is early research into neural networks, whose myriad applications include speech and language recognition, driverless car navigation, and dexterous robotic hands. Schmidhuber says his and Sepp Hochreiter's 1997 paper on Long Short-Term Memory laid the groundwork for recent innovations in AI vision and speech enabled by the addition of memory or context to neural networks, dramatically boosting recognition accuracy. Other researchers say Schmidhuber, in claiming credit for certain concepts, is discounting work performed by many more contributors. "He wasn't the one who made [AI] popular," says AI scientist Gary Bradski. "It's kind of like the Vikings discovering America; Columbus made it real." Schmidhuber also has attracted controversy with his conviction that self-aware AI will soon arrive, facilitated by more powerful computers and software algorithms that follow designs similar to his. "To build something smarter than myself...will build something even smarter...and eventually colonize and transform the universe, and make it intelligent," Schmidhuber says.


Google's DeepMind AI Can Lip-Read TV Shows Better Than a Pro
New Scientist (11/21/16) Hal Hodson

Researchers at Google's DeepMind and the University of Oxford are applying deep-learning techniques to a massive dataset of BBC TV programs to create a lip-reading system that can perform better than professional lip readers. The artificial intelligence (AI) system was trained using 5,000 hours from six TV programs that aired between January 2010 and December 2015. The TV clips' audio and video streams were sometimes out of sync, so a computer system was taught the correct links between sounds and mouth shapes to prepare the dataset for the study. Using this information, the system determined how much the streams were out of sync and realigned them. The AI's lip-reading performance was then tested on TV programs broadcast between March and September 2016, accurately deciphering 46.8 percent of all words without any errors. In comparison, a professional lip reader deciphered just 12.4 percent of words correctly in a dataset of 200 clips. Many of the AI's errors were small, such as missing an "s" at the end of a word. The researchers believe automatic lip readers could have significant practical potential, with applications ranging from improved hearing aids to speech recognition in loud environments.
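The realignment step amounts to scoring candidate time offsets between the two streams and keeping the one that fits best. The toy numpy sketch below illustrates that idea only; DeepMind's actual pipeline used a learned audio/video synchronization method, and the feature tracks and function name here are hypothetical.

    # Toy illustration of picking the offset that best aligns two feature tracks.
    import numpy as np

    def best_offset(audio_feat: np.ndarray, video_feat: np.ndarray, max_shift: int = 50) -> int:
        """Return the shift (in frames) with the highest correlation between the tracks."""
        scores = {}
        for shift in range(-max_shift, max_shift + 1):
            a = audio_feat[max(0, shift): len(audio_feat) + min(0, shift)]
            v = video_feat[max(0, -shift): len(video_feat) + min(0, -shift)]
            n = min(len(a), len(v))
            scores[shift] = float(np.dot(a[:n], v[:n]))
        return max(scores, key=scores.get)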


Internet of Things Will Demand a Step-Change in Search Solutions
University of Surrey (11/24/16) Ashley Lovell

New Internet search mechanisms will need to be developed to support the growing Internet of Things (IoT), according to researchers from the U.K.'s University of Surrey and Wright State University in the U.S. Current search engines enable users to search for information online, but future technologies will require machine-to-machine searches that are generated depending on location, preferences, and local information. The researchers argue existing search engines will not be able to index and find the type of data that IoT devices will need to gather. They say applications relying on public data, such as smart city technologies, will need to be accessible for a wide range of services, and the search mechanisms for these devices must provide an efficient means of indexing information while keeping the data safe from hackers. Surrey's 5G Innovation Center is working to develop search mechanisms and algorithms to better sort and analyze data. "IoT technologies such as autonomous cars, smart cities, and environmental monitoring could have a very positive impact on millions of lives," says Surrey researcher Payam Barnaghi. "Our goal is to consider the many complex requirements and develop solutions which will enable these exciting new technologies."


Researchers Build Tool to Help Prevent 'Selfie Deaths'
CBC News (Canada) (11/19/16) Jonathan Ore

A growing number of people die every year while taking photos of themselves at dangerous locations, leading researchers at Carnegie Mellon University (CMU) to look for ways to reduce this risk. The researchers studied 127 deaths reported from 2014 to September 2016 that were linked to someone taking a selfie. Most deaths involved people falling from great heights or drowning, but taking selfies near train tracks or a wild animal also were factors. The study found men were more prone to take dangerous selfies and accounted for more than 75 percent of all selfie-related casualties. The CMU researchers developed a tool that can identify whether a selfie posted on social media was taken in a potentially fatal location. The tool looks for indicators such as a steep drop in elevation, closeness to rail tracks, and the presence of guns or wild animals. Dangerous elements were identified with 73.6-percent accuracy. The researchers want to build a mobile application that can warn users in real time whether they are approaching a dangerous site, and the app could link to news reports of previous selfie-related injuries or even prevent users from launching their phone's camera app.
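The indicators listed above lend themselves to a simple rule-based illustration. The sketch below is hypothetical, with made-up feature names and thresholds; the actual CMU tool combines image, text, and location signals in a learned model rather than hand-written rules.

    # Hypothetical illustration of flagging a photo location as risky from simple cues.
    def is_risky_selfie_spot(elevation_drop_m: float,
                             distance_to_rail_m: float,
                             dangerous_object_detected: bool) -> bool:
        """Flag a location using the kinds of cues the article describes."""
        if elevation_drop_m > 20:            # steep drop nearby
            return True
        if distance_to_rail_m < 10:          # close to train tracks
            return True
        return dangerous_object_detected     # e.g., a gun or wild animal in the frame

    print(is_risky_selfie_spot(35, 200, False))   # True: dangerous elevation drop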


Will AI Usher in a New Era of Hacking?
IDG News Service (11/26/16) Michael Kan

Advanced artificial intelligence (AI) systems developed to defend assets in cyberspace might just as easily be turned to malevolent purposes by cybercriminals. "It seems like we're heading into a world of machine versus machine cyberwarfare," says Darktrace's Justin Fier. One potential hack could use AI capabilities to scan and exploit previously unknown software bugs with far more efficiency than humans. The groundwork for such exploits is being established by groups such as the U.S. Defense Advanced Research Projects Agency, whose Cyber Grand Challenge seeks to advance cyberdefense by enabling supercomputers to automatically find and correct software flaws, for example. SentinelOne CEO Tomer Weingarten thinks AI-driven technologies that trawl the Internet for vulnerabilities may be an unavoidable consequence of innovation. Weingarten envisions black-market "rent-a-hacker" services eventually adopting AI tools that can design and orchestrate cyberattack strategies and estimate the associated fee. "The human attackers can then enjoy the fruits of that labor," he warns. Would-be AI hackers might encounter significant difficulty with the high cost of using machine-learning software, although cybersecurity experts such as Cylance's Jon Miller predict the inevitable decrease of computing power costs should aid them.


Meeting of the Minds for Machine Intelligence
MIT News (11/22/16) Alison F. Takemura

The recent 2016 Machine Intelligence Summit, hosted by the Massachusetts Institute of Technology (MIT) and venture capital firm Pillar, convened industry leaders, computer scientists, and venture capitalists to discuss how smarter computers, specifically machine learning, are reshaping the world. The summit explored how machine learning can be applied more broadly than in purely commercial settings. The summit's organizers predict machine intelligence will revolutionize human life, with examples including MIT professor Regina Barzilay's vision of using such technology to enhance medical decisions by both doctors and cancer patients. Healthcare was the topic that provoked the most intense discussion at the conference. Barzilay and MIT professor Tommi Jaakkola are focused on extracting a machine's reasoning so clinicians can know the rationale for computer-driven outcomes or predictions. Meanwhile, MIT professor Antonio Torralba's group is striving to give machines a sensory experience of their surroundings similar to that of human infants, so they can learn to predict phenomena such as sounds without explicit instruction. MIT professor Stefanie Jegelka focuses on identifying maximally informative data so machines can learn faster and make more reliable predictions. Other experts at the summit anticipated an expansion of machine learning's reach into government policy.


New Quantum States for Better Quantum Memories
Vienna University of Technology (Austria) (11/22/16) Florian Aigner

Researchers at the Vienna University of Technology (TU Wien) in Austria, in collaboration with Japan's NTT, are moving toward new quantum memory concepts using nitrogen atoms and microwaves. Project leader Johannes Majer says the atoms are implanted within synthetic diamonds, and the coupling of microwaves to the atoms' quantum state supports "a quantum system in which we store and read information." However, the inhomogeneous broadening of the microwave transition in the diamond's nitrogen atoms means the quantum state can no longer be reliably read out after approximately half a microsecond. Majer's team used "spectral hole burning," which enables data to be stored in the optical range of inhomogeneously broadened media, and refined it for superconducting quantum circuits and spin quantum memories. "The transitions in the nitrogen atoms have slightly different energy levels because of the local properties of the not-quite-perfect diamond crystal," says former TU Wien researcher Stefan Putz. "If you use microwaves to selectively change a few nitrogen atoms that have very specific energies, you can create a 'Spectral Hole.' The remaining nitrogen atoms can then be brought into a new quantum state, a...'dark state,' in the center of these holes." The researchers note this technique extends the quantum states' lifetime to about five microseconds.


Computer Scientists Work to Prevent Hackers From Remotely Controlling Cars
Saarland University (11/21/16)

Researchers at Saarland University's Center for IT Security and Privacy (CISPA) and the German Research Center for Artificial Intelligence (DFKI) have developed vatiCAN, software that can prevent hackers from remotely controlling cars. The software works on the car's internal network, the CAN bus, ensuring that only a valid sender can attach the required authentication codes to its messages, which makes certain security checks possible. For example, when the emergency braking system sends its command to the brakes, vatiCAN uses a secret key to calculate an authentication code that is valid for only a single data packet and is sent to the brakes along with the message. The brakes then calculate the authentication code themselves and compare it with the one sent over the CAN bus. If the codes are identical, the brakes can be certain the message was not manipulated and carry out the command. "The brakes know indirectly that the message could only have come from the braking assistant, because the assistant could not have calculated the correct code otherwise," says DFKI researcher Stefan Nuernberger. The researchers also counter other attacks, such as the recording and re-sending of messages, known as replay attacks, by adding a timestamp to each message.
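The scheme described above is essentially a message authentication code computed over each CAN payload plus a freshness value, so that altered or replayed frames are rejected. The Python sketch below is a simplified stand-in, assuming HMAC-SHA256 and a message counter rather than vatiCAN's exact construction:

    # Simplified illustration of sender-authenticated CAN messages -- not vatiCAN's actual code.
    import hashlib
    import hmac

    SECRET_KEY = b"shared-secret-between-sender-and-brakes"   # provisioned out of band

    def sign_frame(payload: bytes, counter: int) -> bytes:
        """Sender: compute an authentication code over the payload plus a freshness counter."""
        mac = hmac.new(SECRET_KEY, payload + counter.to_bytes(8, "big"), hashlib.sha256)
        return mac.digest()[:8]   # truncated so the code fits alongside a CAN frame

    def verify_frame(payload: bytes, counter: int, tag: bytes) -> bool:
        """Receiver: recompute the code and compare; a stale counter or altered payload fails."""
        return hmac.compare_digest(sign_frame(payload, counter), tag)

    tag = sign_frame(b"BRAKE_NOW", counter=42)
    print(verify_frame(b"BRAKE_NOW", counter=42, tag=tag))   # True: genuine, fresh message
    print(verify_frame(b"BRAKE_NOW", counter=41, tag=tag))   # False: replayed/old counter rejected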


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]