Welcome to the December 9, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.
Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.
HEADLINES AT A GLANCE
NASA, Google Unveil a Quantum Computing Leap
IDG News Service (12/08/15) Martyn Williams
Engineers at the U.S. National Aeronautics and Space Administration (NASA) and Google on Tuesday announced a breakthrough test using NASA's D-Wave 2X quantum computer to solve an optimization problem 100 million times faster than a conventional computer with a single-core processor. Such optimization tasks are common in both space missions and air traffic control modeling, and the D-Wave's challenge involved almost 1,000 variables. Google's Hartmut Neven reports the D-Wave accomplished in one second what a conventional single-core system would need 10,000 years to do. "NASA has a wide variety of applications that cannot be optimally solved on traditional supercomputers in a realistic time frame due to their exponential complexity, so systems that use quantum effects...provide an opportunity to solve such problems," says NASA's Rupak Biswas. Although the experiment's result is a major boost for D-Wave Systems and its technology, the milestone comes with some caveats: the D-Wave machine was specifically engineered for the optimization problem it solved, and it is based on 1,097 quantum bits (qubits), up from the first-generation system's 512 qubits.
As Aging Population Grows, So Do Robotic Health Aides
The New York Times (12/04/15) John Markoff
Roboticists and doctors anticipate the coming years will see the deployment of new innovations in computerized, robotic, and Internet-connected technologies to help aging adults stay independent longer. For example, the U.S. National Science Foundation is funding work by University of Illinois roboticist Naira Hovakimyan to develop drones capable of performing simple household chores as assistive devices for seniors. Wearable devices, smart walkers, room and home sensors, and virtual and robotic companions are some of the advances derived from robotics and artificial intelligence expected to become commercially available in the next decade. Another example from former Microsoft executive Tandy Trower is a rolling robot designed to monitor its human companion's health and assist with tasks such as keeping track of medication. Japan and Europe appear to be ahead of the U.S. in their development of technological aides for an expanding senior populace, partly because their governments are "more attuned to the potential of technology" for this demographic, says Oregon Health & Science University's Jeffrey A. Kaye. For example, Intel and China are working on a project to map behavior patterns for caregivers using machine-learning methods. Another key issue is whether such assistive technologies can help prevent or retard dementia and other physical and cognitive declines associated with aging.
Want a Computer That Never Crashes? Don't Let Bugs Freak It Out
New Scientist (12/02/15)
Developers' perception of software bugs must shift: rather than flaws to be found and removed at all costs, bugs should be treated as an unavoidable fact of life. Researchers think this shift could encourage the creation of computers resilient enough to malfunctions that crashes become a thing of the past. "The idea here is immortal software," says the Massachusetts Institute of Technology's Martin Rinard. More and more scientists are refocusing from removing bugs to eliminating their effects, and Rinard has developed a method called failure-oblivious computing that keeps programs running through errors that would otherwise crash them. His team's study of numerous bugs discovered programs can normally recover if they can circumvent a single obstacle. "More often than not, the program will take a little bit of a hit, right itself, and then keep going," Rinard says. Another potential solution is to introduce some randomness into how software runs so it is less crash-prone. The University of Massachusetts Amherst's Emery Berger is exploring this via his DieHard system, which responds to a bug encounter by randomly selecting a slightly different way of running the software, often evading the bug.
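The intuition behind Berger's approach can be seen in a toy simulation (an illustrative sketch only, not DieHard's actual allocator): if objects are placed at random positions in an over-provisioned heap rather than packed contiguously, a given out-of-bounds write usually lands on unused memory, so most runs survive it.

```python
import random

HEAP_SLOTS = 64   # over-provisioned simulated heap (size is arbitrary)

def allocate(n_objects, randomize):
    """Return the slot index of each object. DieHard-style allocators
    scatter objects randomly instead of packing them contiguously."""
    if randomize:
        return random.sample(range(HEAP_SLOTS), n_objects)
    return list(range(n_objects))    # contiguous, deterministic layout

def overflow_corrupts_neighbor(slots):
    """An off-by-one write past object 0 lands in slot slots[0] + 1;
    it corrupts live data only if another object occupies that slot."""
    return (slots[0] + 1) in slots[1:]

# With a contiguous layout, the same overflow always clobbers object 1.
deterministic = allocate(8, randomize=False)
assert overflow_corrupts_neighbor(deterministic)

# With random placement, most runs leave the overwritten slot unused, so
# the program takes a hit, rights itself, and keeps going.
random.seed(0)
survived = sum(
    not overflow_corrupts_neighbor(allocate(8, randomize=True))
    for _ in range(1000)
)
print(f"{survived}/1000 randomized runs survive the overflow")
```

Each rerun effectively rolls new dice, which is why retrying a crashed program under such a runtime will often succeed.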
IBM Tapped by U.S. Intelligence Agency to Grow Complex Quantum Computing Technology
Network World (12/08/15) Michael Cooney
IBM on Tuesday was awarded a multi-year grant from the U.S. Office of the Director of National Intelligence's Intelligence Advanced Research Projects Activity (IARPA) to develop technologies that could serve as components of a universal quantum computer. The grant is part of IARPA's Logical Qubits (LogiQ) program, which seeks to overcome the limitations of quantum systems by building what it calls a logical quantum bit (qubit) out of several imperfect physical qubits. According to IARPA, the performance of current multi-qubit systems is inferior to that of individual qubits; this is in part because qubits are sensitive to system noise and environmental interference, among several other factors. IARPA hopes building a logical qubit will help quantum computers overcome these difficulties. IBM will continue its development of quantum computers focusing on superconducting qubits, which it says will be foundational to more complex quantum computing systems in the future. "Investments and collaboration by government, industry, and academia such as this IARPA program are necessary to help overcome some of the challenges towards building a universal quantum computer," says IBM Research's Arvind Krishna.
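The logical-from-physical idea has a classical ancestor, the repetition code, which can be simulated directly (a loose analogy only: real quantum error correction cannot simply copy qubit states, and the error rates below are invented). One logical bit is encoded across several noisy physical bits and decoded by majority vote, driving the logical error rate well below the physical one.

```python
import random

def logical_error_rate(p_physical, n_physical=5, trials=10000):
    """Encode one logical bit across n_physical noisy bits and decode by
    majority vote; a logical error needs a majority of bits to flip."""
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p_physical for _ in range(n_physical))
        if flips > n_physical // 2:
            failures += 1
    return failures / trials

random.seed(1)
p = 0.05   # invented physical error rate
print(f"physical error rate: {p}")
print(f"logical error rate:  {logical_error_rate(p):.4f}")
```

With a 5% physical error rate and five-fold redundancy, a logical error requires at least three simultaneous flips, so the measured logical rate falls to roughly a tenth of a percent.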
Chinese Researchers Unveil Brain-Powered Car
Chinese researchers from Nankai University have spent the last two years developing a mind-controlled car. The driver wears brain signal-reading equipment that can command the car to move forward or backward, come to a stop, and lock or unlock its doors, all without the driver moving hands or feet. The equipment includes 16 sensors that capture electroencephalogram (EEG) signals from the driver's brain, and the Nankai researchers developed software that selects the relevant signals and translates them into control of the car. "The computer processes the signals to categorize and recognize people's intention, then translates them into control commands to the car," says Nankai researcher Zhang Zhao. The technology is aimed at better serving human beings, and the researchers note it might soon be possible to combine brain-controlled technology with driverless cars. Zhang says the goal of the research was to give disabled people who cannot move freely a way to drive without using their hands or feet, and to offer healthy drivers a new, more intelligent driving mode.
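The decoding step Zhang describes, turning multi-channel EEG signals into a handful of discrete commands, amounts to a classification problem. The sketch below runs a nearest-centroid classifier on per-channel signal power over synthetic data; the Nankai team has not published its actual features or classifier, so every detail here is an assumption.

```python
import numpy as np

COMMANDS = ["forward", "backward", "stop", "lock", "unlock"]

def band_power_features(window):
    """Toy feature vector: mean signal power per channel. The Nankai rig
    has 16 EEG sensors; its real feature extraction is not published."""
    return (np.asarray(window) ** 2).mean(axis=0)

class CentroidDecoder:
    """Nearest-centroid classifier, a stand-in for whatever model the
    Nankai software actually uses to recognize driver intention."""
    def fit(self, windows, labels):
        feats = np.array([band_power_features(w) for w in windows])
        labels = np.array(labels)
        self.centroids = {c: feats[labels == c].mean(axis=0)
                          for c in COMMANDS}
        return self

    def predict(self, window):
        f = band_power_features(window)
        return min(self.centroids,
                   key=lambda c: np.linalg.norm(f - self.centroids[c]))

# Synthetic training data: command i drives channel i (pure invention).
rng = np.random.default_rng(0)
def synth(cmd_idx):
    window = rng.normal(0.0, 0.1, size=(64, 16))  # 64 samples x 16 channels
    window[:, cmd_idx] += 2.0
    return window

windows = [synth(i) for i in range(5) for _ in range(10)]
labels = [COMMANDS[i] for i in range(5) for _ in range(10)]
decoder = CentroidDecoder().fit(windows, labels)
print(decoder.predict(synth(2)))  # → stop
```

Real EEG decoding faces far noisier, less separable signals than this toy data, which is why the 16-sensor rig and signal-selection software matter.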
Teaching Computers How to Give Cricket Commentary
NDTV (India) (12/08/15) Sriram Sharma
Indian researchers from the International Institute of Information Technology, Hyderabad (IIIT Hyderabad) have used machine-learning techniques to generate text-based cricket commentary with an accuracy rate of 90 percent. The video is segmented into "scenes" using scene category information extracted from the text commentary. The system then classifies both the video shots and the phrases in the textual description into various categories, and finally maps the relevant phrases onto the video shots. The researchers say the technique could be used by sports websites to automate and assist reporters in writing real-time cricket commentary. The video dataset was collected from the Indian Premier League tournament's YouTube channel, while samples of commentary were taken from about 300 matches on Cricinfo. The researchers note the algorithms were able to accurately label a batsman's cricketing shot by using visual-recognition techniques on an action that lasts just 1.2 seconds. Annotating the videos enables the researchers to build a retrieval system that can search hundreds of hours of content for specific actions lasting only a few seconds; the researchers believe cricket teams could use the technology to analyze the strengths and weaknesses of particular players, and it also could be applied to other sports such as soccer or tennis.
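The final stage of the pipeline, mapping classified phrases onto classified shots, can be sketched as a category lookup. The category names and phrases below are invented; the researchers' actual taxonomy and matching are far richer.

```python
import random

def generate_commentary(shot_labels, phrase_bank, seed=0):
    """Map each classified video shot to a stored commentary phrase of
    the same category -- the last stage of the described pipeline."""
    rng = random.Random(seed)
    lines = []
    for label in shot_labels:
        candidates = phrase_bank.get(label, [])
        lines.append(rng.choice(candidates) if candidates
                     else f"[no phrase for {label}]")
    return lines

# Invented shot categories and phrases for illustration.
phrase_bank = {
    "cover_drive": ["Driven elegantly through the covers for four."],
    "bowled": ["The stumps are shattered! He's bowled him."],
}
for line in generate_commentary(["cover_drive", "bowled"], phrase_bank):
    print(line)
```

The hard part, of course, is producing the shot labels in the first place from 1.2-second video clips; this sketch only shows how labeled shots become text.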
A Search Engine for the Internet's Dirty Secrets
Technology Review (12/04/15) Tom Simonite
University of Michigan (UM) researchers, with the help of Google-provided infrastructure, are developing Censys, a search engine aimed at helping security researchers uncover the Internet's secrets by tracking all of the devices connected to it. "We're trying to maintain a complete database of everything on the Internet," says UM researcher Zakir Durumeric. Censys searches data harvested by ZMap, a scanning program also developed at UM, and is updated every day with a fresh set of data collected after ZMap "pings" more than 4 billion of the numerical Internet Protocol addresses associated with devices connected to the Internet. The system can identify what kind of device responded, as well as details about its software, such as whether it uses encryption and how it is configured. Censys can identify new security flaws, determine how widespread they are, which devices are vulnerable to them, who operates those devices, and their approximate location. Although commercial search engines are trying to compete with Censys, Durumeric says head-to-head tests show Censys offers significantly better coverage of the Internet and fresher data, making it better suited for measuring and responding to new problems.
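Conceptually, Censys layers a queryable index over ZMap's raw scan results. The sketch below shows the shape of such a query layer over a handful of made-up records; the field names are illustrative and do not match Censys's actual schema.

```python
# Made-up scan records of the kind an Internet-wide scan might yield.
records = [
    {"ip": "198.51.100.7", "port": 443, "tls": True,  "server": "nginx"},
    {"ip": "203.0.113.9",  "port": 80,  "tls": False, "server": "IIS"},
    {"ip": "192.0.2.44",   "port": 443, "tls": True,  "server": "apache"},
]

def search(records, **criteria):
    """Return every record matching all field=value criteria."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

# e.g. find hosts still answering over plaintext HTTP
print(search(records, tls=False))
```

At Censys's scale the same idea runs against billions of records refreshed daily, which is what makes "how many devices are vulnerable to flaw X" answerable in one query.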
Teaching AI to Play Atari Will Help Robots Make Sense of Our World
Wired (12/02/15) Cade Metz
Google's DeepMind subsidiary is training artificial intelligence (AI) programs to play classic Atari video games, and some of them are proficient enough to beat people. The hope is this and other projects will pave the way for AIs that can navigate the real world. For example, the Osaro startup has built an AI engine that can play classic games, with the goal of offering the technology to drive the next generation of warehouse and factory robots. Osaro CEO Itamar Arel says the AI engine improves with practice via a combination of recurrent neural networks and reinforcement learning algorithms, which apply trial-and-error processes to the task at hand. Once a neural net understands the state of a video game, reinforcement learning can use this information to help a machine decide its next move. Similarly, after a neural net delivers a "picture" of the world around a robot, reinforcement algorithms can help it execute a specific task in that environment. Arel says recurrent neural nets display a form of short-term memory, and better understand a game's state based on how it looked in the recent past. The reinforcement algorithms then come into play, acting on what the neural nets perceive.
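The reinforcement-learning half of this combination can be shown in miniature with tabular Q-learning on a one-dimensional corridor: no neural network, no Atari screens, just the trial-and-error value updates the article describes. The environment and hyperparameters are invented for illustration.

```python
import random

# States 0..5 in a corridor; reward only for reaching state 5 (the goal).
N_STATES, GOAL = 6, 5
ACTIONS = (1, -1)                   # step right or left
alpha, gamma, epsilon = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(0)

for _ in range(500):                # episodes of trial and error
    s = 0
    while s != GOAL:
        if rng.random() < epsilon:                    # explore
            a = rng.choice(ACTIONS)
        else:                                         # exploit
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N_STATES - 1)         # walls at both ends
        reward = 1.0 if s2 == GOAL else 0.0
        target = reward + gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])     # Q-learning update
        s = s2

# The learned greedy policy walks straight toward the goal.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)  # → [1, 1, 1, 1, 1]
```

In the systems the article describes, a recurrent neural network replaces the lookup table, summarizing raw pixels (and their recent history) into the state on which the same kind of update operates.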
What Makes Tom Hanks Look Like Tom Hanks?
University of Washington News and Information (12/07/15) Jennifer Langston
Researchers at the University of Washington (UW) have developed a system that uses machine-learning algorithms to construct and animate three-dimensional (3D) models of celebrities based solely on large numbers of photographs. Current 3D modeling technology, such as that used by major film studios, requires a painstaking process to capture every angle of a person's face and movements. The system designed by the UW researchers is meant to be much simpler. It combines several techniques developed over the last five years by a research group led by UW professor Ira Kemelmacher-Shlizerman, including advances in 3D face reconstruction, tracking, alignment, multi-texture modeling, and puppeteering. The system works by feeding at least 200 images of an individual, taken over time in various scenarios and poses, to machine-learning algorithms. The raw data is then used to create a 3D model, which can be animated so it mimics the person's distinctive traits. The researchers say the technology could someday be used to create 3D models of absent loved ones or celebrities that people could interact with as though they were real.
Virtual-Reality Lab Explores New Kinds of Immersive Learning
The Chronicle of Higher Education (12/08/15) Ellen Wexler
The University of Maryland (UMD) is fast becoming a hub for virtual reality (VR) research. The university's new VR lab, called the Augmentarium, was built last year and is funded in part by a $31-million donation from Brendan Iribe, a co-founder of VR company Oculus VR. Education is one area researchers at the new lab are studying. The research of Amitabh Varshney, director of the university's Institute for Advanced Computer Studies, indicates the human brain processes information differently based on the environment it is in. One of the goals of the lab's learning projects is to create techniques and technologies that make learning more immersive. UMD professor Ramani Duraiswami is researching ways to use sound to make distance learning more immersive by giving remote students the sense they are hearing online lectures in person. Other projects are focused on using VR in the classroom. One example is using VR to give medical students a better view of surgical demonstrations, while another is enabling architects to experience their designs in an immersive three-dimensional environment. Varshney says it may take some time before VR technology is mature enough to handle such practical applications, but he notes the technology is rapidly advancing.
Algorithm Helps Analyze Neuron Images
News from Brown (12/04/15) Kevin Stacey
Brown University researchers have developed the Neuron Image Analyzer (NIA), an approach to analyzing microscope images of neurons that picks out delicate neuron structures, helping scientists better assess the growth of cells. The NIA relies on a new algorithm that automates the process of analyzing images, and the researchers say it completes the work more accurately than previous automated approaches. The approach dispenses with the uniform filters used in other methods and instead analyzes how pixels are related to neighboring pixels. "We look at the relational information between pixels," says Kwang-Min Kim, a former Brown graduate student and now a postdoctoral researcher at Stanford University. "This way we can trace pixels that are connected to each other, which helps us trace the entire neuron structure." The algorithm also relies on a statistical test that is good at identifying circular or elliptical structures, which is used to accurately locate and measure the main body of a neuron. The researchers tested the NIA against existing algorithms and found it was 80 percent as accurate as hand coding, while the other automated approaches were only 50 to 60 percent as accurate.
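Kim's description, tracing pixels that are connected to each other, is the idea behind connected-component analysis. The sketch below is a plain flood fill over a binary image: a much simpler relative of the NIA's actual algorithm, shown only to make the "relational" idea concrete.

```python
def trace_structure(image, start):
    """Return all foreground pixels 4-connected to `start`, following
    pixel-to-pixel connections rather than filtering pixels in isolation."""
    h, w = len(image), len(image[0])
    stack, seen = [start], set()
    while stack:
        r, c = stack.pop()
        if (r, c) in seen or not (0 <= r < h and 0 <= c < w) or not image[r][c]:
            continue
        seen.add((r, c))
        stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return seen

# A tiny binary image: a blob (soma) with a thin neurite running off it.
img = [
    [1, 1, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 0, 0, 0, 1],
]
neuron = trace_structure(img, (0, 0))
print(len(neuron))  # → 9
```

Starting from a single seed pixel, the trace recovers the whole structure, including the one-pixel-wide filament a uniform filter could easily break.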
New Federal Grant Could Build Techniques That Free Big Data Analysis Systems of Bugs
UT Arlington News Center (12/04/15) Herb Booth
Software developers will be able to more efficiently test big data analysis systems thanks to the research of University of Texas at Arlington professor Jeff Lei. He is leading a three-year effort backed by the U.S. National Institute of Standards and Technology to develop new combinatorial testing techniques, which will help ensure that software used to analyze big data is free of bugs. "Many data-analysis algorithms have been designed to discover useful information from large amounts of data," he says. "These algorithms must be implemented correctly in software so people can use them. Often, software engineers are hired to perform the implementation, and they can make mistakes. Our goal is to develop techniques and tools that software engineers can use to detect mistakes made during implementation and thus ensure that the software they develop will actually work." Combinatorial testing identifies the major factors that could affect the behavior of the software being tested, and then uses a systematic approach to test only a small subset of the interactions between those factors. The method is widely used in software testing, but not yet for big data software, according to Lei. He says the research will make combinatorial testing effective for big data systems, enabling developers to build more reliable software faster.
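The core of combinatorial testing, here the common pairwise (2-way) variant, can be sketched with a standard greedy generator: repeatedly pick the candidate test that covers the most parameter-value pairs not yet covered. The configuration parameters below are invented, and this is not Lei's tool, only the textbook heuristic.

```python
from itertools import combinations, product

# Invented configuration parameters for a hypothetical big data tool.
params = {
    "file_format": ["csv", "json", "parquet"],
    "compression": ["none", "gzip"],
    "parallelism": [1, 4, 16],
    "schema_check": [True, False],
}

all_tests = list(product(*params.values()))     # exhaustive: 36 tests

def pairs(test):
    """All 2-way parameter-value interactions a single test covers."""
    return {((i, test[i]), (j, test[j]))
            for i, j in combinations(range(len(test)), 2)}

# Greedily pick the test covering the most not-yet-covered pairs.
uncovered = set().union(*(pairs(t) for t in all_tests))
suite = []
while uncovered:
    best = max(all_tests, key=lambda t: len(pairs(t) & uncovered))
    suite.append(best)
    uncovered -= pairs(best)

print(f"exhaustive: {len(all_tests)} tests, pairwise suite: {len(suite)}")
```

Even on this toy configuration, every 2-way interaction is covered by roughly a quarter of the exhaustive suite; the savings grow dramatically as parameters multiply, which is the method's appeal for large systems.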
Custom AI Programs Take on Top-Ranked Humans in StarCraft
IEEE Spectrum (12/01/15) Evan Ackerman
The recent annual Association for the Advancement of Artificial Intelligence Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE) held a contest pitting computer programs against each other in StarCraft: Brood War. This year 22 programs were submitted, playing each other for two weeks straight on 12 virtual machines, and at the contest's conclusion three of the best AIs were matched against a top-ranked Russian player. Although the outcomes showed AIs have yet to seriously threaten professional human players in real-time strategy (RTS) games, the technology is advancing. The winning AI, Tscmoo, is designed to attack multiple locations simultaneously, prioritizing attacks on workers or small groups of enemy units while constantly reassigning units on demand. In many cases, the competing AI programs behave in ways that appear nonsensical but in fact follow sensible rules that simply do not apply in the specific scenarios a game has evolved into. RTS games are challenging for AIs due to "hidden information, vast state and action spaces, and the requirement to act quickly," according to the AIIDE website. The contest's organizers expect the open source nature of the competition and the growing skill of the programs to produce AIs that can beat amateur players within a few years.
Abstract News © Copyright 2015 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.