Association for Computing Machinery
Welcome to the October 7, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE


How Artificial Intelligence Could Lead to Self-Healing Airplanes
The Washington Post (10/06/15) Dominic Basulto

Boeing and Carnegie Mellon University (CMU) have launched a new Aerospace Data Analytics Lab tasked with applying artificial intelligence (AI) and big data principles to mine insights from the vast body of data generated by the aerospace industry. "Recent advances in language technologies and machine learning give us every reason to expect that we can gain useful insights from that data," says CMU Language Technologies Institute director Jaime Carbonell. He notes one application of machine learning could yield a process in which CMU and Boeing can ascertain when planes require maintenance and address problems before they cause failures. The new lab is one of several expanding CMU efforts to tap AI's potential, according to CMU computer science dean Andrew Moore. Other applications he cites include robots that clean up hazardous sites and robot arms that can pick up a cup of coffee without spilling it. Moore says the Boeing project seeks to move the industry closer to self-healing aircraft, or the use of "evidence-based predictions of what may not be working right tomorrow, to enable preventive inspection or replacement before a failure, and hence to lower costs of coping with real unscheduled failures and to increase safety." Moore notes that as AI advances, it could provide advice to humans so they can build better models of the world based on new machine-learning algorithms.
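
As a rough illustration of the kind of evidence-based failure prediction Moore describes, the sketch below trains a classifier on synthetic sensor readings and flags components for preventive inspection. The features, thresholds, and data are invented for illustration and are not drawn from Boeing or CMU.

    # Minimal sketch (Python/scikit-learn): predict which components may fail soon
    # from recent sensor readings, so they can be inspected before an unscheduled
    # failure. All data here is synthetic and purely illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    # Hypothetical per-component features: vibration level, temperature, flight hours.
    X = rng.normal(size=(n, 3))
    # Synthetic ground truth: failures become likelier as vibration and hours grow.
    y = (0.8 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

    model = LogisticRegression().fit(X[:1500], y[:1500])
    risk = model.predict_proba(X[1500:])[:, 1]   # probability of near-term failure
    flagged = np.where(risk > 0.7)[0]            # candidates for preventive inspection
    print(f"{len(flagged)} of 500 held-out components flagged for inspection")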


Defining Scalable OS Requirements for Exascale and Beyond
HPC Wire (10/05/15) Robert W. Wisniewski

Robert Wisniewski, chief software architect for extreme scale computing at Intel and an ACM Distinguished Scientist, says system software for exascale systems is becoming more complex, and the compute node operating system (OS) will play a critical role in helping to realize the potential of exascale systems. Because of the different requirements of exascale systems and software, Wisniewski says, "what is needed is an approach that, while preserving the capability to support the existing interfaces (evolutionary), provides targeted and effective use of the new hardware (revolutionary) in a rapid and targeted manner (nimbleness)." According to Wisniewski, there are three classes of approaches emerging to overcome the weaknesses of existing methods of handling the compute node OS. The first is to continue to use Linux as the base OS, but in combination with containers that limit the interference between multiple applications. A second approach is using a virtualized platform on which either a new Light-Weight Kernel or a Linux kernel can run, providing high performance or the features of a more general-purpose OS. He notes this approach also could be combined with the third approach, which is to run multiple kernels simultaneously on a single node.
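
None of the three approaches reduces to a short program, but the interference-limiting idea behind them can be illustrated loosely: partition a node's resources so a compute task runs on dedicated cores while OS services and other applications stay elsewhere. The Linux-only sketch below pins a process to a subset of cores; it is an analogy for the isolation goal, not an implementation of containers, a lightweight kernel, or a multi-kernel.

    # Loose illustration (Python, Linux-only): keep core 0 for OS services and other
    # noise, and pin this compute process to the remaining cores -- the same
    # interference-limiting goal the container and multi-kernel approaches pursue.
    import os

    total = os.cpu_count() or 1
    compute_cores = set(range(1, total)) or {0}   # reserve core 0 when more than one core exists
    os.sched_setaffinity(0, compute_cores)        # restrict this process to the compute cores
    print("compute task pinned to cores", sorted(compute_cores))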


Predicting Change in the Alzheimer's Brain
MIT News (10/06/15) Larry Hardesty

Massachusetts Institute of Technology (MIT) researchers are developing a computer system that uses multiple types of data to help predict the effects of disease on brain anatomy. The researchers trained a machine-learning system on magnetic resonance imaging (MRI) data from patients with neurodegenerative diseases and found supplementing that training with other patient information improved the system's predictions. "We take our model, and we turn off the genetic information and the demographic and clinical information, and we see that with combined information, we can predict anatomical changes better," says MIT professor Polina Golland. The researchers used data from the Alzheimer's Disease Neuroimaging Initiative, which includes MRI scans of the same subjects taken months or years apart. Each scan is represented as a three-dimensional model consisting of millions of voxels, and the researchers produced a generic brain template by averaging the voxel values of hundreds of randomly selected MRI scans. They then characterized each scan in the training set for the machine-learning algorithm. The researchers conducted several experiments, including one in which they trained the system on scans of both healthy subjects and those displaying evidence of mild cognitive impairment. In that experiment, the researchers trained the system twice, once using just the MRI scans and the second time adding the genetic, demographic, and clinical information; in cases where the brain showed substantial anatomical change, the supplementary data markedly improved the predictions.
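
The study's central comparison, training the same predictor with and without the supplementary data, can be sketched generically. The example below uses synthetic features standing in for image-derived and clinical/genetic variables; it is not the MIT model, only an illustration of the "turn off the extra information and compare" experiment Golland describes.

    # Hedged sketch (Python/scikit-learn): compare prediction of anatomical change
    # using imaging features alone versus imaging plus demographic/genetic features.
    # All data is synthetic; this is not the MIT model.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(1)
    n = 1000
    img = rng.normal(size=(n, 10))      # stand-in for image-derived descriptors
    extra = rng.normal(size=(n, 4))     # stand-in for clinical/genetic variables
    change = img @ rng.normal(size=10) + 2.0 * extra[:, 0] + rng.normal(scale=0.5, size=n)

    train, test = slice(0, 800), slice(800, n)
    both = np.hstack([img, extra])
    m1 = Ridge().fit(img[train], change[train])
    m2 = Ridge().fit(both[train], change[train])

    print("imaging only :", mean_squared_error(change[test], m1.predict(img[test])))
    print("with extras  :", mean_squared_error(change[test], m2.predict(both[test])))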


Scientists Tap Dragonfly Vision to Build a Better Bionic Eye
The Wall Street Journal (10/05/15) Rachel Pannett

Researchers at Australia's University of Adelaide have developed an artificial intelligence system based on a dragonfly's vision to help sight-impaired people see better with a bionic prosthesis. The researchers note the technology also could be applied to robotics and driverless cars. Their latest research demonstrates mimicry of the dragonfly's eyesight by a computer program. The researchers believe emulating the insect's 360-degree field of vision and tracking ability could help visually impaired people sense when someone unexpectedly veers into their path, for example. University of Adelaide neuroscientist Steven Wiederman notes the visual system is "particularly well-suited" to bionic eyes. In the project's current stage, the researchers are studying the motion-detecting neurons in insect optic lobes, which transmit messages between the eyes and the brain, in the hope of replicating their function in a wheeled robot so the machine can predictively react to moving targets. The researchers also envision the technology in self-driving cars, making them more responsive to moving objects so they can better avoid collisions. They expect to need a much smaller processor for the vision device because the dragonfly-based algorithms are much more efficient than traditional engineering approaches. University of Adelaide Ph.D. student Zahra Bagheri says the program runs about 20 percent faster than cutting-edge surveillance software.
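
The motion-detecting behavior being emulated can be illustrated loosely with frame differencing: flag the region whose brightness changes most between frames and track it over time. The sketch below is a generic toy detector on synthetic frames, not the Adelaide team's dragonfly-inspired algorithm.

    # Toy sketch (Python/NumPy): detect a small moving target by differencing
    # consecutive frames and reporting the location of the strongest change.
    # Synthetic data only; this is not the dragonfly-inspired algorithm itself.
    import numpy as np

    rng = np.random.default_rng(2)
    H, W, T = 64, 64, 20
    frames = rng.normal(scale=0.05, size=(T, H, W))   # background noise
    for t in range(T):                                # a small bright target moving right
        frames[t, 32, 5 + 2 * t] += 1.0

    prev = frames[0]
    for t in range(1, T):
        diff = np.abs(frames[t] - prev)               # temporal change, a crude motion cue
        y, x = np.unravel_index(np.argmax(diff), diff.shape)
        print(f"frame {t:2d}: strongest motion near (row={y}, col={x})")
        prev = frames[t]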


How DARPA's I2O Finds Innovation on the Edge
Federal Times (10/05/15) Aaron Boyd

In an interview, Brian Pierce, deputy director of the U.S. Defense Advanced Research Projects Agency's (DARPA) Information Innovation Office (I2O), discusses his office's mission. He says I2O concentrates on the software and algorithms that run on the information ecosystem, and among its goals are enhancing human-computer and human-machine interaction, augmenting human understanding via data analytics, and using open source software more efficiently. Guaranteeing trustworthy computing and information also is on I2O's agenda, as is taking action to quickly detect, isolate, and neutralize attackers. As an example of network hardening, Pierce cites the use of formal methods to prove that software meets its specifications, with a focus on the systems of motor vehicles and aircraft. In describing I2O's innovation process, Pierce says it begins with asking what major challenges are not being addressed by the more incremental work of the commercial sector. He points to the Space/Time Analysis for Cybersecurity program, which examines specific algorithmic attacks that seek to tie up computer memory. Pierce says the expectation of failures is accepted at DARPA, and argues "if it was not stretching so hard, you would not be getting the kind of breakthroughs that have been the history of the agency."
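
The Space/Time Analysis for Cybersecurity program targets algorithmic space/time vulnerabilities. One widely known instance, offered here only as a generic illustration and not as an example from the program itself, is a backtracking regular expression whose matching time grows exponentially with input length, so a short crafted input can tie up the processor.

    # Generic illustration (Python) of an algorithmic-complexity vulnerability:
    # a backtracking regular expression whose matching time grows exponentially,
    # letting a short crafted input consume the CPU.
    import re
    import time

    pattern = re.compile(r"(a+)+$")    # nested quantifiers invite catastrophic backtracking
    for n in (16, 18, 20, 22):
        attack = "a" * n + "b"         # the trailing 'b' forces the engine to backtrack
        start = time.perf_counter()
        pattern.match(attack)
        print(f"n={n:2d}: {time.perf_counter() - start:.3f} s")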


Microsoft, Tesla Say Software-Defined Batteries Could Mix and Match Power on the Fly
PC World (10/02/15) Mark Hachman

Researchers from Microsoft, Tesla, and other organizations this week will present a paper at the ACM Symposium on Operating Systems Principles in Monterey, CA, advocating for what they call software-defined batteries (SDBs). The idea behind SDBs is to combine different kinds of batteries in the same device and use software to optimize the way the device uses and charges the batteries. Most batteries in devices today are managed by a standalone charge controller, which the operating system (OS) periodically queries for updates. In an SDB, the OS directly controls the batteries, determining how they charge and how much power is drawn from them at any given time. For example, a laptop could prioritize fast charging before a flight over charging to full capacity, which would take longer. In the paper, the researchers describe mixing and matching different kinds of batteries to maximize the battery life of a wearable device. Using a two-in-one battery optimized with SDB software, for example, could improve battery life by nearly 22 percent, according to the researchers. The wearable device would switch between the two batteries depending on what it was doing at the time. The researchers say SDBs could have useful applications in a variety of devices, including drones, smart glasses, and electric vehicles.
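
A minimal sketch of the policy layer this describes: the OS decides which physical battery to draw from and how aggressively to charge, given the current workload and the user's context (for example, prioritizing a fast partial charge before a flight). The battery names, rates, and thresholds below are invented for illustration and are not taken from the SOSP paper.

    # Hedged sketch (Python): an OS-level software-defined-battery policy that picks
    # which physical cell to draw from and how to charge, based on context.
    # All parameters are illustrative, not from the SOSP paper.
    from dataclasses import dataclass

    @dataclass
    class Battery:
        name: str
        charge: float        # 0.0 .. 1.0
        fast_charge: bool    # supports high-rate charging
        high_drain: bool     # handles bursty, high-power loads well

    def pick_discharge(batteries, workload):
        """Draw bursty loads from the high-drain cell, steady loads from the fuller cell."""
        usable = [b for b in batteries if b.charge > 0.05]
        if workload == "bursty":
            usable.sort(key=lambda b: (not b.high_drain, -b.charge))
        else:
            usable.sort(key=lambda b: -b.charge)
        return usable[0]

    def pick_charge_rate(battery, leaving_soon):
        """Before a flight, prefer a fast partial charge over a slow full charge."""
        if leaving_soon and battery.fast_charge:
            return "fast"      # accept some capacity/cycle-life cost for speed
        return "trickle"

    cells = [Battery("dense", 0.40, fast_charge=False, high_drain=False),
             Battery("agile", 0.70, fast_charge=True, high_drain=True)]
    print(pick_discharge(cells, "bursty").name)            # -> agile
    print(pick_charge_rate(cells[1], leaving_soon=True))   # -> fast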


Silicon Quantum Computers Will Become Reality, Say UNSW Engineers
Computerworld Australia (10/06/15) Hamish Barwick

University of New South Wales (UNSW) researchers say they have built a quantum logic gate in silicon, making calculations between two quantum bits of information possible. The researchers say their breakthrough ensures silicon-based quantum computers will become a reality. "This makes the building of a quantum computer much more feasible, since it is based on the same manufacturing technology as today's computer industry," says UNSW professor Andrew Dzurak. The advance represents the final physical component needed to realize the promise of silicon quantum computers. Now that all of the physical elements of a silicon-based quantum computer have been successfully constructed, engineers can begin designing and building a functional quantum computer. The researchers say the breakthrough involved reconfiguring the silicon transistors that are used to define the bits in existing silicon chips, and turning them into quantum bits. "We then store the binary code of 0 or 1 on the 'spin' of the electron, which is associated with the electron's tiny magnetic field," says UNSW researcher Menno Veldhorst. The researchers say the next step in their project is to identify the right industry partners to work with to manufacture the full-scale quantum processor chip.
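
In the abstract, a two-qubit logic gate is simply a unitary operation on a two-qubit state. The sketch below applies a CNOT gate to a state vector in NumPy to show the kind of calculation such a gate enables; it models only the mathematics, not the UNSW silicon device.

    # Illustration (Python/NumPy) of what a two-qubit logic gate computes: a CNOT
    # flips the target qubit when the control qubit is 1. This models the abstract
    # gate only, not the silicon implementation.
    import numpy as np

    # Basis ordering: |00>, |01>, |10>, |11>
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard on one qubit
    ket0 = np.array([1, 0], dtype=complex)

    # Put the control qubit in superposition, then apply CNOT: the result is the
    # entangled Bell state (|00> + |11>) / sqrt(2).
    state = np.kron(H @ ket0, ket0)
    print(np.round(CNOT @ state, 3))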


Robot See, Robot Do: How Robots Can Learn New Tasks by Observing
Technology Review (10/02/15) Will Knight

The University of Maryland's (UMD) Autonomy, Robotics, and Cognition Lab is developing robots that can learn how to do a new job by watching others do it first. For example, the UMD researchers have developed a cocktail-making robot that watches a human mix a drink by pouring liquid from several bottles into a jug. The robot copies these actions, grasping bottles in the correct order before pouring the proper quantities into the jug. The new approach involves training a computer system to associate specific robot actions with video footage showing humans performing various tasks. The researchers also developed a robot that learns how to pick up different objects by watching thousands of instructional YouTube videos, using two complementary systems. One system learns to recognize different objects, while the other identifies different types of grasps. The learning system used for the grasping work relies on advanced artificial neural networks. The researchers note this technique is more efficient than programming a robot to handle countless different items, and it can enable a robot to deal with new objects on its own without human intervention.
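
The two-system design, one recognizer for objects and one for grasp types driven by the same video features, can be sketched as a small neural network with a shared trunk and two output heads. The sketch below uses PyTorch with invented sizes and random inputs; it illustrates the general architecture only, not the UMD models.

    # Hedged sketch (Python/PyTorch): a shared feature trunk with two heads, one
    # predicting the object class and one the grasp type, mirroring the two-system
    # idea described above. Sizes and inputs are invented for illustration.
    import torch
    import torch.nn as nn

    class ObjectGraspNet(nn.Module):
        def __init__(self, feat_dim=512, n_objects=20, n_grasps=6):
            super().__init__()
            self.trunk = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU())
            self.object_head = nn.Linear(128, n_objects)   # what is being handled
            self.grasp_head = nn.Linear(128, n_grasps)     # how to grip it

        def forward(self, video_features):
            h = self.trunk(video_features)
            return self.object_head(h), self.grasp_head(h)

    net = ObjectGraspNet()
    features = torch.randn(4, 512)                 # stand-in for video frame features
    obj_logits, grasp_logits = net(features)
    print(obj_logits.argmax(dim=1), grasp_logits.argmax(dim=1))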


Cardiff University Develops Virtual Assistant Dubbed Sherlock
BBC News (10/02/15) Jane Wakefield

Simple Human Experiment Regarding Locally Observed Collective Knowledge (Sherlock), a virtual assistant developed at Cardiff University, recently had its first public trial at the BBC's Make It Digital event. The researchers say Sherlock communicates in a human-like way, using controlled natural-language technology developed by IBM. The tool answers questions as well as asking them to build up its knowledge base. The question-and-answer process helps ensure the software and user "understand each other," says project leader and Cardiff professor Alun Preece. During the event, Sherlock acted as a quizmaster, answering questions about BBC television shows. Although it is still just a research project, Sherlock has been tested in multiple scenarios, including as a tool for emergency services and as an information app at a festival. Preece says the technology also could serve as a smart home assistant. "In a home that has a smart thermostat and devices that can detect if a window is open, a user might say to Sherlock 'I'm cold' and it would offer alternatives, such as 'I can close the window or turn the heating up'," he notes.
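
Sherlock's question-and-answer loop, answering what it can and asking the user to teach it what it does not know, can be illustrated with a tiny rule-based exchange. The sketch below is a generic toy, not IBM's controlled natural-language technology.

    # Toy sketch (Python) of a question-and-answer loop that answers what it knows
    # and asks the user to teach it what it does not, growing its knowledge base.
    # A generic illustration, not IBM's controlled natural-language technology.
    knowledge = {"where is the window": "the window is in the living room"}

    def sherlock(question, teach=None):
        q = question.strip().lower().rstrip("?")
        if q in knowledge:
            return knowledge[q]
        if teach is not None:                    # the user supplies the answer,
            knowledge[q] = teach                 # growing the knowledge base
            return "thank you, I will remember that"
        return f"I do not know: '{question}' -- can you tell me?"

    print(sherlock("Where is the window?"))
    print(sherlock("Is the heating on?"))
    print(sherlock("Is the heating on?", teach="the heating is off"))
    print(sherlock("Is the heating on?"))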


Novel Nanostructures Could Usher in Touchless Displays
IEEE Spectrum (10/02/15) Dexter Johnson

The swipe performed without a finger ever touching the screen will be the next dominant computer interface method, according to researchers in Germany. A team from Stuttgart's Max Planck Institute for Solid State Research and the Ludwig Maximilian University of Munich has developed nanostructures capable of changing their electrical and optical properties when a finger passes near them. The technology could lead to a new generation of touchless displays. The researchers have essentially developed a humidity sensor that reacts to the minute amount of sweat on a finger and converts it to an electrical signal or a change in color of the nanostructured material. Phosphatoantimonic acid enables the material to absorb water molecules and swell in the process, while its electrical conductivity increases. "Because these sensors react in a very local manner to any increase in moisture, it is quite conceivable that this sort of material with moisture-dependent properties could also be used for touchless displays and monitors," says Max Planck Institute doctoral student Pirmin Ganter. The real merit of the technology is its response to near-miss finger swipes in mere milliseconds, compared to seconds for previous touchless interfaces. The technology also could have fewer issues with mechanical wear over time.
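
In software terms, the sensor's output would be a conductivity (or color) signal that rises sharply when a finger hovers nearby; turning that into a touchless event is essentially a thresholding problem. The sketch below does this on synthetic readings and is only a loose illustration of how such a sensor might be read out, not how the Stuttgart team processes its signals.

    # Loose illustration (Python): turn a moisture-driven conductivity signal into
    # touchless hover events by thresholding. Readings are synthetic.
    readings = [0.02, 0.03, 0.02, 0.45, 0.80, 0.75, 0.30, 0.03, 0.02]  # arbitrary units
    THRESHOLD = 0.4

    hovering = False
    for t, value in enumerate(readings):
        if value > THRESHOLD and not hovering:
            hovering = True
            print(f"t={t}: finger detected (hover start)")
        elif value <= THRESHOLD and hovering:
            hovering = False
            print(f"t={t}: finger gone (hover end)")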


New Supercomputer Software Takes Us One Giant Step Closer to Simulating the Human Brain
Science & Technology Facilities Council (10/01/15) Wendy Ellison

Computational scientists are testing new supercomputing software at the U.K. Science and Technology Facilities Council's Daresbury Laboratory in Cheshire. The researchers say the software will be key for the next generation of exascale-class supercomputers, which could exist within the next five years. Exascale supercomputers will be 1,000 times more powerful than the fastest supercomputer in operation today. A team from Queen's University Belfast and the University of Manchester created the software, which would enable future supercomputers to process masses of data at higher speeds than ever before and allow researchers to model and simulate the human brain. "Software that exploits the capability of exascale systems means that complex computing simulations, which would take thousands of years on a desktop computer, will be completed in a matter of minutes," says Queen's University professor Dimitrios Nikolopoulos. "This research has the potential to give us insights into how to combat some of the biggest issues facing humanity at the moment." The new software also will help make exascale supercomputers energy-efficient. The team developed the software as part of a project funded by the Engineering & Physical Sciences Research Council.


New 'Performance Cloning' Techniques Designed to Boost Computer Chip Memory Systems Design
NCSU News (09/30/15) Matt Shipman

North Carolina State University (NCSU) researchers have developed software using new techniques that rely on "performance cloning" to help computer chip designers improve memory systems. Performance cloning can assess the behavior of software without compromising privileged data or proprietary computer code. The researchers say chip manufacturers could use performance cloning to give profiler software to a client, who would then use the profiler to assess its proprietary software and generate a statistical report on the proprietary software's performance. The report is then sent back to the chip manufacturer without compromising the client's data or code. The profiler report is fed into generator software, which can develop a synthetic program that mimics the performance characteristics of the client's software, and can serve as the basis for designing chips that will better meet the client's needs. The first NCSU technique, called Memory EMulation using Stochastic Traces (MEMST), assesses memory in a synthetic program by focusing on the amount of memory a program uses, the location of the data being retrieved, and the pattern of retrieval. The second technique, called MeToo, focuses on how often the program retrieves data and whether the program has periods in which it makes many memory requests in a short time.
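
The workflow described, profile the client's program into statistics and then generate a synthetic program with matching behavior, can be sketched for one memory characteristic: the distribution of access strides. The code below profiles a toy address trace into a stride histogram and replays a synthetic trace drawn from it; it is a simplified illustration, not MEMST or MeToo.

    # Simplified sketch (Python/NumPy) of performance cloning for one memory metric:
    # summarize a program's access-stride distribution, then generate a synthetic
    # trace that mimics it without revealing the original addresses. Not MEMST/MeToo.
    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(3)

    # "Client side": a toy address trace (mostly sequential, occasional larger jumps).
    addrs = np.cumsum(rng.choice([1, 1, 1, 64, 4096], size=10_000))
    strides = np.diff(addrs)
    profile = Counter(strides.tolist())           # the statistical report that is shared

    # "Designer side": regenerate a synthetic trace with the same stride statistics.
    values = np.array(list(profile.keys()))
    probs = np.array(list(profile.values()), dtype=float)
    probs /= probs.sum()
    synthetic = np.cumsum(rng.choice(values, size=10_000, p=probs))
    print("original mean stride :", strides.mean())
    print("synthetic mean stride:", np.diff(synthetic).mean())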


Raising Computers to Be Good Scientists
UA News (AZ) (09/29/15) Emily Litvack

With funding from the U.S. Defense Advanced Research Projects Agency, University of Arizona (UA) professor Clayton Morrison's Reading and Assembling Contextual and Holistic Mechanisms From Text (REACH) project is developing a computer that reads scientific papers, derives data on biochemical pathways, and plugs it into large-scale, interactive models. Morrison says the result will be a platform for interactive software that would enable drug developers and perhaps doctors to supply information to help model a specific therapy's interaction with a patient. "They'll be the Microsofts and Googles of biomedicine," Morrison says. He notes REACH focuses on understanding data through the processes of extraction, assembly, and inference, and the first process went through its paces this summer. UA professor Mihai Surdeanu trained a computer system to read papers by employing hundreds of algorithms, and the system could process 1,000 papers on RAS-related cancers in hours. Morrison is now embedding context within the system by teaching it species differentiation. "I think that collaborative computers are going to be like children, and we'll have to raise them, in a way," Morrison says. "They'll be as smart as we're able to teach them, and we need them to be able to communicate with us."
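
The extraction step, reading sentences and pulling out biochemical interactions, can be given a flavor with a toy pattern matcher. The sketch below uses a few hand-written regular expressions over invented sentences; REACH itself relies on far richer grammars and machine learning, so this is only an illustration of the task.

    # Toy flavor (Python) of the extraction step: pull (agent, interaction, target)
    # triples from sentences with simple patterns. REACH uses far richer grammars
    # and learned models; the sentences and patterns here are invented.
    import re

    PATTERN = re.compile(r"(\w+) (phosphorylates|activates|inhibits|binds) (\w+)")

    sentences = [
        "KRAS activates RAF1 in the MAPK cascade.",
        "MEK1 phosphorylates ERK2.",
        "This result was unexpected.",
    ]

    for s in sentences:
        m = PATTERN.search(s)
        if m:
            agent, interaction, target = m.groups()
            print(f"({agent}) --{interaction}--> ({target})")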


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe