Association for Computing Machinery
Welcome to the August 3, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, iPhones, and iPads.

HEADLINES AT A GLANCE


IBM Creates Artificial Neurons From Phase Change Memory for Cognitive Computing
Computerworld (08/03/16) Lucas Mearian

Researchers at IBM have developed artificial neurons and synapses using phase-change memory (PCM) that mimic the brain's cognitive learning capability. The breakthrough marks the first time the researchers were able to create "randomly spiking neurons" using phase-change materials to store and process data, which could lead to energy-efficient, highly dense neural networks for cognitive computing applications. However, it will still be several years before the market sees a PCM processing chip, according to IBM fellow Evangelos Eleftheriou. He notes a key aspect of the technology is the artificial neurons' random variation, or "stochastic" behavior. "Basically, it operates how the brain operates, with short voltage pulses coming in through synapses exciting neurons," says IBM researcher Tomas Tuma. The technology is critical to population-based computing, in which every neuron responds differently, enabling new ways to represent signals and compute. "Here, we have shown we have a very nice stochasticity natively because we understand the processes of crystallization and amorphization in phase-change cells," Tuma says. The artificial neurons could be used to create neuromorphic processors that sit alongside and complement standard processors, offloading analytics-intensive workloads.
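The behavior described above (input pulses integrated until a noisy threshold triggers a spike) can be sketched with a generic stochastic integrate-and-fire model. The code below is purely illustrative: it does not represent IBM's PCM device physics, and every parameter name and value is an assumption.

    import numpy as np

    rng = np.random.default_rng(0)

    def stochastic_neuron(input_pulses, threshold=1.0, leak=0.02, jitter=0.1):
        """Integrate incoming pulses and fire when a noisy threshold is crossed.

        The per-step threshold jitter stands in for the stochastic
        crystallization behavior of a phase-change cell.
        """
        membrane, output = 0.0, []
        for pulse in input_pulses:
            membrane = max(0.0, membrane - leak) + pulse   # leaky integration
            noisy_threshold = threshold + jitter * rng.standard_normal()
            if membrane >= noisy_threshold:
                output.append(1)    # spike
                membrane = 0.0      # reset (loosely analogous to re-amorphization)
            else:
                output.append(0)
        return output

    # Feeding the same weak pulse train twice yields different spike trains,
    # which is the kind of native variability population-based computing exploits.
    pulses = [0.3] * 50
    print(stochastic_neuron(pulses))
    print(stochastic_neuron(pulses))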


Hackers Hijack a Big Rig Truck's Accelerator and Brakes
Wired (08/02/16) Andy Greenberg

At next week's Usenix Workshop on Offensive Technologies in Austin, TX, University of Michigan researchers will demonstrate how they commandeered a large industrial vehicle by transmitting digital signals within the truck's internal network. By plugging a laptop into the vehicle via its onboard diagnostic port, the researchers found they could look up commands in the J1939 open standard, which enabled them to replicate those signals on the vehicle's network without any reverse engineering. Using the standard, they were able to send commands that precisely changed the readouts of virtually any part of the instrument panel, and they could force the vehicle to accelerate against the driver's will. The truck-hacking project's goals were achieved in only two months, and the researchers warn committed hackers will find weaknesses offering over-the-Internet access to vehicles' digital systems. "It's pretty safe to hypothesize we're not far off from coming up with remote attacks as well," notes Michigan researcher Yelizaveta Burakova. To thwart such attacks, the researchers suggest truck manufacturers better segregate the components of their vehicles' networks, or embed authentication within the network so one compromised component cannot send messages impersonating another.
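For context on why the openness of J1939 made the attack straightforward, the sketch below decodes a 29-bit extended CAN identifier into its J1939 fields (priority, parameter group number, source address). It is a generic illustration of the standard's identifier layout, not the researchers' tooling, and the example identifier is arbitrary.

    def decode_j1939_id(can_id):
        """Split a 29-bit extended CAN identifier into J1939 fields."""
        priority       = (can_id >> 26) & 0x7
        extended_dp    = (can_id >> 25) & 0x1
        data_page      = (can_id >> 24) & 0x1
        pdu_format     = (can_id >> 16) & 0xFF
        pdu_specific   = (can_id >> 8)  & 0xFF
        source_address =  can_id        & 0xFF

        # PDU1 (point-to-point, PF < 240) vs. PDU2 (broadcast) determines
        # whether the PDU-specific byte is part of the parameter group number.
        if pdu_format < 240:
            pgn = (extended_dp << 17) | (data_page << 16) | (pdu_format << 8)
        else:
            pgn = (extended_dp << 17) | (data_page << 16) | (pdu_format << 8) | pdu_specific
        return {"priority": priority, "pgn": pgn, "source": source_address}

    # An example identifier, just to show the field layout.
    print(decode_j1939_id(0x18FEF100))

Because every compliant heavy vehicle uses the same published parameter group numbers, a message observed or looked up on one truck can be replayed on another, which is the weakness the Michigan team exploited.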


Reach in and Touch Objects in Videos With 'Interactive Dynamic Video'
MIT News (08/02/16) Adam Conner-Simons; Rachel Gordon

Researchers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed Interactive Dynamic Video (IDV), an imaging method that can simulate the tactile sensation of objects in videos using cameras and algorithms. "This technique lets us capture the physical behavior of objects, which gives us a way to play with them in virtual space," says CSAIL's Abe Davis. "By making videos interactive, we can predict how objects will respond to unknown forces and explore new ways to engage with videos." The team simulated objects by analyzing video clips to find "vibration modes" at distinct frequencies that represent the different ways an object can move, enabling predictions of how it will move in new situations. "If you want to model how an object behaves and responds to different forces, we show that you can observe the object respond to existing forces and assume that it will respond in a consistent way to new ones," Davis notes. The researchers say engineering and entertainment are two areas in which IDV could find potential use. For example, the technique could enable much faster rendering of virtual objects for movies. Davis says simulating a real-world structure's response to wind or seismic events is another possible use.
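One common way to estimate vibration modes from per-pixel motion traces is simple spectral peak-picking. The sketch below is a rough stand-in for that idea, far simpler than the actual IDV pipeline; the motion-extraction step is assumed to have been done elsewhere, and the synthetic data is purely illustrative.

    import numpy as np

    def dominant_modes(motion_signals, fps, n_modes=3):
        """Pick the strongest vibration frequencies from per-pixel motion traces.

        motion_signals: array of shape (n_pixels, n_frames) holding small
        displacement estimates extracted from a video.
        """
        n_frames = motion_signals.shape[1]
        spectrum = np.abs(np.fft.rfft(motion_signals, axis=1)).mean(axis=0)
        freqs = np.fft.rfftfreq(n_frames, d=1.0 / fps)
        spectrum[0] = 0.0                           # ignore the DC component
        top = np.argsort(spectrum)[-n_modes:][::-1]
        return freqs[top]                           # candidate modal frequencies

    # Synthetic example: a 3 Hz and an 11 Hz vibration buried in noise.
    t = np.arange(240) / 60.0
    signals = (np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 11 * t)
               + 0.1 * np.random.randn(50, 240))
    print(dominant_modes(signals, fps=60))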


U.S. Police Use Machine Learning to Curb Their Own Violence
New Scientist (08/01/16) Hal Hodson

Preventing stressed police officers from committing violence and misconduct is the goal of a machine-learning pilot initiative being conducted by the Charlotte-Mecklenburg Police Department in North Carolina. In collaboration with University of Chicago researchers, data from individual officers' records is fed into a machine-learning system, which learns to identify risk factors for unprofessional conduct so the department can intervene before adverse incidents occur. The system retrospectively spotted 48 out of 83 such incidents between 2005 and today, beating the accuracy of the department's existing early intervention system by 12 percent, while its false-positive rate was 32 percent lower than the existing system's. Chicago team leader Rayid Ghani says it is vital the interventions chosen for high-risk officers are vetted and implemented by people and not computers. "As long as there is a human in the middle starting a conversation with [at-risk officers], we're reducing the chance for things to go wrong," he notes. The University of Maryland's Frank Pasquale has general reservations about algorithmic ranking of workers, but he thinks its application to curb police misconduct could be beneficial. Pasquale says, "It could be seen as an effort to correct an earlier algorithmic failure," referring to the CompStat system's tendency to reward police who get the most arrests.
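Conceptually, the system is a supervised risk classifier trained on officers' records, with the high-risk end of the ranking reviewed by humans. The sketch below uses entirely synthetic, hypothetical features and a stock gradient-boosting model to illustrate that workflow; it is not the Chicago team's model or data.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical per-officer features; the real system draws on far richer records.
    rng = np.random.default_rng(1)
    n = 2000
    X = np.column_stack([
        rng.poisson(2, n),       # recent complaints
        rng.poisson(5, n),       # high-stress dispatches in the past year
        rng.integers(0, 25, n),  # years of service
    ])
    # Synthetic labels: risk loosely tied to complaints and stress exposure.
    y = (X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 1, n) > 4).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = GradientBoostingClassifier().fit(X_train, y_train)

    # Rank held-out officers by predicted risk so humans can review the top of the list.
    risk = model.predict_proba(X_test)[:, 1]
    print("flagged for review:", np.argsort(risk)[-10:][::-1])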


Edinburgh Scientists Develop Software That Could Prove Essential to Homeland Security
Scotsman (United Kingdom) (08/03/16)

Researchers at Edinburgh University say they have developed software that instantly analyzes the breakdown of complex chemical mixtures. Security officers often use a handheld device known as a Raman spectrometer, which points a laser at a suspicious package or substance, measures the reflected light, and sends the data to a computer. The new software translates the data to provide an accurate description of all the chemicals contained in the sample. Edinburgh professor Mike Davies says the program is an especially powerful tool that could be used to detect chemicals used in improvised explosives or to give accurate analysis of substances used in counterfeit drugs. "Our software can identify the entire nasty complex mixture and give an instant reading, whether it be every-day inflammable materials in improvised explosives, chemical warfare, or examining counterfeit drugs to see if they are 100 percent, diluted, or mixed," Davies notes. Edinburgh Research & Innovation's Angus Stewart-Liddon says the software has the potential to transform portable chemical analysis in the field and give instant results on the composition of chemical mixtures.
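A standard way to identify the components of a mixture from a measured spectrum is non-negative unmixing against a library of reference spectra. The sketch below illustrates that general idea with toy data; it should not be read as the Edinburgh software's actual method, and all names and spectra are made up.

    import numpy as np
    from scipy.optimize import nnls

    def unmix_spectrum(measured, reference_library):
        """Estimate non-negative mixture weights for each reference chemical.

        reference_library: dict mapping chemical name -> reference spectrum
        sampled on the same wavenumber grid as `measured`.
        """
        names = list(reference_library)
        A = np.column_stack([reference_library[name] for name in names])
        weights, residual = nnls(A, measured)
        return dict(zip(names, weights)), residual

    # Toy library of three Gaussian "peaks" standing in for real reference spectra.
    grid = np.linspace(0, 1, 200)
    peak = lambda c: np.exp(-((grid - c) ** 2) / 0.002)
    library = {"chem_a": peak(0.2), "chem_b": peak(0.5), "chem_c": peak(0.8)}
    sample = 0.7 * library["chem_a"] + 0.3 * library["chem_c"]   # a 70/30 mixture
    weights, _ = unmix_spectrum(sample, library)
    print(weights)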


Georgia Tech Awarded DARPA Grant to Develop New IoT Protection
ZDNet (08/02/16) Charlie Osborne

The Georgia Institute of Technology (Georgia Tech) has received a $9.4-million grant from the U.S. Defense Advanced Research Projects Agency to develop a way to wirelessly protect small, low-power Internet of Things devices against malware. Researchers will monitor the devices by receiving and analyzing side-channel signals, which are electromagnetic emissions produced by semiconductors, capacitors, and other components. Devices that operate normally emit side-channel noise that can be recorded and compared against noise produced by infected devices. "When a processor executes instructions, values are represented as ones and zeroes, which creates a fluctuation in the current," says Georgia Tech professor Alenka Zajic. "That creates changes in the electromagnetic field we are measuring, providing a pattern for what each part of the program looks like on a spectrum analyzer." Current technology can detect these signals from up to half a meter away from an active device, but the team wants to expand this range to three meters. "We will be looking at how the program is changing its behavior," Zajic says. "If an Internet of Things device is attacked, the insertion of malware will affect the program that is running, and we can detect that remotely."
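A simple way to picture the detection idea is to fingerprint the emission spectrum of a known-clean run and flag later traces whose spectra no longer match. The sketch below is a hypothetical illustration with made-up signals and an arbitrary threshold, not the Georgia Tech system.

    import numpy as np

    def spectral_fingerprint(trace, n_bins=256):
        """Reduce a recorded emission trace to a normalized magnitude spectrum."""
        spectrum = np.abs(np.fft.rfft(trace, n=2 * n_bins))[:n_bins]
        return spectrum / (np.linalg.norm(spectrum) + 1e-12)

    def looks_anomalous(trace, baseline_fingerprint, threshold=0.9):
        """Flag a trace whose spectrum no longer correlates with the clean baseline.

        `threshold` is an illustrative cutoff, not a value from the project.
        """
        similarity = float(np.dot(spectral_fingerprint(trace), baseline_fingerprint))
        return similarity < threshold

    # Baseline from a "clean" run; a later trace with an injected extra tone
    # (standing in for malware-altered program behavior) drops the similarity score.
    t = np.arange(4096)
    clean = np.sin(0.02 * t) + 0.1 * np.random.randn(4096)
    baseline = spectral_fingerprint(clean)
    infected = clean + np.sin(0.3 * t)
    print(looks_anomalous(clean + 0.1 * np.random.randn(4096), baseline))   # still matches
    print(looks_anomalous(infected, baseline))                              # flagged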


NSF Awards $15 Million to Create Science Gateways Community Institute
UC San Diego News Center (07/29/16) Jan Zverina

The U.S. National Science Foundation (NSF) has awarded a five-year, $15-million grant to a group led by the University of California, San Diego's San Diego Supercomputer Center (SDSC) to establish a Science Gateways Community Institute, which will accelerate the development and application of science gateways addressing the needs of researchers across the full range of NSF directorates. "Gateways foster collaborations and the exchange of ideas among researchers and can democratize access, providing broad access to resources sometimes unavailable to those who are not at leading research institutions," says SDSC's Nancy Wilkins-Diehr. The award was part of a larger NSF announcement in which the agency earmarked $35 million to establish two Scientific Software Innovation Institutes to function as long-term hubs for scientific software development, maintenance, and education. The new institute comprises several component areas, including an incubator that delivers shared expertise in business and sustainability planning, cybersecurity, user interface design, and software engineering practices. It also provides extended developer support, supplying developers for up to 12 months to projects that request aid and demonstrate the potential for the most significant impacts on their research communities. Other elements include a scientific software collaborative to administer a framework for gateway design, integration, and services.


Virginia Tech Receives $19.4 Million to Start Science Software Institute
Roanoke Times (VA) (07/29/16) Robby Korth

A team of researchers at Virginia Polytechnic Institute and State University (Virginia Tech) has received a $19.4-million grant from the U.S. National Science Foundation to establish the Molecular Sciences Software Institute. The institute, which will be led by Virginia Tech professor Daniel Crawford, will be a hub for research applying supercomputers to big simulation challenges. Crawford says the NSF funding will be channeled into software tools that generate "big computational models" for simulating chemical reactions that are too complex to study with conventional tools. Crawford says the institute's research team should be in place in the next 12 to 18 months, and within two years they should start developing software to work through algorithmic problems in order to maintain federal funding. Unlike other Virginia Tech research institutes, the new facility will be underwritten and operated by NSF, says Tech College of Science spokesperson Steven Mackay. Crawford wants the institute's researchers to engage with members of the community and with people across the U.S. to explain how big data can be used to address problems.


Blind Athlete Runs Desert Marathon Unassisted Using Smartphone App
Reuters (07/29/16) Matthew Stock

IBM researchers have developed eAscot, an application designed to help visually-impaired runners navigate on their own. A runner needs to know their bearing to run in the right direction, so the app alerts the user to deviations from the desired bearing with beeps that grow more frequent as the runner strays further from the desired direction. In addition, the pitch of the beep changes depending on the user's position, so it beeps at different pitches depending on whether the user veers off course to the left or to the right. The app uses satellite navigation and has a user interface similar to a car's parking sensor. The IBM researchers say the major challenge was developing an app that combines a global-positioning system tracker with a car parking system. "So we kind of needed to come up with the math to detect how far you are off the track and to think about how often we need to do that and how quickly does the app need to respond," says IBM researcher Tim Daniel Jacobi. The app successfully helped guide a Boston Marathon participant through the course.
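The guidance logic described (beep rate tied to how far the runner has drifted, pitch tied to which side) boils down to a signed bearing-error calculation. The mapping below is a sketch with made-up rates and pitches, not the app's actual values.

    def bearing_deviation(current_bearing, target_bearing):
        """Signed angular error in degrees, wrapped into the range (-180, 180]."""
        return (current_bearing - target_bearing + 180.0) % 360.0 - 180.0

    def beep_parameters(current_bearing, target_bearing):
        """Map the deviation to a beep rate and pitch (illustrative values only).

        Rate rises as the runner drifts further off the desired bearing;
        pitch encodes the side (higher tone for right, lower for left).
        A real app would also go silent when the runner is back on course.
        """
        error = bearing_deviation(current_bearing, target_bearing)
        rate_hz = min(5.0, 0.5 + abs(error) / 30.0)   # beeps per second
        pitch_hz = 880.0 if error > 0 else 440.0       # right vs. left tone
        return rate_hz, pitch_hz, error

    # Runner heading 75 degrees while the course bearing is 60: drifted right.
    print(beep_parameters(75.0, 60.0))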


Caltech Scientists Improve Computer Graphics With Quantum Mechanics
Pasadena Now (CA) (07/27/16) Robert Perkins

Scientists at the California Institute of Technology (Caltech) have used the mathematics that govern the universe at the quantum level to simulate large-scale motion, a development they say could have an impact on computer-generated graphics. The researchers made some tweaks to the Schrödinger equation, which can be used to describe the motion of superfluids, to approximate fluids at the macroscopic level. "Since we are computer graphics folks, we are interested in methods that capture the visual variety and drama of fluids well," says Caltech professor Peter Schröder. "What's unique about our method is that we took a page from the quantum mechanics' 'playbook.'" Schröder says the new technique enables computers to more accurately simulate vorticity, the spinning motion of a flowing fluid. The team says the approach may be used to model real-world phenomena, such as the curling motion of a hurricane. "The Schrödinger equation, as we use it, is a close relative of the non-linear Schrödinger equation which is used for the description of superfluids," Schröder notes. "Their vorticity behavior is in many ways very similar to the behavior we can also observe in the macroscopic world." The researchers presented the technique at the ACM SIGGRAPH 2016 conference in Anaheim, CA.
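For reference, the textbook link between the Schrödinger equation and fluid motion is the Madelung form, in which the gradient of the wave function's phase plays the role of a flow velocity. The Caltech team's specific tweaks are not detailed in the summary, so the relations below are only the standard starting point, not the published method.

    i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\,\nabla^2 \psi,
    \qquad
    \psi = \sqrt{\rho}\; e^{i\theta},
    \qquad
    \mathbf{u} = \frac{\hbar}{m}\,\nabla\theta .

In this picture, vortices of the simulated flow appear as phase singularities of the wave function, which is consistent with the article's point that the approach captures vorticity especially well.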


How Vulnerable to Hacking Is the U.S. Election Cyber Infrastructure?
The Conversation (07/29/16) Richard Forno

Adversaries' growing use of cyberweapons to influence target groups in the U.S. is provoking concern that the U.S. electoral process is at risk due to a lack of trustworthiness in e-voting systems, writes University of Maryland, Baltimore County cybersecurity senior lecturer Richard Forno. He says the vulnerability of such technologies is recognized by voting officials, and it is multiplied by e-voting's reliance on a distributed network. Securing this system initially requires tamper-proofing the "internals" of each voting machine at the point of manufacture, with each machine's software, along with the vote data it retains, kept tamper-proof and accountable. Machines with flaws must be taken out of service and corrected, and once votes are collected from individual machines, the compiled results must be sent from polling places to higher election offices for official consolidation, tabulation, and final statewide reporting. This requires tamper-proof network connections that thwart interception or modification of the counts while in transit, and the software of state-level vote tabulation systems must be trustworthy, accountable, and resistant to unauthorized data modification. Other threats that must be factored in include those targeting the integrity of voter registration and administration systems, as well as human vulnerability. Forno says countering these dangers requires regularly applying best practices of cybersecurity, data protection, information access, and other objectively developed, responsibly deployed processes.


Vortex Laser Offers Hope for Moore's Law
UB News Services (07/28/16) Cory Nealon

Researchers at the University at Buffalo (UB) have used orbital angular momentum to advance laser technology, a breakthrough that could boost computing power and information transfer rates tenfold. The researchers say the light-manipulation technique distributes the laser in a corkscrew pattern with a vortex at the center, and the team was able to shrink the vortex laser to the point where it is compatible with computer chips. Because the beam travels in a corkscrew pattern, encoding information into different vortex twists, the vortex laser can carry 10 or more times as much information as conventional lasers, whose beams travel linearly. The researchers say the boost in computing power and information transfer rates provided by the light-based communication tool could solve the approaching data bottleneck, as well as help to reassure those who are fretting over the predicted end of Moore's Law. "To transfer more data while using less energy, we need to rethink what's inside these machines," says UB professor Liang Feng. The research was supported with grants from the U.S. Army Research Office, the Department of Energy, and the National Science Foundation.
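For background, an optical beam carrying orbital angular momentum has an azimuthal phase twist, and beams with different integer twist numbers are mutually orthogonal, which is what lets each twist act as an independent data channel. The relations below are standard optics; the specific mode set used by the Buffalo device is not given in the summary.

    E_\ell(r,\varphi,z) \;\propto\; A(r,z)\, e^{i\ell\varphi},
    \qquad
    L_z = \ell\hbar \ \text{per photon},
    \qquad
    \ell = 0, \pm 1, \pm 2, \ldots

Because modes with different \ell do not interfere with one another, multiplexing several twists onto one beam multiplies the channel capacity, which is the basis of the reported tenfold gain.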


The Brain Behind Google's Artificial Intelligence
Forbes (08/01/16) Peter High

In an interview, Jeff Dean, senior fellow of Google's Systems and Infrastructure Group, discusses the company's artificial intelligence (AI) agenda, with its current focus on machine learning systems. Dean cites much progress toward true artificial general intelligence in the last five or six years as people applied some of the more refined machine learning models. He says a key AI goal Google plans to pursue in the coming years is taking in hundreds or thousands of documents and then having a conversation about the documents' contents. "Perhaps the system will summarize the documents, or ask or answer questions about the contents of the documents," Dean says. "I think that is the level of understanding we will need to truly demonstrate high levels of language understanding." As the leader of Google's Brain group, Dean aims to develop large-scale computing systems to perform machine learning and advanced machine learning research. He notes the group's "sequence-to-sequence learning" model can be plotted onto many real problems, and has proved useful in language translation and other contexts. Dean says open source AI technology such as Google's TensorFlow machine learning library is helping drive machine learning community efforts.


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe