Association for Computing Machinery
Welcome to the February 3, 2017 edition of ACM TechNews, providing timely information for IT professionals three times a week.

A redesigned ACM TechNews is coming next week!

We look forward to introducing a new look-and-feel to ACM TechNews next week. TechNews will continue to offer the same timely information for technology professionals three times a week, but in a new responsive format designed to be easier to read on any device, adjusting automatically to however you view the newsletter. We hope you enjoy the new experience!

ACM TechNews mobile apps are available for Android phones and tablets, as well as for iPhones and iPads.

HEADLINES AT A GLANCE


Israeli Scientist Wins Japan Prize for Cryptography Work
Times of Israel (Israel) (02/03/17)

Adi Shamir, a professor at the Weizmann Institute in Rehovot, Israel, is one of three winners of the 2017 Japan Prize, which honors achievement in science and technology. Shamir, who shared the 2002 ACM A.M. Turing Award with Ronald Rivest and Leonard Adleman, was recognized for his contribution to information security through pioneering research on cryptography. "My main area of research is...making and breaking codes," Shamir says. "It is motivated by the explosive growth of computer networks and wireless communication. Without cryptographic protection, confidential information can be exposed to eavesdroppers, modified by hackers, or forged by criminals." In addition to Shamir, the Japan Prize Foundation also honored Emmanuelle Charpentier, director of the Max Planck Institute for Infection Biology in Germany, and Jennifer Doudna, a professor at the University of California, Berkeley, for their research in gene editing. Each winner will receive about $443,000 and will be honored in Tokyo on April 19.


Paralyzed Patients Communicate Thoughts Via Brain-Computer Interface
Reuters (01/31/17) Kate Kelland; Marina Depetris

A new noninvasive brain-computer interface (BCI) has enabled researchers in the U.K. to communicate with patients who are totally paralyzed and unable to talk. The initial study involved four patients with amyotrophic lateral sclerosis (ALS) who were considered completely locked-in--unable to move any part of their body voluntarily, including their eyes. However, the BCI cap enabled these patients to answer researchers' "yes" or "no" questions by thinking the answers. The BCI technique uses near-infrared brain spectroscopy and electroencephalography to measure the brain's blood oxygenation levels and electrical activity. The subjects were asked "yes" or "no" questions with known responses, such as the patient's husband's name, so the software could learn each individual's brain activity patterns for positive and negative responses. Questions with known answers elicited correct responses 70 percent of the time. Patients also were asked repeatedly over weeks of questioning if they were happy, and all four patients consistently responded "yes." The researchers are planning to develop the technology further and aim for it to be available to people with complete locked-in syndrome resulting from ALS, stroke, or spinal cord injury.
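The article does not describe the study's signal processing or models, but the calibrate-then-classify pattern it outlines can be sketched in a few lines. In the sketch below, the feature vectors, the random data, and the choice of a logistic-regression classifier are illustrative placeholders, not the researchers' actual pipeline.

```python
# Illustrative sketch only: the study's actual signal processing and models are not
# described in the article. This shows the general calibrate-then-classify pattern,
# using hypothetical per-question feature vectors (e.g., averaged blood-oxygenation
# and EEG-band features) and a generic binary classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Calibration: questions with known answers (e.g., about a spouse's name) yield
# labeled trials for this individual patient.
X_known = rng.normal(size=(40, 8))     # 40 trials x 8 hypothetical brain features
y_known = rng.integers(0, 2, size=40)  # 1 = "yes", 0 = "no" (ground-truth answers)

clf = LogisticRegression().fit(X_known, y_known)

# Open questions: predict the patient's intended answer from new brain activity.
X_new = rng.normal(size=(5, 8))
answers = ["yes" if p == 1 else "no" for p in clf.predict(X_new)]
print(answers)
```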


How to Improve Data Management in the Supercomputers of the Future
Carlos III University of Madrid (Spain) (02/01/17)

Researchers at the Carlos III University of Madrid (UC3M) in Spain say they have launched Cross-Layer Abstractions and Run-time for I/O Software Stack of Extreme-scale Systems (CLARISSE), a new project aimed at increasing the performance, scalability, programmability, and robustness of data management software that can keep pace with next-generation supercomputers. Data management scientists currently are struggling to design software that can handle the advanced requirements of new, high-performance supercomputers. In less than 10 years, supercomputers are expected to be two orders of magnitude faster than current systems. CLARISSE will explore redesigns of computational infrastructures and management software that can adapt to the rapid growth in data processing requirements. "A radical redesign of the computational infrastructures and management software is necessary to adapt [data management] to the new model of science, which is based on the massive processing of data," says UC3M professor Florin Isaila. He notes that data management software historically has been developed in layers, with little coordination in the global management of resources. "Nowadays, this lack of coordination is one of the biggest obstacles to increasing the scalability of current systems," Isaila says. "With CLARISSE, we research solutions to these problems through the design of new mechanisms for coordinating the data management of the different layers."


Watching Computers Think
Fraunhofer-Gesellschaft (02/01/17) Anne Rommel

Researchers at the Fraunhofer Heinrich Hertz Institute (HHI) in Germany have developed a new method that enables them to watch the work of neural networks in reverse. "We can see exactly where a certain group of neurons made a certain decision, and how strongly this decision impacted the result," says Fraunhofer HHI's Wojciech Samek. The researchers already have demonstrated, via several tests, that the method works. For example, they compared two programs that are publicly available on the Internet and are both capable of recognizing horses in images. The first program recognized the horses' bodies, while the second program focused on the copyright symbols on the photos, which pointed to forums for horse lovers. "It is conceivable, for instance, that the operating data of a complex production plant could be analyzed to deduce which parameters impact product quality or cause it to fluctuate," Samek says. He notes the new method has other applications that involve the neural analysis of large or complex data volumes. "In another experiment, we were able to show which parameters a network uses to decide whether a face appears young or old," Samek says.
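The article does not name the underlying algorithm, but one generic way to trace a network's decision backward to its inputs is a layer-by-layer relevance pass. The sketch below applies a simple epsilon-style relevance rule to a tiny, hypothetical two-layer ReLU network; it illustrates the idea of running a network "in reverse" rather than Fraunhofer HHI's actual method.

```python
# Illustrative sketch only: the article does not give the algorithm's details. This
# shows one generic way to propagate a network's decision backward and see which
# inputs contributed to it, for a tiny hypothetical two-layer ReLU network in NumPy.
import numpy as np

rng = np.random.default_rng(1)
x = rng.random(6)                       # hypothetical input features (e.g., pixels)
W1, b1 = rng.normal(size=(6, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)

# Forward pass.
a1 = np.maximum(0, x @ W1 + b1)
scores = a1 @ W2 + b2
pred = int(scores.argmax())

def backprop_relevance(a, W, R, eps=1e-6):
    """Distribute relevance R at a layer's output back onto its inputs a."""
    z = a @ W
    s = R / (z + eps * np.sign(z + 1e-12))
    return a * (W @ s)

# Start with all relevance on the predicted class, then walk backward layer by layer.
R2 = np.zeros(2)
R2[pred] = scores[pred]
R1 = backprop_relevance(a1, W2, R2)
R0 = backprop_relevance(x, W1, R1)
print("per-input contribution to the decision:", np.round(R0, 3))
```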


1,000 Times More Efficient Nano-LED Opens Door to Faster Microchips
Eindhoven University of Technology (Netherlands) (02/02/17)

Interconnects within and between microchips increasingly are becoming a major bottleneck in the growth of data traffic. Although optical connections are seen as a potential solution, researchers have been unable to create a light source small enough to fit into the structure of a microchip and efficient enough to be a viable solution. Researchers at the Eindhoven University of Technology in the Netherlands have created a nano-light-emitting diode (LED) that is 1,000 times more efficient than its predecessors. One of the group's key innovations is its work integrating the light source and waveguide so less light is lost and more light can enter the waveguide. The nano-LED is integrated into a silicon substrate on a membrane of indium phosphide, which converts electrical signals into optical signals and can handle data speeds of several gigabits per second. The efficiency of the nano-LED currently is between 0.01 percent and 1 percent, but researchers plan to raise the device's performance with improved production methods. They think their nano-LED is a viable solution that will accelerate the growth of data traffic on chips, although they are cautious about the prospects.


Wearable AI System Can Detect a Conversation's Tone
MIT News (02/01/17) Adam Conner-Simons; Rachel Gordon

Researchers at the Massachusetts Institute of Technology (MIT) have developed a system that utilizes a wearable device and artificial intelligence to detect whether the tone of a conversation is happy, sad, or neutral based on a person's speech patterns and physiological activity. The prototype uses a Samsung Simband to collect physiological data, such as movement, heart rate, blood pressure, and skin temperature, while audio data is captured to analyze the speaker's tone, pitch, energy, and vocabulary during conversation. A neural network processes the mood of a conversation across five-second intervals, providing a "sentiment score." The team trained two algorithms using data gathered from 31 conversations of several minutes each. One algorithm classified the overall mood of a conversation as either happy or sad, and the other labeled each five-second increment as positive, negative, or neutral. The algorithm correctly identified several indicators of mood. For example, long pauses and monotonous vocal tones were associated with sad stories, while energetic and varied speech patterns correlated with happier stories. On average, the model was able to label the mood of each five-second interval with an accuracy that was 18 percent above chance, and 7.5 percent better than existing methods.
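The article does not detail the prototype's features or network architecture, but the pipeline it describes, splitting a conversation into five-second windows, scoring each window, and accumulating an overall sentiment score, can be sketched roughly as follows; score_window is a placeholder standing in for the trained model.

```python
# Illustrative sketch only: MIT's actual features and neural network are not detailed
# in the article. This shows the shape of the pipeline it describes: score each
# five-second window of the conversation and keep a running sentiment score.
def score_window(audio_features, vitals_features) -> int:
    # Placeholder for the trained classifier: +1 positive, 0 neutral, -1 negative.
    # A real model would use tone, pitch, energy, and vocabulary plus heart rate,
    # skin temperature, movement, and blood pressure, as described above.
    return 0

def sentiment_trace(windows):
    """windows: list of (audio_features, vitals_features) pairs, one per 5 seconds."""
    labels = [score_window(audio, vitals) for audio, vitals in windows]
    return labels, sum(labels)   # per-window labels and an overall sentiment score

labels, overall = sentiment_trace([([], []), ([], []), ([], [])])
print(labels, overall)
```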


First Ever Blueprint Unveiled to Construct a Large Scale Quantum Computer
University of Sussex (United Kingdom) (02/02/17)

An international research team led by Winfried Hensinger, head of the Ion Quantum Technology Group at the University of Sussex in the U.K., has introduced the first-ever industrial blueprint for a large-scale quantum computer. The blueprint features a component that enables actual quantum bits to be transmitted between individual quantum computing modules to realize a fully modular system that can reach nearly arbitrarily large computational processing power. The researchers say the creation of electrical fields enabling ions to be shuttled between modules facilitates connection speeds that are 100,000 times faster than current state-of-the-art fiber-link methods. The next phase of the project will involve using the blueprint to build a prototype quantum computer at Sussex. The initiative is part of the U.K. government's agenda to devise quantum technologies for industrial use. It utilizes the Sussex team's recent invention to replace billions of laser beams in a large-scale quantum computer with the application of voltages to a microchip. "Without doubt it is still challenging to build a large-scale machine, but now is the time to translate academic excellence into actual application, building on the U.K.'s strengths in this groundbreaking technology," Hensinger says.


Atlanta Falcons to Win Super Bowl, Says Pitt Researcher
University of Pittsburgh News (02/01/17) Anthony Moore

The Atlanta Falcons will defeat the New England Patriots in Super Bowl 51, according to University of Pittsburgh (Pitt) researchers, who have developed a method that analyzes in-game data from the past seven National Football League seasons to predict the winner of the Super Bowl and to study the effectiveness of coaching decisions. The study analyzed 1,869 regular and postseason games from 2009 to 2015 and identified key in-game factors that directly correlate with winning probability. The researchers used a probability model to create a Football Prediction Matchup (FPM) engine to compare teams. They then ran 10,000 simulations of the game and found the Falcons have a 54-percent probability of winning the Super Bowl. An expanded version of the study explored on-field decision making, and found coaches are overly conservative in key situations, which reduces the team's winning probability. "Increasingly, we are seeing NFL coaches and executives embracing analytics to improve their overall knowledge of the game and give them data-driven competitive advantages over their opponents," says Pitt professor Konstantinos Pelechrinis. "I believe this study is yet another step in that direction."
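The FPM engine itself is not described in the article, but the simulation step it reports is straightforward to sketch: feed two teams' in-game factors into a matchup model, simulate the game 10,000 times, and read off the win rate. The matchup model below is a placeholder that simply returns the article's 54 percent figure, not the researchers' actual engine.

```python
# Illustrative sketch only: the Pitt FPM engine's model is not described in the
# article. This shows the general approach it reports: estimate a single-game win
# probability from a matchup model, then run 10,000 Monte Carlo simulations.
import random

def matchup_win_probability(team_a_factors, team_b_factors) -> float:
    # Placeholder: a real model would weight in-game factors learned from 1,869 games.
    return 0.54

def simulate(team_a_factors, team_b_factors, n_games=10_000) -> float:
    p = matchup_win_probability(team_a_factors, team_b_factors)
    wins = sum(random.random() < p for _ in range(n_games))
    return wins / n_games

print(f"Estimated win probability: {simulate({}, {}):.2f}")   # ~0.54 for the Falcons
```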


How Florida Is Helping Train the Next Generation of Cybersecurity Professionals
The Conversation (01/31/17) Sri Sridharan

A cybersecurity personnel shortage compounded by ballooning global demand for cyber skills is a problem that can only be addressed by academic-industrial collaboration and partnerships, according to Sri Sridharan, managing director of the University of South Florida's Florida Center for Cybersecurity (FC2). He cites FC2's efforts as a collaborative model for other states and communities. All 12 members of Florida's public university system participate in FC2, and Sridharan notes academia is a crucial resource for cybersecurity professionals. "As many as 84 percent of cybersecurity job postings require a bachelor's degree, and 83 percent need at least three years of experience," he says. The multidisciplinary nature of cybersecurity requires people with a mix of technical and nontechnical skills, and Sridharan says universities are ideally positioned to meet this challenge. Such skills also must evolve with changing cyberthreats, spurring universities to overhaul their programs and classes to maintain their relevance and responsiveness. Sridharan says since FC2's launch, it "has helped to add more than 20 degree programs and graduate certificates in the various universities in the state university system." He also notes that another part of FC2's mission is building interest in the field to meet long-term demand for cybersecurity professionals.


Data Centers of the Future Could Send Bits Over Infrared Lasers Instead of Fiber Optic Cables
IEEE Spectrum (01/31/17) Amy Nordrum

Pennsylvania State University professor Mohsen Kavehrad says he has built a prototype replacement for a data center's fiber-optic cable. The system uses infrared lasers mounted to the top of server racks that beam data to photoreceptors, with instant beam redirection and system reconfiguration available via tiny movable mirrors. Kavehrad demonstrated his "Firefly" system can deliver data rates of 10 Gbps, using lasers to produce an infrared signal at a wavelength of 1,550 nanometers that is subjected to wavelength division multiplexing and then focused through an inexpensive lens onto another lens and photodiode receivers 15 meters away. The addition of the small mirrors powered by microelectromechanical systems creates a bidirectional connection. Kavehrad's team also transmitted a TV signal generated by feeding the entire 1 GHz cable TV band into the multiplexer, while at the other end they installed a TV to show the working channels. Kavehrad believes this setup could deliver bandwidth and throughput equal to if not better than current fiber-optic cables, routers, and switches, and it also should be able to accommodate terabytes of data. He notes with infrared lasers, data center operators should be able to reconfigure server racks to maximize energy efficiency.


The Numbers Don't Lie: Self-Driving Cars Are Getting Good
Wired (02/01/17) Alex Davies

The California Department of Motor Vehicles this week issued its annual list of disengagement reports from companies that received permits to test autonomous vehicles on public roads. Google's Waymo drove 636,000 miles in 2016 with 124 disengagements, down 19 percent from 2015. Most of the human interventions were the result of hardware or software discrepancies. General Motors' Cruise program recorded almost 10,000 miles, driving hundreds of miles without an incident by late last year. Delphi, which averaged 17.6 miles per disengagement, struggled with changing lanes in heavy traffic. Ford's two autonomous cars in California only drive on highways in good weather and road conditions, so the company's vehicles only reported three disengagements out of 590 miles driven. The goal of the reports is to create accountability for the new technology, but the logs are unscientific and offer an incomplete view of autonomous vehicles' safety and operating characteristics. Each disengagement involves several variables, which are recorded inconsistently. The logs do not note whether the cars are following a map or exploring an area for the first time, and they do not account for weather conditions and the proclivities of human operators.
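Normalizing the figures reported above to miles per disengagement makes the programs easier to compare. A quick calculation from the article's numbers (Waymo's 636,000 miles and 124 disengagements, Ford's 590 miles and three disengagements, and Delphi's reported 17.6 miles per disengagement) is shown below; it uses only the values cited in the article.

```python
# Miles per disengagement, computed from the figures cited in the article so the
# programs can be compared on the same footing. Delphi's rate is reported directly.
reports = {
    "Waymo": (636_000, 124),   # miles driven, disengagements
    "Ford": (590, 3),
}

for company, (miles, disengagements) in reports.items():
    print(f"{company}: {miles / disengagements:,.0f} miles per disengagement")
print("Delphi: 17.6 miles per disengagement (as reported)")
```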


Holograms From Sci-Fi Movies May Soon Become Reality
R&D Magazine (01/24/17) Kenny Walter

Researchers at the Australian National University (ANU) say they have developed a device that could generate the highest-quality holographic images ever produced and may lead to imaging technologies similar to those seen in science fiction films. "As a child, I learned about the concept of holographic imaging from the Star Wars movies," notes ANU's Lei Wang. "It's really cool to be working on an invention that uses the principles of holography depicted in those movies." Wang says the new invention produces the holographic images via infrared light. ANU professor Sergey Kruk says the device comprises millions of tiny silicon pillars up to 500 times thinner than a human hair. "This new material is transparent, which means it loses minimal energy from the light, and it also does complex manipulations with light," Kruk notes. "Our ability to structure materials at the nanoscale allows the device to achieve new optical properties that go beyond the properties of natural materials." Wang thinks the device could supplant bulky components to miniaturize cameras and save costs in space missions by reducing the size and weight of optical systems on spacecraft.


The Merging of Humans and Machines Is Happening Now
Wired.co.uk (01/27/17) Arati Prabhakar

The integration of human consciousness and machine capabilities is already underway, writes U.S. Defense Advanced Research Projects Agency (DARPA) director Arati Prabhakar. She envisions "a future in which humans and machines will not just work side by side, but rather will interact and collaborate with such a degree of intimacy that the distinction between us and them will become almost imperceptible." Prabhakar says symbiotic human-computer abilities are advancing thanks to innovations in adaptive signal processing and sensitive neural interfaces, machine reasoning, and complex systems modeling. She also notes such interaction is viewed with more concern than enthusiasm by the public. "What's drawing us forward is the lure of solutions to previously intractable problems, the prospect of advantageous enhancements to our inborn abilities, and the promise of improvements to the human condition," Prabhakar says. Challenges DARPA programs are trying to meet include research into modeling the underlying mechanisms of cancer and drug response, and building an analytical overview of factors contributing to global child malnutrition and obesity trends. Forces driving human-machine symbiosis include progress in neurotechnology, with potential applications in restoring damaged human functions and superhuman augmentations. Prabhakar says the changes wrought by human-computer integration will require a rethinking of individual identity, personal agency, and authenticity.


Abstract News © Copyright 2017 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]