Welcome to the June 12, 2017 edition of ACM TechNews, providing timely information for IT professionals three times a week.
ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.
Israel's Quantum Leap
Jerusalem Post; Judy Siegel-Itzkovich; June 12, 2017
The Hebrew University of Jerusalem's (HU) Quantum Information Science Center (QISC) has received a $2.1-million contract from the Israel Defense Ministry, which is responsible for creating a secure communications infrastructure, to construct a national quantum communications system demonstrator. The goal is to cultivate domestic Israeli expertise and technology for a system that prevents surveillance, protects data privacy, and secures national infrastructure. "This project...will position Israel in the leading edge of research toward ultimately secured communication systems," says HU professor Nadav Katz. QISC researchers will build a communication system in HU's laboratories based on single photons representing quantum bits. Founded in 2013, QISC comprises more than 20 scientists from physics, computer science, mathematics, chemistry, philosophy, and engineering, who are advancing Israel's understanding of quantum information science and the development of quantum technologies.
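The article does not name the protocol QISC will use, but single-photon quantum communication is commonly illustrated with BB84-style quantum key distribution, in which a shared secret key survives only where sender and receiver happen to choose the same measurement basis. A minimal classical simulation of that sifting step (all names hypothetical, no eavesdropper modeled):

```python
import random

def bb84_sift(n, rng):
    """Toy BB84 sketch: bits measured in matching bases become shared key bits.

    Returns (alice_key, bob_key); with no eavesdropper the two keys agree.
    """
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]  # 0 = rectilinear, 1 = diagonal
    bob_bases   = [rng.randint(0, 1) for _ in range(n)]
    # Bob's measurement: correct when bases match, a random bit otherwise.
    bob_bits = [bit if ab == bb else rng.randint(0, 1)
                for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    # Sifting: publicly compare bases and keep only the matching positions.
    keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in keep], [bob_bits[i] for i in keep]

rng = random.Random(0)
alice_key, bob_key = bb84_sift(32, rng)
assert alice_key == bob_key  # no eavesdropper, so the sifted keys match
```

The security of the real protocol rests on physics the sketch omits: measuring a single photon in the wrong basis disturbs it, so an eavesdropper introduces detectable errors into the sifted key.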
Self-Learning Robot Hands
Bielefeld University (Germany); June 8, 2017
Researchers at Bielefeld University in Germany have developed robot hands that can learn to grasp objects without previous knowledge of their characteristics, as part of a project at Bielefeld's Cluster of Excellence Cognitive Interaction Technology (CITEC). "Our system learns by trying out and exploring on its own--just as babies approach new objects," says Bielefeld professor Helge Ritter. The researchers are developing a two-handed robot whose appendages' shape and mobility are modeled after human hands. "The system learns to recognize such possibilities as characteristics, and constructs a model for interacting and re-identifying the object," Ritter says. The project integrates artificial intelligence research with other fields, and a human mentor instructs the robot in how it should handle objects. "The robot hands have to interpret not only spoken language, but also gestures," says CITEC's Sven Wachsmuth. "And they also have to be able to put themselves in the position of a human to also ask themselves if they have correctly understood."
Artificial Intelligence Can Now Predict Suicide With Remarkable Accuracy
Quartz; Olivia Goldhill; June 10, 2017
Vanderbilt University Medical Center's Colin Walsh and colleagues have developed machine-learning algorithms that can accurately predict suicide attempts, using data accessible from all hospital admissions. The researchers gleaned data on 5,167 Vanderbilt patients to train their computer to identify people at risk of attempted suicide versus individuals who committed self-harm but showed no evidence of suicidal intent. The team also built algorithms to anticipate attempted suicide among 12,695 randomly chosen patients with no documented history of suicide attempts. Trial results found the algorithms were 80-percent to 90-percent accurate in forecasting a person's attempted suicide within the next two years, and 92-percent accurate in predicting an attempt within the next week. Walsh's work raises ethical issues about the use of computer-based decisions in healthcare and the use of personal information in this field. "We mean well and built a system to help people, but sometimes problems can result down the line," Walsh says.
DeepMind Shows AI Has Trouble Seeing Homer Simpson's Actions
IEEE Spectrum; Jeremy Hsu; June 8, 2017
DeepMind has created Kinetics, a YouTube dataset of 300,000 video clips covering 400 human action classes, designed to train its deep-learning algorithms to better identify human activities and behaviors. With the assistance of online workers via Amazon's Mechanical Turk service, DeepMind identified and tagged the actions in the clips. Although algorithms trained on the dataset were about 80-percent accurate in classifying actions such as "playing tennis," "crawling baby," "cutting watermelon," and "bowling," their accuracy fell to about 20 percent or less for actions performed by the animated character Homer Simpson. Meanwhile, a preliminary study found the dataset to be fairly gender-balanced: neither gender dominated in 340 of the 400 action classes, and in the remaining classes gender could not be determined. DeepMind wants outside researchers to suggest new human action classes for the dataset, improving the identification accuracy of artificial intelligences trained on it.
New Computing System Takes Its Cues From Human Brain
Georgia Tech News Center; Josh Brown; June 8, 2017
Researchers at the Georgia Institute of Technology and the University of Notre Dame have created a computing system that utilizes a network of electronic oscillators to solve graph coloring tasks, a type of problem that tends to thwart modern computers. In creating a system that differs from conventional transistor-based computing, the researchers were inspired by the human brain, where processing is handled collectively, as in a neural oscillatory network, instead of with a central processor. The researchers found the electronic oscillators, fabricated from vanadium dioxide, exhibit a natural synchronization behavior that can be harnessed for graph coloring problems. When a group of oscillators was electrically connected via capacitive links, it automatically synchronized to the same frequency. Meanwhile, oscillators directly connected to one another would operate in different phases within the same frequency, and oscillators in the same group but not directly connected would sync in both frequency and phase.
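In the hardware, reading off which phase group each oscillator settles into amounts to assigning it a color such that directly connected oscillators differ. As a software illustration of the underlying problem class (not of the oscillator method itself), a minimal brute-force search for the fewest colors a small, hypothetical graph needs:

```python
from itertools import product

def chromatic_number(n, edges):
    """Smallest k such that all n vertices can be colored with k colors
    while every edge joins two differently colored vertices."""
    for k in range(1, n + 1):
        for coloring in product(range(k), repeat=n):
            if all(coloring[u] != coloring[v] for u, v in edges):
                return k
    return n

# A triangle: every vertex borders both others, so three colors are required.
assert chromatic_number(3, [(0, 1), (1, 2), (0, 2)]) == 3
# A square cycle is bipartite: alternating colors work, so two suffice.
assert chromatic_number(4, [(0, 1), (1, 2), (2, 3), (3, 0)]) == 2
```

The exhaustive search above scales exponentially with graph size, which is exactly why hardware that relaxes into a valid coloring physically is attractive for this problem family.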
How the Brain Recognizes What the Eye Sees
Salk News; June 8, 2017
Salk Institute researchers have analyzed how neurons in the V2 section of the brain respond to natural scenes, providing a better understanding of human vision processing that could improve self-driving cars and sensory impairment therapies. The researchers developed a statistical method that takes complex responses to visual stimulation in the brain and describes them in interpretable ways, which could help decode visual processing for machine vision systems. The researchers developed the model using publicly available data showing brain responses of primates watching movies of natural scenes. The team found V2 neurons process visual information according to three principles. First, they combine edges that have similar orientations, boosting robustness of perception to small changes in the position of curves forming object boundaries. Second, if a neuron is activated by an edge of a particular orientation and position, the orientation 90 degrees from it is suppressive at the same location. Finally, relevant patterns are repeated in space in ways that can help perceive textured surfaces and boundaries between them.
DARPA Taps Intel for Graph Analytics Chip Project
TOP500.org; Michael Feldman; June 7, 2017
The U.S. Defense Advanced Research Projects Agency (DARPA) has selected Intel to develop a graph analytics processor that promises to be 1,000 times faster than the best conventional technology. The work is being conducted as part of DARPA's Hierarchical Identify Verify Exploit (HIVE) program, whose goal is to develop and integrate new graph hardware and software technologies for accelerating Department of Defense analytics workloads. DARPA also has enlisted the Pacific Northwest National Laboratory, the Georgia Institute of Technology, Northrop Grumman, and Qualcomm Intelligent Solutions to assist with the project. The agency says the HIVE effort adds the demand of real-time support for cases in which streaming data needs to be analyzed on the fly. DARPA notes much of the work associated with the HIVE program will focus on components outside the processor cores themselves, especially in the development of a memory architecture that supports a multi-node non-uniform memory access (NUMA) model.
Researchers Image Quasiparticles That Could Lead to Faster Circuits, Higher Bandwidths
Iowa State University News Service; Mike Krapfl; June 7, 2017
Researchers at Iowa State University have for the first time used spectroscopic studies to record exciton-polaritons as resonance peaks or dips in optical spectra at room temperature. "We are the first to show a picture of these quasiparticles and how they propagate, interfere, and emit," says Iowa State professor Zhe Fei. He notes the generation of exciton-polaritons at room temperature and their propagation characteristics are significant for developing future applications for the quasiparticles. Fei says the technology could be used to build nanophotonic circuits to replace electronic circuits for nanoscale energy or information transfer. He thinks the large bandwidth of nanophotonic circuits could make them as much as 1 million times faster than current electrical circuits. The researchers found they could manipulate the properties of the exciton-polaritons by changing the thickness of the molybdenum diselenide semiconductor. "We need to explore further the physics of exciton-polaritons and how these quasiparticles can be manipulated," Fei says.
'Charliecloud' Simplifies Big Data Supercomputing
Los Alamos National Laboratory News; Nancy Ambrosiano; June 7, 2017
Researchers at Los Alamos National Laboratory (LANL) have developed Charliecloud, a program consisting of 800 lines of code to help supercomputer users operate in the high-performance environment of big data without burdening computer center staff with specific software needs. "Big data analysis projects need to use different frameworks, which often have dependencies that differ from what we have already on the supercomputer," says LANL's Reid Priedhorsky. "So, we've developed a lightweight 'container' approach that lets users package their own user-defined software stack in isolation from the host operating system." Charliecloud builds container images by working on top of the open source Docker product that users install on their own system to customize software choices. Users then import the image to the designated supercomputer and execute their application with the Charliecloud runtime. "This is the easiest container solution for both system administrators and users to deal with," says LANL's Tim Randles.
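The build-locally, run-on-the-cluster workflow described above can be sketched roughly as follows. Command names follow Charliecloud's documentation of the period and may differ across versions; the image name, paths, and application are hypothetical:

```shell
# On the user's own machine: build an image with Docker, then flatten it.
ch-build -t mystack ./mystack        # wraps "docker build"; Dockerfile lives in ./mystack
ch-docker2tar mystack /tmp           # export the image as a tarball, /tmp/mystack.tar.gz

# Copy the tarball to the supercomputer, then (no Docker daemon, no root needed):
ch-tar2dir /tmp/mystack.tar.gz /var/tmp          # unpack into a plain directory tree
ch-run /var/tmp/mystack -- ./my_analysis --input data.h5   # run inside the container
```

The design choice this illustrates is the separation of concerns: Docker, with its daemon and root privileges, stays on the user's workstation, while the cluster only needs the small unprivileged Charliecloud runtime.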
Novel Seismic Software Sheds Light on Earthquake Paths
R&D Magazine; Alexander Breuer; June 6, 2017
Researchers at the University of California, San Diego, the San Diego Supercomputer Center, and Intel have developed Extreme-Scale Discontinuous Galerkin Environment (EDGE), software that has powered the fastest seismic simulation to date. EDGE's bottom-up design simulates multiple quakes in one execution of the software, reducing modeling time and cost by exploiting similarities in the earthquake setups. Faster simulations increase the resolved frequency range in the simulated seismic wave field, which is one of the most important accuracy factors. Frequency is measured in hertz (Hz), and since buildings and other structures resonate at a wide range of frequencies, seismic simulations must cover a broad frequency band beyond 10 Hz. The software can fully exploit the latest generation of processors, achieving a performance of 10.4 PFLOPS on the Cori Phase II supercomputer at the National Energy Research Scientific Computing Center on the campus of Lawrence Berkeley National Laboratory, surpassing the previous seismic record of 8.6 PFLOPS set on China's Tianhe-2 supercomputer.
Researchers Land $3 Million to Build Cyberattack Defenses
The Newsstand (SC); Paul Alongi; June 6, 2017
Researchers at Clemson University are leading a team of computer experts from five universities with $3 million from the U.S. National Science Foundation to develop new defenses to protect data currently vulnerable to computer hacking. The researchers are developing S2OS, an operating system they say could fundamentally change how large computer and network systems are built, making the data they store and transmit more secure. The new platform could be transformative for cloud computing. Although traditional security mechanisms are often fragmented, hard to configure, and hard to verify, the new system would build security directly into computer infrastructure. "The proposed work will lower the total cost of ownership for clouds, further unlocking economic and environmental benefits, as well as improving the security of today's clouds," says Clemson's Hongxin Hu.
ASU Professor: AI Will Reshape World
ASU Now; Joe Kullman; June 6, 2017
In an interview, Arizona State University professor and Association for the Advancement of Artificial Intelligence (AAAI) president Subbarao Kambhampati says he expects a societal restructuring facilitated by artificial intelligence (AI). "We should remain vigilant of all the ramifications of this powerful technology and work to mitigate unintended consequences," Kambhampati notes. In general, Kambhampati thinks some people's view of AI freeing them from work so they can engage in more intellectual and creative pursuits is overly optimistic. He emphasizes the need to consider AI's societal effects, and to take care to ensure the technology moderates and does not exacerbate social ills that include "societal biases, wealth concentration, and social alienation." Kambhampati also says AI scientists should consider how best to inform research and development. "For AI systems to work with humans, they need to acquire emotional and social intelligence, something humans expect from their co-workers," Kambhampati says. "That's where human-aware AI comes into play."