Welcome to the November 11, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.
Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.
HEADLINES AT A GLANCE
The Fun Work of Technology Crystal Ball Gazing at SC16
HPC Wire (11/10/16) John Russell
Next week's International Conference for High Performance Computing, Networking, Storage and Analysis (SC16) in Salt Lake City, UT, will feature the first International Workshop on Post-Moore Era Supercomputing (PMES). Another panel, led by Jeffrey Vetter, director of Oak Ridge National Laboratory's Future Technologies Group and recipient of the 2010 ACM Gordon Bell Prize, will further explore the ideas generated at that workshop. Vetter says the panel will discuss "potential opportunities and challenges for post-Moore technologies from the perspective of the (high-performance computing) community." He notes the papers highlighted at the workshop will focus on subjects ranging from neuromorphic computing to quantum computing to performance modeling for PMES systems. Vetter cites neuromorphic computers as systems that can possess outstanding performance and energy efficiency. Meanwhile, scalable quantum computing remains an open research question that demands mission-critical applications and programming models. Vetter predicts complementary metal-oxide semiconductor devices will not become obsolete "until unseated by some disruptive technology." He also says an important shift in memory systems to non-volatile memory is occurring, partly due to cost and energy efficiency concerns. Vetter cites programming systems and application performance portability as the most vital challenges for the HPC community.
Is No Secret Safe? Lipreading Robot Proves More Accurate Than a Human in Deciphering Speech
Daily Mail (United Kingdom) (11/09/16) Ryan O'Hare
Researchers from the University of Oxford in the U.K. have developed LipNet, a new program they say is more accurate at reading lips than human experts. The researchers found LipNet can determine what people are saying by reading their lips 93.4 percent of the time, while the average accuracy of an experienced lipreader is about 52 percent. LipNet uses a neural network to map mouth movements of people to a database of set sentences. The researchers trained LipNet with nearly 29,000 videos of two men and two women giving a strange series of commands, with cryptic phrases such as "set blue by A four please." LipNet learned to match the movements of people's mouths with the known commands by analyzing each individual video frame. Going forward, the researchers want to train LipNet with more real-world examples. "Machine lipreaders have enormous practical potential, with applications in improved hearing aids, silent dictation in public spaces, covert conversations, speech recognition in noisy environments, biometric identification, and silent-movie processing," says University of Oxford doctoral researcher Yannis Assael.
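The core idea of matching observed mouth movements to known sentences can be sketched in miniature. This is not LipNet's actual architecture (which is a spatiotemporal deep neural network trained end to end); it is a toy nearest-neighbor match over hypothetical per-frame "mouth shape" codes, with made-up training data echoing the GRID-style command phrases mentioned above.

```python
# Toy illustration: lipreading as matching a frame sequence against
# known command sentences. Real systems learn this mapping with a
# deep neural network; here we use simple sequence similarity.
from difflib import SequenceMatcher

# Hypothetical training data: each known sentence is paired with a
# string of per-frame mouth-shape codes (entirely made up).
TRAINING = {
    "set blue by A four please": "s-e-t-b-l-u-b-a-f-o-r-p-l-z",
    "place red at C nine now":   "p-l-a-s-r-e-d-a-t-c-n-i-n-o",
}

def read_lips(observed_frames: str) -> str:
    """Return the known sentence whose frame sequence best matches."""
    def similarity(sentence):
        return SequenceMatcher(None, observed_frames,
                               TRAINING[sentence]).ratio()
    return max(TRAINING, key=similarity)

# A noisy observation of the first command still matches it.
print(read_lips("s-e-t-b-l-u-e-b-a-f-o-r-p-l-e-z"))
```

A learned model replaces the hand-coded similarity with features extracted from video frames, which is what lets LipNet generalize across speakers.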
AI Experts Build 'Neural Network' to Help Researchers Search for Dugongs
ABC Online (Australia) (11/09/16) Nick Wiggins
Postdoctoral researcher Amanda Hodgson and her team at Australia's Murdoch University are studying tens of thousands of photos of water captured by aerial drones to search for dugongs and gauge their population, size, and location. Hodgson is working with Queensland University of Technology artificial intelligence expert Frederic Maire to accelerate the study with an artificial neural network. The network was created by Maire and a colleague using TensorFlow software, and it was trained to spot dugongs by having researchers upload images and identify the animals. The network is now scanning scores of photos without human assistance, and the dugong sightings it makes are confirmed by Hodgson's team. Maire says its current 80-percent detection rate should improve. "To train this deep neural network you need a lot of data," he notes. "Initially, we didn't have much data, so we do this incrementally. The more the neural network is provided with examples, the better it will get."
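The incremental, human-in-the-loop workflow Maire describes can be sketched as a loop in which the network proposes sightings, researchers confirm them, and the confirmed examples grow the training set. The detector below is a stand-in (a threshold on a made-up score), not the team's TensorFlow convolutional network.

```python
# Sketch of human-in-the-loop incremental training: machine proposals
# are verified by researchers, and verified examples are fed back in.

def detect(image, threshold):
    """Stand-in detector: flags an image when its 'dugong score'
    clears the threshold. A real model scores actual pixels."""
    return image["score"] >= threshold

def incremental_round(images, threshold, confirmed):
    """One round: propose sightings, keep human-verified ones, and
    add them to the growing pool of confirmed training examples."""
    proposals = [im for im in images if detect(im, threshold)]
    verified = [im for im in proposals if im["is_dugong"]]  # human check
    confirmed.extend(verified)
    return len(verified), len(proposals)

images = [
    {"score": 0.90, "is_dugong": True},
    {"score": 0.70, "is_dugong": False},  # false positive to reject
    {"score": 0.85, "is_dugong": True},
    {"score": 0.20, "is_dugong": False},
]
confirmed = []
hits, flagged = incremental_round(images, 0.6, confirmed)
print(f"{hits} confirmed out of {flagged} flagged")
```

Each round's confirmed sightings become training data for the next, which is why Maire expects the 80-percent detection rate to improve as the dataset grows.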
Computers Made of Genetic Material?
Helmholtz-Zentrum Dresden-Rossendorf (11/09/16) Simon Schmitt
Researchers at Germany's Helmholtz-Zentrum Dresden-Rossendorf (HZDR) research laboratory and Paderborn University used DNA-based nanowires to conduct electricity via deposition of gold-plated nanoparticles, a step toward the development of circuits based on genetic material. The nanowires were generated by combining a long strand of genetic material with shorter DNA segments through the base pairs to cohere a stable double strand. The researchers say this enables the structures to self-assemble into the desired configuration. "With the help of this approach, which resembles the Japanese paper-folding technique origami...we can create tiny patterns," says HZDR's Artur Erbe. "Extremely small circuits made of molecules and atoms are also conceivable here." Erbe says the gold-plated nanoparticles were chemically bonded to the nanowires via electron beam lithography. "We could thus very precisely determine the charge transport through individual wires for the first time," he notes. Testing demonstrated the current carried by the nanowires depends on the ambient temperature, and Erbe's team intends to add conductive polymers between the gold particles to improve conductivity.
Exascale Computing Project Awards $34 Million for Software Development
Inside HPC (11/10/16)
The U.S. Department of Energy's (DoE) Exascale Computing Project (ECP) on Thursday announced 35 software development proposals that will receive a combined $34 million in first-year funding. ECP is a collaborative initiative between DoE's Office of Science and the National Nuclear Security Administration, and a component of President Barack Obama's National Strategic Computing Initiative. Covered by the awards are numerous elements of the software stack for exascale systems, including programming models and runtime libraries, mathematical libraries and frameworks, tools, lower-level system software, data management and input/output, and in situ visualization and data analysis. "These software development awards are a major first step toward developing a comprehensive and coherent software stack that will enable application developers to productively write highly parallel applications that can portably target diverse exascale architectures," says ECP director Paul Messina. The exascale ecosystem that is part of ECP's development agenda seeks to meet DoE's scientific and national security requirements in the early-2020s timeframe. "The funding of these software development projects, following our recent announcement for application development awards, signals the momentum and direction of ECP as we bring together the necessary ecosystem and infrastructure to drive the nation's exascale imperative," Messina says.
Neuroscience Researchers Restore Leg Movement in Primates
News from Brown (11/09/16) Kevin Stacey
Brown University researchers are part of an international scientific team that used a wireless "brain-spinal interface" to circumvent spinal cord injuries in two rhesus macaques, restoring their intentional walking movement. "The system we have developed uses signals recorded from the motor cortex of the brain to trigger coordinated electrical stimulation of nerves in the spine that are responsible for locomotion," says Brown professor David Borton. "With the system turned on, the animals in our study had nearly normal locomotion." The interface used an electrode array implanted in the primates' brains to record motor-cortex signals. A wireless neurosensor transmits the signals collected by the array to a computer, which deciphers and reroutes them to an electrical stimulator in the lumbar spine, below the area of injury. That stimulation, delivered in patterns coordinated by the decoded brain, is fed to the spinal nerves governing locomotion. The knowledge gleaned from trials with healthy primates was combined with spinal maps developed by Swiss researchers to identify neural spinal hotspots that control locomotion. Testing with injured macaques demonstrated the system's viability. "If we truly aim for neuroprosthetics that can someday be deployed to help human patients during activities of daily life, such untethered recording technologies will be critical," Borton says.
Giant Machine Shows How a Computer Works
University of Bristol News (11/08/16)
Researchers from the University of Bristol in the U.K. have built the Big Hex Machine, a giant, fully operational 16-bit computer that aims to help non-experts see how the mechanisms of computation work. The Big Hex Machine was built out of more than 100 specially designed four-bit circuit boards and will help teach students about the fundamental principles of computer architecture. The system will be used as part of this year's computer architecture unit and will enable students to be creative with what is traditionally seen as a complicated subject. The Big Hex Machine "demonstrates the principle used in all computers--general-purpose hardware controlled by a stored program," says Bristol professor David May. The wall-mounted computer measures more than eight square meters, and includes the processor, input and output devices, a custom-built light-emitting diode matrix, a Web-based application to control its operation, and a complete toolchain for students to write, build, and execute their own software. "Building such a machine was not a trivial task," says Bristol researcher Richard Grafton. "It's a result of a great collaboration between students and staff and a real testament to persistence, commitment, and teamwork."
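The principle May describes can be demonstrated in a few lines: instructions and data live in the same memory, and general-purpose hardware simply fetches, decodes, and executes whatever the memory holds. The tiny accumulator machine below is illustrative only; the Big Hex Machine's actual 16-bit instruction set differs.

```python
# A minimal stored-program machine: the program occupies the same
# memory the hardware reads, driven by a fetch-decode-execute loop.

def run(memory):
    acc, pc = 0, 0                        # accumulator, program counter
    while True:
        op, arg = memory[pc]              # fetch and decode
        pc += 1
        if op == "LOAD":    acc = memory[arg]
        elif op == "ADD":   acc += memory[arg]
        elif op == "STORE": memory[arg] = acc
        elif op == "HALT":  return memory

# Instructions at addresses 0-3, data at 4-6, in one shared memory.
memory = {
    0: ("LOAD", 4),
    1: ("ADD", 5),
    2: ("STORE", 6),
    3: ("HALT", 0),
    4: 20, 5: 22, 6: 0,
}
run(memory)
print(memory[6])  # 42: the stored program computed 20 + 22
```

Because the program is just data in memory, the same hardware can run any program, which is the general-purpose principle the wall-sized machine makes visible.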
Building the Foundation for AI-Enabled Computer Vision
Government Computer News (11/08/16) Mark Rockwell
Researchers from Sandia National Laboratories and the U.S. Intelligence Advanced Research Projects Activity are working on the Machine Intelligence from Cortical Networks (MICrONS) project to improve machine learning by combining neuroscience and data science to reverse-engineer the human brain's processes. The researchers want to develop algorithms that can recognize visual subtleties the human brain can instantly understand. In addition, Sandia officials are planning to judge the brain algorithm replication work of three university-led teams, which will map the complex wiring of the brain's visual cortex and produce algorithms that will be tested over the next five years. The teams will use different techniques to map the visual cortex, with the goal of generating new models of brain function. The five-year objective is for the researchers to create an artificial intelligence capability that can recognize and classify unknown objects. The MICrONS project is part of the White House's Brain Research Through Advancing Innovative Neurotechnologies "grand challenge," which aims to revolutionize understanding of the human mind and uncover new ways to treat, prevent, and cure brain disorders. The research also could be used to improve how computer algorithms perform, and how national security and intelligence analysts find patterns in massive datasets.
Deep Neural Network Learns to Judge Books by Their Covers
Technology Review (11/07/16)
Researchers in Japan have developed a machine-vision algorithm that can deduce a book's genre from its cover, a step toward an artificial intelligence system that could design book covers without human assistance. Researchers Brian Kenji Iwana and Seiichi Uchida at Japan's Kyushu University trained a deep neural network by first downloading 137,788 unique book covers from Amazon.com along with each book's genre. They then employed 80 percent of the dataset to train the four-layer, 512-neuron network to identify the genre from the cover image. An additional 10 percent of the dataset was used to validate the model, and then the algorithm was tested on the remaining 10 percent. Iwana and Uchida say the network listed the correct genre in its top three choices more than 40 percent of the time and identified the precise genre more than 20 percent of the time. "This shows that classification of book cover designs is possible, although a very difficult task," the researchers say. A comparison between the network's genre recognition performance and that of humans has yet to be conducted. A likely outcome of this experiment is the network being used to train machines to design book covers free of human input.
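The evaluation protocol described above (an 80/10/10 train/validation/test split, scored by top-1 and top-3 accuracy) can be sketched directly; the genre labels and ranked guesses below are made up for illustration.

```python
# Sketch of an 80/10/10 split and top-k accuracy scoring, the
# protocol used to evaluate the book-cover genre classifier.
import random

def split_80_10_10(items, seed=0):
    rng = random.Random(seed)
    items = items[:]
    rng.shuffle(items)
    n = len(items)
    a, b = int(0.8 * n), int(0.9 * n)
    return items[:a], items[a:b], items[b:]

def top_k_accuracy(ranked_predictions, labels, k):
    """Fraction of examples whose true genre is in the top-k guesses."""
    hits = sum(label in preds[:k]
               for preds, label in zip(ranked_predictions, labels))
    return hits / len(labels)

covers = list(range(137_788))            # stand-ins for cover images
train, val, test = split_80_10_10(covers)
print(len(train), len(val), len(test))   # 110230 13779 13779

# Toy ranked genre guesses for four test covers:
preds = [["scifi", "fantasy", "horror"],
         ["romance", "scifi", "drama"],
         ["cooking", "crafts", "travel"],
         ["history", "biography", "war"]]
labels = ["scifi", "drama", "travel", "poetry"]
print(top_k_accuracy(preds, labels, 1))  # 0.25
print(top_k_accuracy(preds, labels, 3))  # 0.75
```

Reporting top-3 alongside top-1 accuracy is what lets the researchers claim more than 40 percent and more than 20 percent, respectively, on this difficult task.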
Students Worldwide Competed to Improve Security Software in a Contest Led by UMD
Diamondback (MD) (11/06/16) Rachel Kuipers
The Build It, Break It, Fix It security contest, presented by the Maryland Cybersecurity Center and Booz Allen Hamilton, aimed to demonstrate to students how to build more secure software. The contest began Sept. 22 and included students from around the world, including those in the Coursera program, which enables students to take online courses from universities worldwide. Participants were instructed to design a secure data management platform that could store information and permit certain people to have varying levels of access. Build-It teams constructed the software in the first round, and Break-It teams looked to exploit faults in the software in the second round. The final round, which began Oct. 20 and ended Oct. 31, tasked teams with fixing any problems in their software. The three winning teams each received a portion of the $13,500 in prize money. "We want to make software security better [and] help developers who aren't security experts do a better job of writing secure software," says Michelle Mazurek, a University of Maryland cybersecurity professor and member of the Maryland Cybersecurity Center. "There's a gap between what seems like it should work and what actually should work in the real world."
Driverless-Vehicle Options Now Include Scooters
MIT News (11/07/16) Larry Hardesty
Researchers have developed an autonomous mobility scooter using the same sensor configuration and software in previous trials of autonomous cars and golf carts. The scooter was designed by researchers from the Massachusetts Institute of Technology's (MIT) Computer Science and Artificial Intelligence Laboratory, the National University of Singapore, and the Singapore-MIT Alliance for Research and Technology (SMART). Trials of the scooter demonstrated the control algorithms' ability to work indoors as well as outdoors. Low-level control algorithms enable the vehicles to respond immediately to changes in their environment, while route-planning, localization, and map-building algorithms enable them to determine their location and route. In addition, a scheduling algorithm allocates fleet resources, and an online booking system lets users schedule rides. Using the same control algorithms for scooters, golf carts, and cars means information acquired by one vehicle can be shared with others. Software uniformity also translates to greater flexibility in the allocation of resources. The researchers now are working on equipping vehicles with machine-learning systems to improve the performance of the navigation and control algorithms.
Brain 'Reads' Sentences the Same in English and Portuguese
CMU News (11/03/16) Shilo Rea
An international research team led by Carnegie Mellon University (CMU) found when the brain "reads" or decodes a sentence in English or Portuguese, its neural activation patterns are identical. The team used a machine-learning algorithm to comprehend the relationship between sentence meaning and brain activation patterns in English and then identify sentence meaning based on activation patterns in Portuguese. The study involved 15 native Portuguese speakers--eight bilingual in Portuguese and English--reading 60 sentences in Portuguese in a functional magnetic resonance imaging scanner. A computational model predicted which sentences the participants were reading in Portuguese, based on activation patterns. The model uses 42 concept-level semantic features and six markers of the concepts' roles in the sentence to identify brain activation patterns in English. The model anticipated which sentences were read in Portuguese with 67-percent accuracy. The brain images visualized the activation patterns for the 60 sentences in the same brain areas and at similar intensity levels for both English and Portuguese sentences. "Knowing this means that brain-to-brain or brain-to-computer interfaces can probably be the same for speakers of all languages," says CMU professor Marcel Just. He notes the study could potentially improve machine translation, brain decoding across languages, and second-language instruction.
Face Electrodes Let You Taste and Chew in Virtual Reality
New Scientist (11/04/16) Victoria Turk
Researchers at the National University of Singapore created a spoon embedded with electrodes that can amplify the salty, sour, or bitter flavor of real food, while a later project used thermal stimulation to mimic the sensation of sweetness. Users place the tip of their tongue on a patch of thermoelectric elements that are rapidly heated or cooled, tricking the thermally sensitive neurons that contribute to the sensation of taste. A separate team from the University of Tokyo produced a system that can emulate the dining experience by focusing on the texture and consistency of different foods. Electrodes are placed on the jaw's masseter muscle to replicate sensations of stiffness or chewiness as a user bites down. To give the virtual food a harder texture, the muscle is stimulated with a higher frequency, whereas longer electric pulses create an elastic consistency. Researchers plan to develop the system further by targeting additional muscles in the jaw. The University of Tokyo's Arinobu Niijima says taste and texture technologies could be combined to create a multisensory dining experience for people with dietary restrictions. "We wish to help them to satisfy their appetite and enjoy their daily life," Niijima says.
Abstract News © Copyright 2016 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]