Welcome to the December 9, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.
ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.
HEADLINES AT A GLANCE
EEG Reveals Information Essential to Users
Aalto University (12/08/16)
Researchers at Finland's Helsinki Institute for Information Technology (HIIT) and the Center of Excellence in Computational Inference have demonstrated for the first time that information can be retrieved using an electroencephalogram (EEG) interpreted with machine-learning software. "The aim was to study if EEG can be used to identify the words relevant to a test subject, to predict a subject's search intentions, and to use this information to recommend new relevant and interesting documents to the subject," says HIIT researcher Tuukka Ruotsalo. The experiment involved recording test subjects' EEG data as they read the introductions of Wikipedia articles of their own choice. The researchers then used the readings to model the keywords the subjects found interesting, relying on machine learning to cope with noise in the brain signals and to ensure reliable identification of relevance and interest. "It is impossible to react to all the information we see," says HIIT researcher Manuel Eugster. "And according to this study, we don't need to; EEG responses measured from brain signals can be used to predict a user's reactions and intent." The researchers suggest brain signals could be applied to the prediction of other Wikipedia content that would interest users.
Search Engines 'Could Help Young People Find Best Mental Health Resources'
University of Strathclyde (12/08/16)
Researchers from the University of Strathclyde in the U.K. have found search engines and content providers could help young people find the most reliable mental health resources online. Although thousands of websites and applications relating to mental health are available online, the researchers found that much of the most useful material is difficult to track down via traditional search engines. They also found young people have the worst access to mental health services of any group, despite their reputation for being tech-savvy. "Searches for mental health support may not lead to a health service site and they could find something which does not support them in a positive way," says Strathclyde professor Diane Pennington. "This could happen if they did a search which reflected the way they were feeling, or if they used a clinical term such as 'depressed.'" The researchers also found the best results often are not listed first in search results. Pennington says search engines and content providers could help solve this problem by considering how to make the most helpful material as visible as possible.
Further Improvement of Qubit Lifetime for Quantum Computers
Forschungszentrum Jülich (12/08/16) Angela Wenzik
An international team of researchers says it has further improved the lifetime of superconducting quantum circuits. High error rates associated with previously available quantum bits (qubits) have limited the size and efficiency of quantum computers, and the researchers, led by Gianluigi Catelani at Germany's Peter Grünberg Institute, found a way to prolong the time in which superconducting circuits can store a "0" or a "1" without errors. The team includes researchers from the Massachusetts Institute of Technology, Lincoln Laboratory, the University of California, Berkeley, the RIKEN Institute in Japan, and the Chalmers University of Technology in Sweden. The researchers developed and tested a technique that uses microwave pulses to temporarily remove unpaired electrons, known as quasiparticles, from the circuit. The process results in a threefold improvement in the lifespan of the qubits, and Catelani says the method "can in principle be put to immediate use for all superconducting qubits." Because the pulses remove the quasiparticles only temporarily, the particles flow back again and again; the researchers overcame this problem by combining the microwave pulses with another method that permanently traps the quasiparticles.
Microsoft, Code.org Target Beginner Coders With Minecraft Program
The Guardian (12/09/16) Peter Oluka
Microsoft and Code.org have released a new tutorial for Hour of Code, an annual campaign held during Computer Science Education Week to encourage more students to develop an interest in coding. The free online tutorial, called Minecraft Hour of Code Designer, lets users program their own Minecraft game and learn basic computing concepts. Players face 12 challenges, culminating in the creation of a simple game they can share with friends. The program builds on the success of last year's tutorial, which was played by more than 30 million students worldwide. With the immense popularity of Minecraft, Microsoft and Code.org believe the tutorial has the potential to inspire millions more to try coding. Female players make up nearly half of the game's fan base, reflecting Microsoft's efforts to promote computer science among traditionally underrepresented groups in the field, particularly women and minorities. "I am inspired by the 'Minecraft' generation who view themselves not as players of a game, but as creators of the new worlds they dream up," says Microsoft CEO Satya Nadella. "This is the generation that will imagine, build, and create our future, and together we can equip them with the computational thinking and problem-solving skills to seize the opportunities ahead."
Meet the World's First Completely Soft Robot
Technology Review (12/08/16) Julia Sklar
Harvard University researchers have created the "octobot," the first self-contained soft robot. It has no hard electronic components and moves without being tethered to a computer. The robot moves by having hydrogen peroxide pumped into two reservoirs in the middle of its body. Pressure pushes the liquid through tubes inside the body, where it eventually hits a line of platinum, catalyzing a reaction that produces a gas. The gas expands and moves through a tiny chip called a microfluidic controller, which alternately directs the gas down half of the octobot's tentacles at a time. The alternating release of gas makes the robot wiggle its tentacles and move around, and the octobot can move for about eight minutes on one milliliter of fuel. The researchers note the octobot is made out of materials that most microfluidics labs have on hand. Going forward, they want to add sensing and programming capabilities that would provide more control over the robot's movements.
Learning Words From Pictures
MIT News (12/06/16) Larry Hardesty
Researchers from the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL) this week presented a new, transcription-free method for training speech-recognition systems at the Neural Information Processing Systems (NIPS 2016) conference in Barcelona, Spain. Their system analyzes correspondences between thematically related images and spoken descriptions of those images, as captured in a collection of audio recordings, and learns by correlating acoustic properties with image characteristics. The system was built from two distinct neural networks: one takes images as input, while the other takes spectrograms, which represent audio signals as profiles of their component frequencies. The output of the top layer of each network is a 1,024-dimensional vector, and a final layer multiplies the corresponding terms of the two vectors together and sums them to produce a single number. The CSAIL researchers tested the system on a database of 1,000 images, each associated with a recording of a free-form verbal description. When the team fed the system one of the recordings and asked it to retrieve the 10 best-matching images, that set featured the correct image 31 percent of the time.
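The matching step described above, multiplying corresponding terms of the two 1,024-dimensional vectors and summing them, is a dot product, and retrieval amounts to ranking images by that score. A minimal sketch in Python with NumPy (the random vectors below are stand-ins for the networks' actual outputs, not the CSAIL models):

```python
import numpy as np

def similarity(image_vec, audio_vec):
    """Multiply corresponding vector terms and sum them (a dot product)."""
    return float(np.sum(image_vec * audio_vec))

def top_matches(audio_vec, image_vecs, k=10):
    """Rank a bank of image vectors against one audio vector; return the k best indices."""
    scores = image_vecs @ audio_vec          # one dot product per image
    return np.argsort(scores)[::-1][:k]      # highest scores first

rng = np.random.default_rng(0)
image_bank = rng.standard_normal((1000, 1024))  # stand-ins for 1,000 image embeddings
audio_query = rng.standard_normal(1024)         # stand-in for one recording's embedding
best10 = top_matches(audio_query, image_bank)
```

In the reported evaluation, a query counts as a success when the recording's associated image appears in this top-10 set, which happened 31 percent of the time.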
Lift Language Opens the Door to Cross-Platform Parallelism
InfoWorld (12/06/16) Serdar Yegulalp
Professors and students from the University of Edinburgh in the U.K. and Germany's University of Münster have proposed Lift, a new open source functional language for writing algorithms that run in parallel across a wide variety of hardware. Lift generates code in OpenCL, a programming framework that targets central-processing units (CPUs), graphics-processing units (GPUs), and field-programmable gate arrays (FPGAs), and automatically produces optimizations for each of these hardware types. Although OpenCL code can be specifically optimized to improve performance in different environments, those optimizations are not portable across hardware types, so code has to be optimized for CPUs and GPUs separately. As an intermediate language, Lift is intended to enable programmers to write OpenCL code by way of high-level abstractions that map to OpenCL concepts. When Lift code is compiled to OpenCL, it is automatically optimized by iterating through many possible versions of the code and then testing the actual performance of each. Lift's approach enables the same distributed application to run on a wider variety of hardware and to take advantage of heterogeneous architectures. Google is supporting the project.
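The generate-and-measure optimization loop described above is empirical autotuning: produce many variants of the same computation, time each, and keep the fastest. A toy sketch of that idea in Python (plain Python functions stand in for the generated OpenCL variants; this is an illustration of the search strategy, not Lift's actual compiler):

```python
import time

def autotune(candidates, args, repeats=5):
    """Time each candidate variant on the same inputs and keep the fastest."""
    best, best_time = None, float("inf")
    for fn in candidates:
        start = time.perf_counter()
        for _ in range(repeats):
            fn(*args)
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best, best_time = fn, elapsed
    return best

# Two functionally identical variants of the same reduction; the tuner
# selects whichever happens to run faster on this machine.
variant_a = lambda xs: sum(xs)
def variant_b(xs):
    total = 0
    for x in xs:
        total += x
    return total

fastest = autotune([variant_a, variant_b], ([1] * 10_000,))
```

Lift performs this search over semantically equivalent OpenCL kernels derived from one high-level program, which is what makes its optimizations portable across hardware.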
Taking Back Control of an Autonomous Car Affects Human Steering Behavior, Stanford Research Shows
Stanford News (12/06/16) Taylor Kubota
A research team at Stanford University tested the handover of driving from autonomous cars to human drivers and found such a shift can be difficult for people if conditions have changed since the last time they were at the wheel. "There is this physical change and we need to acknowledge that people's performance might not be at its peak if they haven't actively been participating in the driving," says lead researcher and former Stanford graduate student Holly Russell. Participants drove a 15-second track composed of a straightaway and a lane change, then let the car take over and return them to the start. After repeating this task three more times, they drove the course 10 additional times under altered steering conditions reflecting the changes in speed or steering response that may occur while the car drives itself. "Even knowing about the change, being able to make a plan and do some explicit motor planning for how to compensate, you still saw a very different steering behavior and compromised performance," notes Stanford researcher Lene Harbott. She says these experiments are only the beginning of a body of research that self-driving car designers will need to draw on to make automated vehicle handovers smoother and prevent accidents.
Wall-Jumping Robot Is Most Vertically Agile Ever Built
Berkeley News (12/06/16) Brett Israel
A small robot designed by roboticists at the University of California, Berkeley, has demonstrated the highest robotic vertical jumping agility ever recorded. The robot, Salto (saltatorial locomotion on terrain obstacles), can leap a meter into the air, spring off a wall, and perform multiple vertical jumps in a row. The researchers gave Salto its high vertical agility by adapting the energy-storing mechanism found in the tendons of the galago, a small primate. Salto's motor drives a spring, which loads via a leg mechanism to mimic the galago's crouch, enabling the robot to jump without winding up beforehand. To measure vertical agility, the researchers developed a new metric, defined as the height something can reach with a single jump multiplied by the frequency with which that jump can be made. Salto's vertical jumping agility is 1.75 meters per second, short of the galago's 2.24 meters per second. "By combining biologically-inspired design principles with improved engineering technology, matching the agile performance of animals may not be that far off," says Berkeley professor Ronald Fearing. The researchers believe Salto and similarly agile robots could eventually be used to jump around rubble in search-and-rescue missions.
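The agility metric is simple enough to state in code. A small sketch (the split of Salto's reported 1.75 m/s into a specific height and frequency below is an illustrative assumption, not a measured pair):

```python
GALAGO_AGILITY = 2.24  # meters per second, as reported for the galago

def vertical_agility(jump_height_m, jump_frequency_hz):
    """Vertical agility: height reachable in one jump times how often that jump can repeat."""
    return jump_height_m * jump_frequency_hz

# A 1-meter jump repeated 1.75 times per second yields Salto's reported 1.75 m/s.
salto_agility = vertical_agility(1.0, 1.75)
shortfall = GALAGO_AGILITY - salto_agility  # how far Salto trails the galago
```

The metric rewards robots that combine height with repetition rate, which is why Salto's no-windup spring loading matters as much as its raw jump height.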
Spikes in Search Engine Data Predict When Drugs Will Be Recalled
New Scientist (12/05/16) Chris Baraniuk
Microsoft researchers have trained an algorithm to predict whether a drug will be recalled, using queries made through the Bing search engine. Microsoft Research Israel's Elad Yom-Tov trained a machine-learning algorithm on hundreds of millions of drug-related search queries made during the first 240 days of 2015 to find trends that correlated with recalls of those drugs. The data included searches that mentioned one of more than 300 pharmaceutical names that were searched at least 1,000 times. The researchers tested the algorithm on data from the rest of 2015 and found it could predict recalls of specific drugs a day or two in advance. The system might be able to predict recalls a month into the future "with reasonable accuracy," according to Yom-Tov. Internet search data has been used to track other trends in healthcare; for example, Yom-Tov published a paper several years ago showing how an automated analysis of search queries about specific drugs could reveal previously unknown side-effects. "Internet search logs are a great source because that's real time, but they're very unpredictable," says the European Bioinformatics Institute's Sirarat Sarntivijai, who was not involved in the research.
No Peeking: Humans Play Computer Game Using Only Direct Brain Stimulation
UW Today (12/05/16) Jennifer Langston
A University of Washington (UW) experiment shows how humans can interact with virtual realities via direct brain stimulation. As part of a two-dimensional computer game, test subjects navigated simple mazes based on the presence or absence of phosphenes. The researchers provided the visual inputs, which are perceived as blobs or bars of light, using a magnetic coil placed near the skull to directly and noninvasively stimulate a specific area of the brain. The subjects made the right moves in the mazes 92 percent of the time when they received the input via direct brain stimulation, compared to 15 percent of the time when they lacked that guidance; they also improved over time, which suggests they were able to learn to better detect the artificial stimuli. Virtual reality currently makes use of displays, headsets, and goggles, but ultimately the brain is what creates our reality, says UW professor and director of the Center for Sensorimotor Neural Engineering Rajesh Rao. "We look at this as a very small step toward the grander vision of providing rich sensory input to the brain directly and noninvasively," Rao says. "Over the long term, this could have profound implications for assisting people with sensory deficits while also paving the way for more realistic virtual reality experiences."
Is Your Favorite Ballplayer Hitting When It Matters, or Just Padding His Stats?
Johns Hopkins University (12/05/16) Arthur Hirsch
Johns Hopkins University researchers have added to the field of baseball statistics with the first analysis of hitters' performance when their team is either almost guaranteed to win or so far behind that the game is out of reach, a scenario known as a Meaningless Game Situation (MGS). The researchers found some players can significantly improve their overall season statistics by maximizing their performance in those situations. The team views its data as a new kind of "split," similar to comparing players' performance in day games versus night games, or home games versus road games. The analysis is based on statistics from four Major League seasons, a sample of more than 9,600 games. The MGS standard applies to progressively smaller leads as the game unfolds: a situation qualifies if one team has a seven-run lead in the first inning, a six-run lead in the second through seventh innings, a five-run lead in the eighth inning, or a four-run lead in the ninth inning or later. Using that scale, the researchers determined that across all 30 Major League teams during the 2016 regular season, 21,089 plate appearances by 781 hitters qualified as MGS.
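The inning-by-inning thresholds translate directly into a classifier. A sketch in Python (treating each threshold as a minimum lead, which is an assumption about how the researchers applied their scale):

```python
def is_meaningless(inning, lead):
    """True if the absolute run lead meets the MGS threshold for the given inning."""
    if inning == 1:
        threshold = 7
    elif 2 <= inning <= 7:
        threshold = 6
    elif inning == 8:
        threshold = 5
    else:  # ninth inning or later
        threshold = 4
    return abs(lead) >= threshold
```

For example, a six-run lead qualifies as meaningless in the fifth inning but not in the first, reflecting how the standard shrinks as fewer innings remain to close the gap.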
U.S. Exascale Computing Update With Paul Messina
HPC Wire (12/08/16) Tiffany Trader
In an interview, Distinguished Argonne Fellow Paul Messina discusses stewardship of the Exascale Computing Project (ECP), which has received $122 million in funding (with $39.8 million to be committed to 22 application development projects, $34 million to 35 software development proposals, and $48 million to four co-design centers). Messina notes experiments now can be validated in multiple dimensions. "With exascale, we expect to be able to do things in much greater scale and with more fidelity," he says. Among the challenges Messina expects exascale computing to help address are precision medicine, additive manufacturing with complex materials, climate science, and carbon capture modeling. "The mission [of ECP] is to create an exascale ecosystem so that towards the end of the project there will be companies that will be able to bid exascale systems in response to [requests for proposals] by the facilities, not the project, but the typical DOE facilities at Livermore, Argonne, Berkeley, Oak Ridge, and Los Alamos," Messina says. In addition to a software stack that meets exascale application needs, Messina says there should be a high-performance computing (HPC) software stack "to help industry and the medium-sized HPC users more easily get into HPC." Messina also stresses the need for exascale computing to be a sustainable ecosystem.
Abstract News © Copyright 2016 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]