Association for Computing Machinery
Welcome to the December 30, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Please Note: In observance of the upcoming U.S. holiday, TechNews will not be published on Monday, Jan. 2. Publication will resume Wednesday, Jan. 4.

ACM TechNews mobile apps are available for Android phones and tablets, iPhones, and iPads.

HEADLINES AT A GLANCE


Expect Deeper and Cheaper Machine Learning
IEEE Spectrum (12/29/16) David Schneider

Machine-learning technologies are undergoing a transformation into products that experts predict will be less expensive and more focused on deep-learning calculations. "Everybody is doing deep learning today," says Stanford University professor William Dally. One popular approach is the use of application-specific integrated circuits (ASICs), such as Google's Tensor Processing Unit. Field-programmable gate arrays are another tool, with the benefit of reconfigurability as computing requirements change. However, the most common technique relies on graphics-processing units for the parallel execution of mathematical operations. Dally cites three distinct deep-learning hardware application areas. The first is "training in the data center," where many neuronal links are adjusted so an artificial neural network can perform an assigned task. The second is "inference at the data center," the continuous operation of cloud-based neural networks that have already been trained to perform a task. The third is "inference in embedded devices" such as smartphones, tablets, and cameras, which will likely be handled by low-power ASICs as smartphone apps are increasingly augmented by deep-learning software. Dally, recipient of the 2010 ACM/IEEE Eckert–Mauchly Award, notes software advances can quickly make hardware obsolete. "The algorithms are changing at an enormous rate," he says. "Everybody who is building these things is trying to cover their bets."
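
To make the training/inference distinction concrete, here is a minimal sketch (our illustration, not from the article) of a tiny network in Python with NumPy; the data, task, and network are invented:

    import numpy as np

    # --- "Training": adjust the neuronal links (weights) to fit a task ---
    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 4))            # toy inputs
    y = (X.sum(axis=1) > 0).astype(float)    # toy target task
    W = rng.normal(size=4)                   # the links being adjusted

    for _ in range(500):                     # gradient-descent loop
        p = 1.0 / (1.0 + np.exp(-X @ W))     # sigmoid prediction
        W -= 0.1 * X.T @ (p - y) / len(y)    # nudge weights toward the task

    # --- "Inference": the trained network runs with its weights frozen ---
    def infer(x):
        return 1.0 / (1.0 + np.exp(-x @ W)) > 0.5

    print(infer(np.array([1.0, 0.5, -0.2, 0.3])))

Training stresses throughput across many such weight updates, while embedded inference runs only the final, frozen function, which is why it maps well onto low-power ASICs.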


Divide and Conquer Pattern Searching
King Abdullah University of Science and Technology (12/28/16)

Researchers at the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia say they have developed a pattern- or graph-mining framework that promises to accelerate searches on massive network datasets by looking for recurrences. Extracting recurring patterns demands a major investment of time and computing resources, because graphs may contain hundreds of millions of objects and billions of relationships. "In essence, if we can provide a better algorithm, all the applications that depend on [frequent subgraph mining (FSM)] will be able to perform deeper analysis on larger data in less time," says KAUST research team leader Panagiotis Kalnis. His team has achieved a 10-fold speedup over existing FSM systems with its ScaleMine system. "FSM involves a vast number of graph operations...so the only practical way to support FSM in large graphs is by massively parallel computation," Kalnis says. ScaleMine searches in two steps: an approximation step that estimates the search space and the optimal division of tasks, followed by a computation step in which large tasks are dynamically divided into the optimal number of subtasks. "Hopefully this performance improvement will enable deeper and more accurate analysis of large graph data and the extraction of new knowledge," Kalnis says.
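
As a rough sketch of the two-phase idea (our illustration; ScaleMine's actual algorithms are not shown in the article, and all names below are invented), an approximation pass can size candidate tasks before an exact, parallel counting pass:

    from multiprocessing import Pool

    def estimate_cost(pattern):
        # Phase 1 (approximation): cheap estimate of how expensive
        # exact support counting for this candidate pattern will be.
        return len(pattern) ** 2  # placeholder heuristic

    def count_support(task):
        # Phase 2 (computation): exact, expensive counting for one
        # (pattern, graph-partition) task; placeholder for real matching.
        pattern, partition = task
        return len(pattern) * len(partition)

    def plan_tasks(patterns, partitions):
        # Split only the tasks the approximation flags as large.
        tasks = []
        for p in patterns:
            if estimate_cost(p) > 4:
                tasks += [(p, part) for part in partitions]  # subdivide
            else:
                tasks.append((p, list(range(8))))            # keep whole
        return tasks

    if __name__ == "__main__":
        patterns = [("a-b",), ("a-b", "b-c"), ("a-b", "b-c", "c-a")]
        partitions = [list(range(3)), list(range(5))]
        with Pool(4) as pool:
            print(pool.map(count_support, plan_tasks(patterns, partitions)))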


Election System Susceptible to Rigging Despite Red Flags
Associated Press (12/26/16) Michael Rubinkam; Frank Bajak; Tami Abdollah; et al.

The U.S. election system is fraught with vulnerabilities, including antiquated electronic voting machines that can be hacked without leaving a paper trail. According to many computer experts, the voting-fraud concerns raised by Green Party presidential candidate Jill Stein are warranted, given charges that Russia interfered in the 2016 election. Researchers want the U.S. to fully transition to computer-scanned paper ballots, while those seeking a recount want forensic analysis of a sample of the digital voting machines. Despite assurances from election officials that rigging a presidential election is close to impossible, scientists say it is within the realm of possibility. Rice University's Dan Wallach suggests a team of professional hackers could surgically strike select counties in a few battleground states where "a small nudge might be decisive." One strategy outlined by University of Michigan scientist J. Alex Halderman would involve attackers probing election offices in advance to find exploitable areas, injecting malware into machines in specific counties to shift a small portion of the vote, and then deleting any digital clues to the hack after changing the election counts. The push for voting upgrades is sluggish because the private sector refuses to fund them, as the market is tiny, while state and federal funding has dried up.


New Simulation Software Improves Helicopter Pilot Training
Technical University of Munich (Germany) (12/27/16) Stefanie Reiffert

Researchers at the Technical University of Munich (TUM) in Germany have developed simulation software to improve training for helicopter pilots via real-time computational analysis of both fluid mechanics and flight dynamics. TUM engineer Juergen Rauleder says simulators have failed to realistically model flying close to large objects because current programs for simulating wind conditions and equipment response follow a fixed pattern. Rauleder says the new numerical model his team developed "is extremely flexible and does not depend on stored flow data. We only have to enter the external conditions such as topography, global wind speeds, and the helicopter type. During the simulation, our algorithms use that data to continuously compute the interacting flow field at the virtual helicopter's current location." Rauleder notes the program also conveys to pilots the effect of local air flows on the helicopter, so they can experience the impact of their control movements in a stress-free environment. The TUM team has validated its real-time simulation against established reference models. The next step is real-world testing of the virtual models at sea, in cooperation with researchers at the U.S. Naval Academy, George Washington University, and the University of Maryland.
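
A highly simplified sketch of the loop the article describes (our illustration; the flow model below is a toy placeholder, not TUM's solver): each time step, recompute the local flow field from the external conditions at the aircraft's current position and feed it back into the flight dynamics:

    import numpy as np

    def local_flow(position, global_wind, terrain_height):
        # Toy stand-in for the real coupled flow computation: the
        # wind picks up an updraft component near the ground/obstacle.
        clearance = max(position[2] - terrain_height, 1.0)
        return global_wind + np.array([0.0, 0.0, 5.0 / clearance])

    position = np.array([0.0, 0.0, 50.0])  # helicopter position (m)
    velocity = np.array([10.0, 0.0, 0.0])  # helicopter velocity (m/s)
    dt = 0.02                              # 50 Hz real-time step

    for _ in range(1000):
        wind = local_flow(position, np.array([3.0, 0.0, 0.0]), 0.0)
        acceleration = 0.1 * (wind - velocity)  # toy drag-like coupling
        velocity += acceleration * dt
        position += velocity * dt

The key structural point is that no flow data is precomputed or stored: the field is evaluated fresh at the vehicle's current location every tick, which is what lets the simulator handle arbitrary topography and wind inputs.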


The Gender Gap in Computer Science Is Hurting U.S. Businesses
The Washington Post (12/27/16) Reshma Saujani; Julie Sweet

Women's share of the computer science workforce continues to fall despite efforts to expand computing education for children and young adults, according to research from Accenture and Girls Who Code. The computing-skills gap has widened, with 500,000 open computing jobs currently in the U.S. and fewer than 40,000 new graduates, 7,000 of whom are women. Women comprise less than 24 percent of the computing workforce, down from 37 percent in 1995, and that figure is expected to drop if changes are not made to the way computer science is introduced to girls. Research shows girls are 26 percent more likely to study computer science if they have a female teacher, while the teacher's gender does not matter for boys. Girls exposed to computer games at an early age also are four times more likely to enter the field. Once girls enter high school, it can be even more challenging to sustain their interest in computer science. It is important for girls to see female role models in the field, so female computer science teachers and mentors can help draw more girls into computing. To retain female computer science students in college, schools should partner with businesses and arrange on-campus speaking and mentorship programs featuring women.


Novel Hardware-Based Modeling Approach for Multi-Robot Tasks
ScienceDaily (12/29/16)

Researchers at the Moscow Power Engineering Institute in Russia have released new findings in the emerging field of multi-robot cooperative system design. The researchers propose a hardware-based modeling system for multi-robot collaborative tasks that concentrates on the development and deployment phases of algorithm and system creation. Their strategy accelerates implementation iterations, ultimately improving the communication capabilities of the robots under study. The researchers focus not only on the architecture and implementation of a research robot, but also on a communication system with parallel radio and infrared bidirectional data sharing, as well as on approaches to implementing a simulation toolchain. Thanks to advances on the general problems of single-robot control and basic multi-robot behavior, many researchers have turned to studying multi-robot coordination and deep cooperation behavior. Because the robots can perform all of the necessary algorithmic steps themselves, tightly coupling the modeling hardware with a simulation toolchain that transfers the full implementation of the algorithms onto the hardware can yield clear advantages. "The new methods are attractive, as they integrate different new ideas concerning the algorithm design process, event-driven robot software design, and an autonomous mobile research robot equipped with an advanced sensor subsystem," notes Radu-Emil Precup, a professor at the Politehnica University of Timisoara in Romania.
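
The article gives no implementation details, but as a rough illustration of "parallel radio and infrared bidirectional data sharing" (all names and structure below are invented), a robot's communication layer might expose two independent links and send state over both:

    class Channel:
        """One bidirectional link (radio or infrared)."""
        def __init__(self, name):
            self.name = name
            self.inbox = []

        def send(self, peer_channel, message):
            peer_channel.inbox.append((self.name, message))

    class Robot:
        def __init__(self, robot_id):
            self.robot_id = robot_id
            self.radio = Channel("radio")        # longer-range link
            self.infrared = Channel("infrared")  # short-range, line-of-sight

        def broadcast_state(self, peers, state):
            # Send the same state over both channels in parallel;
            # receivers can use whichever link delivered it first.
            for peer in peers:
                self.radio.send(peer.radio, (self.robot_id, state))
                self.infrared.send(peer.infrared, (self.robot_id, state))

    a, b = Robot("A"), Robot("B")
    a.broadcast_state([b], {"x": 1.0, "y": 2.0})
    print(b.radio.inbox, b.infrared.inbox)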


Brain Activity Is Too Complicated for Humans to Decipher. Machines Can Decode It for Us
Vox (12/29/16) Brian Resnick

University of California, Berkeley professor Jack Gallant and others are using machine learning to mine neuroscientific data and gain revolutionary insights into how the human brain functions. Scientists are using artificial intelligence to analyze the data and find complex activity patterns that predict human perception, with a wealth of potential applications such as treating brain diseases. In one experiment, a researcher in Gallant's lab demonstrated the traditional view of the brain's language-processing function is overly simplistic. He used functional magnetic resonance imaging (fMRI) to scan participants listening to a podcast, while an algorithm sought patterns in the data and produced an "atlas" mapping individual words to the brain regions that respond to them. Gallant says such research can be fed into improved brain models, enabling scientists to better understand what is happening when the brain is distressed. He notes machine learning could be used to find patterns to diagnose mental disorders, as well as to predict the onset of brain diseases so more effective therapies can be applied. Neuroscientists use machine learning either to encode (anticipate the pattern of brain activity generated by stimuli) or to decode (predict perception from observed brain activity). An example of the latter is an experiment in which a researcher reconstructed the faces participants were looking at solely from fMRI data.
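
To illustrate what "decoding" means in practice (a minimal sketch with synthetic data standing in for fMRI scans; not code from Gallant's lab), one can train a classifier to predict the stimulus category from voxel activity:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic stand-in for fMRI data: 100 scans x 500 voxels,
    # two stimulus categories (e.g., hearing word A vs. word B).
    rng = np.random.default_rng(0)
    voxels = rng.normal(size=(100, 500))
    stimulus = rng.integers(0, 2, size=100)
    voxels[stimulus == 1, :10] += 1.0    # category-specific signal

    # Decoding: predict what was perceived from the brain activity.
    decoder = LogisticRegression(max_iter=1000)
    decoder.fit(voxels[:80], stimulus[:80])
    print("decoding accuracy:", decoder.score(voxels[80:], stimulus[80:]))

Encoding models run the same mapping in the other direction, predicting the voxel pattern a given stimulus should evoke.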


NTU to Develop Traffic Management Solutions So Drones Can Fly Safely in Singapore's Airspace
Nanyang Technological University (Singapore) (12/28/16) Lester Kok

Researchers at Nanyang Technological University, Singapore (NTU Singapore) are developing a system that will enable unmanned aerial vehicles (UAVs) to navigate safely within Singapore's limited airspace. The traffic management system will include designated air lanes, blocks, and virtual fencing to coordinate drone traffic. Coordinating stations for UAV traffic could be established across Singapore to track UAVs, schedule traffic flows, monitor drone speeds, and ensure safe flying distances. Computer simulations will test various scenarios to optimize UAV traffic routes and minimize congestion. To ensure drones do not enter restricted airspace zones, researchers are testing geofencing, in which virtual fences around restricted locations, such as airports or military facilities, could automatically reroute UAVs. The researchers also will propose broader safety standards and regulations as the UAV market continues to grow. "At NTU, we have already demonstrated viable technologies such as UAV convoys, formation flying, and logistics, which will soon become mainstream," says NTU professor Low Kin Huat. "This new traffic management project will test some of the new concepts developed with the aim of achieving safe and efficient drone traffic in our urban airways."
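
As a minimal illustration of the geofencing idea (our sketch; the zones, coordinates, and rerouting rule are invented), a planner can test each waypoint against restricted circles and push violating points outside with a safety margin:

    import math

    RESTRICTED_ZONES = [            # (x, y, radius) in meters, invented
        (1000.0, 2000.0, 500.0),    # e.g., a circle around an airport
    ]

    def violates_geofence(x, y):
        return any(math.hypot(x - cx, y - cy) < r
                   for cx, cy, r in RESTRICTED_ZONES)

    def reroute(x, y, margin=50.0):
        # Naive reroute: push the waypoint radially out of the zone.
        for cx, cy, r in RESTRICTED_ZONES:
            d = math.hypot(x - cx, y - cy)
            if d < r:
                scale = (r + margin) / max(d, 1e-9)
                return cx + (x - cx) * scale, cy + (y - cy) * scale
        return x, y

    waypoint = (1200.0, 2100.0)
    if violates_geofence(*waypoint):
        waypoint = reroute(*waypoint)
    print(waypoint)   # now outside the restricted circle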


Investigations of the Skyrmion Hall Effect Reveal Surprising Results
Johannes Gutenberg University of Mainz (Germany) (12/27/16)

Researchers at Germany's Johannes Gutenberg University Mainz (JGU) and the Massachusetts Institute of Technology (MIT) say they have achieved a breakthrough in magnetic storage device technology. Their earlier work focused on structures that function as magnetic shift register or racetrack memory devices, offering low access times, high information density, and low energy consumption. The new research enables the billion-fold reproducible motion of special magnetic textures known as skyrmions between different positions. The experiments were conducted in vertically asymmetric multilayer devices whose broken inversion symmetry stabilizes the skyrmion spin structures. Skyrmions can be shifted by electric current and feel a repulsive force from the edges of the magnetic track and from single defects in the wire, so they can travel relatively undisturbed through the track. In addition, skyrmions move not only parallel to the applied current but also perpendicular to it, which defines an angle between the skyrmions' direction of motion and the current flow called the skyrmion Hall angle. The JGU/MIT team proved the billion-fold reproducible displacement of skyrmions is possible and can be realized at high velocities. Unexpectedly, the skyrmion Hall angle turned out to depend on the skyrmion velocity, meaning the components of the motion parallel and perpendicular to the current flow do not scale equally with the skyrmions' speed.
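
In equation form (the standard definition of a Hall angle, not a formula quoted from the article): if v_parallel and v_perp are the velocity components parallel and perpendicular to the current, the skyrmion Hall angle is

    \theta_{\mathrm{SH}} = \arctan\!\left( \frac{v_{\perp}}{v_{\parallel}} \right)

so the surprising result is that theta_SH changes with the skyrmions' overall speed, i.e., v_perp and v_parallel do not grow in proportion to each other.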


Why Connecting All the World's Robots Will Drive 2017's Top Technology Trends
The Conversation (12/27/16) Tom Garner

Technological developments in 2016 offer guidance as to how 2017 will unfold, characterized by the continued evolution of virtual and augmented reality, the arrival of an Internet for artificial intelligence (AI), and the emergence of personalized digital assistants, writes Tom Garner, a research fellow at the University of Portsmouth's School of Creative Technologies. He says virtual reality (VR) is currently on the brink of the "peak of inflated expectations," where hype surpasses reality and novelty is prioritized over quality. Some experts anticipate precipitous consumer disillusionment with VR while others expect a gradual decline, but the more persuasive argument is that mobile phone-based VR platforms will help the technology reach equilibrium next year. Meanwhile, augmented reality (AR) promises to make strides in 2017 through its versatility as a digital information delivery platform, potentially giving people new and enhanced ways to access essential content and services. Current investment in breakthroughs in the underlying AR technologies suggests the industry is preparing the hardware needed to realize this potential. Another forecast for 2017 is the domination of Internet of Things applications, as manifested in Cloud Robotics, in which robots performing specific tasks can share solutions to problems with one another. The convergence of these trends is expected to culminate in intelligent digital assistants that enable a naturalistic human-digital interface.


A Breakthrough in Miniaturizing Lidars for Autonomous Driving
The Economist (12/24/16)

Engineers at German chipmaker Infineon want to make laser-scanning systems viable and cost-effective for self-driving cars by shrinking them. The company is using a microelectromechanical system (MEMS) consisting of a small oval mirror set in a bed of silicon. The mirror is linked to actuators that use electrical resonance to make it oscillate laterally, changing the direction of the laser beam it reflects. Infineon says this enables the full power of the laser to be applied to scanning instead of its light being dispersed. The MEMS lidar is capable of scanning up to 5,000 data points from a scene each second and has a range of 250 meters, notes Infineon's Ralf Bornefeld. The device is expected to cost an automaker less than $250; in comparison, commercial lidar systems can cost about $50,000 each, while smaller, lower-powered systems are currently available for about $10,000. Bornefeld and others speculate future autonomous cars will employ multiple miniature lidars, radars, ultrasonic sensors, and digital cameras, combined into a "safety cocoon" around the vehicle. However, other autonomous vehicle developers, such as Tesla's Elon Musk, have rejected lidar in favor of cameras, radar, and ultrasonic systems, which they say are improving rapidly.
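
To make the scanning geometry concrete, here is a toy model (our own; the resonance frequency and deflection values are invented, not Infineon's specifications) of a resonantly oscillating mirror steering a beam across a scene:

    import math

    FREQ_HZ = 2000.0        # mirror resonance frequency (invented)
    MAX_TILT_DEG = 15.0     # peak mechanical tilt (invented)
    POINTS_PER_SEC = 5000   # measurement rate cited in the article

    for i in range(10):     # first 10 samples of the scan
        t = i / POINTS_PER_SEC
        mirror_tilt = MAX_TILT_DEG * math.sin(2 * math.pi * FREQ_HZ * t)
        beam_angle = 2 * mirror_tilt   # optical deflection is twice mechanical
        print(f"t={t*1e3:5.2f} ms  beam at {beam_angle:+7.2f} deg")

Because the mirror redirects the whole beam rather than spreading its light, each sample carries the laser's full power, which is what extends the range.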


New Approach Captures the Energy of Slow Motion
Penn State News (12/21/16) Walt Mills

Pennsylvania State University (PSU) researchers have developed a new approach for capturing energy currently wasted due to its characteristic low frequency. The team has designed a mechanical energy transducer based on flexible, organic, ionic diodes that could be used to turn low-frequency motion, such as human movement, wind, or ocean waves, into electricity. For example, the mechanical energy involved in touching the screen of a smartphone could be converted into electricity that can be stored in the device’s battery. The team envisions the ionic diode providing 40 percent of the energy required by the battery of next-generation smartphones. The device is composed of two nanocomposite electrodes with oppositely charged mobile ions separated by a polycarbonate membrane. Because the device is a polymer, it is both flexible and lightweight. The ionic diode's peak power density is generally larger than or comparable to those of piezoelectric generators operated at their most efficient frequencies. "Right now, at low frequencies, no other device can outperform this one," says PSU professor Qing Wang. Future work will entail further optimization and integration of the diodes into smartphones and tablets.


How Robots Will Change the American Workforce
The San Diego Union Tribune (12/21/16) Gary Robbins

Henrik Christensen, director of the University of California, San Diego's Contextual Robotics Institute, says in an interview he expects robots and automation to affect the U.S. workforce by bringing manufacturing jobs back from overseas, and also by displacing certain professions. He also predicts the eventual deployment of fully automated, driverless transportation in the U.S. will eliminate jobs such as truck and taxi driving. The unavoidable automation of jobs for unskilled workers spurs Christensen to ask, "Can we retrain those people fast enough for the new jobs that will be created in areas like manufacturing?" Another prediction he makes is that robots will evolve to the degree that they learn from the humans with whom they interact. "They're going to use potentially all of the data that's available about you," Christensen says. He also cites the absence of demonstrably hack-proof robots and broad societal ignorance about privacy, although the general public is beginning to pay attention to the issue. "Unfortunately, I think things have to get a lot worse before they get better," Christensen says.


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]