Association for Computing Machinery
Welcome to the January 15, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Please note: In observance of the Martin Luther King, Jr. Day holiday, TechNews will not be published on Monday, Jan. 18. Publication will resume on Wednesday, Jan. 20.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.


U.S. Proposes Spending $4 Billion on Self-Driving Cars
The New York Times (01/14/16) Bill Vlasic

The Obama administration on Thursday promised to accelerate regulatory guidelines for driverless cars and to invest in research to commercialize them. Speaking at the North American International Auto Show, U.S. Transportation Secretary Anthony Foxx said, "we are bullish on autonomous vehicles." He pledged the government will remove obstacles to their development, as well as set further guidelines within six months concerning the functions the vehicles must perform to be deemed safe. Foxx said the president's proposed budget for the next fiscal year will include $4 billion to underwrite research projects and infrastructure improvements associated with driverless cars. He cited autonomous vehicles' potential to reduce traffic accidents and improve road safety. Executives at Google and other firms developing the technology welcomed the announcement. "It takes real collaboration with our regulators so this is done right and done safely," notes General Motors' Mark Reuss. Foxx said the government is authorized to permit limited deployment of 2,500 driverless vehicles by a single company for a two-year period, and he called on firms to solicit interpretations of existing federal vehicle standards from regulators for new technologies under development. Foxx also emphasized liability issues and other matters related to autonomous cars that the government must address over the next six months.

Yahoo! Releases a Ton of Anonymized User Data to Help Machine-Learning Scientists
IDG News Service (01/14/16) James Niccolai

Yahoo! on Thursday released what it calls the "largest ever" data set to machine-learning researchers, totaling about 110 billion anonymized user interactions with news streams on sites such as Yahoo! News. The 13.5-terabyte dataset, which was accumulated over four months from 20 million Yahoo! users, is needed by researchers to test and improve models that guide machine-learning systems. Yahoo!'s Suju Rajan says artificially generated datasets lack the untidiness and unpredictable behavior typical of humans in their online interactions. "Real-world data is messy, it presents a lot of challenges, and those challenges aren't necessarily thought of when someone creates an artificial data set," she notes. "If you don't take my behavior into account, the algorithm you create might not work so well." Rajan thinks scientists will apply the data to help build more capable recommendation engines. She also envisions it driving other areas of research, such as information retrieval, social feed ranking, and systems engineering, by helping cloud providers decide how to process data as users engage with it. The dataset includes interaction data and demographic information for a user subset, as well as titles, summaries, and key phrases of the related news articles.

Microsoft Neural Net Shows Deep Learning Can Get Way Deeper
Wired (01/14/16) Cade Metz

A Microsoft research team won the ImageNet Large Scale Visual Recognition Challenge in December with a new approach to deep learning. The researchers designed a "deep residual network," a neural network that spans 152 layers of mathematical operations, compared to six or seven for typical designs. The researchers note the neural network is better at recognizing images because it can examine more features. The neural net suggests that in the years to come, companies such as Microsoft will be able to use graphics processing units and other specialized chips to significantly improve image recognition, as well as other artificial intelligence services, including speech recognition and even understanding language as humans naturally speak it. The neural network is designed to skip certain layers when it does not need them, but use them when it does. "When you do this kind of skipping, you're able to preserve the strength of the signal much further, and this is turning out to have a tremendous, beneficial impact on accuracy," says Peter Lee, Microsoft's head of research. Microsoft also designed a system that can help build these networks.
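The "skipping" Lee describes is, at its core, an identity shortcut added around a stack of layers. The following is a toy NumPy sketch of that residual-connection principle, not Microsoft's 152-layer network; the layer sizes and weights here are invented for illustration:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """One residual block: output = relu(F(x) + x).

    The identity shortcut (+ x) is the 'skip' that lets the signal
    bypass the block's two weight layers when they are not needed,
    preserving signal strength in very deep stacks.
    """
    out = relu(w1 @ x)    # first weight layer + nonlinearity
    out = w2 @ out        # second weight layer
    return relu(out + x)  # add the unmodified input back in

# If both weight layers are zero, the block passes relu(x) through
# unchanged, so stacking extra layers can never do worse than identity.
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
zero = np.zeros((8, 8))
assert np.allclose(residual_block(x, zero, zero), relu(x))
```

Because the shortcut carries the input forward untouched, gradients during training also flow back through it undiminished, which is why such networks can be made far deeper than earlier designs.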

New Lab to Give Nation's Researchers Remote Access to Robots
Georgia Tech News Center (01/13/16)

The Georgia Institute of Technology (Georgia Tech) is building the Robotarium, a new laboratory that will enable roboticists to conduct experiments remotely. Researchers from other universities, as well as middle school and high school students, will schedule experiments, upload their programming code, watch the robots in real time via streamed video feeds, and receive scientific data demonstrating the results. The lab is expected to house up to 100 ground and aerial swarm robots. "We need to provide more access to more people in order to continue creating robot-assisted technologies," says Georgia Tech professor Magnus Egerstedt. "The Robotarium will allow that." He also says the laboratory has the potential to build stronger networks of collaborative research, showing how remote access instruments can be applied to other areas beyond robotics. The U.S. National Science Foundation is helping to fund the project with two grants totaling $2.5 million. One of the grants will transform an existing classroom into the new lab, while the other will help create safe and secure open-access systems for the remote lab.

CCC Whitepaper--Smart Communities Internet of Things
CCC Blog (01/13/16) Helen Wright

The Computing Community Consortium (CCC) Computing in the Physical World Task Force has released another community whitepaper on Smart Communities Internet of Things (IoT). The whitepaper highlights the benefits and challenges of cyber technologies for the physical infrastructure and human stakeholders in smart cities, and offers recommendations. The task force says there is an urgent need for funding for basic computer science research, development, and deployment in order to create novel IoT solutions and the necessary cyberinfrastructures for smart cities. The whitepaper also says more money is needed to support the development of partnerships involving cities, academia, and industry in order to build IoT-experimental zones and testbeds, integrate existing IoT infrastructures, and develop new joint IoT cyberinfrastructures. There also is a critical need for continuous funding to keep embedded IoT infrastructures up to date, secure, and enabled for innovation. The task force says IoT solutions will need to last decades and continue operating as technology constantly changes.

Can Crowdsourcing Decipher the Roots of Armed Conflict?
Government Computer News (01/13/16) Stephanie Kanowitz

Researchers at Pennsylvania State University (PSU) and the University of Texas at Dallas, working as part of the Correlates of War project, are experimenting with crowdsourcing as a way to more quickly and inexpensively create a global conflict database that could help explain when and why countries go to war. The researchers also developed a new type of technology that uses machine learning and natural language processing to efficiently, cost-effectively, and accurately create databases from news articles that detail militarized interstate disputes. For the most recent iteration, the researchers used crowdsourcing instead of working with paid subject-matter experts, and found the results were similar but came in much faster and at lower cost. As news articles come in, the researchers pull them and formulate questions that help evaluate military events. The articles and questions are loaded onto a crowdsourcing platform and assigned to readers, and the project assigns the same article to multiple workers and uses algorithms to combine the data into one annotation. A systematic comparison of the crowdsourced responses with those of trained subject-matter experts found the crowdsourced work was accurate for 68 percent of the news reports coded. In addition, the aggregation of answers for each article showed common answers from multiple readers strongly correlated with correct coding, enabling the researchers to flag the articles that required deeper analysis.
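The combine-and-flag step described above can be sketched as a simple majority vote over workers' answers. This is a hypothetical stand-in for the project's actual aggregation algorithm, whose details the article does not give; the agreement threshold is invented:

```python
from collections import Counter

def aggregate_annotations(answers):
    """Combine several workers' answers for one article into a single
    annotation by majority vote, and flag low-agreement articles for
    deeper (expert) review.
    """
    counts = Counter(answers)
    label, votes = counts.most_common(1)[0]
    agreement = votes / len(answers)
    needs_review = agreement < 2 / 3  # hypothetical review threshold
    return label, agreement, needs_review

# Five workers coded the same news report:
label, agreement, flag = aggregate_annotations(
    ["use of force", "use of force", "use of force",
     "threat", "use of force"]
)
# Strong agreement (4 of 5), so the article is not flagged.
```

The same structure captures the correlation the researchers observed: when most readers give the same answer, the consensus label is likely correct, and only the low-agreement remainder needs expert attention.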

Brookhaven Lab Expands Computational Science Initiative
Brookhaven National Laboratory (01/12/16) Peter Genzer

The U.S. Department of Energy's Brookhaven National Laboratory has expanded its Computational Science Initiative (CSI), enabling it to further its research into big data challenges at experimental facilities and expand the frontiers of scientific discovery. Recent advances in computational science, data management, and analysis have been instrumental in helping Brookhaven's scientific programs at the Relativistic Heavy Ion Collider, the National Synchrotron Light Source (NSLS), and the Center for Functional Nanomaterials. "Our mission is to foster cross-disciplinary collaborations to address the next generation of scientific data challenges posed by facilities such as NSLS' successor, the new National Synchrotron Light Source II," says CSI director Kerstin Kleese van Dam. CSI will focus on research, development, and deployment of new methods and algorithms for analyzing and interpreting high-volume, high-velocity, heterogeneous scientific data created by experimental, observational, and computational facilities to accelerate and advance scientific discovery. A key aspect of the new initiative is the Computer Science and Mathematics effort, which will focus on fundamental research into novel methods and algorithms in support of hypothesis-driven streaming data analysis in high-data-volume and high-data-velocity experimental and computing environments. Meanwhile, CSI's Computational Science Laboratory (CSL) is a new collaborative institute for novel algorithm development and optimization. CSL will support the development of advanced simulation codes in materials science, chemistry, quantum chromodynamics, fusion, and large eddy simulations.

Four U.K. Universities Receive 4m Pounds to Drive IoT Sensor Development
(01/12/16) Roland Moore-Colyer

Imperial College London and the universities of Glasgow, Liverpool, and St. Andrews have formed the Science of Sensor Systems Software (S4) project, through which they will share expertise in computing, engineering, and mathematics. The group has received a 4-million-pound grant from the Engineering and Physical Sciences Research Council to develop sensor systems for use in smart cities, Internet of Things (IoT) networks, big data collection, and driverless cars. The grant will be used to develop new principles and techniques for sensor system software so researchers and policy makers can make better use of data harvested from sensor networks. S4 also aims to improve the accuracy of answers and data collected from expanding networks of sensors by improving the reliability of the systems. The researchers will look for a more unified approach to get the most out of the growing number of networked sensors being deployed, according to University of Glasgow professor Muffy Calder. By the end of the project, the researchers will have answered several fundamental questions about how to design, deploy, and reason about sensor-based systems. In addition, the researchers want to develop new principles, techniques, and tools to be used with simulations and physical sensor testbeds for experimentation.

Virtual Reality for Motor Rehabilitation of the Shoulder
Carlos III University of Madrid (Spain) (01/11/16)

Carlos III University of Madrid (UC3M) researchers have developed a virtual-reality (VR) system for motor rehabilitation of the shoulder. The prototype, which is equipped with a movement sensor, enables the user to do controlled exercises as part of a football (soccer) game. The system consists of software built on a multiplatform videogame engine combined with Intel RealSense, a movement sensor that was recently launched to developers, and Oculus Rift DK2 VR goggles, through which users can see the program and check which movements they are performing. "The patients act as goalkeepers in a football match and they have to stop the balls that are kicked, so they have to make exact movements," says UC3M researcher Alejandro Baldominos. The first version of the prototype was developed for use in rehabilitation centers. "To help maintain the correct position in each save, the patients see the reflection of their hand [with the rest of the arm hidden], which improves the effect of the proprioception, which is the sense that tells the body what position the muscles are in," Baldominos notes. Going forward, the researchers will carry out clinical trials and develop new programs that help rehabilitate other shoulder movements.

3D Mapping of Entire Buildings With Mobile Devices
ETH Zurich (01/13/16) Fabio Bergamin

ETH Zurich professor Thomas Schops and colleagues have developed software that they say makes it easy to create three-dimensional (3D) models of buildings. The software is designed to run on a new type of tablet computer from Google's Project Tango and can generate 3D maps in real time. The team's purely optical method is based on comparing multiple images taken on the tablet by a fisheye lens, and using the principle of triangulation in a manner similar to the way it is applied in geodetic surveying. The software analyzes two images of a building's facade, which were shot from different positions. For each pixel in an image, the software searches for the corresponding element in the other. The software can determine how far picture elements are from the device and can use this information to generate a 3D model of the object. Real-time feedback is possible because all of the calculations are performed directly on the tablet and the device has high processing power. Schops says the software also has potential applications in surveying entire districts and enabling cars to automatically detect the dimensions of parking spaces, as well as in virtual-reality computer games and augmented reality.
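The distance-from-matched-pixels step rests on the textbook triangulation relation: a point's apparent shift (disparity) between two images taken a known distance apart is inversely proportional to its depth. A minimal sketch of that relation, not the ETH Zurich team's actual code, with made-up numbers:

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Stereo triangulation: a scene point imaged from two positions
    separated by `baseline_m` shifts by `disparity_px` pixels between
    the images (assuming a focal length of `focal_px` pixels).
    Nearby points shift a lot; distant points barely move.
    """
    if disparity_px <= 0:
        raise ValueError("point must shift between the two images")
    return baseline_m * focal_px / disparity_px

# A facade element that shifts 40 px between two shots taken 0.2 m
# apart with a 400 px focal length lies 2 m from the camera.
d = depth_from_disparity(baseline_m=0.2, focal_px=400, disparity_px=40)
```

Repeating this for every matched pixel yields a depth map, which is the raw material for the 3D model; the real system additionally corrects for the fisheye lens distortion and the tablet's changing pose.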

Narrowing Gap Between Man and Machine
ASU News (01/12/16) Erik Wirtanen

Arizona State University professor Heni Ben Amor believes human-robot collaboration is important and wants to help robots better understand and respond to human behavior. Human skill is key for many tasks, but other tasks would benefit from the strength and agility of robots, according to Ben Amor. "To ensure safe interaction, autonomous robots need to include movements and actions of human partners into their decision-making process," he says. Ben Amor wants to investigate bi-manual grasping and manipulation in an effort to increase robots' dexterity to a level that matches the ability of humans using two hands and arms. He also is intrigued by machine learning involving robots and the learning capabilities of biological systems. Ben Amor wants to determine if robots can learn to solve tasks on their own, by employing a human-like trial-and-error strategy to acquire new motor skills. He also says he is particularly fascinated with the idea of having a robot read the manual for IKEA furniture and program itself to do the assembly.

U of T Computer Scientist Receives International Award for Pushing Frontiers of Knowledge
U of T News (01/12/16)

University of Toronto (U of T) professor Stephen Cook has won the BBVA Foundation Frontiers of Knowledge Award in the Information and Communications Technologies category for his work on computational complexity. In his career, Cook expanded on mathematician Alan Turing's concept of computability to include efficiency in order to understand which problems are worth trying to solve and which are not. In addition, Cook defined a class of "hardest problems," known as NP-complete, such that solving one efficiently would mean all other NP problems were similarly solvable. "[Cook's] work has had global impact and the fundamental results of his decades of research continue to be at the absolute forefront of theoretical computer science," says U of T professor Ravin Balakrishnan. Cook's seminal paper on the complexity of theorem-proving procedures had a great impact on how scientists think about which problems can be computationally solved in a reasonable amount of time. Cook also has made contributions to computational theory, algorithm design, programming languages, and mathematical logic. He received the ACM A.M. Turing Award in 1982, and his results are now among the essential theoretical foundations all computer science graduates must understand.
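The asymmetry at the heart of Cook's NP-completeness work can be seen in Boolean satisfiability (SAT), the problem his seminal paper proved NP-complete: checking a proposed solution is fast, but the only known general way to find one is exhaustive search. A small illustrative sketch (the formula here is an arbitrary example):

```python
from itertools import product

# A CNF formula as a list of clauses; each literal is
# (variable index, is_positive). This one encodes:
# (x0 or not x1) and (x1 or x2)
formula = [[(0, True), (1, False)], [(1, True), (2, True)]]

def satisfies(assignment, formula):
    """Verifying a proposed assignment is fast (polynomial time) --
    this is what makes SAT a problem in the class NP."""
    return all(any(assignment[v] == pos for v, pos in clause)
               for clause in formula)

def brute_force_sat(formula, n_vars):
    """Finding an assignment may require trying all 2^n candidates;
    whether that exponential search can always be avoided is the
    P vs. NP question Cook's work framed."""
    for bits in product([False, True], repeat=n_vars):
        if satisfies(bits, formula):
            return bits
    return None
```

Because Cook showed every NP problem reduces to SAT, a polynomial-time replacement for the brute-force search above would make every NP problem efficiently solvable, which is precisely the "hardest problems" property described above.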

NCAR Announces Powerful New Supercomputer for Scientific Discovery
National Center for Atmospheric Research (01/11/16) David Hosansky; Jeff Smith

The National Center for Atmospheric Research (NCAR) plans to install a new supercomputer this year at the NCAR-Wyoming Supercomputing Center (NWSC). Silicon Graphics International will build the 5.34-petaflop system, which has been named Cheyenne. The high-performance computer will be capable of more than 2.5 times the amount of scientific computing performed by NCAR's current Yellowstone supercomputer. DataDirect Networks will provide the centralized file system and data storage components. Cheyenne also will make use of Intel Xeon processors, whose performance will be augmented via optimization work that has been done by NCAR and the University of Colorado Boulder. Cheyenne is expected to be operational at NWSC in Cheyenne, WY, at the beginning of 2017. Cheyenne's new data storage system will be integrated with NCAR's GLADE file system, and will provide an initial capacity of 20 petabytes, expandable to 40 petabytes with the addition of extra drives. The new storage system also will transfer data at the rate of 200 Gbps, more than twice as fast as the current file system's rate of 90 Gbps. The U.S. National Science Foundation and the state of Wyoming are providing funding for Cheyenne, which will be used to study climate change, severe weather, geomagnetic storms, seismic activity, air quality, wildfires, and other geoscience topics.

Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.

To submit feedback about ACM TechNews, contact:
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe