Association for Computing Machinery
Welcome to the November 9, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, iPhones, and iPads.

HEADLINES AT A GLANCE


Toyota Invests $1 Billion in Artificial Intelligence in U.S.
The New York Times (11/06/15) John Markoff; Steve Lohr

Toyota on Friday announced a five-year, $1-billion investment to establish an artificial intelligence (AI) research laboratory in Silicon Valley called the Toyota Research Institute (TRI). The facility will initially concentrate on AI and robotics, investigating how humans move both indoors and outdoors so such knowledge can be applied to mobility for an aging population. The center also will prioritize technologies intended to improve driving safety, as opposed to a complete transition to driverless cars. Toyota's research focus reflects an industry-wide rush to develop AI products and services, spurred by successful commercial offerings after years of slow progress in the field's research labs. Recent years have also seen a race to recruit talented machine-learning scientists, 200 of whom Toyota plans to hire for the new AI lab. "The density of people doing this kind of work in Silicon Valley is higher than any other place in the world," says the lab's director, former U.S. Defense Advanced Research Projects Agency official Gill Pratt. He also says Toyota plans to incorporate AI technologies and data into its factory automation systems. "There may also be advances in robot perception, planning, collaboration, and electromechanical design from TRI that will translate into improvements in manufacturing robotics," Pratt notes.


Where Computing Hits the Wall: 3 Things Holding Us Back
Government Computer News (11/05/15) Mark Pomerleau

Jason Matheny, director of the U.S. Intelligence Advanced Research Projects Activity (IARPA), recently warned that significant challenges in three areas still need to be addressed before data analytics can come into its own. The first, according to Matheny, is the computing capacity and energy efficiency of high-performance computing facilities. He noted that, using current techniques and technologies, exascale computing would require a massive computing array consuming "hundreds of megawatts of power." More efficient models are needed, and Matheny said the National Strategic Computing Initiative signed by President Barack Obama is a step in the right direction. Matheny also noted significant advances in machine learning, but said the technology will need to develop even further to realize the potential of big data analytics. The final challenge he highlighted was teasing causality out of big data: current analytic models can only point out correlations, while determining causality remains beyond them. Matheny said that if those capabilities are not developed, "then all of those exaflop-scale machines and all of those learning algorithms may be of limited value."
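To put that power figure in perspective, here is a rough back-of-envelope estimate in Python; the efficiency number of about 10 gigaflops per watt is an assumption based on the most efficient systems of the period, not a figure from the article:

    # Rough estimate of exascale power draw at an assumed efficiency of
    # ~10 GFLOPS per watt (roughly the most efficient systems circa 2015).
    exaflops = 1e18            # floating-point operations per second
    flops_per_watt = 10e9      # assumed efficiency: 10 GFLOPS/W
    power_megawatts = exaflops / flops_per_watt / 1e6
    print(f"Estimated power draw: {power_megawatts:.0f} MW")  # ~100 MW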


How to Make Better Visualizations
MIT News (11/05/15) Adam Conner-Simons

Researchers from the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL) and Harvard University conducted a study to ascertain which aspects of visualizations make them memorable, understandable, and informative. They say the results can provide improved design principles for communications in various industries, as well as reveal more insights about the functions of human memory, attention, and comprehension. "By integrating...eye tracking, text recall, and memory tests, we were able to develop what is, to our knowledge, the largest and most comprehensive user study to date on visualizations," says CSAIL Ph.D. student Zoya Bylinskii. The methods were applied while subjects looked at a sample of almost 400 distinct visualizations culled from various media; participants were first shown each visualization for 10 seconds, and then their recall was tested while the researchers collected data on eye fixations to determine which components of the visualizations aided recall. According to the study, the strongest visualizations boast titles that employ concise, descriptive language to sum up the desired message. The research also suggests pictograms do not adversely impact memorability, and their appropriate usage can substantially improve information recall. The study also underscores the value of redundancy: strong visualizations offer multiple ways of conveying and labeling the same information.


New Software System Developed to Analyze Online Communication
USC News (11/05/15) Laura Paisley

University of Southern California (USC) researchers are working on the Text Analysis, Crawling, and Interpretation Tool (TACIT), a project that aims to make sophisticated and highly customizable text analysis available to social science researchers. The researchers say TACIT offers a suite of techniques that can be used for a variety of textual analyses. "We've created a very researcher-friendly environment where they can easily access and use these methods," says USC professor Morteza Dehghani. He notes the software has an open source, plugin architecture, which will help ensure its continued growth and adaptation amid rapidly changing technology and scientific needs. The architecture has three primary components that will help users find insights: a crawling plugin to allow for automated text collection from online sources and other content, a corpus management plugin to allow processing and storing of formal bodies of text, and analysis plugins to identify and classify text related to research topics. The USC team says TACIT should facilitate and encourage the use of advancements in computational linguistics in psychological research. "We wanted to bring state-of-the-art text analysis techniques to the social sciences and bridge the two fields of psychology and computer science," Dehghani says.
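The article does not describe TACIT's internals, but a minimal sketch of how a crawling/corpus/analysis plugin pipeline of this kind might be organized could look like the following; the class and method names are hypothetical, not TACIT's actual API:

    # Hypothetical sketch of a crawl -> corpus -> analysis plugin pipeline.
    # Names and interfaces are illustrative only, not TACIT's actual API.
    from abc import ABC, abstractmethod

    class CrawlerPlugin(ABC):
        @abstractmethod
        def crawl(self, source: str) -> list[str]:
            """Collect raw texts from an online source."""

    class CorpusManager:
        def __init__(self) -> None:
            self.documents: list[str] = []

        def add(self, texts: list[str]) -> None:
            self.documents.extend(texts)

    class AnalysisPlugin(ABC):
        @abstractmethod
        def analyze(self, corpus: CorpusManager) -> dict:
            """Identify and classify text related to a research topic."""

    class KeywordCounter(AnalysisPlugin):
        def __init__(self, keywords: list[str]) -> None:
            self.keywords = keywords

        def analyze(self, corpus: CorpusManager) -> dict:
            # Count how many stored documents mention each keyword.
            return {k: sum(k in doc.lower() for doc in corpus.documents)
                    for k in self.keywords}

The point of the plugin interfaces is that new crawlers or analyses can be added without touching the rest of the pipeline, which matches the article's emphasis on an open source, plugin architecture.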


Nomadic Computing Speeds Up Big Data Analytics
National Science Foundation (11/04/15) Aaron Dubrow

University of Texas at Austin professor Inderjit Dhillon, a 2014 ACM Fellow, concentrates on expediting big data analytics by using machine learning to reduce data to its most insightful parameters. His latest research, supported by the U.S. National Science Foundation (NSF), is a non-locking, stochastic multi-machine algorithm for asynchronous and decentralized matrix completion (NOMAD). Dhillon says the algorithm can extract meaningful information from data much faster than other cutting-edge tools, and can handle data sets that break other leading software. Problems Dhillon and his collaborators are exploring with NOMAD include topic modeling, in which the system automatically ascertains the appropriate topics related to billions of documents, and recommender systems, in which the system can suggest appropriate items to purchase or people to meet. Dhillon says NOMAD operates on the principle of distributing computations over different machines using asynchronous communication. "The parameters go to different processors, but instead of synchronizing this computation followed by communication, the nomadic framework does its work whenever a variable is available at a particular processor," Dhillon notes. NSF program director Amy Apon says the NOMAD approach helps clear a path to running machine-learning algorithms on massive-scale, distributed, commodity systems.
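As a rough illustration of the underlying computation, here is a simplified, single-process sketch of stochastic-gradient matrix completion, the problem NOMAD distributes; in NOMAD itself the item-parameter blocks circulate asynchronously among worker machines, which this sketch does not attempt to reproduce:

    import numpy as np

    # Simplified SGD matrix completion: approximate a partially observed
    # ratings matrix R by U @ V.T, updating one observed entry at a time.
    # In NOMAD, the rows of V would migrate asynchronously among workers;
    # here everything runs sequentially in one process.
    rng = np.random.default_rng(0)
    n_users, n_items, rank = 100, 80, 8
    U = 0.1 * rng.standard_normal((n_users, rank))
    V = 0.1 * rng.standard_normal((n_items, rank))
    observed = [(rng.integers(n_users), rng.integers(n_items), rng.uniform(1, 5))
                for _ in range(2000)]
    lr, reg = 0.05, 0.01
    for epoch in range(20):
        for i, j, r in observed:
            err = r - U[i] @ V[j]          # prediction error on this entry
            u_old = U[i].copy()
            U[i] += lr * (err * V[j] - reg * U[i])
            V[j] += lr * (err * u_old - reg * V[j])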


Google Tries to Make Machine Learning a Little More Human
Technology Review (11/05/15) Robert D. Hof

Google CEO Sundar Pichai says advances in machine learning will soon affect every product and service the company provides, and the company's researchers discussed some of these advances in a briefing at its headquarters last week. Research engineer Pete Warden used characters from the TV show "Star Trek: The Next Generation" to explain the company's progress, comparing Google's current machine-learning algorithms to the show's emotionless android, Data, and saying the company wants them to be more like the empathic Counselor Troi. For example, Google's researchers want the company's computer-vision algorithms to be able to see a scene of a cooked turkey and plates taken in late November, and make the human leap that this must be a scene of a Thanksgiving dinner. Another researcher, Maya Gupta, is working on a project to help machine-learning algorithms process outlying data points that could trip them up. For example, a person might look at a sample of images of houses of various values and conclude that value relates to size, but an outlier, such as a small house in a very expensive market, might cause a machine-learning algorithm to conclude that value correlates with some other factor, such as color.
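The house-price example can be made concrete with a toy regression showing how a single outlier can distort a fitted relationship; the numbers below are illustrative only and do not come from the article or from Gupta's work:

    import numpy as np

    # Fit price ~ size with and without one outlier: a small house in a
    # very expensive market. Illustrative numbers only.
    size = np.array([100., 150., 200., 250., 300.])   # square meters
    price = np.array([200., 300., 400., 500., 600.])  # thousands of dollars
    slope, _ = np.polyfit(size, price, 1)
    print(f"slope without outlier: {slope:.2f}")       # 2.00

    size_o = np.append(size, 60.)     # small house...
    price_o = np.append(price, 900.)  # ...in a very expensive market
    slope_o, _ = np.polyfit(size_o, price_o, 1)
    print(f"slope with outlier:    {slope_o:.2f}")     # collapses and even flips sign

A single such point is enough to make the fitted size coefficient meaningless, which is the kind of failure Gupta's project aims to make machine-learning systems more robust against.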


The Solution to Faster Computing? Sing to Your Data
University of Sheffield (11/03/15)

Sound could be the solution to faster computing, according to researchers from the universities of Sheffield and Leeds. Their research has shown that certain types of sound waves can move data quickly, using minimal power. Moreover, the direction of data flow depends on the pitch of the sound generated. The research marks the first time that surface acoustic waves--the same as the most destructive waves that can emanate from an earthquake--have been applied to a data storage system. The key advantage of using surface acoustic waves to move data bits down the wires of a solid-state drive is their ability to travel up to several centimeters without decaying, says Sheffield's Tom Hayward. "Because of this, we think a single sound wave could be used to 'sing' to large numbers of nanowires simultaneously, enabling us to move a lot of data using very little power," he notes. The team is now working to develop prototype devices.


Fast and Efficient Detection of Hand Poses Could Lead to Enhanced Human-Computer Interactions
A*STAR Research (11/04/15)

Researchers from A*STAR's Bioinformatics Institute say they have developed a program that can detect three-dimensional (3D) hand gestures in real time. The program breaks down the process of extracting correct 3D hand poses from a single image into two steps, simplifying the computational demands for a computer. The method first determines the general position of the hand and wrist, and then establishes the palm and individual finger poses using the anatomy of a human hand as a guide to narrow the computation options. The researchers say their approach is more efficient and faster than competing techniques. "We can estimate three-dimensional hand poses efficiently with processing speeds of more than 67 video frames per second and with an accuracy of around 20 millimeters for finger joints," says A*STAR's Li Cheng. He thinks the technology has the potential to improve human-computer interactions. "For example, instead of the touchscreen technologies used in current mobile phones, future mobile phones might allow a user to access desired apps by simply presenting unique hand poses in front of the phone, or by typing on a 'virtual' keyboard without having to access a physical keyboard," Cheng says. The researchers also note the technology could enable people to use hand signals to interact with and control mobile devices, laptops, or even robots.


Life, Enhanced: UW Professors Study Legal, Social Complexities of an Augmented Reality Future
University of Washington News and Information (11/03/15) Peter Kelley

University of Washington (UW) researchers working in the Tech Policy Lab recently released their first official study aimed at a policy audience. The report is based on a method the Tech Policy Lab designed for evaluating new technologies, which involves first conferring with those in the computer science field to define augmented reality as precisely as possible. The researchers then drew on the humanities and social sciences, convening diversity panels to consider the impact of the technology on various end users. The researchers say these panels help ensure underrepresented groups are highlighted in a way that makes sense to those developing technology and its governing policies. "They also are important in that they increase the likelihood that the people who develop such policies get to hear and consider alternate points of view, concerns, and visions as they design and develop technology policies," says UW researcher Lassana Magassa. The researchers sorted issues related to augmented reality into two basic categories: those relating to the collection of information, and those relating to its display. The researchers arrived at a set of recommendations for policy makers that "do not purport to advance any particular vision, but rather provide guidance that can be used to inform the policymaking process." The recommendations include building dynamic systems, conducting threat modeling, coordinating with designers, consulting with diverse potential users, and acknowledging trade-offs.


Get Smart
UVA Today (11/02/15) Elizabeth Thiel Mather

Researchers at the University of Virginia (UVA) and the University of Michigan have developed ThermoCoach, a system they say could lead to the next generation of home thermostats. ThermoCoach uses sensors, such as motion and Bluetooth sensors, to monitor the occupancy patterns of the people in the home and then provide suggestions about optimal heating and cooling schedules based on the sensors' data. The suggestions come in the form of emails prompting users to make small adjustments to the thermostat for modest savings, or more drastic adjustments for larger savings. The key to ThermoCoach is that the homeowner decides whether and how to act on the information. The researchers conducted a study involving 40 homes in the Charlottesville, VA, area. Homes using ThermoCoach were compared with homes in which people manually programmed their thermostats, and with homes in which thermostats were fully automated. The study found the ThermoCoach-equipped homes saved significantly more energy than those with manually programmed thermostats, and conserved, on average, 12 percent more energy than homes with a fully automated thermostat. UVA professor Kamin Whitehouse says ThermoCoach combines ease of use with the element of human control, and participants in the initial study responded well to the technology. The researchers next plan a longer study covering multiple seasons, which will enable them to observe how people respond to ThermoCoach over time.


Secure Wireless Key Distribution Verified Within a Real Outdoor Environment
Kazan Federal University (Russia) (10/30/15) Yevgeniya Litvinova

Researchers from Kazan Federal University in Russia say they have conducted the first experimental verification of secure wireless key distribution. The team observed random variations from carrier-phase fluctuations in the signal received between two legitimate nodes sharing a common multipath channel, with the nodes placed in moving cars in a real outdoor environment. The researchers note measurements of the signal's carrier phase could be better suited to cryptographic applications than the commonly used received signal strength indication (RSSI): the probability distribution of carrier phase is usually closer to uniform than that of RSSI. Moreover, the RSSI-based method is mainly limited to fixed link-length communication scenarios. In addition, phase measurements are inherently ambiguous, which makes key interception more difficult in practice. Kazan professor Arkadiy Karpov says the team achieved modest key generation rates in practice, but he believes the method has great potential for secure wireless key distribution between a base station and a mobile subscriber in a cellular communications scenario.
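A simplified sketch of the general idea behind channel-based key extraction, quantizing shared phase observations into bits, is shown below; it is a generic illustration, not the Kazan team's actual protocol:

    import numpy as np

    # Generic illustration: two nodes observing highly correlated carrier-phase
    # fluctuations quantize them into key bits. Not the Kazan team's protocol.
    rng = np.random.default_rng(1)
    shared_phase = rng.uniform(0, 2 * np.pi, 256)    # common channel randomness
    noise = 0.05                                     # per-node measurement noise (radians)
    alice = (shared_phase + noise * rng.standard_normal(256)) % (2 * np.pi)
    bob = (shared_phase + noise * rng.standard_normal(256)) % (2 * np.pi)

    def quantize(phase):
        # One bit per sample: 0 if the phase falls in [0, pi), 1 otherwise.
        return (phase >= np.pi).astype(int)

    key_a, key_b = quantize(alice), quantize(bob)
    print(f"bit agreement before reconciliation: {np.mean(key_a == key_b):.2%}")

In a real system the two nodes would follow this step with information reconciliation and privacy amplification to remove remaining bit disagreements and any partial information available to an eavesdropper.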


Researchers Threatened a Robot With a Knife to See If Humans Cared
Motherboard (11/03/15) Emiko Jozuka

A recently published study has added to a growing body of evidence that human beings will readily empathize with human-like robots. Researchers from Toyohashi University of Technology in Japan showed 15 volunteers 56 different color photographs from a first-person perspective depicting either a human hand or a human-shaped robot hand in a variety of painful and non-painful situations. The photos included pictures of a knife cutting a human finger or the robot finger and pictures of the knife at a safe distance from the human or robot hand. The researchers monitored the volunteers' neurological reactions to these photos using electroencephalography devices, and they found the human volunteers had similar empathic neural responses to pictures of both the human hand and the robot hand being harmed. The researchers attributed this response to the fact that the robot hand resembled a human hand. "Humans can attribute humanity to robots and feel their pain because the basic shape of the robot hand in the present study was the same as that of the human hand," they say. The team plans to continue their research by determining if empathic responses will change when volunteers are presented with images of less human-like robot hands.


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe