Association for Computing Machinery
Welcome to the May 27, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Please note: In observance of the Memorial Day holiday, TechNews will not be published on Monday, May 30. Publication will resume Wednesday, June 1.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.


Artificial Intelligence Is Far From Matching Humans, Panel Says
The New York Times (05/25/16) John Markoff

A panel of legal and technology experts on Tuesday discussed artificial intelligence (AI) systems at an event hosted by the White House Office of Science and Technology Policy (OSTP). The panel concluded research has far to go before such systems can equal the human mind's flexibility and learning capability. "The AI community keeps climbing one mountain after another, and as it gets to the top of each mountain, it sees ahead still more mountains," says OSTP's Ed Felten. When 25 AI researchers convened seven years ago under the sponsorship of the Association for the Advancement of Artificial Intelligence, the group concluded there was no imminent danger of autonomous weapons or of advanced economies in which machines have replaced workers. Some researchers at Tuesday's event said the public perception of AI as an existential threat is largely promulgated by the movie industry. "It's pretty much always the case in science fiction that AI is this monolithic entity that is scheming to take over," says Allen Institute for Artificial Intelligence CEO Oren Etzioni. Microsoft researcher Kate Crawford says the industry should emphasize ethics to engineers in training to lower the chances of increasingly pervasive AI systems being used for unintended purposes.
View Full Article - May Require Free Registration | Return to Headlines | Share Facebook  LinkedIn  Twitter 

ORNL Researchers Create Framework for Easier, Effective FPGA Programming
HPC Wire (05/24/16) John Russell

Oak Ridge National Laboratory (ORNL) researchers this week detailed their work on a high-level programming framework designed to boost the ease and efficacy of coding for field-programmable gate arrays (FPGAs) at the International Parallel & Distributed Processing Symposium (IPDPS) in Chicago. "We implemented this prototype system using our open source OpenARC compiler; it performs source-to-source translation and optimization of the input OpenACC program into an OpenCL code, which is further compiled into a FPGA program by the backend Altera Offline OpenCL compiler," the researchers note. Substantially ameliorating FPGAs' persistent performance and portability problems could create application opportunities in different computing environments. The ORNL researchers note future exascale systems must meet multiple criteria, including reliability, energy efficiency, and high performance on mission applications. They report the initial testing of their framework "helps support the goal of using a directive-based, high-level programming strategy for performance portability across heterogeneous [high-performance computing] architectures." The researchers say the most significant aspects of their work include the first implementation of a standard and portable directive-based, high-level programming system for FPGAs; the proposal of FPGA-specific OpenACC compiler optimizations and novel pragma extensions to enhance performance; and empirical assessment of the OpenACC-to-FPGA compiler framework using eight OpenACC benchmark applications.

DARPA Wants to Find the Vital Limitations of Machine Learning
Network World (05/26/16) Michael Cooney

The U.S. Defense Advanced Research Projects Agency's (DARPA) Fundamental Limits of Learning (Fun LoL) program is seeking the basic limitations of machine-learning systems so the quest for the ultimate learning machine can be quantified and tracked systematically. "The process of advancing machine learning could no doubt go more efficiently--but how much so?" DARPA asks. "To date, very little is known about the limits of what could be achieved for a given learning problem or even how such limits might be determined." DARPA is using the Fun LoL program to search for insights into mathematical frameworks, architectures, and methods that would help resolve numerous issues. Such issues include the number of examples needed to train a system to a given level of accuracy, the efficiency of a given learning algorithm for a given problem, and the potential benefits due to the statistical structure of the model producing the data. "What's a fundamental theoretical framework for understanding the relationships among data, tasks, resources, and measures of performance--elements that would allow us to more efficiently teach tasks to machines and allow them to generalize their existing knowledge to new situations?" says DARPA's Reza Ghanadan.
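One of the quantities Fun LoL asks about--the number of examples needed to reach a given accuracy--can at least be measured empirically. The following Python sketch is purely illustrative (it is not a DARPA artifact): it traces a learning curve for a toy learner that fits a single decision threshold on the unit interval, where the task, learner, and accuracy formula are all invented for the example.

```python
import random

def learned_accuracy(n, trials=300, seed=1):
    """Average test accuracy of a one-threshold learner trained on n examples.
    The true threshold is 0.5; the learner splits the gap between the
    largest negative example and the smallest positive one."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        neg = [x for x in xs if x < 0.5]
        pos = [x for x in xs if x >= 0.5]
        lo = max(neg, default=0.0)
        hi = min(pos, default=1.0)
        est = (lo + hi) / 2           # learner's guessed threshold
        total += 1 - abs(est - 0.5)   # fraction of [0, 1] classified correctly
    return total / trials

# Accuracy climbs toward 1.0 as the sample size grows, with diminishing returns.
for n in (2, 8, 32, 128):
    print(n, round(learned_accuracy(n), 3))
```

Plotting such curves for different learners and tasks is one empirical stand-in for the theoretical sample-complexity limits the program wants characterized.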

Meet Terrapattern, Google Earth's Missing Search Engine
The New Yorker (05/25/16) Nicola Twilley

The Terrapattern project initiated by Carnegie Mellon University professor Golan Levin has developed the first open-access search tool for satellite imagery. The tool enables users to perform a search for objects and receive a map and global-positioning system coordinates. The team behind Terrapattern hopes the tool will help make such information more available to the general public. Terrapattern incorporates a deep convolutional neural network initially pre-trained on the ImageNet database, which left it highly prone to error on satellite imagery. By training the network on the more refined OpenStreetMap database, the Terrapattern artificial intelligence (AI) was able to better read satellite imagery in a matter of days. The AI splits the imagery into tiles and decomposes each tile into information about shape, color, contrast, and texture, then reassembles it into meaning via layers of probability and comparison. The research team has uploaded its model to an open source "model zoo," making it the first AI trained on satellite imagery that is freely available to use and modify. "Our budget, in terms of the computing power we can afford, makes about 2,500 square miles of the American landscape searchable," Levin says. He hopes the invention will help ease research into land-use issues by activists and citizen scientists.
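Once each tile has been reduced to a descriptor vector, the search step is conceptually a nearest-neighbor lookup: return the tiles whose vectors lie closest to the query's. The Python sketch below illustrates that idea with made-up three-dimensional descriptors; Terrapattern's real vectors come from its deep network and are far larger, so everything here is an assumption for illustration.

```python
import numpy as np

def most_similar_tiles(features, query_idx, k=3):
    """Return indices of the k tiles whose descriptor vectors are
    closest (by cosine similarity) to the query tile's vector."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = f @ f[query_idx]                # cosine similarity to the query
    order = np.argsort(-sims)              # most similar first
    return [int(i) for i in order if i != query_idx][:k]

# Toy descriptors: tiles 0 and 3 are near-duplicates (say, two baseball fields).
tiles = np.array([
    [0.9, 0.1, 0.0],
    [0.0, 1.0, 0.0],
    [0.1, 0.2, 0.9],
    [0.8, 0.2, 0.1],
])
print(most_similar_tiles(tiles, query_idx=0, k=2))  # → [3, 2]
```

In practice such searches use approximate nearest-neighbor indexes rather than the brute-force comparison shown here, which is how a modest compute budget can still cover thousands of square miles.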

Shoot an Atom Into Silicon, and You May Have the Beginnings of a Quantum Computer
IDG News Service (05/26/16) Katherine Noyes

Researchers at Sandia National Laboratories announced they have had promising results by shooting a single antimony atom into a silicon substrate with an ion beam generator as a possible first step toward the creation of a practical quantum computer. The five-electron antimony atom has one more electron than the silicon atom, which means a single antimony electron remains free. The researchers applied pressure to that electron via an electromagnetic field and observed its spin. They believe they can shoot a second donor atom at just the right distance to establish communication between the two atoms, which would form the beginning of a quantum computing circuit. "Our method is promising because, since it reads the electron's spin rather than its electrical charge, its information is not swallowed by background static and instead remains coherent for a relatively long time," says Sandia postdoctoral fellow Meenakshi Singh. The use of silicon is another benefit, as commercial manufacturing technologies for silicon already are well established, and the substance is less expensive than specialized superconducting materials. The experiment represents the first time the various processes have been coordinated on a single chip and with each quantum bit precisely positioned.

Tor Project Works on Boosting Encryption for Next Release
ZDNet (05/26/16) Charlie Osborne

The nonprofit Tor Project aims to boost the security of the Tor Onion Router and network by introducing a distributed random number generator. The organization on Wednesday revealed Tor researchers and developers met as part of a hackfest in Montreal to share ideas on how to bring Tor to a new level of personal security. The Tor network relies on nodes and relays to disguise traffic flowing in and out, concealing original Internet Protocol addresses and making surveillance more difficult to accomplish. However, no system is 100-percent secure, so Tor must try to stay ahead of attackers and boost privacy at every opportunity. The Montreal hackfest aimed to facilitate the development of next-generation security features. The researchers focused on a distributed random number generator, which connects different PCs to generate a single random number that cannot be predicted through analytics. The numbers serve as the basis for encryption key generation, and this level of unpredictability will enhance user privacy within the Tor network, according to the researchers. Going forward, the researchers want to introduce 55-character-long onion addresses for websites, up from the current 16-character addresses.
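A common way to build such a distributed random value, and roughly the shape of Tor's shared-randomness design, is a commit-then-reveal protocol: every participant commits to a secret before any secret is revealed, so no single party can bias the result after seeing the others. The sketch below is a minimal illustration of that idea, not Tor's actual implementation, which adds authentication, strict timing phases, and misbehavior handling.

```python
import hashlib
import secrets

def commit(value: bytes) -> bytes:
    # Phase 1: each relay publishes only the hash of its secret value.
    return hashlib.sha256(value).digest()

def combine(values) -> bytes:
    # Phase 2: once every commitment is public, the values are revealed,
    # checked against the commitments, and hashed together. Sorting makes
    # the result independent of reveal order.
    h = hashlib.sha256()
    for v in sorted(values):
        h.update(v)
    return h.digest()

relays = [secrets.token_bytes(32) for _ in range(3)]
commitments = [commit(v) for v in relays]            # published first
assert all(commit(v) == c for v, c in zip(relays, commitments))
shared_random = combine(relays)                      # no single relay controls this
print(shared_random.hex()[:16])
```

Because each relay is bound by its commitment before seeing the others' secrets, predicting or steering the combined value would require compromising every participant.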

Carnegie Mellon Transparency Reports Make AI Decision-Making Accountable
CMU News (05/25/16) Byron Spice

Researchers at Carnegie Mellon University (CMU) led by professor Anupam Datta have developed measurement techniques to extract insights into decision-making machine-learning algorithms. "Demands for algorithmic transparency are increasing as the use of algorithmic decision-making systems grows and as people realize the potential of these systems to introduce or perpetuate racial or sex discrimination or other social harms," Datta notes. He says CMU's Quantitative Input Influence (QII) measures can provide the relative weight of variables that may have influenced an algorithm's ultimate decision. Producing the QII measures necessitates machine-learning system access, but it does not require analyzing the code or other inner workings of the system, according to Datta. He says it also needs knowledge of the input dataset initially used to train the system. QII measures can explain decisions of a large category of existing machine-learning systems, and they account for correlated inputs when quantifying influence. Moreover, the measures analyze the joint influence of a series of inputs on outcomes and the marginal influence of each input within the series. Tests against standard machine-learning algorithms for training decision-making systems on real datasets found the QII offered better explanations for various scenarios, including sample applications for predictive policing and income prediction.
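The core intervention behind influence measures like QII can be sketched without any access to model internals: hold a record fixed, resample one input from its marginal distribution, and count how often the decision changes. The Python below is an illustrative unary version of that idea under invented data, not CMU's released implementation, and the "hiring" model and feature names are assumptions for the example.

```python
import random

def influence(model, rows, feature, trials=200, seed=0):
    """Estimate one input's influence: how often randomizing that feature
    (holding the others fixed) flips the model's decision."""
    rng = random.Random(seed)
    pool = [r[feature] for r in rows]      # the feature's marginal distribution
    flips = 0
    for _ in range(trials):
        row = dict(rng.choice(rows))       # copy so the intervention is local
        before = model(row)
        row[feature] = rng.choice(pool)    # intervene on just this feature
        flips += model(row) != before
    return flips / trials

# Toy "hiring" model that secretly keys only on years of experience.
model = lambda r: r["experience"] >= 5
rows = [{"experience": e, "zipcode": z}
        for e in range(10) for z in (10001, 94103)]
print(influence(model, rows, "experience") > influence(model, rows, "zipcode"))  # → True
```

The zipcode's influence comes out exactly zero here because the model never consults it; the full QII framework extends this to correlated inputs and to the joint and marginal influence of input sets.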

The Pipes Powering the Internet Are Nearly Full--What Do We Do?
New Scientist (05/25/16) Timothy Revell

The optical fibers that transmit data throughout the Internet have almost reached their capacity limits, and nothing less than a revolutionary upgrade is needed to surpass them, according to experts. Video remains the biggest consumer of Internet capacity, while 50 billion smart devices--the Internet of Things--are expected to be online by 2020, according to tech companies. The end of the decade also is when some scientists say conventional optical fiber will hit a wall in terms of data capacity. The University of Glasgow's Martin Lavery has proposed an unusual scheme in which the laser beams used to carry data through the fiber would be fired through a spiral, and twisted together so they can be transmitted as one. Each beam would relay its own signal that can be separated and read at the other end. "We can potentially have an infinite number," Lavery suggests. "The only restriction is the size of the optical fiber." A different proposal from the University of Southampton's Walter Belardi considers hollow-core fibers filled with air, enabling up to 45-percent faster light transmission, while lower-quality glass cladding might further upgrade fiber performance. Belardi says the fibers' faster signal speeds and reduced costs could compensate for their greater signal loss.

Self-Driving Truck Acts Like an Animal
Chalmers University of Technology (05/25/16)

The traditional way of developing vehicles is to base progress on earlier models and gradually add new functions, but this technique may not work when developing future autonomous vehicles, according to Chalmers University of Technology researcher Ola Benderius. He leads researchers in developing a self-driving truck as part of the Grand Cooperative Driving Challenge, a European Union project and competition in which 10 to 15 universities compete against each other with autonomous vehicles. The Chalmers team views the self-driving vehicle as more like a biological organism than a technical system. "A biological system absorbs information from its surroundings via its senses and reacts directly and safely," Benderius says. He says all of the information the truck compiles from sensors and cameras is converted into a format resembling the way in which animals interpret the world via their senses, enabling the truck to react to unexpected situations. In order to achieve this goal, the researchers are developing small and general behavior blocks that make the truck react to various stimuli. The truck is programmed to constantly keep all stimuli within reasonable levels. The researchers say it can continuously learn to do this as efficiently as possible, making the framework extremely flexible and good at managing sudden and new dangers. The truck's software, called OpenDLV, is being developed as open source code.
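The "keep every stimulus within reasonable levels" idea can be pictured as a bank of tiny regulators, one per stimulus, whose corrective commands combine like reflexes. The Python sketch below is only a guess at the flavor of such a behavior block, not OpenDLV code; the stimulus names, ranges, and gain are invented.

```python
def behavior_block(stimulus, low, high, gain=0.5):
    """Push a stimulus back toward its comfortable range; do nothing
    while it is already inside [low, high]."""
    if stimulus < low:
        return gain * (low - stimulus)    # e.g. headway too short: back off
    if stimulus > high:
        return -gain * (stimulus - high)  # e.g. drifting wide: correct back
    return 0.0

# One block per stimulus; each reacts only to its own reading.
commands = {
    "headway_s":   behavior_block(1.2, low=2.0, high=6.0),   # tailgating
    "lane_offset": behavior_block(0.1, low=-0.5, high=0.5),  # fine as-is
}
print(commands)
```

Because each block is small and stimulus-specific, new dangers can be handled by adding or tuning blocks rather than rewriting a monolithic controller, which matches the flexibility the Chalmers team describes.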

Women From Venus, Men Still From Mars on Facebook, Study Finds
The New York Times (05/25/16) Christopher Mele

Stony Brook University researchers have found women use warmer, gentler words in their status updates on Facebook compared to men, who are more likely to swear, express anger, and use argumentative language. The study, which examined 10 million Facebook postings from more than 65,000 Facebook users, also found women use slightly more assertive language. The shift in assertiveness could reflect the cultural and societal changes brought about by a generation that heavily uses social media, says University of Melbourne researcher Margaret L. Kern, who participated in the study. In addition, women's writing largely reflected compassion and politeness compared with men, who were hostile and impersonal, according to the researchers. They also found women are more likely to discuss family and social life, and rely more on words that describe positive emotions. Meanwhile, men more frequently discuss topics related to money or work, and favored words associated with politics, sports, competition, and activities. "The differences were interpreted as reflecting a male tendency toward objects and impersonal topics and a female tendency toward psychological and social processes," the researchers say.

The Algorithm That Can Predict When a Tsunami Will Strike
(05/25/16) Victoria Woollaston

Australian National University (ANU) researchers have developed the Time Reverse Imaging Method, an algorithm that can re-create the movements of a typical tsunami to determine its threat level. The system takes real-time data from ocean sensors and uses the information to re-create what the tsunami looked like before it formed. Existing tsunami-warning systems rely on region-specific scenarios based on previous patterns in that area, but scientists cannot make accurate projections of how much water will hit a coast, and how hard. The researchers developed the algorithm by focusing on data from the Tohoku-Oki earthquake and tsunami from March 11, 2011. They used the data to calculate what the tsunami looked like when it first began, and then added sensor data from the Pacific Ocean floor and projected what the tsunami would look like when it made landfall. The researchers checked the results against what actually happened in 2011 to hone the algorithm. They want to test the method on other recorded earthquakes and tsunamis to fine-tune the technology until it is ready for implementation, which could take up to five years. "This research can be part of the next generation of tsunami-warning systems that are based on real-time information," says ANU researcher Jan Dettmer.
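The time-reversal idea rests on the fact that the wave physics can be run backwards: feed the sensor recordings into the same propagation model with time inverted, and the wavefield refocuses on its source. The Python sketch below substitutes a trivially reversible toy model (pure advection on a ring) for the real shallow-water physics, so the grid, speed, and waveform are all illustrative assumptions.

```python
import numpy as np

def propagate(field, steps):
    # Toy forward model: the wave advects one grid cell per time step.
    return np.roll(field, steps)

def time_reverse(recorded, steps):
    # Run the same physics backwards to recover the initial sea surface.
    return np.roll(recorded, -steps)

initial = np.zeros(50)
initial[10:13] = 1.0                      # the tsunami at the moment it forms
recorded = propagate(initial, steps=25)   # what offshore sensors observe later
recovered = time_reverse(recorded, steps=25)
print(np.allclose(recovered, initial))    # → True
```

With real bathymetry the backward run is a numerical simulation rather than an exact inverse, which is why the ANU team calibrates against recorded events such as Tohoku-Oki before trusting the reconstruction.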

Using Cellphone Data to Study the Spread of Cholera
Swiss Federal Institute of Technology in Lausanne (05/24/16) Jan Overney

Researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL) recently led a study showing how human mobility patterns contributed to the spread of a cholera epidemic in Senegal in 2005. "One goal of our research was to develop ways to estimate how the disease spread across populations, both in space and in time," says EPFL researcher Flavio Finger. Human mobility patterns previously had to be reconstructed from patient case data, a flawed and time-consuming process. The EPFL researchers used mobile phone data to re-run the Senegalese cholera outbreak. "Our simulation did a great job at reproducing the peak of reported cases of cholera in the region around Touba, where the epidemic broke out during the pilgrimage," Finger says. The simulation also correctly mapped the spread of the disease across the country as pilgrims traveled home, and factored in local events such as intense rainfall in the country's capital of Dakar. "Having access to more accurate data on population movement simplified our work and eliminated much of the remaining uncertainty," Finger says. The researchers found improving access to sanitation and providing clean drinking water could have considerably reduced the number of new cases of cholera during the pilgrimage.
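The mechanism the study exploits--travelers carrying infection from an outbreak site to their home regions--can be illustrated with a minimal two-patch epidemic model. The Python below is a generic SIR sketch with invented rates and a single mixing fraction standing in for the phone-derived mobility data; it is not the EPFL model.

```python
def simulate(days=120, travel=0.05, beta=0.4, gamma=0.1):
    """Two-patch SIR: the epidemic starts in patch 0 (the pilgrimage site)
    and reaches patch 1 only through the `travel` mixing fraction."""
    S = [0.99, 1.0]
    I = [0.01, 0.0]
    history = []
    for _ in range(days):
        # Infection pressure mixes local and remote prevalence.
        force = [
            beta * ((1 - travel) * I[0] + travel * I[1]),
            beta * ((1 - travel) * I[1] + travel * I[0]),
        ]
        for p in range(2):
            new = force[p] * S[p]
            S[p] -= new
            I[p] += new - gamma * I[p]
        history.append(tuple(I))
    return history

hist = simulate()
peak0 = max(range(len(hist)), key=lambda t: hist[t][0])
peak1 = max(range(len(hist)), key=lambda t: hist[t][1])
print(peak0 < peak1)  # the remote patch peaks later, seeded by travelers
```

Replacing the single `travel` constant with an empirically measured origin-destination matrix is, in spirit, what the phone data made possible.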

New Technique Controls Autonomous Vehicles in Extreme Conditions
Georgia Tech News Center (05/23/16) Rick Robinson

Georgia Institute of Technology (Georgia Tech) researchers say they have developed a new method for keeping a driverless vehicle under control as it maneuvers at the edge of its handling limits. Model predictive path integral control (MPPI) was developed to address the non-linear dynamics involved in controlling a vehicle near its friction limits. MPPI uses algorithms and on-board computing, in conjunction with installed sensing devices, to increase vehicular stability while maintaining performance. The researchers created the algorithm using a stochastic trajectory-optimization capability, and employed statistical methods to integrate large amounts of handling-related information and data on the dynamics of the vehicular system to compute the most stable trajectories from a range of possibilities. The algorithm continuously samples data coming from global-positioning system (GPS) hardware, inertial-motion sensors, and other sensors. Meanwhile, the on-board hardware-software system performs real-time analysis of a vast number of possible trajectories and relays optimal handling decisions to the vehicle. Each vehicle also carries a motherboard with a quad-core processor, a graphics processing unit (GPU), and a battery. The GPU enables the MPPI algorithm to sample more than 2,500 trajectories, each 2.5 seconds long, in less than a sixtieth of a second.
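MPPI's sampling step is compact enough to sketch: perturb a nominal control sequence thousands of times, roll out and cost each variant, then average the perturbations with softmin weights so low-cost trajectories dominate. The Python/NumPy below performs a single such update on an invented one-dimensional toy system; it is a conceptual sketch, not Georgia Tech's GPU implementation.

```python
import numpy as np

def mppi_step(x0, nominal_u, rollout_cost, n_samples=2500, noise=0.5,
              lam=1.0, seed=0):
    """One MPPI update: perturb the nominal control sequence, score each
    sampled trajectory, and take the softmin-weighted average."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, noise, size=(n_samples,) + nominal_u.shape)
    costs = np.array([rollout_cost(x0, nominal_u + e) for e in eps])
    w = np.exp(-(costs - costs.min()) / lam)   # low cost -> high weight
    w /= w.sum()
    return nominal_u + np.tensordot(w, eps, axes=1)

# Toy 1-D system: drive the position toward 0 over a short horizon.
def rollout_cost(x0, u):
    x, c = x0, 0.0
    for a in u:
        x = x + 0.1 * a                 # trivial stand-in for vehicle dynamics
        c += x * x + 0.01 * a * a       # penalize position error and effort
    return c

u = mppi_step(2.0, np.zeros(5), rollout_cost)
print(bool(u[0] < 0))  # the update pushes the state back toward zero
```

On the real platform the 2,500 rollouts run in parallel on the GPU each control cycle, with the previous solution reused as the next nominal sequence.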

Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.

To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe