Association for Computing Machinery
Welcome to the February 29, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE

The Promise of Artificial Intelligence Unfolds in Small Steps
Internet2 at 20: Alive and Kicking
How Storytelling Makes Robots, AI More Human
EU Exascale Project to Create ARM-Based Prototype by End of the Year
5 Tech Challenges Holding Back Robotics
Building Living, Breathing Supercomputers
Automatic Programming Makes Swarm Robots Safer and More Reliable
Research Meets Light Art in This Microsoft Installation
Penn Study: Machine Learning at Arraignments Can Cut Repeat Domestic Violence
Stanford Researchers Use Dark of Night and Machine Learning to Shed Light on Global Poverty
Georgia Tech Discovers How Mobile Ads Leak Personal Data
Computers Can Tell If You're Bored, Shows New BSMS Study

The Promise of Artificial Intelligence Unfolds in Small Steps
The New York Times (02/28/16) Steve Lohr

The case of IBM's Watson computer, whose planned transition from a lab experiment to a commercial business proved tougher than anticipated, illustrates how artificial intelligence (AI) is more likely to progress in small increments than in leaps and bounds. "It is going to take decades for [AI] technology to really come to fruition," predicts the Massachusetts Institute of Technology's Erik Brynjolfsson. AI startups have received a great deal of venture capital in recent years, but computer scientist and entrepreneur Jerry Kaplan says "expectations are way ahead of reality." Still, AI's backers are confident their investments will eventually pay off, with Silicon Valley veterans noting that history shows the short-term impact of a new technology is often overestimated while its longer-term potential is underestimated. AI milestones reached in the past few years include improvements in deep-learning technologies that enhance language translation, speech comprehension, and image recognition, but Enlitic CEO Jeremy Howard cautions that innovation alone is not progress. "You have to take technology that works and apply it to a known problem," he says. IBM is attempting to transform Watson into an operating system for myriad AI applications, and it has been converted from a room-sized machine into cloud software. Its early commercialization initiative, which focused on healthcare, was complicated by researchers' lack of experience in dealing with untidy data and accounting for human decision-making.


Internet2 at 20: Alive and Kicking
Network World (02/29/16) Andy Patrizio

Nearly two decades since its launch, Internet2 continues to run on U.S. university campuses, connecting researchers and performing research and development operations. "In this country, there are lots of research institutions with lots of clever people that are by default utterly unaware of each other's work," notes Mimecast's Nathaniel Borenstein. "Internet2 has a way of bringing them together and making these services more widely available." Internet2 diversified into network-based services once it had brought better bandwidth to research universities, but people lost track of its purpose following this transition. Such services include the InCommon federated identity management service, which can connect people from multiple institutions. Internet2 also has been pushing into internal and external cloud services via efforts such as Internet2 Net+ services. University of Nebraska-Lincoln CIO Mark Askren says high-speed connectivity is still needed by universities, but server location is no longer important, as long as the connectivity is sustained. Internet2 has a collective 8.8 terabits of capacity on its network, with 100-gigabit links to participating universities via an OpenFlow/software-defined network. Internet2 CEO Dave Lambert says the network's full hosting on OpenFlow for two years has had a significant impact on catalyzing open networking foundations and open network creation.


How Storytelling Makes Robots, AI More Human
InformationWeek (02/26/16) David Wagner

Researchers at the Georgia Institute of Technology's (Georgia Tech) School of Interactive Computing are attempting to imbue robots with more ethical behavior by teaching them through storytelling. The Quixote system developed by Georgia Tech researchers Mark Riedl and Brent Harrison employs narrative for reinforcement learning in robots, and they report successfully changing "socially negative" robot behavior under lab conditions. Riedl says the goal is to communicate procedural knowledge to the machine in natural language, noting "procedural knowledge...is often hard to write down. Most people can tell a story about it though." Quixote breaks up the reward or "treat" the robot would pursue in performing a task into smaller rewards earned as it follows the steps of a story, thus reinforcing proper behavior at each "plot point." Riedl says the agent is first taught a series of stories, and "then the system builds an abstract model from the procedure of the story. And then it uses that abstract model as part of its reward system." Riedl and Harrison's overarching goal with Quixote is to enable more human-like human-robot interaction, especially when it comes to programming robots to perform a task.
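To make the reward-shaping idea concrete, here is a minimal sketch in Python; the pharmacy errand, action names, and reward values are hypothetical illustrations, not Quixote's actual implementation:

```python
# Reward shaping against a story: instead of one large terminal reward, the
# agent earns small rewards for reaching "plot points" in the order the story
# prescribes, and is penalized for skipping ahead. Hypothetical illustration.

PLOT_POINTS = ["enter_pharmacy", "wait_in_line", "pay_for_medicine", "leave"]
STEP_REWARD = 1.0          # small "treat" per correctly ordered plot point
VIOLATION_PENALTY = -5.0   # e.g., leaving without paying

def story_shaped_reward(action, progress):
    """Return (reward, new_progress) given the story progress so far."""
    if progress < len(PLOT_POINTS) and action == PLOT_POINTS[progress]:
        return STEP_REWARD, progress + 1      # followed the story
    if action in PLOT_POINTS[progress + 1:]:
        return VIOLATION_PENALTY, progress    # jumped ahead of the plot
    return 0.0, progress                      # neutral action

# An agent that follows the whole story collects a treat at each plot point.
progress = 0
for action in ["enter_pharmacy", "wait_in_line", "pay_for_medicine", "leave"]:
    reward, progress = story_shaped_reward(action, progress)
    print(action, reward)

# A "socially negative" shortcut is penalized instead:
print(story_shaped_reward("leave", progress=1))   # (-5.0, 1)
```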


EU Exascale Project to Create ARM-Based Prototype by End of the Year
V3.co.uk (02/26/16) Graeme Burton

The European Union-funded Exanest, Exanode, and Ecoscale supercomputing projects are collaborating on the framework for an exascale architecture prototype. The plan focuses on the ARM64+FPGA architecture as the foundation. The three partners have been assigned different roles: Exanest will focus on the system level, which will include interconnect, storage, and cooling; Exanode will work on the compute nodes and the memory in the nodes; and Ecoscale will concentrate on the reconfigurable logic in the system. The effort also will include cooling specialists and software developers to provide profiling and debugging tools. The goal is to create a fundamental "straw man" prototype by the end of the year. The Institute of Computer Science at Greece's FORTH says the core design will encompass "energy-efficient ARM cores, quiet and power-efficient liquid cooling, non-volatile memories integrated into the processor fabric, and the development of innovative, fast interconnects that avoid congestion." "There will be new interesting compute nodes coming out from our partner projects, and we will use such nodes," says Exanest project coordinator Manolis Katevenis. The system will rely on 64-bit microprocessor technology because of its relatively low power consumption and because it enables more cores to be packaged together without generating excessive heat.


5 Tech Challenges Holding Back Robotics
Government Computer News (02/25/16) Mark Pomerleau

There are five technological hurdles to overcome before governments widely adopt unmanned systems, according to IDC TechScape's Worldwide Government Robotics Technologies 2015 report. The first is traffic management: although the U.S. National Aeronautics and Space Administration (NASA) has developed an unmanned aerial vehicle (UAV) traffic management system that relies on geofencing technology to keep drones away from sensitive locations, the current radar-based air traffic control system has no spare capacity for monitoring low-altitude flights. NASA is now developing a system that uses satellites and 4G/5G cellular networks to track UAVs, but drones in such a system must be Internet-enabled to download information on weather, traffic, and restricted zones. The second hurdle is security, as videos from military UAVs already have been hacked by non-state groups, so bolstering the security of robots and their communications protocols will be critical. The third is artificial intelligence (AI): AI must be developed to help UAVs extract the relevant signals from the vast amount of low-quality information they collect, and advances in task-level autonomy are required to move robotic technology beyond following simple, step-by-step commands. The fourth hurdle is dexterity, and the fifth is battery life, since longer-lasting batteries would prolong the operation of UAVs, humanoid robots, and exoskeletons.
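As a simple illustration of the geofencing check such a traffic-management system performs, here is a hypothetical Python sketch that keeps a drone out of a circular no-fly zone; the coordinates, radius, and function names are invented for illustration and do not come from NASA's system:

```python
# Geofencing sketch: flag a drone position inside a circular no-fly zone
# around a sensitive site, using great-circle distance. Illustrative only.
import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# Hypothetical no-fly zone: 5 km around an airport.
AIRPORT = (38.9445, -77.4558)
NO_FLY_RADIUS_M = 5_000

def violates_geofence(drone_lat, drone_lon):
    return haversine_m(drone_lat, drone_lon, *AIRPORT) < NO_FLY_RADIUS_M

print(violates_geofence(38.95, -77.45))   # True: inside the zone
print(violates_geofence(39.20, -77.45))   # False: well outside
```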


Building Living, Breathing Supercomputers
McGill Newsroom (02/26/16) Katherine Gombay

McGill University researchers say adenosine triphosphate (ATP), the substance that provides energy to human cells, also could be used to power the next generation of supercomputers. They created a model of a biological computer that can process information quickly and accurately using parallel networks in the same way electronic supercomputers do. However, they say their biological supercomputer is much smaller than current supercomputers, uses much less energy, and relies on proteins present in all living cells to function. Instead of utilizing electrons that are propelled by an electrical charge and move around within a traditional microchip, the biological supercomputer has short strings of proteins, called biological agents, which travel around the circuit in a controlled way, powered by ATP. Because the supercomputer is run by biological agents, it produces very little heat and uses far less energy than standard electronic supercomputers, making it more sustainable. The researchers say the biological supercomputer was able to efficiently solve a complex classical mathematical problem using parallel computing, but much work remains to create a full-scale functional biological supercomputer. "Now that this model exists as a way of successfully dealing with a single problem, there are going to be many others who will follow up and try to push it further, using different biological agents, for example," says McGill professor Dan Nicolau.


Automatic Programming Makes Swarm Robots Safer and More Reliable
University of Sheffield (02/25/16)

An automated programming method previously used in manufacturing could make swarm robots more user-friendly and more reliable. Researchers at the University of Sheffield have applied supervisory control theory to a swarm of robots for the first time, and they report the method reduces the need for human input and, with it, human error. The group used a graphical tool to define tasks for up to 600 robots, and a machine then automatically programmed and translated those tasks to the robots. The approach uses a form of linguistics comparable to the alphabet in the English language: the robots use their own alphabet to construct "words" related to what they perceive and the actions they choose to perform. Supervisory control theory helps the robots choose only those actions that can eventually result in valid words, guaranteeing their behavior meets the specification, which the researchers note would eliminate many of the bugs that occur in manual programming. The researchers next plan to focus on finding ways in which humans can collaborate with robot swarms so the communication is two-way and each can learn from the other.
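A minimal Python sketch of this idea follows: a supervisor automaton only enables actions (letters) whose resulting strings remain valid words of the specification. The foraging task, event names, and transitions below are hypothetical illustrations, not the Sheffield team's actual specification:

```python
# Supervisory control sketch: a specification automaton defines which event
# strings ("words") are valid; the supervisor disables any letter that would
# leave the language. Task and alphabet are illustrative.

# Spec: a robot must grab an object before it may deposit one; recharging is
# always allowed. States track whether the gripper is loaded.
SPEC = {
    ("empty", "grab"): "loaded",
    ("empty", "recharge"): "empty",
    ("loaded", "deposit"): "empty",
    ("loaded", "recharge"): "loaded",
}

def enabled_events(state, alphabet=("grab", "deposit", "recharge")):
    """The supervisor: only events with a defined transition are allowed."""
    return [e for e in alphabet if (state, e) in SPEC]

def supervise(events):
    """Run a robot's event string through the spec, rejecting invalid letters."""
    state = "empty"
    for e in events:
        if e not in enabled_events(state):
            raise ValueError(f"event {e!r} disabled in state {state!r}")
        state = SPEC[(state, e)]
    return state

print(enabled_events("empty"))           # ['grab', 'recharge']
print(supervise(["grab", "deposit"]))    # 'empty' -- a valid word
# supervise(["deposit"]) would raise: depositing with an empty gripper is blocked.
```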


Research Meets Light Art in This Microsoft Installation
PSFK (02/22/16) Eva Recinos

Microsoft researchers have developed "A Panorama of the Skies," an installation at Microsoft's headquarters that uses the RoomAlive Toolkit--an open source software development kit that helps users calibrate video projectors and Kinect sensors--to transform a conference room into a space full of audiovisual elements that stimulate the senses. The installation uses five projectors and eight Kinect cameras to "acquire a detailed map of the room, register the projectors and cameras into a single coordinate system, and enable real-time projection mapping in the immersive scene," says Microsoft's Maja Petric, whose research focuses on using light in art projects. In creating the installation, Petric and Microsoft human-computer interaction researcher Hrvoje Benko wanted to create a space that could be "experienced emotionally," showing natural scenes such as the sun rising and lightning striking. University of Washington researcher Daniel Peterson contributed soundscapes, which merge natural sounds and three-dimensional audio techniques to create a multi-layered experience.


Penn Study: Machine Learning at Arraignments Can Cut Repeat Domestic Violence
Penn News (02/24/16) Michele Berger

A large metropolitan area using machine learning to help make arraignment decisions has seen its number of new domestic violence incidents decline by half, according to University of Pennsylvania (UPenn) researchers. Post-arraignment arrests are down by more than 1,000 a year, report UPenn professors Richard Berk and Susan B. Sorenson. Berk says arraignment decisions today are largely "seat of the pants" and the bar for release is very low, whereas a computer can "learn" from training data which kinds of individuals are likely to re-offend. To understand how machine learning could help in domestic violence cases, Berk and Sorenson obtained data from more than 28,000 domestic violence arraignments between January 2007 and October 2011, along with a two-year follow-up period after release that ended in October 2013. The 35 initial inputs included age, gender, prior warrants and sentences, and residential location, and the researchers trained the program to associate those inputs with projected risk. The resulting predictions offer extra information to a court official deciding whether to release an offender. "In all kinds of settings, having the computer figure this out is better than having us figure it out," Berk notes.
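The general shape of such a risk tool can be sketched in a few lines of Python. The features, synthetic data, and random-forest model below are stand-ins chosen for illustration; the Penn study's actual inputs, data, and algorithm differ:

```python
# Sketch: train a classifier on past arraignment records to estimate the
# risk of re-offense, then surface that estimate at arraignment time.
# All data here is synthetic and the feature set is illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
# Stand-ins for inputs like age, gender, prior warrants, prior sentences.
X = np.column_stack([
    rng.integers(18, 70, n),   # age
    rng.integers(0, 2, n),     # gender (encoded)
    rng.poisson(1.0, n),       # prior warrants
    rng.poisson(0.5, n),       # prior sentences
])
y = rng.integers(0, 2, n)      # 1 = re-offended within the follow-up period

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# At arraignment, the predicted probability becomes one extra data point
# for the court official deciding on release.
print("P(re-offense) for a 25-year-old with 3 prior warrants:",
      clf.predict_proba([[25, 1, 3, 1]])[0, 1])
```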


Stanford Researchers Use Dark of Night and Machine Learning to Shed Light on Global Poverty
Stanford Report (02/24/16) Glen Martin

A Stanford University team says it has developed a technique for mapping poverty that could be used to provide agencies with reliable information about the infrastructure and services of distressed areas. The technique is based on millions of high-resolution satellite images of likely poverty zones. For the project, evening lights served as a proxy for economic activity by revealing the presence of electricity and the creature comforts it represents. The team used machine learning to analyze daytime and nighttime satellite imagery and asked the system to make predictions about poverty, says Stanford professor Stefano Ermon. The resulting model has more than 50 million tunable, data-learned parameters, and the team believes it could replace expensive and time-consuming ground surveys. The researchers say the approach would be inexpensive, scalable, and accurate. "And the beauty with developing and working with these huge datasets is that the models should do a better and better job as they accumulate more and more information," says Stanford professor Marshall Burke. However, he notes more imagery is needed to give the system the data necessary to take the next step and predict whether specific locations are moving toward prosperity.
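The underlying transfer-learning recipe--train a network to predict nighttime-light intensity from daytime imagery, then reuse its features to estimate poverty--can be sketched roughly in Python with PyTorch. The architecture, tensor shapes, and synthetic data below are illustrative assumptions, far smaller than the team's 50-million-parameter model:

```python
# Sketch: a CNN learns to predict nighttime-light intensity (a proxy for
# economic activity) from daytime satellite tiles; its learned features can
# then feed a poverty estimator. Toy architecture and fake data throughout.
import torch
import torch.nn as nn

class DaytimeToNightlight(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(   # feature extractor to reuse later
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)     # predicts nightlight intensity

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = DaytimeToNightlight()
tiles = torch.randn(8, 3, 64, 64)        # fake batch of daytime tiles
nightlights = torch.rand(8, 1)           # fake nightlight-intensity labels

loss = nn.functional.mse_loss(model(tiles), nightlights)
loss.backward()                          # gradients for one training step

# After training, model.features(tile) yields image features that a simple
# regressor can map to survey-measured consumption or assets.
```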


Georgia Tech Discovers How Mobile Ads Leak Personal Data
Georgia Tech News Center (02/22/16) Tara La Bouff

Researchers at the Georgia Institute of Technology's School of Computer Science found in-app mobile advertising can leak potentially compromising information about smartphone users between ad networks and app developers. The researchers studied more than 200 participants who used a customized app for Android-based smartphones, and reviewed the accuracy of personalized ads served to the test subjects from the Google AdNetwork based on their personal interests and demographic profiles. They also studied how much user information a mobile app creator could recover from the personalized ads served to those users. In-app ads are displayed without encryption as part of the app's graphical user interface, which means mobile app developers can access the targeted ad content delivered to their own app users and then reverse-engineer that data to build profiles of their customers. In addition, personalized ad content is not isolated from the app developer. "Apps--especially malicious apps--can be used to collect potentially sensitive information about someone simply by hosting ads in the app and observing what is received by a user," notes lead researcher Wei Meng. "Mobile, personalized in-app ads absolutely present a new privacy threat."


Computers Can Tell If You're Bored, Shows New BSMS Study
University of Sussex (United Kingdom) (02/23/16)

University of Sussex researchers have found computers can read a user's body language to tell whether they are bored or interested in what is on the screen. By measuring a person's movements as they use a computer, it is possible to judge their level of interest by monitoring whether they display non-instrumental movements--fidgeting unrelated to the task at hand--according to the researchers. If someone is absorbed in what is on the screen, a state known as "rapt engagement," these involuntary movements decrease. In the study, 27 participants viewed three-minute on-screen stimuli that varied in how interesting they were, while using a handheld trackball to minimize instrumental (task-related) movements. The participants' movements were quantified over the three minutes using video motion tracking. In two comparable reading tasks, the more engaging reading produced a 42-percent reduction in non-instrumental movement. "Being able to 'read' a person's interest in a computer program could bring real benefits to future digital learning, making it a much more two-way process," says University of Sussex researcher Harry Witchel. He thinks the research could have a significant impact on the development of artificial intelligence.
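As a rough illustration of quantifying non-instrumental movement from video, here is a hypothetical Python sketch that scores fidgeting as the mean frame-to-frame pixel difference; the toy data, frame rate, and noise levels are invented, and the study's actual motion-tracking pipeline is more sophisticated:

```python
# Sketch: score body movement as mean absolute frame-to-frame difference in a
# grayscale video, so higher scores suggest more fidgeting (less engagement).
import numpy as np

def motion_score(frames):
    """frames: (T, H, W) grayscale video; mean absolute frame-to-frame diff."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return float(diffs.mean())

rng = np.random.default_rng(1)
scene = rng.random((48, 64))   # one static background frame (toy size)

def simulate(fidget_std):
    # 180 frames (~3 minutes at 1 fps): static scene plus body-movement noise.
    return np.tile(scene, (180, 1, 1)) + fidget_std * rng.standard_normal((180, 48, 64))

print("engaged:", motion_score(simulate(0.01)))  # little fidgeting -> low score
print("bored:  ", motion_score(simulate(0.05)))  # more fidgeting -> higher score
```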


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe