Association for Computing Machinery
Welcome to the February 10, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, iPhones, and iPads.

HEADLINES AT A GLANCE


Navy Calls on Researchers to Create Firefighting Humanoid Robot
Computerworld (02/09/16) Sharon Gaudin

The U.S. Department of Defense's Office of Naval Research awarded a $600,000 grant to Worcester Polytechnic Institute (WPI) professor Dmitry Berenson to develop motion-planning algorithms for firefighting humanoid robots. Researchers from the University of Pennsylvania and Carnegie Mellon University also are participating in the Shipboard Autonomous Firefighting Robot project. The researchers expect to collaborate with the U.S. Navy to test the robot. "By using autonomous humanoids, we're hoping to reduce the need for Navy personnel who have to perform a whole host of tasks and to also help mitigate the risks to people in fire-suppression scenarios," Berenson says. He notes the robots must be able to move agilely in a ship's cramped quarters and remain upright as the vessel rocks in rough waters. He says WPI's expertise with motion-planning software will help in constructing the algorithms such machines need to function aboard naval ships. "We could contribute our unique experience with motion planning for humanoid robots, which must perform in complicated scenarios," Berenson says. "Our focus on motion planning for autonomous robots, and not just those that are controlled by tele-operation, also helped us secure the grant."
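
The article does not describe WPI's planners, but sampling-based methods such as the rapidly-exploring random tree (RRT) are a common foundation for this kind of motion planning. The following is a minimal 2D sketch of the idea with an invented circular obstacle; it is an illustration, not Berenson's code.

```python
import math, random

# Minimal 2D rapidly-exploring random tree (RRT): a generic illustration
# of sampling-based motion planning, not WPI's code.
def rrt(start, goal, is_free, bounds, step=0.5, iters=5000, goal_tol=0.5):
    nodes, parent = [start], {start: None}
    for _ in range(iters):
        sample = (random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        nearest = min(nodes, key=lambda node: math.dist(node, sample))
        d = math.dist(nearest, sample)
        if d == 0.0:
            continue
        # Step a fixed distance from the nearest tree node toward the sample.
        new = (nearest[0] + step * (sample[0] - nearest[0]) / d,
               nearest[1] + step * (sample[1] - nearest[1]) / d)
        if not is_free(new):
            continue                      # sample collides with an obstacle
        nodes.append(new)
        parent[new] = nearest
        if math.dist(new, goal) < goal_tol:
            path = [new]                  # close enough: walk back to start
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None                           # no path within the budget

# Example: plan around an invented circular obstacle centered at (5, 5).
free = lambda pt: math.dist(pt, (5.0, 5.0)) > 1.5
print(rrt((0.0, 0.0), (10.0, 10.0), free, ((0.0, 10.0), (0.0, 10.0))))
```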


3D-Printed Display Lets Blind People Explore Images by Touch
New Scientist (02/08/16) Paul Marks

Researchers at the Hasso Plattner Institute (HPI) in Germany have developed Linespace, a display that blind and partially sighted people could use to interact with visual information such as maps, photos, and designs. Linespace consists of a three-dimensional printer head mounted on a drafting table like those used by designers and architects; arms and motors enable the head to move quickly over the table. The equipment is activated using a pedal and controlled with speech and gestures, enabling users to verbally call up and print images in the form of raised plastic lines. Printed images can then be explored by fingertip. "The objective is to let blind users visualize and make sense of complex spatial data just like sighted people," says HPI research team leader Patrick Baudisch. Volunteers gave Linespace high marks after tests, saying it could be used in education, for making maps and artwork accessible, and for gaming and sharing information. The team will present Linespace at ACM CHI 2016 in San Jose, CA, in May.
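
A system like Linespace ultimately has to turn graphics into print-head motion. The sketch below shows one plausible step, tracing a map outline as extruded line segments; the G-code-style commands and all parameters are assumptions for illustration, not HPI's implementation.

```python
# Hypothetical sketch: turning a map polyline into print-head moves for a
# tactile line display. The G-code-style commands and parameters are
# assumptions for illustration; this is not the HPI implementation.
def polyline_to_moves(points, extrude=0.05, feed=3000):
    """Emit move commands that trace one raised plastic line."""
    x0, y0 = points[0]
    cmds = [f"G0 X{x0:.1f} Y{y0:.1f}"]  # travel to start without extruding
    for x, y in points[1:]:
        cmds.append(f"G1 X{x:.1f} Y{y:.1f} E{extrude} F{feed}")
    return cmds

# A square "room" outline from a floor plan, traced as raised lines.
room = [(10, 10), (60, 10), (60, 40), (10, 40), (10, 10)]
for cmd in polyline_to_moves(room):
    print(cmd)
```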


Microsoft Researchers Smash Homomorphic Encryption Speed Barrier
The Register (UK) (02/09/16) Iain Thomson

Microsoft researchers have developed CryptoNets, a method that significantly increases the speed of homomorphic encryption systems, which enable software to analyze and modify encrypted data without first decrypting it into plain text. The CryptoNets-based optical-recognition system can make 51,000 predictions an hour with 99-percent accuracy while studying a stream of encrypted input images, according to Microsoft research manager Kristin Lauter. She says the technology could be used in the cloud to process encrypted data without needing the decryption keys. The new approach relies on pre-processing work, as the researchers need to know in advance the complexity of the math that is to be applied to the data. In addition, they need to structure the neural network appropriately and keep data loads small enough that the computer handling them is not overworked. To achieve these goals, the researchers developed the Simple Encrypted Arithmetic Library (SEAL). During testing, the researchers used 28-by-28-pixel images of handwritten digits from the Modified National Institute of Standards and Technology (MNIST) database and ran 50,000 samples through the network to train the system. "From a research point of view, we are definitely going towards making it available to customers and the community," Lauter says.
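
The core homomorphic property, arithmetic on ciphertexts that carries through to the hidden plaintexts, can be shown with a toy scheme. The sketch below uses textbook Paillier encryption with insecure demo parameters; CryptoNets and SEAL use a different, lattice-based scheme.

```python
import math, random

# Toy Paillier cryptosystem: additively homomorphic, meaning arithmetic
# on ciphertexts carries through to the hidden plaintexts. These tiny
# primes are for demonstration only and are utterly insecure.
p, q = 1789, 1861                        # demo primes
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                     # valid because we pick g = n + 1

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:           # r must be invertible mod n
        r = random.randrange(2, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

c1, c2 = encrypt(123), encrypt(456)
print(decrypt(c1 * c2 % n2))     # 579: multiplying ciphertexts adds plaintexts
print(decrypt(pow(c1, 10, n2)))  # 1230: exponentiation scales the plaintext
```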


New Software Can Actually Edit Actors' Facial Expressions
Smithsonian.com (02/08/16) Emily Matchar

FaceDirector, software from Disney Research and the University of Surrey, may reduce the number of takes required in filming because it blends images from multiple takes, making it possible to edit accurate emotions onto actors' faces. The project's major challenge was determining how to synchronize different takes, which FaceDirector does by analyzing facial expressions and audio cues. Facial expressions are tracked by mapping facial features, and the software then determines which frames fit into each other, like the pieces of a puzzle. Each piece has multiple mates, so a director or editor can select the best combination to produce the desired facial expression. Experimental content was created with a group of students from the Zurich University of the Arts, who performed several takes of dialogue with different facial expressions each time. The team used the software to generate multiple combinations of facial expressions that communicated subtler emotions, and mixed several takes together to create rising and falling emotions. The researchers say FaceDirector currently works best on scenes filmed against a static background.
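
The article does not reveal Disney's synchronization algorithm, but dynamic time warping (DTW) is a classic way to align two takes of the same dialogue using per-frame audio features. The following is a generic sketch under that assumption.

```python
import numpy as np

# Dynamic time warping over per-frame audio features: one classic way
# to line up two takes of the same dialogue. A generic sketch, not the
# actual FaceDirector synchronization algorithm.
def dtw_align(a, b):
    """a, b: arrays of shape (frames, features). Returns frame pairs."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1],
                                 cost[i - 1, j - 1])
    # Walk back from the end to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j), (i, j - 1), (i - 1, j - 1)],
                   key=lambda ij: cost[ij])
    return path[::-1]

take1 = np.random.rand(40, 13)   # e.g., 13 MFCC features per frame
take2 = np.random.rand(50, 13)
print(dtw_align(take1, take2)[:5])
```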


Search Engines Will Know What You Want...Sooner
Cornell Chronicle (02/09/16) Bill Steele

Cornell University researchers have refined the algorithm used in search engines to make them faster and more interactive, responding to the user's interests in real time. The researchers say the new method surmounts a decade-old performance barrier, and the new techniques could be applied to social media and private and commercial databases as well as to Web searches and recommendation systems. Conventional search engines already rank results with edge-weight-based algorithms, but they are too slow, so the researchers accelerated the process by reducing the graph. The algorithm looks for nodes that are correlated, meaning they represent similar interests, with strong connections between them. The researchers tested the method on a database of scholarly publications and on a blog search system, and found it worked five orders of magnitude faster than currently used methods. The researchers found their reduced model also accelerated "learn to rank" systems, in which the computer notes which items in a list the user clicks to learn the user's preferences. The researchers say the results could be made even timelier by doing the calculations on the client side, after downloading the reduced model to the client's computer.
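
The Cornell team's exact reduction is not detailed here, but the general idea of shrinking a weighted graph by merging strongly correlated nodes before ranking can be sketched as follows; the threshold, the toy graph, and the use of PageRank are illustrative assumptions.

```python
import networkx as nx

# Sketch of the general idea reported here: collapse strongly correlated
# nodes into supernodes, then rank the much smaller graph. This is a
# generic illustration of graph reduction, not the Cornell algorithm.
def reduce_and_rank(G, threshold=0.8):
    reduced = G.copy()
    for u, v, w in G.edges(data="weight", default=0.0):
        # Treat heavy edges as "these nodes represent similar interests."
        if w > threshold and reduced.has_node(u) and reduced.has_node(v):
            reduced = nx.contracted_nodes(reduced, u, v, self_loops=False)
    return nx.pagerank(reduced, weight="weight")

G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 0.9), ("b", "c", 0.2),
                           ("c", "d", 0.85), ("d", "a", 0.1)])
print(reduce_and_rank(G))   # two supernodes remain: {a,b} and {c,d}
```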


Study: Nobody Wants Social Robots That Look Like Humans Because They Threaten Our Identity
IEEE Spectrum (02/08/16) Evan Ackerman

The adoption of social robots into people's lives is complicated by humans' tendency to view human-like robots as a threat to their identity, according to a recent report in the International Journal of Social Robotics. The researchers say surveys conducted over the last several years found, despite a positive perception of robots overall, significant opposition to anthropomorphic machines performing tasks such as teaching children and caring for seniors. "We expect that for humans, the thought that androids would become part of our everyday life should be perceived as a threat to human identity because this should be perceived as undermining the distinction between humans and mechanical agents," the researchers note. They tested this theory by showing a group of people a series of pictures of non-human robots, humanoid robots, and human-looking androids, while asking them about the robot's perceived potential to damage human essence and identity, along with how much agency they saw in the robot. Participants showed little love for the androids, and the researchers observed, "the more the robot's appearance resembles that of a real person, the more the boundaries between humans and machines are perceived to be blurred." A related concern is that social robots may be resisted because they are better at performing tasks than humans are, no matter how much they resemble people physically.


Spatial Technology Opens a Window Into History
USC News (02/09/16) Lizzie Hedrick

University of Southern California (USC) researchers have developed Strabo, software that reads scanned maps and automatically identifies historical locations. However, the researchers say Strabo is just a first step toward a larger goal. "The software I ultimately want to create will be a user-generated platform offering information about any given piece of land from multiple sources," says USC professor Yao-Yi Chiang. He says the program eventually will support automatic reasoning, providing information to users who supply only a location's coordinates and the intended land use. In 2014, a British company hired Chiang to help determine whether areas of land had previously been contaminated. Chiang updated Strabo to automatically scan historical Ordnance Survey maps covering the entire country. The program now integrates multiple maps to identify areas that once supported factories, mines, quarries, or gas works that no longer exist. The researchers note the platform is entirely open source. "Anyone can use our software without paying a licensing fee," Chiang says. "Users can write their own code on top of our software--in fact, we encourage people to do that."
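
As a rough illustration of one early step in reading a scanned map, the sketch below isolates dark ink and extracts candidate feature outlines with OpenCV; the file name and area threshold are invented, and Strabo's actual pipeline (text recognition, georeferencing) is far richer.

```python
import cv2

# Hypothetical illustration of one early step in reading a scanned map:
# isolate dark ink and find candidate feature outlines. Strabo's actual
# pipeline (text and symbol recognition, georeferencing) is far richer.
scan = cv2.imread("ordnance_sheet.png", cv2.IMREAD_GRAYSCALE)  # assumed file
_, ink = cv2.threshold(scan, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(ink, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Keep regions big enough to be buildings or labeled works, not noise.
candidates = [c for c in contours if cv2.contourArea(c) > 200]
print(f"{len(candidates)} candidate map features")
```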


SweepSense Pauses Your Music When Earphones Are Removed
R&D Magazine (02/08/16) Greg Watry

Carnegie Mellon University researchers have developed SweepSense, a system that can pause music when a user removes both earbuds. SweepSense utilizes the speaker and microphone hardware already in place in laptops and smartphones. The technology emits inaudible ultrasonic sweeps from the speakers; the reflections picked up by the microphone enable SweepSense to detect changes in the environment, such as earbuds being plucked out of ears. During testing, the researchers recruited 24 participants and pumped 20- to 22-kHz sweeps through the left earbud and 23- to 25-kHz sweeps through the right earbud. The microphone, located 20 cm down the cord, captured the reflected ultrasound. Using this setup, the researchers were able to detect four possible states--both buds in, left bud out, right bud out, and both buds out. In 1,200 classification attempts, SweepSense achieved an accuracy of 94.8 percent. An additional experiment showed SweepSense could detect the various orientation states of a laptop lid. The researchers plan to improve the hardware to widen SweepSense's frequency range, making the emitted ultrasound even less audible to the human ear.
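
A minimal version of the sensing idea is to measure how much energy from each earbud's sweep band reaches the microphone and threshold it. The sketch below follows the article's bands (20-22 kHz left, 23-25 kHz right), but the sample rate, thresholds, and the assumption that a removed bud leaks more energy to the mic are invented; the real classifier is more sophisticated.

```python
import numpy as np

# Sketch of the sensing idea: measure the energy in each earbud's sweep
# band from the microphone signal and threshold it. Bands follow the
# article; everything else here is an invented stand-in.
RATE = 96000  # a sample rate high enough to capture 25 kHz

def band_energy(signal, lo, hi, rate=RATE):
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / rate)
    return spectrum[(freqs >= lo) & (freqs <= hi)].sum()

def classify(mic_chunk, threshold=1.0):
    # Toy decision rule: this sketch assumes an exposed (removed) bud
    # leaks more sweep energy back to the cord microphone. The real
    # mapping depends on the hardware and on SweepSense's classifier.
    left_out = band_energy(mic_chunk, 20_000, 22_000) > threshold
    right_out = band_energy(mic_chunk, 23_000, 25_000) > threshold
    return {"left": "out" if left_out else "in",
            "right": "out" if right_out else "in"}

chunk = np.random.randn(4096)  # stand-in for a recorded audio frame
print(classify(chunk))
```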


Cockroach-Like Robots May Be the Future of Disaster Help
Associated Press (02/08/16) Seth Borenstein

Researchers at the University of California, Berkeley and Harvard University have developed the Compressible Robot with Articulated Mechanisms (CRAM), a mini-robot that mimics the cockroach's remarkable strength and agility. The researchers say swarms of future roach-like robots could be equipped with cameras, microphones, and other sensors and then used in disasters to help search for victims by squeezing through small cracks. CRAM looks more like an armadillo and is about 20 times bigger than a real cockroach. The researchers built the prototype using off-the-shelf electronics and motors, which cost less than $100, according to estimates. If mass-produced, with sensors and other equipment added on, the robots could eventually cost less than $10 apiece, according to Harvard researcher Kaushik Jayaram. The researchers found cockroaches use a newly identified type of locomotion, based on the ideal amount of belly friction, to squeeze through cracks and crevices. Cockroaches and insects in general are excellent design guides for roboticists to borrow from, according to Johns Hopkins University professor Noah Cowan, who was not involved in the CRAM research. "There's definitely a case for machines that can go into environments that are not safe for humans to go into," Cowan says.


Algorithm Developed to Predict Future Botnet Attacks
Network World (02/08/16) Patrick Nelson

Researchers from Ben-Gurion University (BGU) in Israel have developed an algorithm that can trace botnets back to their perpetrators. The key is analyzing data produced by previous attacks, according to the researchers. The algorithm first identifies the botnet and then allows it to be traced. "Using botnets, hackers and cybercriminals can carry out powerful attacks that, until now, were largely untraceable," the researchers say. The team combined machine learning with an analysis of honeypot data collected by Deutsche Telekom to build a program that identifies botnets by finding similar attack patterns. Using the new algorithm, the researchers have discovered six botnets and traced them back to their administrators. In addition, they believe the algorithm can determine whether a botnet attack came from a person or from a robot. The researchers also say they can use the algorithm to predict future botnet attacks. "This is the first time such a comprehensive study has been carried out and returned with unique findings," says Dudu Mimran of Deutsche Telekom Innovation Labs at BGU.
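
The BGU algorithm itself is not published in this summary, but grouping honeypot attack records by behavioral similarity can be sketched with off-the-shelf clustering; the features and parameters below are invented for illustration.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Illustrative sketch: cluster honeypot attack records by behavioral
# features so attacks launched by the same botnet group together. The
# features and parameters are invented; this is not the BGU algorithm.
# Each row: [commands per session, session length (s), targeted port]
attacks = np.array([
    [12, 30, 23], [11, 28, 23], [13, 31, 23],   # similar SSH-style attacks
    [2, 300, 80], [3, 310, 80],                 # slow web-focused attacks
    [50, 5, 445],                               # an outlier
], dtype=float)

# Normalize each feature so no single scale dominates the distances.
normed = (attacks - attacks.mean(axis=0)) / attacks.std(axis=0)
labels = DBSCAN(eps=0.9, min_samples=2).fit_predict(normed)
print(labels)  # same label = same suspected botnet; -1 = unmatched
```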


Google AI Gets Better at 'Seeing' the World by Learning What to Focus On
TechRepublic (02/05/16) Nick Heath

Google DeepMind researchers say they have achieved state-of-the-art performance in picking out house numbers from a database of more than 200,000 images taken from Google Street View. The researchers achieved their results by refining the convolutional neural network (CNN) used to match images. They added a module to their CNN that learns how to manipulate images to make people, animals, and objects easier to identify by removing distortion, noise, and clutter, and by enlarging and rotating areas of interest. The spatial transformer module removes much of the extraneous detail that could slow down identification. In one test, the researchers applied the module to a CNN tasked with recognizing images of traffic signs, and found it learned to focus on the sign and gradually remove the background. In another task, the spatial transformer learned how to identify and single out heads and bodies of birds in a collection of images. By adding a spatial transformer to a CNN, Google was able to reduce error rates from 3.9 percent to 3.6 percent. When identifying which of 200 different species of bird were pictured in thousands of images, the system achieved an accuracy rate of 84.1 percent, a 1.8-percentage-point improvement over the previous best result.
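
The spatial transformer idea can be sketched compactly: a small localization network predicts an affine transform, and a differentiable sampler warps the input accordingly. The sketch below uses PyTorch's grid utilities as a stand-in (an assumption; DeepMind's own implementation predates PyTorch).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal spatial transformer module, sketched in PyTorch (an
# assumption, not DeepMind's code). A small localization network
# predicts an affine transform; affine_grid/grid_sample then warp the
# input so the downstream CNN sees a cleaned-up region of interest.
class SpatialTransformer(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(channels, 8, kernel_size=7), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, 6),  # 6 numbers = a 2x3 affine matrix
        )
        # Start as the identity transform so training begins gently.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

stn = SpatialTransformer(channels=1)
digits = torch.randn(8, 1, 28, 28)   # a batch of distorted digit images
print(stn(digits).shape)             # same shape, spatially re-sampled
```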


Evolving Our Way to Artificial Intelligence
The Conversation (02/05/16) Arend Hintze

Michigan State University professor Arend Hintze cites the recent achievement of a computer program beating a top-level Go player as an example of the limitations of current artificial-intelligence (AI) research. He argues a new, evolutionary approach to AI development is needed to realize true thinking machines. Hintze stresses AI's progress is measured "not by how much we learned about nature or humans, but by achieving a well-defined goal," such as beating a person at Go. "I describe the AI that I would like to have as 'a machine that has cognitive abilities comparable to that of a human,'" Hintze says. Because it is impossible to engineer the indefinable, the current strategy of engineering human-level cognition is untenable, and Hintze says for this reason he focuses on evolving AI by attempting to understand how natural intelligence evolved. He thinks an approach with much more potential is teaching AI to play games in a more human-like manner through the art of improvisation. "If the AI that controls other players evolved, it may go through the same steps that made our brain work," Hintze says. "That could include sensing emotional equivalents to fear, warning about undetermined threats, and probably also empathy to understand other organisms and their needs."
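
For readers unfamiliar with the evolutionary approach Hintze describes, the loop is simple: evaluate variants, keep the fittest, and mutate. The toy sketch below uses a trivial bit-string "genome" as a stand-in for evolved neural controllers; everything in it is illustrative.

```python
import random

# A toy evolutionary loop, illustrating the approach Hintze favors:
# rather than engineering behavior, keep the variants that perform best
# and mutate them. The "genome" and fitness function are deliberately
# trivial stand-ins for evolved controllers.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]        # stand-in for "good behavior"

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]              # selection: keep the best five
    population = [
        [bit ^ (random.random() < 0.05) for bit in random.choice(parents)]
        for _ in range(20)                # mutation: flip bits rarely
    ]
print(max(fitness(g) for g in population), "of", len(TARGET))
```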


Jeremy Bailenson Peers Into the Future of Virtual Reality
The Wall Street Journal (02/10/16) Geoffrey A. Fowler

In an interview, Jeremy Bailenson, director of Stanford University's Virtual Human Interaction Lab, discusses the future of virtual-reality (VR) technology. "My dream has been to build a system that [lets you] feel present," he says. Bailenson envisions avatar systems that can reproduce what he calls the "virtual handshake" within five years. He thinks this will help enable the realization of virtual presence, making it less necessary to physically attend events. Bailenson also speculates on future VR innovations such as instruments that can replicate the experience of being someone of a different gender or ethnicity. In such scenarios, users can encounter feelings of discrimination they had not previously felt, to the point where their attitudes and behavior might change. According to longitudinal studies Bailenson conducted, VR experiences have a more lasting effect on people than watching a video does. As an example, Bailenson cites a study in which people experienced cutting down a tree in a VR environment to become more conservation-minded; a month later, their desire to use less paper remained stronger than that of people who had merely watched a video.


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe