Welcome to the March 2, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.
Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.
HEADLINES AT A GLANCE
Cryptography Pioneers Win Turing Award
The New York Times (03/01/16) John Markoff
Researchers Whitfield Diffie and Martin E. Hellman have been named to receive ACM's 2015 A.M. Turing Award for their pioneering work in cryptography at Stanford University in the 1970s. Diffie was inspired by Stanford artificial intelligence researcher John McCarthy's vision of a "Home Information Terminal" to consider the challenge of individual digital signatures, and along with Hellman created "public-key cryptography." Without this technique, the commercialization of the World Wide Web likely would never have happened. The timing of Diffie and Hellman's honor is particularly significant because the U.S. Federal Bureau of Investigation is fighting Apple over its unwillingness to unlock the cryptographic system that protects digital information stored in its iPhones. The privacy protection technology used to safeguard modern electronic communications is derived from Diffie and Hellman's public-key cryptography research. The Turing Award, which is considered the Nobel Prize of the computing world, carries a $1-million cash prize. Diffie, who supports personal privacy protection in the digital age, says he will use the Turing Award prize money to further document the history of cryptography. Hellman, who has focused on the threat presented by nuclear weapons, says he will write a new book with his wife on peace and sustainability.
Google's Artificial Brain Is Pumping Out Trippy--and Pricey--Art
Wired (02/29/16) Cade Metz
Artworks created by artificial neural networks developed by Google made their public debut last week at a San Francisco gallery, where they attracted high prices. Google and other online services currently use neural networks for a variety of functions such as automated image identification, speech recognition, and language translation, but Google's DeepDream art "generator" represents an entirely new method the company calls "Inceptionism." DeepDream is fed an image, and the neural net probes it for familiar patterns, enhances them, and then repeats the process for the same image. "This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird," Google says in a blog post. "This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere." Artists can select the images fed to the neural nets and adjust the nets' behavior, possibly retraining them to identify new patterns. The technique began as a way of better understanding how neural networks behave, and Google engineer Mike Tyka saw it as a way of creating art.
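The feedback loop Google describes can be illustrated in miniature. The following sketch is purely illustrative (the real DeepDream back-propagates through a deep convolutional network on images): a "detector" scores how strongly a template pattern appears in a 1-D signal, and gradient ascent then nudges the signal so the detected pattern becomes more pronounced, just as a cloud that "looks a little bit like a bird" is made to look more like one.

```python
def responses(signal, template):
    """Cross-correlation of the template with the signal at every offset."""
    n, m = len(signal), len(template)
    return [sum(template[j] * signal[i + j] for j in range(m))
            for i in range(n - m + 1)]

def amplify(signal, template, steps=50, lr=0.01):
    """Repeatedly adjust the signal to increase the detector's total response,
    mirroring the 'make it look more like a bird' feedback loop."""
    signal = list(signal)
    m = len(template)
    for _ in range(steps):
        r = responses(signal, template)
        # Gradient of sum(r_i^2) with respect to each input sample x_k.
        grad = [0.0] * len(signal)
        for i, ri in enumerate(r):
            for j in range(m):
                grad[i + j] += 2.0 * ri * template[j]
        signal = [x + lr * g for x, g in zip(signal, grad)]
    return signal

if __name__ == "__main__":
    template = [1.0, -1.0, 1.0]                          # the pattern the "network" knows
    noisy = [0.1, -0.05, 0.12, 0.0, 0.08, -0.1, 0.05]    # faint hints of it
    before = max(responses(noisy, template))
    after = max(responses(amplify(noisy, template), template))
    print(before < after)  # each pass strengthens the pattern it recognizes
```

Running the loop longer amplifies the pattern further, which is why DeepDream's repeated passes produce "a highly detailed bird ... seemingly out of nowhere."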
IARPA Wants Smarter Algorithms--Not More of Them
NextGov.com (03/01/16) Mohana Ravindranath
In an interview, U.S. Intelligence Advanced Research Projects Activity (IARPA) program manager Jacob Vogelstein discusses how his agency could save money by more rigorously training existing algorithms that were previously set aside, rather than funding the development of new ones. He notes many state-of-the-art systems need an interactive simulation environment to do more interesting artificial intelligence (AI) work. "With...continuous interactive feedback, you're going to have a machine-learning approach to control a robot, as opposed to programming it with a set of rules," Vogelstein notes. He says IARPA has developed a large number of algorithms that have "laid in the shadows" for years because people saw them as having little utility for interesting AI tasks. Vogelstein notes this view has changed with the advent of scientific literature showing old algorithms, when matched with more data and bigger computers, can help meet challenges previously considered intractable. "It's interesting to consider whether the money is better spent on the math of building new algorithms or really on assembling better datasets and putting together large computing resources to exploit those datasets using existing algorithms," he says. Among the IARPA-funded AI projects Vogelstein cites are the Janus facial-recognition program and the Machine Intelligence from Cortical Networks (MICrONS) program, which explores brain-derived algorithms.
Baidu's Deep-Learning System Rivals People at Speech Recognition
Technology Review (02/23/16) Will Knight
Baidu is a leading developer of the conversational interfaces now proliferating throughout China, technology that could enhance human-machine interaction across the world. The application of voice control has become more practical with advances in machine learning, and the interfaces can function even in noisy environments. "I see speech [technology] approaching a point where it could become so reliable that you can just use it and not even think about it," says Stanford University professor and Baidu chief scientist Andrew Ng. "The best technology is often invisible, and as speech recognition becomes more reliable, I hope it will disappear into the background." Ng thinks voice may soon be sufficiently reliable for interacting with multiple devices, including robots and home appliances. Baidu teams in Beijing and Silicon Valley are working on making speech recognition more accurate and computers more capable of parsing sentence meaning. In November 2015, Baidu's Silicon Valley facility unveiled Deep Speech 2, a speech-recognition engine consisting of a deep neural network that learns to associate sounds with words and phrases as it is fed millions of examples of transcribed speech. Baidu researchers say the network can accurately identify spoken words, and can sometimes transcribe segments of Mandarin speech with more precision than a human.
How to Tame Your Robot
CMU News (02/29/16)
Carnegie Mellon University researcher Madeline Gannon has designed Quipt, open source software that instructs a robot to perform tasks by following a human's motions. "I wanted to invent better ways to talk with machines who can make things," Gannon says. "Industrial robots are some of the most adaptable and useful to do that." Quipt replaces joystick-based robot programming with a motion-capture system that enables the machine to see where it is using cameras. The robot sees tracking markers on a person's hand or clothes, and can track them, mimic their movement, or be instructed to avoid the markers, potentially boosting both the robot's safety and intelligence. "What's really exciting is taking these machines off of control settings and taking them into live environments, like classrooms or construction sites," Gannon says. Gannon worked with visiting artist Addie Wagenknecht and the Frank-Ratchye Studio for Creative Inquiry on a robot that could rock a baby's cradle based on the sound of the infant's cries. Frank-Ratchye Studio director Golan Levin thinks Gannon's achievements could have a transformative effect on industrial design and the arts, along with how people design architecture, apparel, and furniture.
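The follow-and-avoid behaviors described above can be sketched with a simple proportional controller. Everything here is invented for illustration (Quipt itself drives a real robot arm from motion-capture data): each step, the robot moves a fraction of the remaining distance toward the tracked marker in "follow" mode, or away from it in "avoid" mode.

```python
# Toy sketch of marker tracking: a proportional controller in 2-D.
# Positions and the gain are made-up numbers, not Quipt parameters.

def step(robot_pos, marker_pos, mode, gain=0.5):
    """Return the robot's next position given the tracked marker's position."""
    dx = marker_pos[0] - robot_pos[0]
    dy = marker_pos[1] - robot_pos[1]
    sign = 1.0 if mode == "follow" else -1.0   # "avoid" backs away instead
    return (robot_pos[0] + sign * gain * dx,
            robot_pos[1] + sign * gain * dy)

pos = (0.0, 0.0)
for _ in range(5):                  # marker held at (1, 1); the robot closes in
    pos = step(pos, (1.0, 1.0), "follow")
print(round(pos[0], 3), round(pos[1], 3))  # converges toward the marker
```

Flipping the mode to "avoid" makes the same controller retreat from the marker, which is the safety behavior the article mentions.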
New Circuit Material Can Be Stretched and Twisted Like Chewing Gum
Motherboard (02/29/16) Michael Byrne
Swiss Federal Institute of Technology in Lausanne (EPFL) researchers say they have developed a new material that enables electronics to be stretched up to four times their original length in all directions. The material withstands up to 1 million cycles of maximal stretching without cracking or losing its conductivity properties. The material consists of a layer of gold and gallium, the latter of which has an unusually low melting point, which means it remains liquid at room temperature. The liquid metal is patterned onto a thin polymer film, where it functions as the conductive tracks of a normal circuit board. The gold is used to prevent the gallium from beading up and rolling away like water droplets when it comes into contact with the polymer. The EPFL researchers say they were able to use these properties to fabricate conductive tracks on the order of nanometers in width. They say this research could be applied to a range of fields, including artificial skins on prosthetic limbs or robots, and in electronic circuits that can be twisted and stretched into new shapes.
Tech Workers Increasingly Look to Leave Silicon Valley
Quartz (02/29/16) Ashley Rodriguez
A growing number of engineers and technology professionals from the San Francisco Bay area are looking to leave Silicon Valley for other technology hubs such as Austin, TX, and Seattle, WA, according to Indeed.com. The job-search site found as of Feb. 1, 35 percent of tech job searches on Indeed.com from the region were for jobs elsewhere, an increase of about 30 percent from a year ago. In addition, the portion of searches for work outside of the Bay Area was highest among people ages 31 to 40, suggesting people are leaving to find better opportunities elsewhere or to settle down in more affordable areas. Nevertheless, these trends do not diminish Silicon Valley's standing as the primary technology hub, as 66 percent of tech job searchers were still looking for work within the Bay Area, and people from other parts of the country are moving to the region every day. However, the latest trends highlight the growth of technology opportunities in other parts of the country. As tech talent migrates to smaller cities, tech companies are following suit. Major tech companies such as Facebook and Google recently have opened offices in Austin, Seattle, and Portland, OR.
In Emergencies, Should You Trust a Robot?
Georgia Tech News Center (02/29/16) John Toon
Georgia Institute of Technology (Georgia Tech) researchers studying human-robot trust in an emergency situation report humans may put too much faith in robots for their own safety. In a mock building fire, test subjects followed the instructions of an "Emergency Guide Robot" even after the machine had proved unreliable, and after some participants were told the robot had broken down. In the emergency scenario, the robot may have become an "authority figure," according to the researchers. They note in simulation-based research done without a realistic emergency scenario, test subjects did not trust a robot that had previously made mistakes. The team envisions groups of robots being stationed in high-rise buildings to direct occupants toward exits and urge them to evacuate during emergencies. "These are just the type of human-robot experiments that we as roboticists should be investigating," says Georgia Tech professor Ayanna Howard. "We need to ensure that our robots, when placed in situations that evoke trust, are also designed to mitigate that trust when trust is detrimental to the human." The research will be presented March 9 at the ACM/IEEE International Conference on Human-Robot Interaction (HRI 2016) in Christchurch, New Zealand.
Google: Self-Driving Car Followed 'the Spirit of the Road' Before Accident
Computerworld (02/29/16) Lucas Mearian
Google reports it has gained insights concerning a Feb. 14 accident in which one of its autonomous vehicles (AVs) was hit by a city bus along a three-lane boulevard in Mountain View, CA. Just before the collision, the self-driving car was in the right lane heading toward an intersection and had made a right-turn signal. Sandbags enclosing a storm drain caused the car to move left into the center of the wide lane while the bus was coming up from behind. The AV test driver assumed the bus would slow or stop, but seconds later the Google car struck the side of the bus as the car reentered the center of the lane. Google says the majority of the time the AVs drive in the middle of the lane but "when you're teeing up a right-hand turn in a lane wide enough to handle two streams of traffic, annoyed traffic stacks up behind you. So several weeks ago we began giving the self-driving car the capabilities it needs to do what human drivers do: hug the rightmost side of the lane." Google admits some responsibility for the accident, and says it has since modified its AV software so "our cars will more deeply understand that buses [and other large vehicles] are less likely to yield to us than other types of vehicles."
Disney Automated System Lets Characters Leap and Bound Realistically in Virtual Worlds
EurekAlert (02/26/16) Jennifer Liu
Disney researchers have developed an automated approach to generating lifelike character motions in interactive environments. The researchers say the technology could help game designers by easing their workload and providing instant feedback on how characters will perform in three-dimensional (3D) space. "Our new method is a breakthrough in how characters can navigate through a game environment, enabling acrobatic movements normally only seen in big-budget Hollywood films," says Disney researcher Markus Gross. Game designers often encounter problems when a character makes contact with the environment. Designers must manually annotate how a character grasps a pole, where to set a character's foot, and what motions are possible in a given space. "It can be very tedious, especially for motions that involve intricate contacts between the character and the environment," says Disney researcher Robert W. Sumner. The new system solves this problem by automatically analyzing a database of motion clips to define what motion skills a character possesses. In addition, the system analyzes the 3D environment, identifying the spatial relationships between surfaces, determining which surfaces could physically support a character, and discovering what motions are possible in a given space. During testing, the researchers showed the new system could employ 16 motion skills while controlling 10 characters in a complex environment.
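The core pairing of a motion library with environment analysis can be sketched as a simple filter. All names and fields below are invented for illustration, not Disney's system: each clip in a library carries contact requirements, and a geometric query keeps only the motions a given surface can actually support.

```python
# Hypothetical motion library: each clip states what kind of support it needs
# and at what height range it is physically usable.
MOTION_LIBRARY = [
    {"name": "vault",       "needs": "hand_support", "min_height": 0.8, "max_height": 1.3},
    {"name": "climb",       "needs": "hand_support", "min_height": 1.3, "max_height": 3.0},
    {"name": "step_onto",   "needs": "foot_support", "min_height": 0.0, "max_height": 0.6},
    {"name": "slide_under", "needs": "clearance",    "min_height": 0.5, "max_height": 1.0},
]

def feasible_motions(surface_kind, surface_height):
    """Return the motion skills a character could use at this surface,
    mirroring the system's automatic environment analysis."""
    return [m["name"] for m in MOTION_LIBRARY
            if m["needs"] == surface_kind
            and m["min_height"] <= surface_height <= m["max_height"]]

print(feasible_motions("hand_support", 1.0))  # a waist-high ledge supports a vault
print(feasible_motions("foot_support", 0.4))  # a low step supports stepping onto it
```

The value of automating this query is exactly what Sumner describes: designers no longer annotate each contact by hand, because the system derives feasibility from the clip metadata and the scene geometry.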
It Might Catch On--First Mathematical Model to Explain How Things Go Viral
University of Aberdeen (02/26/16) Robert Turbyne
A University of Aberdeen-led research team has developed a model that explains how things go viral in social networks, and it includes the impact of friends and acquaintances in the sudden spread of new ideas. "Mathematical models proposed in the past typically neglected the synergistic effects of acquaintances and were unable to explain explosive contagion, but we show that these effects are ultimately responsible for whether something catches on quickly," says University of Aberdeen researcher Francisco Perez-Reche. The model shows people's opposition to accepting a new idea acts as a barrier to large contagion, until the transmission of the phenomenon becomes strong enough to overcome that reluctance. Although social media makes the explosive contagion phenomenon more apparent in everyday life than ever before, it is the intrinsic value of the idea or product, and whether friends and acquaintances adopt it or not, which remains the crucial factor. The model potentially could be used to address social issues, or by companies to give their product an edge over competitors. "Our conclusions rely on numerical simulations and analytical calculations for a variety of contagion models, and we anticipate that the new understanding provided by our study will have important implications in real social scenarios," Perez-Reche says.
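The synergy effect the model captures can be illustrated with a toy simulation. The network, parameters, and update rule below are invented for illustration (the paper analyzes several contagion models mathematically): each person gets one chance to adopt when first exposed, with a probability that grows for every additional acquaintance who has already adopted; setting the synergy term to zero recovers an ordinary independent cascade.

```python
import random

def ring_graph(n, reach=3):
    """Each node links to its `reach` nearest neighbours on either side,
    giving the clustered structure where synergy between acquaintances matters."""
    return {i: [(i + d) % n for d in range(-reach, reach + 1) if d] for i in range(n)}

def spread(adj, seeds, base_p, synergy, rng):
    """One-shot contagion: a newly exposed node adopts with probability
    base_p + synergy * (adopting neighbours - 1); a failed attempt is final,
    modelling reluctance as a barrier to contagion."""
    adopted, decided, frontier = set(seeds), set(seeds), set(seeds)
    while frontier:
        exposed = {v for u in frontier for v in adj[u] if v not in decided}
        frontier = set()
        for v in exposed:
            decided.add(v)
            k = sum(1 for u in adj[v] if u in adopted)
            if rng.random() < min(1.0, base_p + synergy * (k - 1)):
                adopted.add(v)
                frontier.add(v)
    return len(adopted)

def average_outbreak(synergy, trials=20):
    adj, seeds = ring_graph(200), range(4)   # a small block of early adopters
    rng = random.Random(42)
    return sum(spread(adj, seeds, 0.1, synergy, rng) for _ in range(trials)) / trials

no_synergy, with_synergy = average_outbreak(0.0), average_outbreak(0.6)
print(no_synergy < with_synergy)  # synergy turns a fizzle into a large outbreak
```

With no synergy the contagion dies near the seeds; with synergy, nodes whose acquaintances have already adopted become very likely to adopt themselves, producing the explosive spread the researchers describe.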
Computers Read 1.8 Billion Words of Fiction to Learn How to Anticipate Human Behavior
The Stack (UK) (02/26/16) Martin Anderson
Researchers at Stanford University are using 600,000 fictional stories to inform their new knowledge base called Augur. The team considers the approach to be an easier, more affordable, and more effective way to train computers to understand and anticipate human behavior. Augur is designed to power predictive models that anticipate what an individual user might be about to do, or want to do next. The system's current success rate is 71 percent for unsupervised predictions of what a user will do next, and 96 percent for recall, or identification of human events. The researchers report dramatic stories can introduce comical errors into a machine-based prediction system. "While we tend to think about stories in terms of the dramatic and unusual events that shape their plots, stories are also filled with prosaic information about how we navigate and react to our everyday surroundings," they say. The researchers note artificial intelligence will need to put scenes and objects into an appropriate context. They say crowdsourcing or similar user-feedback systems will likely be needed to amend some of the more dramatic associations certain objects or situations might inspire.
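The basic idea of mining everyday behavior from narrative text can be shown at toy scale. This sketch is purely illustrative (Augur's real pipeline parses hundreds of thousands of stories into a much richer statistical knowledge base): each "story" is reduced to a hand-made sequence of activities, co-occurrence counts record which activity tends to follow which, and the most frequent successor becomes the prediction.

```python
from collections import Counter, defaultdict

# Hand-made miniature "corpus" of activity sequences (invented examples).
stories = [
    ["wake up", "brush teeth", "make coffee", "drive to work"],
    ["wake up", "make coffee", "read newspaper"],
    ["wake up", "brush teeth", "make coffee", "read newspaper"],
]

# Count which activity follows which across all stories.
follows = defaultdict(Counter)
for story in stories:
    for current, nxt in zip(story, story[1:]):
        follows[current][nxt] += 1

def predict_next(activity):
    """Return the activity most often observed to follow `activity`."""
    options = follows.get(activity)
    return options.most_common(1)[0][0] if options else None

print(predict_next("wake up"))      # 'brush teeth' (seen twice vs. once)
print(predict_next("make coffee"))  # 'read newspaper'
```

The "comical errors" the researchers mention fall out naturally from this kind of counting: if fiction disproportionately pairs an object with dramatic events, the raw statistics will too, which is why they expect user feedback to be needed as a corrective.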
World's First Parallel Computer Based on Biomolecular Motors
Dresden University of Technology (Germany) (02/25/16)
A new parallel-computing approach can solve combinatorial problems, according to a study published in Proceedings of the National Academy of Sciences. Researchers from the Max Planck Institute of Molecular Cell Biology and Genetics and the Dresden University of Technology collaborated with an international team on the technology. The researchers note significant advances have been made in conventional electronic computers in the past decades, but their sequential nature prevents them from solving problems of a combinatorial nature. The number of calculations required to solve such problems grows exponentially with the size of the problem, making them intractable for sequential computing. The new approach addresses these issues by combining well-established nanofabrication technology with molecular motors that are very energy-efficient and inherently work in parallel. The researchers demonstrated the parallel-computing approach on a benchmark combinatorial problem that is very difficult to solve with sequential computers. The team says the approach is scalable, error-tolerant, and dramatically reduces the time needed to solve combinatorial problems of size N. The problem to be solved is "encoded" within a network of nanoscale channels by both mathematically designing a geometrical network that is capable of representing the problem, and by fabricating a physical network based on this design using lithography. The network is then explored in parallel by many protein filaments self-propelled by a molecular layer of motor proteins covering the bottom of the channels.
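The encoding idea can be sketched in software. This is a hedged illustration of the concept, not the paper's fabrication details: in the device, a filament at each "split" junction either picks up the next set element or skips it, so the exit it reaches encodes one subset sum; the physical network explores all paths simultaneously, while the sketch below enumerates them one by one.

```python
from itertools import product

def network_exits(elements):
    """Follow every path through the junction network: at each junction the
    agent either adds the next element to its running total or skips it.
    The reachable exit positions are exactly the achievable subset sums."""
    exits = set()
    for choices in product((0, 1), repeat=len(elements)):  # skip or add at each junction
        exits.add(sum(e for e, c in zip(elements, choices) if c))
    return exits

if __name__ == "__main__":
    elements = [2, 5, 9]   # a small illustrative instance
    print(sorted(network_exits(elements)))  # [0, 2, 5, 7, 9, 11, 14, 16]
```

A sequential computer must walk the 2^N paths one after another, which is the exponential blow-up noted above; the molecular-motor device instead sends many filaments through the network at once, so all paths are explored in parallel.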
Abstract News © Copyright 2016 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: firstname.lastname@example.org
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.