Association for Computing Machinery
Welcome to the May 25, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.


HEADLINES AT A GLANCE


Building the Tools for Bug-Free Software
Government Computer News (05/24/16) Patrick Marshall

The U.S. National Science Foundation has awarded a five-year, $10-million grant to Princeton University professor Andrew Appel's Deep Specification (DeepSpec) project to develop a toolkit for specifying the precise intended functions of software programs in all possible scenarios and for confirming they perform as expected. Appel says the initial task is to require coders to specify, in a standardized form, what operations their code is meant to perform. He notes the code controlling interaction between software components--the application programming interface--is written partly in source code and partly in English. "It is the part that is written down in English that we want to formalize in logic to explain exactly what would happen if you make a call to it," he says. Once the behavior of software components is described exactly in formal logic, specifications can be written against which the code is verified. "By formalizing the interfaces in logic, that then allows us to write specifications for proving that some software component does what it is supposed to do and no more," Appel says. DeepSpec co-developer and University of Pennsylvania professor Benjamin Pierce says the project aims "to replace the existing 'patch and pray' mentality, where software developers wait for vulnerabilities to be discovered in the wild."
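
As a loose illustration of the idea (not DeepSpec's own toolchain, which is built around a full proof assistant), the minimal Lean sketch below states a toy component's intended behavior as a logical specification and machine-checks that the code meets it for every possible input; the function and theorem names are invented for this example.

    -- Illustrative sketch only: a toy "component" plus a specification of its
    -- behavior written in logic, with a machine-checked proof that the code
    -- satisfies the specification for all inputs.
    def swapPair (p : Nat × Nat) : Nat × Nat := (p.2, p.1)

    -- The specification: swapping (a, b) always yields (b, a).
    theorem swapPair_spec (a b : Nat) : swapPair (a, b) = (b, a) := rfl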


Researchers Teaching Robots to Feel and React to Pain
IEEE Spectrum (05/24/16) Evan Ackerman

Researchers at Leibniz University Hannover in Germany are developing an "artificial robot nervous system to teach robots how to feel pain" and react quickly to avoid potential damage. According to Leibniz researcher Johannes Kuehn, shielding robots from damage will protect people as well, given the growing trend of robots operating in close proximity to human workers. In collaboration with Leibniz professor Sami Haddadin, Kuehn created a bio-inspired robot controller that mimics pain-reflex mechanisms to react and protect the robot from potential physical harm. "We focus on the formalization of robot pain, based on insights from human pain research, as an interpretation of tactile sensation," Kuehn and Haddadin note. The researchers say their prototype controller features a tactile system using a "nervous robot-tissue model that is inspired by the human skin structure" to decide how much pain the robot should sense for a given amount of force. The model transmits pain information in repetitive spikes if the force surpasses a specific threshold, and the controller reacts after classifying the information as light, moderate, or severe pain.
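
The description above suggests a simple threshold-and-classify loop; the hypothetical Python sketch below mirrors that pattern (it is not the researchers' controller, and all force thresholds and reactions are invented for illustration).

    # Hypothetical sketch; thresholds and reactions are invented.
    def classify_pain(force_newtons: float) -> str:
        """Map a contact-force reading to a coarse pain level."""
        if force_newtons < 5.0:        # below the pain threshold
            return "none"
        elif force_newtons < 15.0:
            return "light"
        elif force_newtons < 30.0:
            return "moderate"
        return "severe"

    def reflex(pain_level: str) -> str:
        """Choose a protective reaction that escalates with pain severity."""
        reactions = {
            "none": "continue the task",
            "light": "slow down and back away slightly",
            "moderate": "retreat until the contact force disappears",
            "severe": "retract quickly and stay passive until cleared",
        }
        return reactions[pain_level]

    print(reflex(classify_pain(22.0)))  # -> retreat until the contact force disappears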


Automatic Bug Finder
MIT News (05/25/16) Larry Hardesty

Researchers at the University of Maryland and the Massachusetts Institute of Technology's (MIT) Computer Science and Artificial Intelligence Laboratory have moved closer to enabling symbolic execution of applications written with programming frameworks, using a system called Pasket that automatically builds models of framework libraries. Symbolic execution steps through every instruction a program executes for a broad range of input values, but it becomes unworkable for apps built on modern programming frameworks because the frameworks include massive libraries of regularly reused code. "The only thing we care about is what crosses the boundary between the application and the framework," says MIT postdoctoral researcher Xiaokang Qiu. "The framework itself is like a black box that we want to abstract away." The researchers built four software "design patterns" into Pasket, which attempts to fit any given group of program traces to each design pattern and selects the one with the best fit. The team compared a model generated by the system with a widely used model of Java's standard library of graphical-user-interface components that took years to assemble, and found the new model filled in several gaps in the other model.
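
As a rough analogy (not Pasket's actual output), a framework model replaces a heavyweight library class with a tiny stand-in that preserves only the behavior visible at the application/framework boundary, so a symbolic-execution engine can analyze the application against the stand-in instead of the real library; the ButtonModel class below is invented for illustration.

    # Toy stand-in ("model") for a GUI framework's button class; it keeps only
    # boundary-visible behavior: registering a click handler and invoking it.
    class ButtonModel:
        def __init__(self, label: str):
            self.label = label
            self._handler = None

        def set_on_click(self, handler):
            # The real framework would manage event queues, rendering, and
            # focus; the model just remembers the callback.
            self._handler = handler

        def click(self):
            # Deliver the event exactly as the application would observe it.
            if self._handler is not None:
                self._handler(self)

    # Application code under analysis touches only the boundary.
    clicks = []
    button = ButtonModel("Submit")
    button.set_on_click(lambda b: clicks.append(b.label))
    button.click()
    assert clicks == ["Submit"]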


Signs of Creativity in How Robot Solves Problems
ZDNet (05/23/16) Byron Spice

New software from Carnegie Mellon University (CMU) researchers is helping robots deal with clutter and exhibit creative problem-solving skills. For example, CMU professor Siddhartha Srinivasa notes his lab's two-armed robot, the Home Exploring Robot Butler (HERB), was able to behave unexpectedly due to its wrist's 270-degree range. In one case, HERB cradled an object to be moved in the crook of its arm, a behavior Srinivasa says the robot learned by itself. Enabling such behavior is rearrangement-planning software that also was tested on the U.S. National Aeronautics and Space Administration's KRex robot, which successfully found traversable paths across an obstacle-filled landscape while pushing an object. Srinivasa says robots are proficient at "pick-and-place" (P&P) processes, but these do not scale well in cluttered environments. He notes the rearrangement planner automatically balances the two strategies--picking and placing versus pushing through clutter--according to how the robot progresses in its task. With its basic knowledge of the physics of its surroundings, the robot has some concept of what can be pushed, lifted, or stepped on. It also can be trained to pay attention to items that might be valuable or delicate.


Study Reveals Only 1 in 6 Drivers Want Fully-Autonomous Vehicles
Computerworld (05/24/16) Lucas Mearian

Most U.S. drivers do not want to own a fully self-driving car in the future, according to a University of Michigan survey. The poll found 37.2 percent of respondents were "very concerned" about riding in a completely autonomous vehicle, while 66.6 percent were "very or moderately concerned." Only 9.7 percent of respondents indicated they were not at all concerned about riding in a completely self-driving vehicle. In addition, 43 percent of women indicated they were "very concerned" about completely self-driving cars, versus 31.3 percent of men. However, the difference was smaller for partially self-driving vehicles, with only 17.5 percent of women and 16.4 percent of men very concerned. A previous survey found about 30 percent of respondents in the U.S., Australia, and Britain were "very concerned" about system and vehicle security breaches by hackers and about data privacy in the tracking of speed and location, while 37 percent were "moderately concerned" about these issues and nearly 25 percent were "slightly concerned."


Checklist of Worst-Case Scenarios Could Help Prepare for Evil AI
New Scientist (05/23/16) Chris Baraniuk

University of Louisville researcher Roman Yampolskiy and hacktivist Federico Pistono have developed a set of worst-case scenarios for a potential malevolent artificial intelligence (AI). Anticipating as many negative outcomes as possible will help guard against disaster, according to Yampolskiy. The researchers studied the issue using strategies borrowed from cybersecurity and created a list of the things that could go wrong, which should make it easier to test any safeguards that are eventually put in place. In one scenario, the researchers envision an AI system that unleashes a global propaganda war setting governments and populations in opposition, feeding a "planetary chaos machine." The work was paid for by a fund established by Elon Musk, who has described AI as humanity's "biggest existential threat." Yampolskiy cites Microsoft's Twitter chatbot Tay as an example of how quickly AI can get out of control; soon after it launched, Tay was tricked by users into spewing racist comments. Yampolskiy says the incident reveals the unpredictability of such systems. University of Sheffield researcher Noel Sharkey agrees an approach to testing inspired by cybersecurity is a good idea for any system, especially autonomous weapons.


Google Is Launching a New Research Project to See If Computers Can Be Truly Creative
Quartz (05/22/16) Mike Murphy

The Google Brain artificial intelligence (AI) research project has organized a new group, Magenta, that will officially launch in June with the mission of determining whether AIs can be trained to create original artwork, such as music or video. The Magenta group will use Google's TensorFlow machine-learning engine, and Google Brain researcher Douglas Eck says the tools Magenta develops will be made publicly available. The first tool to be rolled out is a program to help researchers import music data from MIDI files into TensorFlow, which will enable their systems to be trained on musical knowledge. Eck says Magenta was inspired by other Google Brain projects such as Google DeepDream, in which AI systems trained on image databases "fill in the gaps" in pictures, producing psychedelic images. Eck suggests Magenta's ultimate goal could be an AI system that composes entirely new pieces of music. Although he acknowledges artistic creation will continue to involve people on some level, at least for now, he envisions computer-generated music being incorporated into certain scenarios relatively soon.
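
The MIDI import tool itself is not described in detail; as a hypothetical sketch of the general task, the snippet below uses the third-party pretty_midi library (an assumption, not Magenta's tool) to pull note events out of a MIDI file into plain numeric sequences a model could be trained on.

    # Hypothetical sketch; not Magenta's actual tool or data format.
    import pretty_midi

    def midi_to_note_sequence(path: str):
        """Extract (pitch, start_time, end_time) tuples from non-drum tracks."""
        midi = pretty_midi.PrettyMIDI(path)
        notes = []
        for instrument in midi.instruments:
            if instrument.is_drum:
                continue
            for note in instrument.notes:
                notes.append((note.pitch, note.start, note.end))
        # Sort by onset time so the sequence reads like a score.
        return sorted(notes, key=lambda n: n[1])

    # notes = midi_to_note_sequence("example.mid")  # file name is illustrative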


Computing a Secret, Unbreakable Key
University of Waterloo (05/20/16) Nick Manning

Researchers at the University of Waterloo's Institute for Quantum Computing (IQC) say they have developed the first available software to evaluate the security of protocols for quantum key distribution (QKD), which enables two parties to establish a shared secret key by exchanging photons. If an eavesdropper intercepts and measures the photons, the interception causes a disturbance detectable to the original secret-sharing parties; if there is no disturbance, the original parties can guarantee the security of the shared key. In practice, loss and noise in an implementation always result in some disturbance, but a small amount of disturbance implies only a small amount of information about the key is available to the eavesdropper. Characterizing this amount of information enables it to be removed at the cost of shortening the resulting key, but the central theoretical problem in QKD is how to calculate the allowed length of the final secret key for any given protocol and experimentally observed disturbance. The IQC researchers addressed this problem numerically, transforming the key-rate calculation into a dual optimization problem. They tested the software against previous results for well-studied protocols and found their results were in perfect agreement.
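
For context, a standard textbook benchmark (the asymptotic key rate of the idealized BB84 protocol, not the IQC group's numerical result) shows how the observed disturbance, expressed as the quantum bit error rate Q, caps the length of the distillable key:

    r = 1 - 2\,h(Q), \qquad h(Q) = -Q \log_2 Q - (1 - Q) \log_2 (1 - Q)

This rate is positive only when Q stays below roughly 11 percent; the Waterloo software targets the much harder problem of computing such rates numerically for general protocols with realistic loss and noise.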


Looking Beyond Conventional Networks Can Lead to Better Predictions
Notre Dame News (05/20/16) William G. Gilroy

University of Notre Dame researchers have found that current algorithms for representing networks do not fully capture the complex interdependencies in data, which can lead to erroneous analysis or predictions. The researchers say they have developed a new algorithm that offers more precise network representation and more accurate analysis, a general approach that could influence a wide range of fields. "We have made a significant advance in network theory to more accurately and precisely represent complex dependencies in data," says Notre Dame professor Nitesh Chawla. The researchers use the example of how the spread of invasive species is driven by the global shipping network. "By identifying higher-order dependencies in ship movements, namely where a ship is more likely to go next given its previous steps, we can more accurately model ship movements, and therefore species flow dynamics, for the analysis and prediction of invasive species," Chawla says. The researchers say their work has strong applications for modeling complex social interactions, the flow of information, the spread of infectious disease, automobile movements, and human trajectories, among other areas. "It is a fundamental and transformative advance in network representation to automatically discover the orders of dependency among components of a complex interconnected world," Chawla says.
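
A hypothetical sketch of the underlying idea: a first-order network only records how often ships move from one port to the next, while a higher-order representation conditions the next port on where the ship came from. The port names and trips below are invented.

    from collections import Counter, defaultdict

    # Invented example trips: sequences of ports visited by individual ships.
    trips = [
        ["Shanghai", "Singapore", "Rotterdam"],
        ["Tokyo", "Singapore", "Los Angeles"],
        ["Shanghai", "Singapore", "Rotterdam"],
    ]

    first_order = defaultdict(Counter)   # next port given current port
    second_order = defaultdict(Counter)  # next port given (previous, current)

    for trip in trips:
        for i in range(len(trip) - 1):
            first_order[trip[i]][trip[i + 1]] += 1
            if i >= 1:
                second_order[(trip[i - 1], trip[i])][trip[i + 1]] += 1

    # First-order view: from Singapore, Rotterdam vs. Los Angeles looks 2-to-1.
    print(dict(first_order["Singapore"]))
    # Higher-order view: if the ship came from Shanghai, Rotterdam is certain.
    print(dict(second_order[("Shanghai", "Singapore")]))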


Using Static Electricity, Insect-Sized Flying Robots Can Land and Stick to Surfaces
UW Today (05/19/16) Jennifer Langston

Roboticists at the Harvard University Microrobotics Lab and a University of Washington (UW) engineer have demonstrated their insect-sized drone's ability to perch mid-flight instead of hovering, dramatically reducing energy consumption while the drone is at a standstill. The drone, nicknamed RoboBee, utilizes electrostatic adhesion in an electrode patch and a foam mount to absorb shock. When the patch is charged, it can stick to almost any surface, using about 1,000 times less energy to perch than it would to hover in place. The researchers say their advancements offer promising opportunities for the use of insect-sized and other biologically inspired drones to monitor atmospheric conditions. "A lot of technologies that have been deployed successfully on larger robots become impractical on a centimeter-sized robot," says UW professor Sawyer Fuller. "We take inspiration from flying insects because they've already found solutions for these challenges." Fuller continues his work as part of UW's Air Force Center of Excellence on Nature-Inspired Flight Technology and Ideas. The RoboBee project was funded in part by Harvard's Wyss Institute, which also develops biologically inspired engineering.


Researchers Use Developer Biometrics to Predict Code Quality
Motherboard (05/22/16) Michael Byrne

Researchers from the University of Zurich have developed a system that uses developers' biometric data to predict the quality of the code they produce, so that likely bugs can be anticipated. The system analyzes coders as they program, and it was tested with two teams of Swiss and Canadian developers. Biometric data was collected as the programmers wrote software and then correlated with interviews with the developers and with human code reviews. "Our study shows that biometrics outperform more traditional metrics and a naive classifier in predicting a developer's perceived difficulty of code elements while working on these elements," the Zurich researchers write. "Our analysis also shows that code elements that are perceived more difficult by developers also end up having more quality concerns found in peer code reviews, which supports our initial assumption. In addition, the results show that biometrics helped to automatically detect 50 percent of the bugs found in code reviews and outperformed traditional metrics in predicting all quality concerns found in code reviews." The research was presented last week at the International Conference on Software Engineering (ICSE 2016) in Austin, TX.


Researchers Develop New Way to Decode Large Amounts of Biological Data
University of Maryland (05/18/16) David Kohn

University of Maryland researchers have developed the Gibbs Sampler for Multi-Alignment Optimization (GISMO), a computing technique that, when applied to massive amounts of genomic sequence data, is both faster and more accurate than current methods. The researchers say GISMO improves on existing sequence-alignment programs, which can mistake random patterns in the data for biologically valid signals. Current methods, known as "bottom up," compare each sequence to every other sequence, a process that takes a prohibitively long time to compute for sets of 100,000 or more related protein sequences. GISMO, which can be described as "top down," compares each sequence to an evolving statistical model. The researchers note the technique is faster and better at finding biologically relevant signals, and it becomes progressively faster as the data set grows larger. "Because researchers have been finding ways to speed up and improve conventional methods for decades, and because GISMO takes such a new and different approach, I am confident that we can make GISMO even faster and more accurate going forward," says University of Maryland professor Andrew F. Neuwald. The researchers are offering a free program implementing the technique to spur work within the biomedical research community.
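
A hypothetical sketch of the scaling argument (counting comparisons only, not performing real alignments): an all-pairs "bottom up" method grows quadratically with the number of sequences, while a "top down" method that scores each sequence against a single evolving model grows linearly per pass; the five-pass figure below is invented.

    # Conceptual sketch: comparison counts only, no actual alignment.
    def comparisons_bottom_up(n: int) -> int:
        """All pairs: every sequence compared against every other."""
        return n * (n - 1) // 2

    def comparisons_top_down(n: int, passes: int = 5) -> int:
        """Model-based: every sequence compared against one evolving model."""
        return n * passes

    for n in (1_000, 10_000, 100_000):
        print(n, comparisons_bottom_up(n), comparisons_top_down(n))
    # At 100,000 sequences: ~5.0 billion pairwise comparisons versus 500,000
    # model comparisons -- the gap that makes all-pairs methods prohibitive.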


East Meets West: 'We Are Alfred'
UIC News Center (05/17/16) Francisca Corona

An interactive virtual reality simulation enables users to inhabit the perspective of "Alfred," a 74-year-old man with audiovisual impairments, and to empathize with the experiences of elderly patients. The University of Illinois at Chicago (UIC) study, "We Are Alfred," won first place in the Art/Design/Humanities & Social Sciences category among graduate projects at the UIC Research Forum, as well as the Vesalius Trust Scholarship Award. The product aims to give medical students greater insight into the challenges of patients going through the aging process. The 360-degree immersion is achieved by wearing headphones and the Oculus Rift Development Kit 2 headset. Although the prototype had a completely virtual environment, the final iteration utilizes graphic elements and live scenes in a form of interactive cinema. Users experience six live-action scenes from the patient's perspective, and programming techniques and development tools were used to combine footage and simulate medical issues and symptoms. "The project is focusing on comfort," says UIC professor Eric Swirsky. "It's not curing, it's not curative, it's not even treatment-oriented. It's about comforting and understanding where the patient is so that you can be with him."


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe