Association for Computing Machinery
Welcome to the July 6, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.

HEADLINES AT A GLANCE


DARPA Challenge Greatly Propelled Humanoid Robotics--and WPI
Computerworld (07/06/15) Sharon Gaudin

The U.S. Defense Advanced Research Projects Agency's (DARPA) Robotics Challenge has significantly advanced the field of humanoid robotics thanks to contributions from academia, including the Worcester Polytechnic Institute (WPI), whose team placed in the top third of the challenge's competitors. Moreover, participants say their work has inspired them to be better researchers and educators, especially as they focus on future robotics applications. The challenge's objective was to encourage researchers to build more autonomous, more balanced, and more capable robots that can eventually be dispatched to disaster situations to turn off disabled systems, search for victims, and assess damage. The finals called on each team's robot to execute eight tasks, such as driving a car, opening a door, turning a valve, and walking over a pile of rubble--all within 60 minutes. WPI professor Taskin Padir says the competition was a good showcase of humanoid robots' potential in disaster scenarios, as tasks such as climbing ladders and stairs, and maneuvering through narrow crossings, are well-suited for humanoid forms. One former WPI team member now working on four-legged robots at the Italian Institute of Technology says the DARPA contest gave him experience developing algorithms to help robots cross rough terrain. The WPI team's project manager, Matt DeDonato, also notes his current work with a defense contractor to build military robots is informed by his participation in the Robotics Challenge.


Dartmouth Contests Experiment With 'Human-Quality,' Computer-Generated Creativity
Associated Press (07/05/15) Holly Ramer

Dartmouth College is pushing the limits of what artificial intelligence is capable of with a trio of new contests to see if algorithms can produce creative works indistinguishable from those made by human beings. The three contests--DigiLit, PoetiX, and Algorhythms--will pit computer-generated short stories, poetry, and DJing against human-produced art. For the two writing contests, a panel of human judges will evaluate the short stories and poems. If a computer-generated poem or story is scored as human by the judges, the creators of the algorithm behind it will win $5,000, with an extra $3,000 prize going to the team that enters the best software. The prizes will be similar for the DJing contest, but the test will be different, with six finalists competing against human DJs at a dance party. Both the software and the human DJs will have 1,000 tracks, to be determined immediately before the competition, to use for their playlists. Revelers will then be asked whether they think the playlist was made by a human or a machine. The contests will run during the upcoming academic year, with prizes awarded next April.


EU Open Source Software Project Receives Green Light
University of Southampton (United Kingdom) (07/02/15)

The European Union's Horizon 2020 program has provided 7.6 million euros to help fund an open source software project that will extend the capacity of computational mathematics and interactive computing environments. The OpenDreamKit project will develop software for mathematical tools that researchers can use to run computer models and crunch large amounts of data. The project also will develop virtual computing environment tools for creating interactive documents that can solve equations using computer code, and process and visualize resulting data. Fifteen academic and industry partners will participate in the four-year project. The researchers say the workflow will revolutionize the ability to reproduce a computational experiment and document research data exploration. "The project's aims and approaches link closely to ongoing work at Southampton in our Computational Modeling Group community and the Southampton EPSRC Center for Doctoral Training in Next Generation Computational Modeling," says University of Southampton professor Hans Fangohr. "This engagement with the leading edge development of these tools is a great opportunity to contribute to tools that are of great value to many researchers and students in academia and industry." OpenDreamKit will make the resulting code available for free as open source software.


Cooperative Driving Will Become Common--Data Exchange Between Vehicles and the Road Network Increases Traffic Safety
VTT Technical Research Center (07/01/15)

The international Celtic Plus Cooperative Mobility Services of the Future (CoMoSeF) project recently concluded, yielding a communication system linking cars and drivers with a host of road-side devices that provide information on weather, road conditions, and traffic incidents. The CoMoSeF project brought together the efforts of enterprises and research institutions in Finland, France, Luxembourg, Romania, South Korea, Spain, and Turkey. The goal of the project was to develop tools that can ease the implementation of an intelligent transportation system (ITS) that will make possible "cooperative driving." "As it proliferates, cooperative driving based on communication and data exchange between vehicles and road network systems will noticeably improve traffic safety," says Johan Scholliers, principal scientist at Finland's VTT Technical Research Center. Cooperative driving is expected to be a part of everyday life for drivers by the 2020s. Among the devices developed and tested during the CoMoSeF project was a roadside weather-monitoring station run by the Finnish Meteorological Institute that relays weather updates to vehicles in the vicinity using either short-range WLAN-based technology or the mobile phone network. Another roadside unit uses a camera and laser scanner to monitor a highway ramp for fog conditions.


Google's AI System Gets Its Snark From Humans
Computerworld (07/02/15) Sharon Gaudin

A recent study by Google scientists detailed how their machine-learning and natural-language research project trained a computer to answer various questions by feeding it a database of movie scripts, and some of the system's resulting answers were quite sarcastic. Worcester Polytechnic Institute professor Candy Sidner says industry and academic researchers are striving to augment machine-learning and natural-language processing for use in customer service call centers and help desks, for example. "Remember that the computer...is taking...huge amounts of data and building a model that says, 'If you see this, use this as a response,'" she says. "It doesn't really know what the words mean. It's about correlations between one set of words and another set of words." Curt or sarcastic-seeming answers may be returned because the computer learned those words from the datasets it was trained on, Sidner says. Carnegie Mellon University professor Alan W. Black agrees, pointing out the Google system has no idea it is being snarky. "It's just taking in data and putting out answers," he says.


Computational Science and Data Visualization Take the Spotlight in New Documentary
National Science Foundation (06/30/15) Aaron Dubrow

A new 24-minute high-resolution science documentary about the sun, featuring data-driven visualizations produced by the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, debuted this week. The documentary, "Solar Superstorms," was produced as part of the Centrality of Advanced Digitally Enabled Science (CADENS) initiative funded by the U.S. National Science Foundation (NSF). CADENS seeks to spotlight new knowledge that is being produced thanks to the massive data analysis and computing capabilities that are now available to scientists, engineers, and scholars. "The 'Solar Superstorms' dome show is a direct result of scientific research whose advance is dependent on extremely powerful computer simulation and visualization," says NSF program director Rudolf Eigenmann. The film is narrated by British actor Benedict Cumberbatch and features visualizations developed by NCSA's Advanced Visualization Laboratory (AVL) based on research conducted at several institutions. The visualizations include one of the solar wind interacting with the Earth's magnetic field during a solar storm, which AVL built using data from numerical simulations conducted by researchers at the University of California, San Diego using NCSA's Blue Waters supercomputer. The documentary's animations also draw on research from the National Center for Atmospheric Research, Los Alamos National Laboratory, Michigan State University, and the Georgia Institute of Technology.


The Social-Network Illusion That Tricks Your Mind
Technology Review (06/30/15)

Researchers at the University of Southern California (USC) have used synthetic and actual networks to show how social networks create the illusion that something is common locally when it is really rare globally. This illusion is rooted in a power law similar to the one that governs the distribution of friends on social networks: most people have a small number of friends, while a few individuals have many, and the latter group skews the average. Kristina Lerman, a project leader at USC's Information Sciences Institute, and colleagues found a corresponding paradox, known as the majority illusion, in which an individual observes a behavior or attribute in most of his or her friends even though it seldom occurs in the network as a whole. The researchers illustrated the effect with a theoretical 14-node small-world network in which three nodes are colored and every node counts how many of its direct neighbors are colored; the structure of the network stays the same even as the choice of colored nodes changes. When the most popular nodes--those linked to the largest number of others--are the ones colored, most nodes see the attribute in a majority of their neighbors, producing the local impression that it is widespread when it is in fact globally sparse.
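The effect described above can be reproduced in a few lines. The sketch below is a toy illustration, not the USC researchers' code: in a star-shaped network, coloring only the highly connected hub makes the attribute look universal to every other node, even though only one node in seven actually has it.

```python
def majority_illusion(adjacency, colored):
    """Fraction of uncolored nodes for whom at least half of their neighbors are colored."""
    observers = [n for n in adjacency if n not in colored]
    fooled = 0
    for n in observers:
        neighbors = adjacency[n]
        if neighbors and sum(m in colored for m in neighbors) >= len(neighbors) / 2:
            fooled += 1
    return fooled / len(observers)

# A hub (node 0) connected to six peripheral nodes.
adjacency = {0: [1, 2, 3, 4, 5, 6]}
for i in range(1, 7):
    adjacency[i] = [0]

colored = {0}  # only 1 of 7 nodes (~14%) has the attribute globally
print(majority_illusion(adjacency, colored))  # -> 1.0: every peripheral node sees only colored neighbors
```

Coloring a peripheral node instead of the hub makes the illusion vanish, which is the paper's point: it is the attribute's placement on popular nodes, not its prevalence, that drives the local impression.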


MIT's Bitcoin-Inspired 'Enigma' Lets Computers Mine Encrypted Data
Wired News (06/30/15) Andy Greenberg

A pair of Bitcoin entrepreneurs and the Massachusetts Institute of Technology's Media Lab this week revealed a prototype for an encryption system that uses some of the techniques employed by Bitcoin to achieve what is known as homomorphic encryption. This type of encryption is meant to enable a third party to handle encrypted data and use it in computations without having to decrypt it. Homomorphic encryption is considered a sort of Holy Grail in the era of cloud computing, when sensitive personal and corporate data is constantly moving between different devices. The system, dubbed Enigma by its creators, mimics several features of Bitcoin's decentralized network architecture; for example, by breaking data up into encrypted chunks that are randomly distributed to several computers, or nodes, across the Enigma network. Each node can perform calculations on its discrete chunk of the data, but because it only has a piece it cannot know what the whole data is. The creators say nodes can collectively perform every kind of normal computation on data encrypted this way and the Enigma method increases computation time only 100-fold, which compares favorably to the first method of homomorphic encryption, which increased compute time close to a millionfold.
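Enigma's distribution of data into chunks that no single node can read builds on secure multiparty computation. The following is a minimal sketch of additive secret sharing, a standard building block of such systems, and not Enigma's actual protocol: each node holds one random-looking share, computes on it locally, and only the recombined result is meaningful.

```python
import random

PRIME = 2**31 - 1  # toy field modulus, for illustration only

def split(secret, n=3):
    """Split a value into n additive shares; any n-1 shares reveal nothing about it."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def add_shares(shares_a, shares_b):
    """Each node adds its own two shares locally -- no node ever sees the inputs."""
    return [(a + b) % PRIME for a, b in zip(shares_a, shares_b)]

def reconstruct(shares):
    return sum(shares) % PRIME

a, b = 1000, 234
sum_shares = add_shares(split(a), split(b))
print(reconstruct(sum_shares))  # -> 1234, computed without any node seeing 1000 or 234
```

Addition of shares is nearly free, which is why secret-sharing schemes can approach the roughly 100-fold overhead cited for Enigma rather than the near-millionfold cost of early fully homomorphic encryption; multiplication requires extra coordination between nodes and is where most of the overhead lives.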


New Approach to Online Compatibility
Phys.org (06/30/15) David Bradley

University of Isfahan researchers have developed a new approach to finding potential friends and contacts on online social networks. The researchers say their approach avoids the problem of users being unable to describe themselves adequately or lacking the right pre-defined keywords. It uses an algorithm featuring a "semantic similarity" prong that ensures two people who list their interests as "photography, football, and fashion" and "taking pictures, basketball, and selling clothes" would still be matched on photography even though their specific keywords do not coincide. A second prong would match the pair on the complementarity of football and basketball, since both are sports, while a third prong would associate the two based on their interest in fashion and selling clothes. The team demonstrated a proof of principle for their algorithm based on semantic similarity and conceptual complement with a small sample of users of an online social network, and future work will incorporate associative complementarity. The researchers note error rates for preliminary tests with this sample group were low compared with conventional keyword-matching algorithms.
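The matching idea can be sketched with a small concept table. This is a hypothetical illustration of the general technique, not the Isfahan algorithm, and the `CONCEPTS` mapping is an assumed stand-in for a real semantic resource such as a thesaurus or ontology.

```python
# Hypothetical concept table: free-text interests map to shared concepts,
# so "photography" and "taking pictures" match, and "football" meets
# "basketball" at the broader concept "sports".
CONCEPTS = {
    "photography": "photography", "taking pictures": "photography",
    "football": "sports", "basketball": "sports",
    "fashion": "fashion", "selling clothes": "fashion",
}

def shared_concepts(interests_a, interests_b):
    """Concepts two users have in common, ignoring exact keyword wording."""
    concepts_a = {CONCEPTS.get(i.lower()) for i in interests_a} - {None}
    concepts_b = {CONCEPTS.get(i.lower()) for i in interests_b} - {None}
    return concepts_a & concepts_b

user_a = ["photography", "football", "fashion"]
user_b = ["taking pictures", "basketball", "selling clothes"]
print(sorted(shared_concepts(user_a, user_b)))  # -> ['fashion', 'photography', 'sports']
```

A naive keyword intersection of the same two lists would return nothing, which is exactly the failure mode the semantic-similarity prong is meant to avoid.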


Simple Statistics Improve the Quality of Digital Images
Imperial College London (06/26/15) Simon Levey

Imperial College London (ICL) researchers have developed software they say can improve the reliability of pictures taken by a microscope camera. The software determines the exact properties of each individual pixel, based on a statistical analysis of thousands of images, and adjusts the data captured by each pixel accordingly. The researchers say their technique improves the fine details of the whole picture, making it much more reliable. They demonstrated the software's effectiveness by correcting pictures taken by cameras on the U.S. National Aeronautics and Space Administration's Mars rover Curiosity. Applying the software to two different pictures of the Martian surface from one of the rover's cameras, they found that 20 percent of the fine detail the images shared had been inserted by the camera technology itself. The software repaired some obvious spots and improved the accuracy of the Mars images by up to 20 percent. "The mathematics behind this approach is of an almost embarrassingly simple level of statistics such as you study at secondary school: the correction of averages and standard deviations," says ICL professor Marin van Heel.
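The "averages and standard deviations" correction van Heel describes can be sketched as per-pixel normalization over a stack of frames. This is a toy version of the idea under simulated data, not the ICL software: each pixel's own mean and spread are estimated from many images, then new frames are rescaled so every pixel responds identically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a sensor whose pixels have fixed, unknown gain and offset errors.
gain = rng.normal(1.0, 0.1, size=(8, 8))
offset = rng.normal(0.0, 5.0, size=(8, 8))
frames = gain * rng.normal(100.0, 10.0, size=(1000, 8, 8)) + offset

# Estimate each pixel's individual statistics from the stack of frames ...
pixel_mean = frames.mean(axis=0)
pixel_std = frames.std(axis=0)

# ... then correct a new frame so all pixels share a zero-mean, unit-std response,
# removing the fixed per-pixel pattern the camera would otherwise imprint.
new_frame = gain * rng.normal(100.0, 10.0, size=(8, 8)) + offset
corrected = (new_frame - pixel_mean) / pixel_std
```

After this correction, structure that appears consistently in every frame (the camera's fingerprint) is divided out, which is why two images of different scenes stop sharing spurious "fine detail."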


Stanford Engineering Students Teach Autonomous Cars to Avoid Obstacles
Stanford Report (06/29/15) Bjorn Carey

Stanford University researchers are developing obstacle-avoidance algorithms that could lead to safe autonomous driving. The researchers are testing the algorithms on a pop-up obstacle they created out of a tablecloth and a leaf blower. "We're trying to develop a control algorithm that can safely use all of the car's performance capabilities to avoid obstacles and safely perform on a public road," says Joseph Funke, a graduate student in Stanford's Dynamic Design Lab. "The hope is that if we can figure these things out in a controlled research environment, we can extend those capabilities to autonomous cars that might come out in the future." The researchers used X1, a dune buggy-style car specifically designed for testing autonomous driving programs, to develop the algorithm and see how well it could avoid a pre-programmed obstacle. The researchers created a makeshift airbag by cutting up a plastic tablecloth and sewing a tube into it, which was then inflated using an electric leaf blower. During testing, the algorithms helped the car avoid the obstacle while driving right at the edge of its handling abilities. The researchers will test the software on Shelley, Stanford's autonomous Audi TTS race car, and other vehicles.


Reading Research Papers in 'Unprecedented Depth'
Research Information (06/29/15)

In an interview, Carnegie Mellon University's (CMU) Eduard Hovy says the purpose of the U.S. Defense Advanced Research Projects Agency's Big Mechanism project is to advance analytics via automated technologies to help explain the underlying causes and effects of complex systems such as cancer. "Research teams are in the process of creating computerized systems that will read scientific and medical research papers in unprecedented depth and detail," he says. "Through this deep reading they can both uncover and extract relevant information; then integrate those initial fragments of knowledge into computational models of cancer pathways that already exist, updating them accordingly." Automated deep reading would enable the machines to read the articles on a deeper level, and make judgments about statements and findings, filtering out only information that either supports or adds to existing knowledge. Once this is done, providing accurate input for those executing the modeling becomes easier, according to Hovy. Key challenges the project seeks to tackle include the fact that the "methods" sections in many articles fail to account for all the steps the authors followed to reach their conclusions. Contradictory statements in papers are another challenge Hovy cites. He says CMU partnered with Elsevier on the project to leverage technologies that sift through the full text of both literature-based and experimental evidence, as well as relevant clinical data.


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe