Welcome to the October 25, 2013 edition of ACM TechNews, providing timely information for IT professionals three times a week.
Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.
HEADLINES AT A GLANCE
Self-Driving Cars Could Save More Than 21,700 Lives, $450B a Year
Computerworld (10/24/13) Lucas Mearian
Autonomous vehicles could save many lives and an enormous amount of money through accident avoidance and congestion reduction, among other benefits, according to a new study from the nonprofit Eno Center for Transportation. The report estimated that up to 4.2 million accidents could be prevented, saving 21,700 lives and $450 billion in related costs annually, if 90 percent of the vehicles on U.S. roads were self-driving. Collisions could be avoided if the computer-controlled autos could sense and anticipate road conditions and surrounding objects, the study determined. Meanwhile, freeway and arterial congestion could be cut by more than 75 percent through vehicle-to-vehicle and vehicle-to-infrastructure communication by autonomous cars and trucks. The study's authors note that high numbers of autonomous vehicles must be present for such outcomes to be achieved. "For example, if 10 percent of all vehicles on a given freeway segment are [autonomous], there will likely be an [autonomous vehicle] in every lane at regular spacing during congested times, which could smooth traffic for all travelers," they point out. However, various issues must first be addressed with self-driving vehicles, including the extent to which functionality would be automated, whether onboard computers could be made hack-proof, and who would be liable in the event of an accident in an autonomous car.
DARPA Announces Cyber Defense Tournament With a $2 Million Cash Prize
Help Net Security (10/24/13)
The U.S. Defense Advanced Research Projects Agency (DARPA) is planning the Cyber Grand Challenge (CGC), the first-ever tournament for fully automatic network defense systems. As part of the CGC, DARPA wants teams to create automated systems that would compete against each other to evaluate software, test for vulnerabilities, generate security patches, and apply them to protected computers on a network. The winning team will receive $2 million, the second-place team $1 million, and the third-place team $750,000. "Through automatic recognition and remediation of software flaws, the term for a new cyberattack may change from zero-day to zero-second," says DARPA's Mike Walker. With the CGC, DARPA aims to challenge unmanned systems to compete against each other in a real-time tournament for the first time. "The growth trends we've seen in cyberattacks and malware point to a future where automation must be developed to assist IT security analysts," says DARPA's Dan Kaufman. As part of the CGC, competitors will navigate a series of challenges, starting with a qualifying event in which a collection of software must be automatically analyzed. DARPA will pick teams from the qualifying event to participate in the CGC final event, which is set for early to mid-2016.
Why Facebook Is Teaching Its Machines to Think Like Humans
Wired News (10/23/13) Daniela Hernandez
Facebook is turning to deep learning to teach computers to more closely imitate the human brain, with the goal of gaining greater insight into individual users. Natural language processing is one area in which Facebook hopes to advance, as its Graph Search tool released earlier this year is expanding to make everything a user does on Facebook, including posts and comments, searchable. "Humans differ in the way they use language because of differences in their cultural upbringing," says Oleg Rogynskyy, CEO of text analytics company Semantria. "We still need to teach machines these nuances." This fall, Facebook launched a deep learning research group, as Google, Microsoft, IBM, and Baidu also have done. Deep learning relies on neural networks--multi-layered software systems modeled on the brain--that collect information and build an understanding of objects and words. Because neural networks can learn on their own, human engineering is not as important as with previous machine-learning methods, but vast quantities of data on which to train are essential. The next step in deep learning will be to create algorithms that can better understand opinion, sentiment, and emotion. This technology will ultimately enable Facebook and other companies to target individual users in a very precise way to improve user experience, enhance brand loyalty, and sell products.
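The multi-layered networks the summary describes can be illustrated with a minimal sketch (a generic toy example, not Facebook's system): a two-layer network trained by backpropagation on XOR, a task no single linear layer can solve, where the hidden layer learns intermediate features and the output layer combines them.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Toy training set: XOR, which a single linear layer cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Two layers of weights: the hidden layer learns intermediate features,
# the output layer combines them -- the "stacked representations" idea.
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

lr, losses = 0.1, []
for _ in range(2000):
    h = relu(X @ W1 + b1)            # hidden-layer features
    pred = h @ W2 + b2               # network output
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    dh = (err @ W2.T) * (h > 0)      # backpropagate through the ReLU
    W2 -= lr * (h.T @ err) / 4; b2 -= lr * err.sum(0) / 4
    W1 -= lr * (X.T @ dh) / 4; b1 -= lr * dh.sum(0) / 4
```

The same mechanism, scaled to billions of parameters and trained on vast datasets, is what lets deep networks learn representations of words and images without hand-built features.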
Researchers Tout Electricity Storage Technology That Could Recharge Devices in Minutes
Network World (10/23/13) Michael Cooney
Silicon-based supercapacitors could enable mobile phones to recharge in seconds and continue to operate for weeks without recharging. In a new paper, researchers at Vanderbilt University note that silicon supercapacitors store electricity by assembling ions on the surface of a porous material. Porous silicon has a controllable and well-defined nanostructure made by electrochemically etching the surface of a silicon wafer, which can yield surfaces with optimal nanostructures for supercapacitor electrodes. However, silicon is generally not suitable for use in supercapacitors because it reacts readily with some chemicals in the electrolytes that provide the ions that store the electrical charge. To resolve this issue, the Vanderbilt team coated the porous silicon surface with carbon to chemically stabilize it. When they used the coated material to make supercapacitors, the researchers found the carbon coating improved energy densities by more than two orders of magnitude over devices made from uncoated porous silicon, to levels significantly better than commercial supercapacitors. Vanderbilt professor Cary Pint says the researchers are using this approach to develop energy storage that can be formed in the excess materials or on the unused back sides of solar cells and sensors. The supercapacitors would store the extra electricity the cells generate at midday and release it when demand peaks in the afternoon.
Workshop on Opportunities in Robotics, Automation, and Computer Science
CCC Blog (10/22/13) Ann Drobnis
A workshop Monday at the White House Conference Center provided the robotics and computer research communities with more information on how they can help manufacturers innovate. Robotics VO, the U.S. National Science Foundation, the White House Office of Science and Technology Policy, and the Computing Community Consortium brought together 28 participants from industry, academia, and government to discuss opportunities for manufacturing in robotics, automation, and computer science. Among the emerging themes was the need to "automate automation," or streamline the design of assembly lines and deploy robots to reduce the time to start production, independent of the product mix or volume. Another theme was the lack of middleware, which makes it difficult to generalize from successful deployments of components for specific tasks and to transfer solutions across different manufacturing equipment and products. A final report, with plans for improving collaboration, should be ready before the end of the year and will include suggestions for creating better methods of collaboration that provide concrete problems for proposals submitted to the updated National Robotics Initiative solicitation.
Synthetic Biology Ramps Semiconductors
EE Times (10/23/13) R. Colin Johnson
The initial phase of the Semiconductor Synthetic Biology program will distribute $2.25 million in funding over three years to researchers at the Massachusetts Institute of Technology (MIT), Yale University, Georgia Tech, Brigham Young University, University of Massachusetts, and University of Washington. As part of the project, synthetic biology will be used to re-engineer materials for useful purposes in the fabrication of advanced semiconductors. The long-term goal of the project is to invent new types of living cells that can be integrated into hybrid biological semiconductors. "Cells compute with chemistry and semiconductors compute with transistors--but both are about the controlled flow of electrons," says MIT professor Rahul Sarpeshkar. A second area of research is cytomorphic-semiconductor circuit design, which applies a recent understanding of cell biology to new ultra-low-power microchip architecture. "One of the main goals of this program is to create information processors with energy consumption 100- to 1,000-times less than today," says Semiconductor Research Corp.'s Victor Zhirnov. A third area of research will explore new bio-electric sensors, actuators, and energy sources that integrate biological materials onto complementary metal-oxide-semiconductor chips to create hybrid bio-semiconductors. "Our goal...is to integrate living cells on a semiconductor chip and let them work together--the holy grail being to use living cells as computers," Zhirnov says.
Smuggler-Spotting Software Sniffs Out Dodgy Shipments
New Scientist (10/23/13) Hal Hodson
Machine intelligence could make it easier to spot illicit cargo among the goods that move through the world's ports. Pacific Northwest National Laboratory's Antonio Sanfilippo has led the development of a data-mining system that is designed to scan millions of ship manifests to find questionable cargoes. His team developed an algorithm to analyze 2.4 million shipping records from industrial data broker PIERS. The algorithm uses the manifest information to assign a record to one of 25 clusters, and then finds the outlying records in each cluster, which are those that do not fit in with the existing patterns for those routes, or are carrying an unusual cargo for that ship. Suspicious records would then be investigated. Sanfilippo plans to use the data to create a network of all the shipping organizations and their connections in order to enable the system to spot suspicious links. The Stockholm International Peace Research Institute's Hugh Griffiths notes preventing smuggling is getting increasingly complex. "It's become much harder to detect narcotics shipments, counterfeit goods, and arms," Griffiths says. "It's a very complex issue, and no one has been able to solve it."
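The cluster-then-flag approach described above can be sketched as follows. The clustering method (k-means), the two-dimensional features, and the z-score threshold are illustrative assumptions, not details of Sanfilippo's actual system:

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=50):
    """Plain k-means: assign each record to its nearest centroid,
    then recompute centroids; repeat.  Simple deterministic init:
    start from the two most extreme points."""
    centroids = X[[X.sum(1).argmin(), X.sum(1).argmax()]].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

def flag_outliers(X, labels, centroids, z=2.5):
    """Flag records unusually far from their own cluster's centroid --
    i.e., shipments that do not fit the established pattern."""
    dist = np.linalg.norm(X - centroids[labels], axis=1)
    flags = np.zeros(len(X), dtype=bool)
    for j in np.unique(labels):
        m = labels == j
        mu, sd = dist[m].mean(), dist[m].std()
        flags[m] = dist[m] > mu + z * sd
    return flags

# Synthetic "manifests": two dense route/cargo clusters plus one oddity.
routine = np.vstack([rng.normal([0, 0], 0.3, (40, 2)),
                     rng.normal([5, 5], 0.3, (40, 2))])
odd = np.array([[2.5, 2.5]])          # cargo that fits neither pattern
X = np.vstack([routine, odd])

labels, centroids = kmeans(X, k=2)
suspect = flag_outliers(X, labels, centroids)
```

Records flagged by `flag_outliers` correspond to the "outlying records in each cluster" that the real system would forward for human investigation.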
Technology Mimics the Brushstrokes of Masters
The New York Times (10/23/13) Nina Siegal
Emerging three-dimensional (3D) printing technologies are enabling high-quality reproductions of art masterpieces, copying color, brushstrokes, and paint thickness in an exact manner, which the art world is exploring for conservation, research, and commercial potential. For example, the Van Gogh Museum this year collaborated with Fujifilm to create the first fully color-corrected 3D copies of several renowned works by Vincent van Gogh. In addition, researchers from Delft University of Technology, in cooperation with Canon subsidiary Oce, released 3D copies of Rembrandt's "Jewish Bride" and other works. The Fujifilm researchers worked with the Van Gogh Museum for seven years to develop a technique known as Reliefography, which merges a 3D painting scan with a high-resolution print. The Delft University group created an imaging device to record color and topographical data from painting surfaces. In addition, the researchers used X-ray fluorescence to perform a chemical analysis of pigment components, and hyperspectral imaging to gather color data from across the electromagnetic spectrum. These measurements yielded a set of volumetric data points, analogous to 3D pixels, from which Oce generated a high-resolution 3D print based on the color information. Beyond reproductions, 3D scanning could help art experts examine paint layers to learn about the structure of a work or to document a painting's condition before and after loaning a piece to another institution.
Noise Pollution Maps Crowdsourced From Smartphone Data
Technology Review (10/22/13)
Researchers at the Commonwealth Scientific and Industrial Research Organization (CSIRO) say they have developed an improved method of creating noise pollution maps that uses crowdsourced smartphone data. Measuring noise pollution in large metropolitan areas on a systematic basis is difficult because noise levels change over relatively short distances and over the course of a day, making the maps time-consuming and costly to create. CSIRO researchers say using smartphone data can simplify this task and make it less expensive. To ensure that readings are only taken outside, the smartphone uses a global positioning system measurement. The phone then determines whether ambient conversations are taking place, and if so, waits until they are done to avoid a skewed reading. In addition, the phone can use built-in sensors such as the proximity sensor and accelerometer to determine whether it is being held in a person's hand, because readings taken from a bag or pocket are inaccurate. If the smartphone meets all of the researchers' criteria, it records an ambient sound level reading, location, and time, which is transmitted to a central server when the phone is in a Wi-Fi zone. The central server uses all of the crowdsourced readings to generate a map, which the researchers say is accurate enough to reconstruct data recorded using conventional sound level meters, even when up to 40 percent of the original data points are missing.
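The phone-side gating criteria and server-side aggregation described above might look like the following in outline. The grid size, field names, and the choice to average sound power rather than raw decibels are assumptions for illustration, not details of the CSIRO implementation:

```python
import math
from dataclasses import dataclass

@dataclass
class PhoneState:
    outdoors: bool             # inferred from the GPS fix
    conversation_nearby: bool  # inferred from ambient-audio analysis
    in_hand: bool              # from the proximity sensor and accelerometer

def should_record(state: PhoneState) -> bool:
    """Take a reading only outdoors, with no conversation nearby,
    and with the phone in hand (not muffled in a bag or pocket)."""
    return state.outdoors and not state.conversation_nearby and state.in_hand

def aggregate(readings):
    """Server side: combine (lat, lon, dB) readings per map grid cell.
    Decibels are logarithmic, so we average sound power, not raw dB."""
    cells = {}
    for lat, lon, db in readings:
        key = (round(lat, 3), round(lon, 3))       # coarse grid cell
        cells.setdefault(key, []).append(10 ** (db / 10))
    return {k: 10 * math.log10(sum(v) / len(v)) for k, v in cells.items()}
```

A reading that passes `should_record` would be stored with its location and timestamp and uploaded the next time the phone reaches a Wi-Fi zone; the server then interpolates the per-cell averages into a city-wide noise map.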
Behind the Scenes at Google's Quantum AI Lab
HPC Wire (10/22/13) Tiffany Trader
Google's Quantum Artificial Intelligence (AI) Lab in May installed the 512-qubit D-Wave Two computer from D-Wave Systems, in collaboration with the U.S. National Aeronautics and Space Administration (NASA), to explore quantum computing and space research. "The overwhelmingly obvious killer app for quantum computation is optimization," says D-Wave's Geordie Rose. Optimization is necessary because obtaining useful information is increasingly difficult as problems grow more complex and volumes of data rise. Nevertheless, researchers still do not know the best applications for quantum computing, says NASA's Eleanor Rieffel. "We don't know what the best questions are to ask that computer," Rieffel says. "That's exactly what we're trying to understand now." Google's AI Lab researchers say quantum computing can solve the most daunting computer science problems of the day. "We're particularly interested in how quantum computing can advance machine learning, which can then be applied to virtually any field: from finding the cure for a disease to understanding changes in our climate," the researchers say on their website.
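Rose's "killer app" remark refers to the class of problems D-Wave's annealers accept: quadratic unconstrained binary optimization (QUBO), where the goal is to minimize x^T Q x over binary vectors. The sketch below solves a toy QUBO by brute force to show the problem form; the specific cost values and penalty weight are illustrative assumptions (an annealer searches the same energy landscape physically, and brute force only works at toy sizes):

```python
import itertools
import numpy as np

def solve_qubo(Q):
    """Exhaustively minimize x^T Q x over binary vectors x."""
    n = Q.shape[0]
    best_x, best_e = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        e = float(x @ Q @ x)
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy problem: pick exactly one of three options, preferring option 1
# (lowest cost).  The penalty P*(sum(x) - 1)^2, expanded for binary x,
# contributes -P on the diagonal and +2P on each off-diagonal pair.
cost = np.array([2.0, 1.0, 3.0])
P = 10.0
Q = np.diag(cost - P) + 2 * P * np.triu(np.ones((3, 3)), 1)
x, e = solve_qubo(Q)   # minimum selects the cheapest single option
```

Mapping a practical task (scheduling, machine-learning training, route planning) onto a Q matrix like this is exactly the open research question Rieffel describes.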
Software Takes Advantage of Collective Intelligence to Improve Decision-Making
RUVID Association (10/21/13)
A New Era in Disaster Relief
Harvard Gazette (10/18/13) Alvin Powell
The International Federation of Red Cross and Red Crescent Societies recently released a report on the potential for technology to advance humanitarian disaster response. The report features several case studies, such as Chicago doctor Zaher Sahloul, who used social media to organize more than $5 million in medical supplies and donations to Syria. Sahloul also used YouTube to provide medical advice videos to doctors in Syria, and worked with Internet systems engineer Dishad Othman to find secure ways for people in Syria to communicate online. The report notes that technological tools also can raise the effectiveness of early warning systems. For example, the Red Cross has prioritized the use of social media to communicate during disasters, and has trained volunteers to communicate with the public via social media. However, as humanitarian groups increase their use of technology, they must be wary of responding only to those with the technology who ask for help, because populations that cannot afford devices such as cell phones are usually at the highest risk in a disaster. Neutrality toward military issues also is important as humanitarian groups turn to technology, because armed forces will have access to these groups' communications.
Exploring Digitization at MIT Media Lab
SearchCIO.com (10/16/13) Linda Tucci
Andrew Lippman, associate director of the Massachusetts Institute of Technology's (MIT) Media Lab, says the diversity of the lab's researchers continues to be essential to its breakthrough work in the digitization of media and other areas, but he also notes that innovation is still being restricted by space limitations. "Doing things creative and original still requires everybody to be in the same place, scrawling on the same board, interrupting each other, and shoving people and ideas out of the way and into the forefront," Lippman observes. He sees, for example, a lack of fidelity and democratization in conferencing systems designed to facilitate communication and collaboration between remote participants. "You still don't get the winks as good," Lippman says. "You still don't get the asides, and you still don't get a little bit of the body language." Lippman was taking part in MIT's recent EmTech conference on emerging technologies. He says his group continues to work on "revived and renewed technologies by which we can extend creativity remotely."
Abstract News © Copyright 2013 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.