Association for Computing Machinery
Welcome to the September 28, 2012 edition of ACM TechNews, providing timely information for IT professionals three times a week.

ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE


Google Adds Coral Reef Panoramas to Street View Maps
BBC News (09/26/12)

Google recently added panoramas of coral reefs to its Street View services, enabling users to navigate their way around the sites. The material was gathered by the Catlin Seaview Survey, a project studying the health of the reefs, including the impact of global warming. "We want to be a comprehensive source for imagery that lets anyone explore anywhere," says Google ocean program manager Jenifer Foulkes. To get the images of the coral reefs, the researchers developed a submersible fitted with three wide-angle lenses designed to take high-resolution images in low-light conditions. "The main reason is to record reef environments on an unprecedented scale and reveal them to the world," says project director Richard Vevers. To analyze the images, University of Queensland researchers are developing both image-recognition software to identify creatures recorded in the photographs and three-dimensional modeling programs to monitor how the habitats change over time. "It's analyzing the health of the reef in terms of species distribution, and mapping that against the structure of the reefs to discover what reefs are important," Vevers says.


Lane-Keeping App Makes Any Car Smarter
New Scientist (09/25/12) Paul Marks

Dartmouth College researchers have developed CarSafe, a smartphone app that monitors how a driver blinks to determine if the driver is drowsy. The phone is mounted on the windshield, and the front-facing camera monitors the driver's head pose, gaze direction, and blink rate. The back-facing camera watches the road to make sure the car is a safe distance from the vehicle in front and is not drifting out of its lane. Because current phones expose only one camera at a time, the researchers developed software that continually switches between the two, which limits analysis to a rate of eight frames per second. "But the next generation of phones will allow software to access both cameras simultaneously, removing that bottleneck," says Dartmouth researcher Andrew Campbell. "And with [the] advent of quad-core and 16-core phones in the future, I would expect 20 to 30 [frames per second] on each camera."
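The camera-sharing constraint described above can be sketched as a round-robin frame scheduler; the pipeline names and structure here are invented for illustration, not taken from CarSafe itself:

```python
# Minimal sketch: two analysis pipelines must share one active camera,
# so frames are handed out round-robin, halving each pipeline's rate.

SHARED_FPS = 8  # total frames/sec available while switching cameras

def schedule(n_frames):
    """Alternate frames between the driver-facing and road-facing pipelines."""
    pipelines = ("driver_monitor", "road_monitor")
    return [pipelines[i % len(pipelines)] for i in range(n_frames)]

plan = schedule(SHARED_FPS)           # one second's worth of frames
per_pipeline_fps = SHARED_FPS // 2    # each task effectively runs at 4 fps
```

With simultaneous camera access, the same scheduler would simply give every frame slot to both pipelines, which is the bottleneck removal Campbell describes.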


UC Berkeley Researchers May Track Twitter Hackers
EE Times (09/25/12) Rick Merritt

A $10 million grant from the U.S. National Science Foundation will enable University of California, Berkeley researchers to spend the next five years exploring how to counter hackers on social networking sites. The security project at Berkeley's International Computer Science Institute (ICSI) will try to anticipate and block social-networking attacks, such as emerging efforts to accumulate and sell large numbers of Twitter followers. The researchers believe that as the spam-email economy declines, spammers will likely shift to social networking platforms. ICSI also has several other projects in the works, ranging from devising new algorithms for voice recognition and video search to writing open source code for networking gear. Nelson Morgan, a researcher working on new approaches to voice recognition, says the technology underlying products such as Apple's Siri has fundamental flaws. "You have to make certain mathematical assumptions that people know are wrong, but you cover that up by using huge amounts of statistical data and limiting the domain," Morgan says. Researchers are trying to find new approaches based on recorded signals of brain patterns and on the parallel computing capabilities of multicore processors.


Meet Mira, the Supercomputer That Makes Universes
The Atlantic (09/25/12) Ross Andersen

One of the largest and most complex universe simulations ever attempted will be run in October by Mira, the world's third fastest supercomputer. The model will condense more than 12 billion years' worth of cosmic evolution into two weeks, tracking trillions of particles as they form into the universe's web-like structure. Argonne National Laboratory physicist Salman Habib says this structure remains consistent over many universe simulations of increasing scale. He says the size of supercomputers such as Mira, which has nearly a petabyte of memory, makes universe simulations possible thanks to the tremendous increase in speed. "If you tried to do a simulation like this on a normal computer, you wouldn't be able to fit it, and even if you could fit it, if you tried to run it, it would never finish," Habib notes. He predicts that next-generation computers may require new models for programming, powering, or error correction because the physical limits of Moore's Law will have been reached. "There is some hope that there will be investment [in technologies to exponentially ramp up computer speed], because supercomputer simulations are increasingly being used outside the basic sciences," Habib says.


ITU Predicts 25 Billion Networked Devices by 2020
V3.co.uk (09/24/12) Dan Worth

There will be as many as 25 billion devices online by 2020 as the Internet of Things revolution takes off, and the proliferation of technologies such as machine-to-machine communications will be the primary driver of the growth, according to the International Telecommunication Union (ITU). "By 2020, the number of connected devices may potentially outnumber connected people by six to one, transforming our concept of the Internet, and society, forever," ITU says in its annual State of Broadband report. ITU also predicts there will be 10 billion mobile broadband connections as the developing world uses smartphones and tablets as its primary means of Internet connection. The United Nations organization notes mobile devices could help close the digital divide while creating new issues of exclusion based on content and capabilities. "Given the prolific spread of mobile, in the future, the digital divide may no longer describe disparities in access, but instead denote disparities in speed and functionality--or more specifically, what people can do with their mobile devices," the report says.


Artificially Intelligent Game Bots Pass the Turing Test on Turing’s Centenary
University of Texas at Austin (09/26/12)

University of Texas at Austin researchers recently won the BotPrize by convincing a panel of judges that their bot, called UT^2, was more human-like than half of the humans it competed against. "The idea is to evaluate how we can make game bots, which are nonplayer characters controlled by [artificial intelligence] algorithms, appear as human as possible," says University of Texas professor Risto Miikkulainen. The competition involves bots facing off in a tournament against one another and about an equal number of humans, with each player trying to score points by eliminating its opponents. The bot that is scored as most human-like by the human judges is named the winner. The complex three-dimensional (3D) environments of the game require that bots mimic humans in several ways, including moving around in 3D space, engaging in chaotic combat against multiple opponents, and reasoning about the best strategy at any given point in the game. Some of UT^2's behavior is modeled on previously observed human behavior, while its central battle behaviors are developed through neuroevolution, which runs artificially intelligent neural networks through a program that is modeled on the biological process of evolution.
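Neuroevolution, the technique named above, optimizes a neural network's weights with an evolutionary loop of selection and mutation rather than gradient descent. The following is an illustrative sketch only; the one-neuron network, toy fitness function, and hyperparameters are invented for the example and have nothing to do with UT^2's actual combat behaviors:

```python
# Toy neuroevolution: evolve the weights of a tiny network to imitate
# a target behavior, using elitist selection plus Gaussian mutation.
import math
import random

random.seed(0)

def forward(weights, x):
    """A one-neuron 'network': tanh(w0*x + w1)."""
    return math.tanh(weights[0] * x + weights[1])

def fitness(weights):
    """Toy objective: imitate the target behavior tanh(2x - 1). Higher is better."""
    xs = [i / 10 for i in range(-10, 11)]
    return -sum((forward(weights, x) - math.tanh(2 * x - 1)) ** 2 for x in xs)

def evolve(pop_size=30, generations=100, sigma=0.3):
    population = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 4]                  # selection (elitism)
        children = [[w + random.gauss(0, sigma) for w in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]   # mutation
        population = parents + children
    return max(population, key=fitness)

best = evolve()  # weights drift toward the target [2, -1]
```

In a game setting, the fitness function would instead score how human-like or effective the bot's behavior is during play, which is the computationally expensive part of the approach.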


Mathematics and Fine Art: Digitizing Paintings Through Image Processing
SIAM Connect (09/25/12)

The Society for Industrial and Applied Mathematics' Journal on Imaging Sciences recently published a paper describing a technique for automatically producing digital reproductions of paintings. The method involves fusing photographs taken from different angles through statistical models, which can eliminate glare, highlights, and motion blur. The paper's authors say the statistical models reduce noise and compensate for optical distortion, solving the problem of uncontrolled illumination and destructive reflection. "This article demonstrates the possibility of acquiring a good quality image of a painting from amateur snapshots taken in bursts from different angles, in normal museum illumination," says paper author Jean-Michel Morel. The photographer takes as many pictures as possible, from as many angles as possible. "This acquisition is then followed by an intensive [but fully automatic] post-production chain, whose mathematical and algorithmic definition is precisely the object of the article," Morel says. Once the process rids the image of imperfections, the color mapping, contrast, and other subjective choices are left to the photographers and artists. The authors note the technique also could be used in image restoration, or applied to three-dimensional objects.
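One intuition behind burst fusion can be sketched simply: once the shots are registered (aligned), a robust per-pixel statistic such as the median suppresses outliers like glare and specular highlights. This is a stand-in for, not a reproduction of, the paper's full statistical model; registration is omitted and the "photos" are toy arrays:

```python
# Per-pixel median fusion of aligned shots: an outlier (e.g. glare)
# that appears in only one shot is voted out by the others.
from statistics import median

def fuse(aligned_shots):
    """aligned_shots: list of images, each a list of rows of pixel intensities."""
    h, w = len(aligned_shots[0]), len(aligned_shots[0][0])
    return [
        [median(shot[y][x] for shot in aligned_shots) for x in range(w)]
        for y in range(h)
    ]

# Three aligned 1x3 "photos"; the middle one has a glare spike at pixel 1.
shots = [
    [[10, 12, 11]],
    [[10, 250, 11]],   # glare outlier
    [[11, 12, 10]],
]
fused = fuse(shots)  # → [[10, 12, 11]]
```

The more shots from more angles, the less likely any reflection survives the vote, which is why the acquisition step asks for as many pictures as possible.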


Scientists Simulate Clothing Sounds for Computer Animation
Cornell Chronicle (09/25/12) Bill Steele

Cornell University researchers have developed a method for synthesizing cloth sounds that uses a digitally synthesized approximation as a guide for assembling fragments of real recordings. The method first synthesizes an approximate sound based on the friction and crumpling of the various kinds of cloth. The researchers made controlled recordings of friction sounds by spinning a cloth-covered roller while holding a piece of cloth against it, and of crumpling sounds by manipulating a piece of cloth while avoiding any sliding contact. The target sound is reduced to microsecond-length chunks, which the computer matches against a database of similar chunks of real recorded sound; the matches are then reassembled to produce a convincing soundtrack. The researchers say their approach was inspired by speech synthesis. "People thought they could synthesize speech using oscillator models, but it didn't sound realistic, sort of like 'Speak & Spell,' so they ended up concatenating bits of recorded human speech for realism," says Cornell professor Doug James. The research was presented at the recent SIGGRAPH 2012 conference.
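The concatenative step described above can be sketched as chunk matching: cut the rough synthesized guide into short chunks, find the closest chunk in a database of real recordings, and splice the matches together. The signals and the distance metric below are simplified stand-ins, not the Cornell system's actual representation:

```python
# Toy concatenative resynthesis: nearest-neighbor chunk matching by
# squared error, then simple concatenation of the matched real chunks.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def concatenative_resynthesis(guide, database, chunk_len):
    out = []
    for i in range(0, len(guide) - chunk_len + 1, chunk_len):
        chunk = guide[i : i + chunk_len]
        best = min(database, key=lambda real: distance(chunk, real))
        out.extend(best)  # splice in the closest real chunk
    return out

guide = [0.1, 0.2, 0.9, 1.0]          # crude synthesized target
database = [[0.0, 0.25], [1.0, 1.0]]  # chunks cut from real recordings
result = concatenative_resynthesis(guide, database, chunk_len=2)
# → [0.0, 0.25, 1.0, 1.0]
```

A production system would also smooth the joins between chunks to avoid audible clicks; the point here is only the match-and-reassemble structure the article describes.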


NSF and Mozilla Announce Winning Big Ideas for New Applications on a Faster, Smarter Internet of the Future
National Science Foundation (09/26/12)

Mozilla Ignite announced eight winning ideas for innovative applications demonstrating what the future Internet might look like. The program called for application ideas that would advance national priorities such as healthcare, public safety, clean energy, and transportation. The Mozilla Ignite Challenge is part of the Obama Administration's U.S. Ignite Initiative, a coordinated effort to facilitate next-generation public sector applications. "After the challenge was announced, the response was exciting and encouraging, not just in terms of the number of submissions but in quality and creativity and impact embodied in each," says Mozilla Ignite project manager Will Barkis. Mozilla Ignite's Gold Medal went to a group of McGill University researchers who developed an idea for a system for Real-Time Emergency Response Observation and Supervision. The technology aims to provide live, high-quality video from multiple feeds and incorporate sensor data from a variety of sources. The Silver Medal winners include a Purdue University team that developed remote process control using a reliable, real-time protocol, and a Seneca College group that developed real-time 3D interactive telepresence technology. Mozilla Ignite's Bronze Medal winners include researchers from Boston University, Virginia Tech, and the University of California, Berkeley.


App Lets You Monitor Lung Health Using Only a Smartphone
University of Washington News and Information (09/18/12) Hannah Hickey

University of Washington, UW Medicine, and Seattle Children's Hospital researchers have developed SpiroSmart, a smartphone app that serves as a digital spirometer, allowing users to monitor their lung function by blowing toward the phone. Conventional spirometers require users to blow into a tube containing a small turbine that measures the speed of the flow; the instrument measures how much and how fast the patient can exhale, which indicates whether the airways are narrowed or blocked. To replace the spirometer, the researchers modeled the human trachea and vocal tract as a system of tubes, and used the smartphone to analyze the frequencies in the sound of the exhalation and detect how the breath resonates. "There are resonances that occur in the signal that tells you about how much flow is going through the trachea and the vocal tract, and that's precisely the quantity that a clinician needs to know," says Washington professor Shwetak Patel. The researchers tested the system on 52 mostly healthy volunteers using an iPhone 4S and its embedded microphone, and found that SpiroSmart's results came within 5.1 percent of those from commercial spirometers.
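The frequency-analysis step at the heart of this approach can be illustrated with a discrete Fourier transform that picks out the dominant frequency of a recorded signal. This is only a stand-in for SpiroSmart's pipeline; the function name, signal, and sample rate are invented for the example:

```python
# Find the dominant frequency of a signal via a brute-force DFT.
# (A real app would use an FFT library; this keeps the math visible.)
import cmath
import math

def dominant_frequency(samples, sample_rate):
    n = len(samples)
    best_bin, best_mag = 0, 0.0
    for k in range(1, n // 2):  # skip DC; ignore the mirrored half
        coeff = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_bin, best_mag = k, abs(coeff)
    return best_bin * sample_rate / n  # convert bin index to Hz

rate = 800
tone = [math.sin(2 * math.pi * 100 * t / rate) for t in range(160)]  # 100 Hz tone
freq = dominant_frequency(tone, rate)  # → 100.0
```

SpiroSmart's actual analysis maps such resonance frequencies, through the tube model of the trachea and vocal tract, to the airflow quantities a clinician needs.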


Humanitarian Response and CRICIS--A Report From a Grassroots Workshop
CCC Blog (09/24/12) Kenneth Hines

Texas A&M University professor Robin Murphy recently co-organized Connecting Grassroots to Government for Disaster Management, a visioning workshop about the role of computing in disaster management. As part of the workshop, Murphy briefed 60 in-person and 150 remote participants for the Critical Real-Time Computing and Information Systems (CRICIS) report, which identified fundamental computing research questions unique to disasters. Murphy highlighted SeaSketch, a Web-based system developed by marine biologists that enables California residents to go to a map of the shore and propose a marine protected area by interactively drawing its boundaries. The system then automatically evaluates the appropriateness of the area, provides feedback to the designer, and ranks the area for the agency. The workshop emphasized that the humanitarian response community needs computing advances beyond consumer-oriented apps and infrastructure. "Their computing needs (and creativity) are exceeding what an average engineer or scientist can do in a year, thus out of range of their budgets or the time an expert can donate," Murphy stressed.


New Tool for CSI? Geographic Software Maps Distinctive Features inside Bones
Ohio State University Research News (09/25/12) Pam Frost Gorder

Ohio State University researchers have developed a method for using the ArcGIS mapping program to identify features inside a human foot bone. The researchers wanted to determine whether the patterns of change inside the bones of human remains could reveal how the bones were used. "Based on certain scientific criteria that you give it, the software gives you a statistical measure of whether the objects you’re looking at actually constitute a cluster," says Ohio State professor Julie Field. The researchers say their study is the first to use geographic information system software to map bone microstructure. This "work allows us to visualize, analyze, and compare the distribution of microscopic features that reflect the development and maintenance of bones, which we can relate to skeletal health and disease," says Ohio State professor Sam Stout. As part of their study, the researchers examined the cross section of a metatarsal bone and demonstrated how the software could be used to show the loads experienced in the foot during gait. Ohio State researcher David Rose notes that more bones will have to be studied before the software can provide meaningful insight into bone biology.


Google Spans Entire Planet With GPS-Powered Database
Wired News (09/19/12) Cade Metz

A new Google research paper details the workings of the Spanner data storage and compilation system, billed as the first database capable of quickly storing and retrieving information across a worldwide network of data centers while maintaining the information's consistency. Spanner interfaces with a network of servers outfitted with atomic clocks or global positioning system (GPS) antennas to facilitate accurate synchronization of data distribution. Spanner can store data across millions of servers and multiple data centers using the TrueTime applications programming interface (API), which employs the atomic clocks and GPS antennas to ensure that network operations run in tandem. Google spreads clocks across its network instead of attempting to enhance communication between servers. It outfits various master servers with GPS antennas or atomic clocks and, working in lockstep with the TrueTime APIs, these time keepers maintain synchronization across the entire network. TrueTime communicates to the servers the degree of uncertainty there is over the current time, and they can modify their reads and writes accordingly. Google researchers say Spanner helps the company replicate and route data across its network, and augments data center upgrades and repairs.
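The TrueTime idea the article describes can be sketched as follows: instead of a single timestamp, the clock API returns an interval [earliest, latest] that is guaranteed to contain the true time, and a writer "commit waits" until the interval has moved past its chosen timestamp, so the write is safely in the past on every server. The names, epsilon value, and commit logic below are illustrative, not Google's actual API:

```python
# Sketch of interval-based time and commit wait, TrueTime-style.
import time

EPSILON = 0.007  # assumed clock uncertainty bound, in seconds

def tt_now():
    """Return (earliest, latest): an interval bounding the true time."""
    t = time.time()
    return (t - EPSILON, t + EPSILON)

def commit(write):
    earliest, latest = tt_now()
    commit_ts = latest               # a timestamp no correct clock is still before
    while tt_now()[0] < commit_ts:   # commit wait: until earliest passes commit_ts
        time.sleep(0.001)
    return commit_ts                 # 'write' would be applied with this timestamp

ts = commit("row-update")
# After the wait, every server's earliest possible "now" exceeds ts.
```

The cost of the wait is roughly twice the uncertainty bound, which is why Google invests in atomic clocks and GPS receivers to keep that bound small.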


Abstract News © Copyright 2012 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: technews@hq.acm.org
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe

