Association for Computing Machinery
Welcome to the June 25, 2010 edition of ACM TechNews, providing timely information for IT professionals three times a week.

HEADLINES AT A GLANCE


'Quantum Computer' a Stage Closer With Silicon Breakthrough
University College London (06/23/10) Weston, Dave

Researchers from the University of Surrey, Heriot-Watt University, University College London, and the FOM Institute for Plasma Physics have developed a method for controlling electrons in silicon, which they say is a significant step toward the development of an affordable quantum computer. "This is a real breakthrough for modern electronics and has huge potential for the future," says Surrey professor Ben Murdin. "In our case we used a far-infrared, very short, high-intensity pulse from the Dutch FELIX laser to put an electron orbiting within silicon into two states at once--a so-called quantum superposition state." The researchers say the breakthrough means that a silicon-based quantum computer could be developed in the near future. "Crucially our work shows that some of the quantum engineering already demonstrated by atomic physicists in very sophisticated instruments called cold atom traps can be implemented in the type of silicon chip used in making the much more common transistor," Murdin says.


Computers Make Strides in Recognizing Speech
New York Times (06/24/10) Lohr, Steve; Markoff, John

Artificial intelligence's (AI's) progress in speech recognition has helped the technology achieve significant mainstream acceptance. For example, the number of American doctors using speech software to record and transcribe accounts of patient visits and treatments has more than tripled in the past three years to 150,000. Although there are concerns that advances in AI speech recognition will eliminate jobs in call centers and other venues, there also are expectations that innovations will fuel new opportunities for both individuals and entrepreneurial businesses. Numerous companies are investing in services that hint at the notion of machines that can act on spoken commands. There also have been great improvements in speech recognition systems in automobiles, with Ford Motor expanding the number of speech commands its vehicles recognize from 100 words to 10,000 words and phrases. Brown University professor Andries van Dam says that AI is "getting to be very good machine intelligence. There are going to be all sorts of errors and problems, and you need human checks and balances, but having artificial intelligence is way better than not having it."


Hop, Jump and Stick
Ecole Polytechnique Federale de Lausanne (06/24/10) Mitchell, Michael

The behavioral laws of insects have the potential to give robots a greater complexity of movement without the need for high computational power, says the Ecole Polytechnique Federale de Lausanne's Mirko Kovac. He led a group of researchers who created a robot that can perch like an insect or bird, fly head first into an object, such as a tree, and attach itself using sharp prongs. The robot snaps its two spring-loaded arms to create forward momentum and to decelerate, while the glider's arms are fitted with pins that dig into a surface. A remotely controlled mini-motor is used to retract the pins and enable the robot to continue on its way. "We are not blindly imitating nature, but using the same principles to possibly improve on it," says Kovac, who previously developed a quarter-gram jumping robot. He describes jumping, gliding, and perching as a new form of artificial intelligence (AI). Robots would have greater mobility without being bogged down with heavy batteries, and Kovac envisions a swarm of robots with such AI capability traveling over rough terrain to aid catastrophe victims.


How Wi-Fi Drains Your Cell Phone
Technology Review (06/24/10) Simonite, Tom

Researchers at the University of Texas at Austin (UTA), University of Wisconsin-Madison, and Microsoft have developed NAPman, a system that modifies the software running on Wi-Fi access points to extend cell phones' battery life. The researchers began by benchmarking how much power different cell phones need to use Wi-Fi. "We found that an HTC Tilt's total power consumption increases by threefold when using Wi-Fi," says UTA's Eric Rozner. NAPman enforces a first-come, first-served approach to all data, whether it is from a device using a power saving mode or not. It also only wakes a phone to retrieve its data when that data is at the front of the queue, preventing the phone from waiting awake and wasting energy. The system also tracks devices that go to sleep after a fixed time so they are not sent data while asleep. "Not only could we provide 70 percent energy savings compared to the conventional implementation, but NAPman is fair to background traffic," Rozner says. In one test, NAPman more than doubled the device's battery life, extending it from 4.7 hours to 10.
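The scheduling idea the article describes can be illustrated with a toy model. This is a minimal sketch, not the actual NAPman implementation; the class and method names are invented for illustration. It shows the two properties mentioned above: packets are delivered in strict arrival order regardless of which client they belong to, and a power-saving client is "woken" only at the moment its packet reaches the head of the queue.

```python
from collections import deque

class NapmanAP:
    """Toy model of NAPman-style first-come, first-served delivery.

    Packets for all clients share one FIFO queue. A sleeping client is
    woken only when its packet reaches the head of the queue, so it never
    idles awake while other clients' traffic is served first.
    """

    def __init__(self):
        self.queue = deque()   # (client, packet) pairs in arrival order
        self.wake_log = []     # order in which clients are woken

    def enqueue(self, client, packet):
        self.queue.append((client, packet))

    def serve(self):
        """Deliver all queued packets in strict arrival order."""
        delivered = []
        while self.queue:
            client, packet = self.queue.popleft()
            self.wake_log.append(client)  # wake exactly at delivery time
            delivered.append((client, packet))
        return delivered

ap = NapmanAP()
ap.enqueue("phone", "p1")    # power-saving client
ap.enqueue("laptop", "l1")   # background traffic
ap.enqueue("phone", "p2")
order = ap.serve()
```

The fairness claim in the article corresponds to the FIFO invariant here: the laptop's background traffic is neither starved nor reordered by the phone's power-saving behavior.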


Enterprise PCs Work While They Sleep, Saving Energy and Money
UCSD News (CA) (06/23/10) Kane, Daniel

University of California, San Diego (UCSD) computer scientists have developed SleepServer, software that enables personal computers to maintain their presence on Voice over IP, instant messaging, and peer-to-peer networks when they are in a low-power sleep mode. The researchers say SleepServer can reduce energy consumption on enterprise PCs by an average of 60 percent. "SleepServer enables enterprise PCs to remain asleep for long periods of time while still maintaining the illusion of network connectivity and seamless availability," says UCSD's Yuvraj Agarwal. "Our goal with SleepServer is to help buildings with heavy [information technology loads] reach net-zero energy use--so that these buildings effectively become carbon neutral by generating as much renewable energy as they consume." When a PC goes into low-power mode, SleepServer activates the PC's virtual image, which then masquerades as the physical PC. "Our ability to support stateful applications, which continuously communicate state information or perform data transfer over the network, using stubs, is a major differentiator between SleepServer and other solutions aimed at providing smart power management for idle PCs," Agarwal says.
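The proxying pattern described above can be sketched in a few lines. This is an illustrative toy, not SleepServer's actual architecture; the class, method names, and request strings are all invented. The point is the division of labor: routine presence traffic is answered by a lightweight stub on behalf of the sleeping machine, and only requests the stub cannot satisfy trigger a wake-up.

```python
class SleepProxy:
    """Toy sketch of a SleepServer-style proxy (illustrative only).

    While a host sleeps, the proxy answers routine presence traffic on
    its behalf; requests the stub cannot handle wake the physical host.
    """

    STUB_HANDLED = {"ping", "im-keepalive"}  # traffic the stub can answer

    def __init__(self):
        self.asleep = set()   # hosts currently in low-power mode
        self.woken = []       # hosts the proxy had to wake, in order

    def sleep(self, host):
        self.asleep.add(host)

    def handle(self, host, request):
        if host not in self.asleep:
            return "handled-by-host"
        if request in self.STUB_HANDLED:
            return "handled-by-stub"      # host stays asleep
        self.asleep.discard(host)         # wake for stateful work
        self.woken.append(host)
        return "woken-host"

proxy = SleepProxy()
proxy.sleep("alice-pc")
r1 = proxy.handle("alice-pc", "ping")           # answered by the stub
r2 = proxy.handle("alice-pc", "file-transfer")  # forces a wake-up
```

The energy savings come from the first branch dominating in practice: most network chatter is presence traffic a stub can answer without waking the host.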


Smart Computer Learns From Video
PhysOrg.com (06/23/10)

Swiss Federal Institute of Technology Zurich (ETH Zurich) researchers have developed a learning program that can analyze temporal and spatial patterns of moving objects. The software can analyze street scenes from video, map the patterns that characterize the various road users, and establish rules governing the traffic flow. Once the program has "learned" the standard patterns, it can interpret video input in real time. "The hardest part of it was processing the theory behind it," says ETH Zurich researcher Daniel Kuttel. Automatic interpretation of dynamic camera images could help with traffic management control centers that monitor traffic flow, says ETH Zurich professor Vittorio Ferrari. As an extension of the research, the ETH Zurich team will attempt to train the program to recognize visual concepts. For example, the program would be able to search the Internet for specific images and automatically find the correct image, including those that are not labeled correctly.


Smartphone Add-On Will Bring Eye Tests to the Masses
New Scientist (06/23/10) Venkatraman, Vijaysree

Massachusetts Institute of Technology (MIT) researcher Ramesh Raskar has developed the Near-to-Eye Tool for Refractive Assessment (NETRA), a system that enables basic eye tests using a smartphone and a specially designed eyepiece. NETRA consists of a viewer that fits over a cell phone's screen, combined with software running on the phone. The phone displays an image on the screen, which the eyepiece converts into a virtual three-dimensional (3D) display that tests the user's eyesight. NETRA creates the illusion of 3D by simultaneously presenting different views to different parts of the same eye. The researchers say the system could be particularly useful in remote areas that lack diagnostic devices. "It can be thought of as a thermometer for visual performance," says MIT's Vitor Pamplona. Currently, NETRA needs a phone that can be programmed and has a high-resolution display, but the researchers are working on developing the technology for use on any mobile phone.


U.K. Researchers Building 'Fat-Free' Cloud Programming Framework
Computerworld (06/22/10) Kanaracus, Chris

Researchers at Citrix and the universities of Cambridge and Nottingham have developed Mirage, a programming framework aimed at supporting applications that run on cloud infrastructure platforms. Mirage's key design principle "is to treat cloud virtual hardware as a compiler target, and convert high-level language source code directly into kernels that run on it," the researchers write. Applications that use Mirage "exhibit significant performance speedups for [input/output] and memory handling versus the same code running under Linux/Xen," they write. Mirage was developed using OCaml, a popular programming language in academia that is establishing a foothold in the commercial market, says Cambridge's Anil Madhavapeddy.


Data Mining Algorithm Explains Complex Temporal Interactions Among Genes
Virginia Tech News (06/22/10) Trulove, Susan

Researchers at Virginia Tech (VT), New York University (NYU), and the University of Milan have developed Gene Ontology based Algorithmic Logic and Invariant Extractor (GOALIE), a data-mining algorithm that can automatically reveal how biological processes are coordinated in time. GOALIE reconstructs temporal models of cellular processes from gene expression data. The researchers developed and applied the algorithm to time-course gene expression datasets from budding yeast. "A key goal of GOALIE is to be able to computationally integrate data from distinct stress experiments even when the experiments had been conducted independently," says VT professor Naren Ramakrishnan. NYU professor Bud Mishra notes GOALIE also can extract entire formal models that can then be used for posing biological questions and reasoning about hypotheses. The researchers hope the tool can be used to study disease progression, aging, host-pathogen interactions, stress responses, and cell-to-cell communication.


Algorithms Aid Prosthetics Development
The Engineer (United Kingdom) (06/21/10) Wagner, Siobhan

Advanced algorithms could help make the speed and accuracy of clinically viable prosthetic devices more comparable to a healthy human arm. Engineers from Cambridge University have teamed up with neuroscientists at Stanford University to develop intelligent algorithms for decoding neural activity into physical commands. Cambridge professor Zoubin Ghahramani describes neurons as noisy information channels, but notes that neural prosthetic designers are using fairly simple linear methods for decoding activities. "So you get activity from many, many neurons spiking and it is a challenge to infer the desired action and direction of movement," he says. The researchers plan to apply more advanced machine-learning methods that can adapt to changing electrode recordings. They plan to test the algorithm in neural prosthetic devices implanted in primates before human trials. "The field of neural implants is moving quite rapidly but the idea of having brain signals control previously paralyzed bodies will take a bit longer," Ghahramani says.
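The "fairly simple linear methods" Ghahramani mentions can be sketched concretely. The example below is a hypothetical illustration with simulated data, not the Cambridge/Stanford work: it fits an ordinary least-squares decoder that maps spike counts from a handful of simulated neurons to a 2-D movement velocity, the baseline approach that more adaptive machine-learning methods aim to improve on.

```python
import numpy as np

# Hypothetical example: decode 2-D cursor velocity from the spike counts
# of 5 simulated neurons using ordinary least squares -- the kind of
# simple linear decoder the article says current prostheses rely on.
rng = np.random.default_rng(0)
n_samples, n_neurons = 200, 5

true_W = rng.normal(size=(n_neurons, 2))  # neuron -> velocity weights
spikes = rng.poisson(5.0, size=(n_samples, n_neurons)).astype(float)
velocity = spikes @ true_W + rng.normal(scale=0.1, size=(n_samples, 2))

# Fit the decoder: find W minimizing ||spikes @ W - velocity||^2
W_hat, *_ = np.linalg.lstsq(spikes, velocity, rcond=None)

# Decode a new observation into a velocity command
new_spikes = rng.poisson(5.0, size=(1, n_neurons)).astype(float)
decoded_velocity = new_spikes @ W_hat
```

A fixed weight matrix like this is exactly what degrades as electrode recordings drift over time, which is why the researchers want decoders that adapt.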


'Augmented Reality' on Smartphones Brings Teaching Down to Earth
Chronicle of Higher Education (06/20/10) Li, Sophia

Video games are often criticized for isolating players from reality, but augmented-reality developers see the technology as a way to enhance reality. University of Wisconsin at Madison (UWM) researchers are developing Augmented Reality and Interactive Storytelling (ARIS), an open source tool that lets designers link text, images, video, and audio into a physical location, making the real world into a map of virtual characters and objects that users can navigate with smartphones. ARIS, which was developed by UWM's David J. Gagnon, was built for use by students and educators. Massachusetts Institute of Technology professor Eric D. Klopfer has created two similar tools, and worked with augmented-reality games for nearly a decade. He says cell phones equipped with global positioning systems, cameras, and other features are opening up new avenues for enhancing reality-based games. Klopfer says that place-based learning and augmented reality are a great match for topics at the intersection of science and society, such as public health and environmental issues.


Gauging Safety in the Electronic Age
University of Leicester (06/18/10)

The University of Leicester's Farah Lakhani is studying how techniques from architecture could be used in the development of software for embedded processors, which have grown in complexity. Lakhani says the designs of buildings and control systems share commonalities. "Architects must couple knowledge of engineering--for example what type of steel girder is required to support a floor--with human-centered design, i.e. what makes a building a good place to live or work," she says. Lakhani says that similar concerns should be a focus of developers of embedded systems, and her current research focuses on "how techniques called 'design patterns' from the field of architecture can be used by developers of reliable embedded systems."


How a Computer Program Became Classical Music's Hot, New Composer
Christian Science Monitor (06/17/10) Rocheleau, Matt

University of California, Santa Cruz professor David Cope has developed Emily Howell, a music-composing program that generates its own compositions by following musical rules that Cope has taught it. The program is only fed music composed by an earlier program of Cope's, Experiments in Musical Intelligence. Critics say that Emily's music, while impressive, lacks the ability to trigger emotional reactions in listeners. A 2008 University of Essex study determined that the human brain has a stronger emotional reaction to music played by humans than by machines, even when the listener does not know the source of a performance. Cope says that Emily and other programs capable of artistic creation offer an opportunity for collaboration with human artists, rather than replacing them. "Computers are there [for us] to extend ourselves through them," he says. "It seems so utterly natural to me. It's not like I taught a rock to compose music."


Abstract News © Copyright 2010 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe