Welcome to the July 11, 2012 edition of ACM TechNews, providing timely information for IT professionals three times a week.
ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.
HEADLINES AT A GLANCE
White House Official Calls for Broad Regulations on Internet Use
NextGov.com (07/10/12) Eric Katz
White House Office of Science and Technology Policy deputy chief technology officer Daniel Weitzner is calling for a broad and flexible regulatory framework for Internet use, leaving the specifics of implementation to individual industries. "We think the flexibility of having a broad sense of principles but then tuning them to a particular business context is critical, and provides ... what we think the Internet needs," Weitzner says. He says the U.S. Federal Trade Commission would ensure that industries comply with the broad framework, and that the Organization for Economic Cooperation and Development's recent set of proposals could serve as guidelines for broad Internet regulation. Weitzner says these guidelines should rest on three main principles. First, the sheer scale of the Internet means that regulatory structures cannot simply mimic those of other industries. Second, Internet public policy must accommodate and encourage the speed at which the medium develops. Finally, there must be international cooperation in regulating the Web, with global standards filling the void left by a lack of treaties, according to Weitzner. He notes the Obama administration is holding extensive dialogues with other countries to encourage their governments to allow Internet freedom.
Artificial Intelligence App Helps Blind People
Computerworld Australia (07/10/12) Stephanie McDonald
Researchers at the University of Auckland and the Auckland University of Technology have developed MobileEye, a smartphone application that uses artificial intelligence (AI) to help blind people make better visual sense of the world around them. The app enables visually impaired users to take a photo of their surroundings, and then verbally describes what it sees. AI is used to detect and analyze colors, text, darkness, and brightness before sending back a verbal description of the photo. "There are many things that computers or artificial intelligence can do, so by using computer algorithms you can get some information about an image," says MobileEye team leader Aakash Polra. When the image cannot be analyzed automatically, it is sent to a human helper, such as a friend on Facebook or a volunteer, who identifies the image and sends a verbal description back to the user. The app was recently trialed by about 20 users at the Royal New Zealand Foundation of the Blind. "We have been in constant touch with the users and getting their constant feedback and improving the product as we go," Polra says.
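As a rough sketch of the flow the article describes, automatic analysis first with human helpers as a fallback, the Python below uses a few simple image measurements; the function names, thresholds, and the Pillow dependency are illustrative assumptions, not MobileEye's actual code.

    # A minimal sketch, assuming a grayscale-style brightness check and a crude
    # dominant-color estimate stand in for the app's real analysis.
    from PIL import Image  # assumes the Pillow imaging library is installed

    def describe_image(path):
        """Return a short verbal description of an image, or None if nothing useful is found."""
        img = Image.open(path).convert("RGB")
        pixels = list(img.getdata())
        n = len(pixels)
        brightness = sum(0.299*r + 0.587*g + 0.114*b for r, g, b in pixels) / (255 * n)
        if brightness < 0.02:
            return None                     # essentially a black frame: nothing to describe
        parts = ["a bright scene" if brightness > 0.5 else "a dark scene"]
        avg = [sum(p[i] for p in pixels) / n for i in range(3)]
        parts.append("mostly " + ("red", "green", "blue")[avg.index(max(avg))])
        return ", ".join(parts)

    def handle_photo(path, ask_human):
        description = describe_image(path)
        if description is None:
            # Automatic analysis failed: forward the photo to a human helper
            # (a Facebook friend or volunteer) and use their description instead.
            description = ask_human(path)
        return description                  # on the phone this would be read out by text-to-speech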
A Phone That Knows Where You're Going
Technology Review (07/09/12) David Talbot
University of Birmingham researchers have developed an algorithm that follows a person's mobility patterns and adjusts for anomalies by also analyzing the patterns of people in the user's social network. In a study of 200 volunteers, the system was an average of less than 20 meters off when it predicted where the users would be 24 hours later. However, the average error was 1,000 meters when the same system tried to predict a person's location using only their past movements and not those of their friends, according to Birmingham's Mirco Musolesi. He says the research is noteworthy because it exploits the synchronized rhythm of a city for greater predictive insights. The research project was one of several at Nokia's Mobile Data Challenge. All of the projects drew on the same smartphone dataset from the same 200 volunteers. "It is exciting to see the project flesh out some of the hints and preliminary results we've seen in our earlier projects," says Massachusetts Institute of Technology researcher Alex Pentland. "This field is really moving toward being practical."
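As a toy illustration of the underlying idea, blending a user's own mobility history with the histories of people in their social circle, the sketch below averages past positions per hour of day; the data layout and the fixed social weight are assumptions, not the published Birmingham algorithm.

    # A minimal sketch: each user's history is a per-hour list of observed
    # coordinates, and a prediction blends the user's own mean position with
    # the mean position of their friends at that hour.
    from collections import defaultdict

    class MobilityModel:
        def __init__(self):
            # history[user][hour] -> list of (lat, lon) observations at that hour
            self.history = defaultdict(lambda: defaultdict(list))

        def record(self, user, hour, lat, lon):
            self.history[user][hour].append((lat, lon))

        def _mean_position(self, user, hour):
            obs = self.history[user][hour]
            if not obs:
                return None
            return (sum(p[0] for p in obs) / len(obs),
                    sum(p[1] for p in obs) / len(obs))

        def predict(self, user, hour, friends, social_weight=0.3):
            """Predict (lat, lon) for `user` at `hour`, nudged by friends' routines."""
            own = self._mean_position(user, hour)
            friend_means = [self._mean_position(f, hour) for f in friends]
            friend_means = [m for m in friend_means if m is not None]
            if own is None:
                return friend_means[0] if friend_means else None
            if not friend_means:
                return own
            f_lat = sum(m[0] for m in friend_means) / len(friend_means)
            f_lon = sum(m[1] for m in friend_means) / len(friend_means)
            return ((1 - social_weight) * own[0] + social_weight * f_lat,
                    (1 - social_weight) * own[1] + social_weight * f_lon)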
Researchers Digitize AIDS Quilt to Make It a Research Tool
Chronicle of Higher Education (07/09/12) Angela Chen
The AIDS Quilt and a tabletop browser were on display at the recent Smithsonian Folklife Festival in Washington, D.C. In 2010 the University of Southern California's Anne Balsamo launched an effort to digitize the quilt in an attempt to make the memorial more accessible via photos. The quilt consists of 48,000 panels sewn into blocks of eight panels each, and a digital version enables people to easily find blocks and zoom in and out of different areas of a project that covers more than 1.3 million square feet. Users can currently search only by name, but a grant from the U.S. National Endowment for the Humanities could enable the NAMES Project Foundation to expand the digital database of the quilt to make it searchable by parameters such as city and birth and death dates. Crowdsourcing could be used to make the database searchable by individual panel instead of by block, relying on volunteers to identify features such as materials, images, and dates to make it a comprehensive research tool. "We are stewards of culture, and we want this to push not only the humanities and digital humanities, but computer scientists, hard sciences, to create this collaborative project," Balsamo says.
'Most Realistic' Robot Legs Developed
BBC News (07/05/12)
University of Arizona researchers say they have developed the most biologically accurate robotic legs yet. The researchers replicated the central pattern generator (CPG), a neuronal network in the lumbar region of the spinal cord that generates and controls rhythmic muscle signals by gathering information from the parts of the body involved in walking and responding to the environment, which enables people to walk without conscious thought. "This robot represents a complete physical, or neurorobotic model of the system, demonstrating the usefulness of this type of robotics research for investigating the neurophysiological processes underlying walking in humans and animals," the researchers say. "We were able to produce a walking gait, without balance, which mimicked human walking with only a simple half-center controlling the hips and a set of reflex responses controlling the lower limb," says Arizona researcher Theresa Klein. The robot could help researchers understand how babies learn to walk and how to treat people with spinal injuries. The robotic model is noteworthy because it mimics both human movement and the underlying control mechanisms of that movement, notes the Royal National Orthopedic Hospital's Matt Thornton.
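The half-center Klein mentions is a classic two-neuron circuit in which mutually inhibiting neurons fire in alternation. The sketch below simulates a generic Matsuoka-style half-center oscillator, not the Arizona team's actual controller, and all parameter values are illustrative.

    # A minimal sketch of a half-center oscillator: two neurons inhibit each
    # other and fatigue over time, producing alternating bursts of the kind
    # that could drive opposing hip muscles.

    def half_center(steps=5000, dt=0.001, tau=0.05, tau_a=0.6,
                    beta=2.5, w_inhibit=2.0, drive=1.0):
        """Simulate two mutually inhibiting neurons; returns their output trace."""
        u = [0.1, 0.0]   # membrane states (slightly asymmetric so oscillation starts)
        a = [0.0, 0.0]   # adaptation (fatigue) states
        outputs = []
        for _ in range(steps):
            y = [max(0.0, ui) for ui in u]          # rectified firing rates
            for i in range(2):
                j = 1 - i                           # index of the opposing neuron
                du = (-u[i] - beta * a[i] - w_inhibit * y[j] + drive) / tau
                da = (-a[i] + y[i]) / tau_a
                u[i] += du * dt
                a[i] += da * dt
            outputs.append(tuple(y))                # record both neurons' outputs
        return outputs                              # alternating bursts for flexor/extensor control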
Driver Cellphone Blocking Technology Could Save Lives
PhysOrg.com (07/05/12)
Technology based on radio frequency identification could prevent people from using their cell phones while driving. Anna University of Technology's Abdul Shabeer and colleagues have designed a system that can detect whether a driver is using a cell phone while the vehicle is in motion. The system blocks the phone's signal with a low-range mobile jammer, while other passengers can continue to use their phones unhindered. Moreover, the system could potentially report violations of local laws and provide the vehicle registration number to traffic police. An alternative to integrating the system with police traffic monitoring would be to alert other passengers in the vehicle when the driver attempts to use a cell phone. The researchers believe the technology could help reduce road traffic deaths and have a positive impact on driving overall. "Dialing and holding a phone while steering can be an immediate physical hazard, but the actual conversations also distract a driver's attention," the researchers say.
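A simplified sketch of the decision logic described, detect that the driver's handset is active while the vehicle is moving, jam only the driver's zone, and optionally report, might look like the following; the hardware interfaces, class names, and speed threshold are hypothetical stand-ins, not the researchers' design.

    # A minimal sketch, assuming simple stand-in objects for the RFID-based
    # detector, the driver-zone jammer, and the reporting link.
    from dataclasses import dataclass

    @dataclass
    class VehicleState:
        speed_kmh: float
        driver_phone_active: bool   # e.g., flagged by an RFID reader near the driver's seat
        registration: str

    class DriverZoneJammer:
        """Stand-in for the low-range jammer covering only the driver's seat."""
        def __init__(self): self.active = False
        def enable_driver_zone(self): self.active = True
        def disable(self): self.active = False

    class ViolationReporter:
        """Stand-in for the optional report to traffic police."""
        def notify(self, registration):
            print("violation reported for vehicle", registration)

    def control_step(state, jammer, reporter, speed_threshold_kmh=10.0):
        """Jam the driver's zone (and optionally report) only when the vehicle is moving."""
        if state.speed_kmh > speed_threshold_kmh and state.driver_phone_active:
            jammer.enable_driver_zone()             # passengers' phones remain usable
            reporter.notify(state.registration)
        else:
            jammer.disable()

    control_step(VehicleState(60.0, True, "TN-01-1234"), DriverZoneJammer(), ViolationReporter())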
New Chip Captures Power From Multiple Sources
MIT News (07/09/12) David L. Chandler
Massachusetts Institute of Technology (MIT) researchers have developed a chip that could simultaneously harness ambient light, heat, and vibrations, optimizing battery-free power delivery. "The key here is the circuit that efficiently combines many sources of energy into one," says MIT professor Anantha Chandrakasan. Combining the power from variable sources requires a sophisticated control system. Most efforts to harness multiple energy sources so far have just switched among them, utilizing whichever one is generating the most energy at a given moment. The new method involves extracting power from all the sources by switching rapidly between them, according to MIT's Saurav Bandyopadhyay. "At one particular instant, energy is extracted from one source by our chip, but the energy from other sources is stored in capacitors" and later picked up, so none goes to waste, Bandyopadhyay says. The system relies on a dual-path architecture equipped with a sensor that can either be powered from a storage device or directly from the source, completely bypassing the storage system.
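A software analogy of the time-multiplexed approach, drawing from one source directly while banking the other sources' energy in per-source capacitors for later, is sketched below; the numbers and the greedy source selection are illustrative, not the MIT circuit design.

    # A minimal sketch: at each tick the controller drains the richest source
    # (its arrival plus whatever its capacitor holds), while the other sources'
    # arrivals accumulate in their capacitors instead of being wasted.

    def harvest(sources, ticks):
        """sources: dict name -> list of energy arriving per tick (arbitrary units)."""
        stored = {name: 0.0 for name in sources}    # per-source capacitor charge
        delivered = 0.0
        for t in range(ticks):
            pick = max(sources, key=lambda n: sources[n][t] + stored[n])
            for name in sources:
                if name == pick:
                    # Direct path: deliver this tick's arrival plus any banked charge.
                    delivered += sources[name][t] + stored[name]
                    stored[name] = 0.0
                else:
                    # Other sources are not wasted: bank their energy for later ticks.
                    stored[name] += sources[name][t]
        return delivered

    # Example: light, heat, and vibration arrivals over four ticks.
    example = {
        "light":     [5.0, 5.0, 0.0, 0.0],
        "heat":      [1.0, 1.0, 1.0, 1.0],
        "vibration": [0.0, 3.0, 0.0, 3.0],
    }
    print(harvest(example, ticks=4))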
Robotic Assistance for the Elderly
Europe's Newsroom (07/05/12)
A group of European Union-funded researchers has developed a robotic companion and intelligent home environment that could help elderly people live more independent lives. "Without support, assistance, and cognitive stimulation, sufferers from elderly dementia and depression can deteriorate rapidly; in these circumstances their care[takers] ... face a more demanding task; the elderly and their care[takers] both face an increased risk of social exclusion," says University of Reading professor Atta Badii. The researchers created Hector, a robotic companion, as part of the Integrated Cognitive Assistive and Domotic Companion Robotic Systems for Ability & Security (CompanionAble) project. Hector can help users socialize and provide cognitive stimulation in their daily lives. The robot also can control smart systems around the house, opening and closing windows, turning on and off lights, and regulating central heating. "We can extend Hector's capabilities in a modular fashion, making him a plug-and-perform companion robot; he can be effective in a large variety of home settings to support assisted independent living," Badii says. The CompanionAble project has already placed systems in several demonstration homes, which are used to test and improve the functionalities developed by the project's partners through long-term studies.
Music to My Eyes: Device Converting Images Into Music Helps Individuals Without Vision Reach for Objects in Space
IOS Press (07/05/12) Daphne Watrin
Hebrew University of Jerusalem researchers have developed EyeMusic, a sensory substitution device (SSD) that uses musical tones and scales to help the visually impaired "see" using music. The non-invasive SSD converts images into combinations of musical notes, known as soundscapes. EyeMusic scans an image and represents pixels at high vertical locations as high-pitched musical notes and pixels at low vertical locations as low-pitched notes, according to a musical scale chosen so that the notes sound pleasant in several different combinations. The image is scanned continuously from left to right, and an auditory cue marks the beginning of each scan. A pixel's horizontal location is indicated by the timing of its musical note relative to the cue, and its brightness is encoded by the loudness of the sound. EyeMusic uses different musical instruments for different colors. "We demonstrated in this study that the EyeMusic, which employs pleasant musical scales to convey visual information, can be used after a short training period [in some cases, less than half an hour] to guide movements, similar to movements guided visually," say Hebrew University of Jerusalem researchers Shelly Levy-Tzedek and Amir Amedi.
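The mapping the researchers describe, row to pitch, column to onset time, and brightness to loudness, can be sketched as follows; the scale, timing constants, and thresholds below are assumptions rather than EyeMusic's actual parameters.

    # A minimal sketch of an image-to-soundscape encoder on a grayscale grid.
    # A pentatonic scale (MIDI note numbers) so simultaneous notes blend pleasantly.
    SCALE = [48, 50, 53, 55, 57, 60, 62, 65, 67, 69]

    def image_to_soundscape(gray_image, column_duration=0.05, cue_time=0.0):
        """gray_image: list of rows, each a list of 0..255 brightness values.
        Returns (onset_time_s, midi_pitch, loudness_0_to_1) note events."""
        rows = len(gray_image)
        notes = []
        for col in range(len(gray_image[0])):
            onset = cue_time + col * column_duration    # left-to-right scan timing
            for row in range(rows):
                brightness = gray_image[row][col] / 255.0
                if brightness < 0.1:
                    continue                            # near-black pixels stay silent
                # Higher rows (smaller row index) map to higher-pitched notes.
                pitch = SCALE[int((rows - 1 - row) / rows * len(SCALE))]
                notes.append((onset, pitch, brightness))  # loudness from brightness
        return notes

    # Tiny example: a bright diagonal rising from bottom-left to top-right
    # produces notes that climb in pitch as the scan moves right.
    img = [[255 if row == 3 - col else 0 for col in range(4)] for row in range(4)]
    print(image_to_soundscape(img))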
Reimer Perfects New Techniques for Spintronics and Quantum Computing
University of California, Berkeley (07/05/12)
Researchers at the University of California, Berkeley, and the City College of New York are developing techniques to overcome the physical limitations of computer chips in order to create the next generation of faster and smaller electronic devices. The researchers are using lasers to control the fundamental nuclear spin properties of semiconductor materials, work that could speed the creation of spintronic devices that use electrons' spin states to control memory and logic circuits. "Our laser techniques can allow quantum computing to become far more practical and inexpensive," says Berkeley's Jeff Reimer. He notes that spintronics enables computer chips to operate more quickly and with less power. "Now we want to use this knowledge to develop better spintronic devices," Reimer says. The researchers have found they can use circularly polarized laser beams to control spin states in gallium arsenide. "By tuning the laser to just the right intensity and frequency, and by picking the isotopes of gallium and arsenic we use for the semiconductor material, we can control the spins in the semiconductor by using polarized laser light," Reimer says.
Smart Headlights Make Rain and Snow 'Disappear'
Wired.co.uk (07/04/12) Liat Clark
Carnegie Mellon University researchers have developed a car headlight system that detects raindrops and snowflakes and then "dis-illuminates" them as they fall by adjusting the light beams, making it easier for the driver to see the road. The system works by lighting the raindrops for a few milliseconds with a digital projector so a camera can capture a set of images, which are then processed with an algorithm that analyzes the drops' locations to predict where they will fall. The headlights then emit a binary light pattern: dark where the raindrops are predicted to be, and bright in the space around them. The entire process takes 13 milliseconds, according to Carnegie Mellon researcher Srinivasa Narasimhan. "Demonstration of the prototype system with an artificial rain drop generator is encouraging, making the falling rain disappear in front of the observer," the researchers note. During testing, the researchers found the system to be more effective at lower speeds, with 70 percent of the rain disappearing at 30 kilometers per hour compared with just 20 percent at 100 kilometers per hour.
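The masking step can be sketched as follows: predict each detected drop's position after the system's latency and switch off only the projector pixels covering it. The grid size, drop model, and neighborhood radius below are illustrative, not the Carnegie Mellon implementation.

    # A minimal sketch of building the binary light pattern from tracked drops.

    def build_light_mask(drops, latency_s=0.013, width=64, height=48, radius=1):
        """drops: list of (x, y, vx, vy) in pixel units and pixels per second.
        Returns a height x width grid: 1 = illuminate, 0 = keep dark."""
        mask = [[1] * width for _ in range(height)]
        for x, y, vx, vy in drops:
            # Predict where the drop will be when the projected pattern reaches it.
            px = int(round(x + vx * latency_s))
            py = int(round(y + vy * latency_s))
            # Darken a small neighborhood around the predicted position.
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    if 0 <= px + dx < width and 0 <= py + dy < height:
                        mask[py + dy][px + dx] = 0
        return mask

    # Example: two drops falling at roughly 1,000 pixels per second;
    # prints the number of darkened projector pixels.
    print(sum(row.count(0) for row in build_light_mask([(10, 5, 0, 1000), (30, 20, 0, 1000)])))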
Software for Managing Microcredits for Higher Education Students
Technical University of Madrid (Spain) (07/03/12) Eduardo Martinez
Technical University of Madrid researchers have developed Uburyo, software for managing microcredits awarded to university students in developing countries. The software includes a grant manager and an employment office, which are supervised by an international committee that ensures the credit award and repayment system is transparent and trustworthy. Universities can use the system to administer grants in order to grow financially and provide underprivileged students with access to higher education. Previous attempts to fund higher education in developing countries through development cooperation funds or donations have been hampered by problematic funding mechanisms and by the loss of invested capital when loaned money could not be recovered. In order for the system to work, the university must award grant holders paid technical jobs. The educational microcredits system does away with aid dependence, and students take responsibility for the grant they receive. The software ranks the applications and selects beneficiaries based on household means tests and academic records. Uburyo is available for free via the Internet, and universities can download the tool to implement their own microcredit-based system.
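One way such a ranking might work, blending financial need from the means test with academic merit, is sketched below; the weights, score formula, and data fields are invented for illustration and are not Uburyo's actual rules.

    # A minimal sketch of ranking applicants by a blended need/merit score.
    from dataclasses import dataclass

    @dataclass
    class Application:
        name: str
        household_income: float   # means-test figure; lower means greater need
        gpa: float                # academic record on a 0-4 scale

    def select_beneficiaries(applications, budget, grant_amount,
                             need_weight=0.6, merit_weight=0.4):
        """Rank applications by blended score and fund as many as the budget allows."""
        max_income = max(a.household_income for a in applications) or 1.0
        def score(a):
            need = 1.0 - a.household_income / max_income   # 1 = greatest need
            merit = a.gpa / 4.0
            return need_weight * need + merit_weight * merit
        ranked = sorted(applications, key=score, reverse=True)
        n_grants = int(budget // grant_amount)
        return ranked[:n_grants]

    apps = [Application("A", 1200, 3.5), Application("B", 300, 3.2), Application("C", 2000, 3.9)]
    print([a.name for a in select_beneficiaries(apps, budget=2000, grant_amount=1000)])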
The Measured Man
The Atlantic (07/12) Vol. 310, No. 1, P. 110 Mark Bowden
California Institute for Telecommunications and Information Technology (Calit2) computer scientist Larry Smarr envisions the development of "a distributed planetary computer of enormous power," one that comprises 1 billion processors and that can generate a working computational simulation of each person's body, within a decade. This model will supply data that software will mine to produce guidance about diet, medication, and other individual health strategies based on real-time bodily readings. "By 2030, there is not going to be that much more to learn [about one's body] ... I mean, you are going to get the wiring diagram, basically," Smarr predicts. He believes this system will allow constant monitoring of one's bodily functions and genome decryption so that incipient disease, or even the genetic tendency for disease, can be identified and addressed with designed treatments. Smarr thinks this could lead to a patient-centric, computer-assisted healthcare model in which individuals will understand their own bodies and take charge of their well-being, while physicians will simply aid them with the maintenance and fine-tuning. A prototype system at Calit2 uses Smarr's own bodily imagery to create a navigable, high-definition simulation of his torso produced by a graphics supercomputer.
Abstract News © Copyright 2012 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe