Association for Computing Machinery
Welcome to the February 17, 2010 edition of ACM TechNews, providing timely information for IT professionals three times a week.

HEADLINES AT A GLANCE


War Game Reveals U.S. Lacks Cyber-Crisis Skills
Washington Post (02/17/10) P. A3; Nakashima, Ellen

The Bipartisan Policy Center recently staged Cyber ShockWave, a simulation designed to demonstrate that a cyberattack could be as crippling as the Sept. 11, 2001, terrorist strikes. "We were trying to tee up specific issues that would be digestible so they would become the building blocks of a broader, more comprehensive cyberstrategy," says former CIA director Michael Hayden. The simulation, in which the cell phones and computers of tens of millions of Americans were turned into weapons to shut down the Internet, left 40 million people in the eastern United States without electrical power and knocked more than 60 million cell phones out of service. Privacy was a key stumbling block in every strategy the participants tried to put forth. "Americans need to know that they should not expect to have their cell phone and other communications be private--not if the government is going to have to take aggressive action to tamp down the threat," says Jamie Gorelick, a former deputy attorney general. Participants also wrangled over how far to go in regulating the private sector, which owns the vast majority of the "critical" infrastructure that is vulnerable to a cyberattack.


Graduation Gaps for Science Majors
Inside Higher Ed (02/17/10) Epstein, Jennifer

A recent University of California, Los Angeles (UCLA) survey found that more students are interested in majoring in science and technology fields, but those students are graduating at lower rates than students not pursuing science and technology degrees. In 2009, 34.3 percent of white and Asian-American students and 34.1 percent of black, Latino, and Native American students said they planned to major in a science, technology, engineering, and mathematics (STEM) discipline. "It's really positive that we're seeing growth in the percentage of students entering college who are interested in pursuing a STEM major across races," says UCLA professor Mitchell Chang. However, the survey found that less than half of white and Asian-American students who started pursuing STEM majors actually graduated with STEM degrees, and the percentages are even lower for blacks, Latinos, and Native Americans. "Something that happens in college--and it goes beyond just preparation--is losing students," Chang says.


U.S., EU, Russia Set Aside $13.6M for Exascale Software Work
Computerworld (02/12/10) Thibodeau, Patrick

The United States, Canada, France, Germany, Japan, Russia, and the United Kingdom have agreed to fund projects aimed at developing software for the next generation of supercomputers. The G8 Research Councils of the participating nations recently began offering $13.6 million for projects that support exascale software development. The G8 specifically listed climate change, energy, water, and the environment as study focus points for the next generation of supercomputers. Today's largest supercomputers contain about 250,000 compute cores; by 2020, systems are expected to have as many as 100 million cores. "We're interested at looking at what is needed in terms of standards, in terms of a real software stack for exascale, and we have to start planning now," says University of Tennessee professor Jack Dongarra. He says developing software for next-generation supercomputers will be extremely challenging. Dongarra and Argonne Leadership Computing's Pete Beckman recently formed the International Exascale Software Project, aimed at developing and coordinating research for exascale systems. The G8 predicts that supercomputers will process 10 petaflops in 2013, 100 petaflops in 2016, and one exaflop in 2019.
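The projections above imply some back-of-envelope arithmetic worth making explicit: a tenfold performance jump every three years, and an exaflop machine whose 100 million cores each need only average about 10 gigaflops. A minimal sketch (the per-core figure is an illustration derived from the article's numbers, not a claim from the G8):

```python
# Back-of-envelope arithmetic for the exascale projections quoted above.
PETA = 10**15
EXA = 10**18

# The G8's predicted performance milestones, in flops.
milestones = {2013: 10 * PETA, 2016: 100 * PETA, 2019: 1 * EXA}

# An exaflop system with 100 million cores averages 10 gigaflops per core,
# so the software challenge is extreme parallelism, not per-core speed.
cores_2020 = 100_000_000
flops_per_core = milestones[2019] / cores_2020
print(f"Implied per-core rate: {flops_per_core / 1e9:.0f} GFLOPS")  # -> 10 GFLOPS

# Growth factor between milestones: 10x every three years.
print(f"2013->2016: {milestones[2016] // milestones[2013]}x")  # -> 10x
print(f"2016->2019: {milestones[2019] // milestones[2016]}x")  # -> 10x
```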


Intelligent Traffic Flow
The Engineer (United Kingdom) (02/17/10)

Researchers at De Montfort University Leicester (DMU) and Leicester University (LU) are using artificial intelligence and satellite data to help manage traffic patterns. The researchers will use information from Leicester's Star Trak system, which tracks buses and feeds information on their status to passengers via electronic message boards at bus stops, to analyze traffic flow. "We are linking with the Star Trak scheme and using computational intelligence to make predictions about what the traffic situation will be like in the next half an hour to an hour," says DMU professor David Elizondo. The researchers will use the data to create computer models that test how artificially intelligent systems control traffic lights. In addition, LU will provide earth-observation satellite data and environmental sensors to monitor air quality. "Bringing together air pollution measurements from space and sat-nav technology for intelligent journey planning is truly novel," says LU professor Paul Monks.


A Conference Keen on Finding Open Communication
New York Times (02/16/10) O'Brien, Kevin

Incompatible mobile phone software is threatening to slow the growth of the mobile Internet. Most mobile phone networks use proprietary software that only works on one type of device or with one carrier. The competition is similar to what happened in the early stages of the personal computer industry. To counter this, Alcatel-Lucent recently launched an initiative to connect network operators with software developers to develop mobile applications that work with multiple networks and operating systems. More than 50 operators have expressed interest in the program and universal applications are in development. Meanwhile, China Mobile, Verizon Wireless, Vodafone, and SoftBank Mobile of Japan have established the Joint Innovation Lab (JIL) to develop applications for handsets on their networks. JIL has published a specification for a mobile "widget," a simple type of phone application that displays live updates of limited data, like the current temperature. LG, Samsung, Research in Motion, and Sharp are all making phones using JIL's widget format.


Humanoid Robots to Gain Advanced Social Skills
Wired.co.uk (02/12/10) Cole, Emmet

European robotics researchers are developing humanoid robots that can interact with groups of people in a realistic, anthropomorphic way. The Humanoids with Auditory and Visual Abilities in Populated Spaces (HUMAVIPS) project aims to design algorithms that will enable robots to focus their attention on just one person when surrounded by other people, voices, and background noise. The researchers say that if successful, HUMAVIPS would be another step toward anthropomorphic robot intelligence by mimicking the human technique of combining auditory and visual data to focus attention and eliminate unimportant background noise. Building robots that can filter out background noise and focus on one person is a huge challenge, says Brown University professor Chad Jenkins. "If the robot can pick out speech from the noise in a room, it would be a major breakthrough in humanoid robotics because it means that people would be able to have conversations and interact more naturally with robots," says University of Sheffield professor Noel Sharkey.


Brain-Controlled Cursor Doubles as a Neural Workout
UW News (02/15/10) Hickey, Hannah

University of Washington (UW) researchers have found that watching a computer cursor respond to a person's thoughts prompts brain signals to become stronger than those generated in everyday life. The research suggests that the human brain can quickly learn how to control an external device such as a computer interface or a prosthetic limb. The UW team studied epilepsy patients with electrodes attached to the surface of their brains. Previous research has shown that brain signals are weaker during imagined actions than for actually performing the actions. However, when those imagined brain signals were used to control a cursor on a computer screen, the brain signals became stronger than those used to control real-life movements. "The rapid augmentation of activity during this type of learning bears testimony to the remarkable plasticity of the brain as it learns to control a non-biological device," says UW professor Rajesh Rao.


New Fiber Nanogenerators Could Lead to Electric Clothing
UC Berkeley News (02/12/10) Yang, Sarah

University of California, Berkeley researchers have created energy-scavenging nanofibers that can be woven into clothing and textiles. The nano-sized generators have piezoelectric properties, which enables them to use mechanical stress, stretches, and twists to create electricity. "This technology could eventually lead to wearable smart clothes that can power handheld electronics through ordinary body movements," says Berkeley professor Liwei Lin. The nanofibers are flexible and inexpensive because they are made from organic polyvinylidene fluoride. During testing, the researchers tugged and tweaked the nanofibers and generated electrical outputs ranging from five to 30 millivolts and from 0.5 to three nanoamps. The researchers demonstrated average energy conversion efficiencies of 12.5 percent, with some going as high as 21.8 percent.
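To put the quoted outputs in perspective, P = V x I gives the instantaneous power per fiber. A quick illustrative calculation (pairing the low ends and the high ends of the two ranges is an assumption; the article does not say those values occur together):

```python
# Rough instantaneous power range implied by the outputs quoted above:
# 5-30 millivolts and 0.5-3 nanoamps per fiber.
def power_watts(volts: float, amps: float) -> float:
    """Instantaneous electrical power, P = V * I."""
    return volts * amps

p_min = power_watts(5e-3, 0.5e-9)   # low ends of both ranges
p_max = power_watts(30e-3, 3e-9)    # high ends of both ranges

# Single fibers yield picowatts, which is why a wearable garment would need
# very many fibers woven together to power handheld electronics.
print(f"Per-fiber power: {p_min * 1e12:.1f} to {p_max * 1e12:.1f} picowatts")
```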


China Leads the World in Hacked Computers, McAfee Study Says
Washington Post (02/15/10) P. A3; Nakashima, Ellen

Hackers hijacked more private computers in China in the last quarter of 2009 than in any other country, according to a new McAfee report. About 1.1 million Chinese computers and 1.06 million U.S. computers were infected with malware that turned the compromised systems into "zombies," which are often grouped into botnets that are used to attack Web sites or send spam. McAfee's George Kurtz partly attributes Chinese computers' vulnerability to botnets to the fact that software piracy is rampant in China and computer users frequently fail to apply security patches to their machines. Cyber expert Stewart A. Baker wants to see a few leading countries devise "effective national norms aimed at eliminating zombie computers." While experts say the United States is the nation most susceptible to cyberattack, McAfee reports that the U.S. is considered to be the most troubling potential cyberattacker.


Hold Vendors Liable for Buggy Software, Security Experts Say
InfoWorld (02/12/10) Vijayan, Jaikumar

Security experts from more than 30 organizations recently called on enterprises to put more pressure on software vendors to ensure secure code development. The group, led by the SANS Institute and Mitre, also released draft language for use in procurement contracts between organizations and software development firms that would leave the development firms liable for software defects. "Nearly every attack is enabled by [programming] mistakes that provide a handhold for attackers," says the SANS Institute's Alan Paller. "The only way programming errors can be eradicated is by making software development organizations legally liable for the errors." SANS and Mitre also released their CWE/SANS Top 25 list of the most common programming errors being made by software developers. According to the list, SQL injection errors, cross-site scripting flaws, and buffer overflow weaknesses are the most common programming errors.
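SQL injection, at the top of that list, is easy to demonstrate. A minimal sketch using Python's built-in sqlite3 module (the table and user names are hypothetical examples, not from the article): concatenating untrusted input into a query lets an attacker rewrite it, while a parameterized query treats the same input as an opaque value.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "x' OR '1'='1"  # a classic injection payload

# Vulnerable: string concatenation lets the payload become part of the SQL,
# turning a lookup for one name into a query that matches every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a parameterized query binds the payload as data, so it matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # -> [('alice',), ('bob',)]  (every row leaks)
print(safe)        # -> []                      (no such user)
```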


IBM Research, EU Team Up on Chip Design
Computer Business Review (02/11/10)

IBM is participating in the Diamond consortium, a European Union-funded initiative to provide a systematic approach and integrated environment for diagnosing and correcting errors in chips. Localizing and correcting bugs on all abstraction levels will allow for hierarchical diagnosis and correction methods that make use of error sources. The Diamond consortium hopes to cut fault localization and correction efforts in half, reduce design time by 23 percent, and develop new tools and methodologies for tracking errors. "Designing a microelectronic chip is very expensive and the design costs are the greatest threat to continuation of the semiconductor industry's phenomenal growth," says Diamond project coordinator and Tallinna Tehnikaulikool researcher Jaan Raik. "The increasing gap between the complexity of new systems and the productivity of system design methods can only be mitigated by developing new and more competent design methods and tools."


Innovation: We Can't Look After Our Data--What Can?
New Scientist (02/11/10) Simonite, Tom

University of California, Santa Cruz researchers are developing hardware for storage devices designed to look after data that has not been created yet. The plan, called Pergamum, aims to use low-power storage "bricks" that can make one terabyte of data available instantly over the Internet while using just two watts of power, and that can coordinate with other bricks. The bricks are designed to prevent future obsolescence because they can use today's hard disks as well as flash memory-based bricks, or those containing storage formats that have not been invented yet. Meanwhile, Stanford University researchers are developing software called Self Archiving Legacy Toolkit that can recognize places, names, and other organizing concepts in a user's emails, letters, and research reports.


Robots Will Replace All Workers in 25 Years: Futurist
Network World Canada (02/10/10) Solomon, Howard

Cisco Systems futurist Dave Evans says the future of computing includes robots replacing all workers in 25 years. Evans also predicts that in five years people will create the equivalent of 92 million Libraries of Congress worth of data per year and that artificial brain implants will be available in 20 years. His predictions are based on several assumptions, including the pace of change seen over the last 30 years. "Because of the law of large numbers, things are accelerating at an exponential rate," Evans says. He expects the world's data will increase six times in each of the next two years, and that by 2029 it will cost $100 for 11 petabytes of storage. By 2013, wireless network traffic will reach 400 petabytes per month, compared to 9 petabytes per month today. Around 2021, says Evans, a breakthrough in quantum computing will yield "mind-blowingly fast" computers that can perform instantaneous language translation and more accurate face recognition, as well as networks that can transmit unlimited amounts of data.
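The arithmetic behind two of those figures is worth spelling out. A quick illustrative calculation (decimal units, 1 PB = 1,000 TB, are an assumption):

```python
# The storage-cost prediction quoted above: $100 for 11 petabytes by 2029.
petabytes = 11
dollars = 100
cost_per_tb = dollars / (petabytes * 1000)  # 1 PB = 1,000 TB (decimal units)
print(f"Implied cost: ${cost_per_tb:.4f} per terabyte")  # under a penny per TB

# Data growing sixfold in each of the next two years compounds to 36x overall.
growth = 6 ** 2
print(f"Two-year growth factor: {growth}x")
```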


Google Developing a Translator for Smartphones
PhysOrg.com (02/09/10) Edwards, Lin

Google is developing almost instant speech-to-speech translation technology for use in Android-based smartphones. Google's Franz Och says that future advances in voice recognition and machine translation technology should enable the software to "work reasonably well" in a few years. Och says the program would work much like a human interpreter, analyzing a package of speech to understand the full meaning before producing a translation. He says the program's accuracy will improve the more it is used. The new system will combine Google's existing Web site translation program, which works with 52 languages, and a voice recognition application for smartphones. The Web site translation program is based on a database that was created by crawling Web sites in different languages. Och says that speech translation is more difficult than text translation because people all have different ways of speaking.
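The design Och describes — recognize a complete utterance first, then translate it as a unit so surrounding words inform the translation — can be sketched with toy stubs. Everything below is hypothetical illustration: these functions and the tiny lexicon stand in for Google's actual recognizer and translator, which are not public APIs in this story.

```python
# Toy sketch of a chained speech-to-speech pipeline: a whole utterance is
# recognized first, then translated as a unit. Both stages are hypothetical
# stubs used only to illustrate the two-stage design.

def recognize(audio_chunks: list[str]) -> str:
    """Stub recognizer: pretend each audio chunk decodes to one word."""
    return " ".join(audio_chunks)

def translate(sentence: str, lexicon: dict[str, str]) -> str:
    """Stub word-for-word translator over a tiny illustrative lexicon."""
    return " ".join(lexicon.get(word, word) for word in sentence.split())

EN_TO_ES = {"good": "buenas", "evening": "noches"}  # hypothetical toy lexicon

utterance = recognize(["good", "evening"])   # recognize the full utterance...
output = translate(utterance, EN_TO_ES)      # ...then translate it as a unit
print(output)  # -> buenas noches
```

A real system would translate phrases in context rather than word by word — which is exactly why, as Och notes, the full utterance must be captured before translation begins.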


Abstract News © Copyright 2010 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]



