ACM TechNews
February 22, 2008

Read TechNews Online at: http://technews.acm.org

Welcome to the February 22, 2008 edition of ACM TechNews, providing timely information for IT professionals three times a week.


HEADLINES AT A GLANCE:


Young Researcher to Receive ACM Award for Internet Performance Measurement Techniques
AScribe Newswire (02/21/08)

University of California, Berkeley associate professor of computer science Vern Paxson will receive ACM's 2007 Grace Murray Hopper Award, given to the outstanding young computer professional of the year, for his research on measuring Internet behavior. The award comes with a $35,000 prize funded by Google. Paxson developed techniques that both the research community and Internet operators use to assess new communications concepts, improve network performance, and prevent network intrusion. His work has brought the scientific process to measuring the Internet's behavior and the conditions under which it operates, elevating the practice of Internet management to a higher level, and it has enabled the research community to evaluate new ideas and technologies and to identify the problems and priorities that must be addressed for increased efficiency. Paxson was named an ACM Fellow in 2006, and his 1996 research paper, "End-to-end routing behavior in the Internet," won the first "Test of Time" award from ACM's Special Interest Group on Data Communication (SIGCOMM).


A Method for Stealing Critical Data
New York Times (02/22/08) P. C1; Markoff, John

Princeton University researchers have devised a simple method for stealing encrypted information stored on computer hard disks. The technique involves literally freezing the computer's dynamic random access memory (DRAM) chips, which the researchers demonstrated can retain data for seconds or even minutes after power is cut off. Chilling the chips with cold air freezes the data in place long enough for the researchers to read encryption keys out of memory using pattern-recognition software. Princeton computer scientist Edward W. Felten wrote in a Web posting that applying liquid nitrogen to the chips allows them to retain their data for hours without power. He noted that the finding "is pretty serious to the extent people are relying on file protection." The experiment indicates that Trusted Computing hardware apparently does not stop such intrusions, the researchers said. They compromised data protected by disk-encryption utilities on the Windows, Macintosh, and Linux operating systems, and reported that they began probing the utilities for vulnerabilities last fall after noting a reference to the persistence of data in memory in a 2005 technical paper by Stanford computer scientists. "This is just another example of how things aren't quite what they seem when people tell you things are secure," says SRI International researcher Peter Neumann.
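The article does not detail the researchers' key-recovery software. As a rough, hypothetical illustration of one ingredient of such tools, the Python sketch below scans a memory image for unusually high-entropy 16-byte regions, a common heuristic for locating candidate symmetric keys; the function names and threshold are invented for the example, and real tools add structural tests such as searching for AES key schedules.

    import math
    import os
    from collections import Counter

    def shannon_entropy(chunk: bytes) -> float:
        """Bits of entropy per byte in the chunk (max 4.0 for a 16-byte window)."""
        counts = Counter(chunk)
        total = len(chunk)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def find_key_candidates(image: bytes, window: int = 16, threshold: float = 3.6):
        """Slide a window across a memory image and flag high-entropy regions
        that could be symmetric keys.  A heuristic only: the threshold is
        tunable, and real key-recovery tools also check structure (e.g.,
        AES key schedules) to weed out false positives."""
        for offset in range(0, len(image) - window + 1, 4):
            chunk = image[offset:offset + window]
            if shannon_entropy(chunk) >= threshold:
                yield offset, chunk.hex()

    # Demo: a zero-filled "memory image" with a random 16-byte key at 0x40.
    image = bytearray(4096)
    image[0x40:0x50] = os.urandom(16)
    for offset, hexkey in find_key_candidates(bytes(image)):
        print(f"candidate key at 0x{offset:04x}: {hexkey}")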


Berkeley Researcher Describes Parallel Path
EE Times (02/21/08) Merritt, Rick

University of California at Berkeley computer science professor David A. Patterson detailed plans for defining a new parallel programming model for mainstream computing at an annual gathering on Feb. 21. Unless researchers successfully define efficient parallel methods, "programming will be so difficult that people will not get any benefit from new hardware," Patterson warned. His plan, to be carried out by a new Parallel Computing Lab that Patterson will head at Berkeley, proceeds in five steps. The lab has already selected researchers in areas that include image recognition, voice recognition, and personal health to define compelling parallel applications, and the next phase involves identifying common patterns through the study of those applications. An independent group will then devise frameworks and a composing language to help programmers invent and coordinate parallel programming modules, and other groups will characterize new operating system and hardware frameworks to best dovetail with a parallel model, along with tools to help programmers optimize applications. Sources said Intel and Microsoft chose Berkeley to work on the parallel programming model challenge through a five-year, $10 million grant, and Patterson said around 12 faculty members and 40 graduate students will participate in the lab. Separate panel discussions at the Berkeley gathering focused on the move toward ubiquitous computing, with professor and Inktomi founder Eric Brewer pointing out that "the IT industry we know is focused on one-sixth of the population, but there is a tremendous untapped opportunity in the sheer volume of the people who could be involved."


U.S. Science Funding Hits a Political Wall
HPC Wire (02/22/08) Vol. 17, No. 8, Feldman, Michael

Federal funding for science education and research at the Department of Energy (DOE) Office of Science, the National Science Foundation (NSF), and the National Institute of Standards and Technology (NIST) was supposed to double over the next seven years under the president's American Competitiveness Initiative, which had bipartisan support on Capitol Hill. However, because of partisan maneuvering, fiscal 2008 appropriations fell short of the initiative's targets by 70 percent for NIST, 77 percent for NSF, and 92 percent for DOE. President Bush's decision to set a cap on 2008 appropriations spurred the Democratic-controlled Congress to challenge him with an appropriations proposal $23 billion above the cap. Bush responded with a veto threat that prompted the Democrats to lower their proposal to $11 billion over the cap; the president and the GOP refused to accept even this and forced the Democrats to meet the original budget cap. The result has been a DOE-implemented funding freeze on university research, significant reductions in graduate research fellowships and basic research projects at NSF, and a cutback of scientists and engineers at NIST. Experts generally lament the decline of funding for basic science research, warning that it breeds complacency among the science community and the general public and threatens America's global technology leadership by handing a competitive edge to countries that are investing heavily in research and their technical work forces. However, funding for the DOE Office of Science's Advanced Scientific Computing Research program, responsible for developing and managing the high-end computing resources of DOE's R&D initiatives, received a substantial boost compared with 2007.


Securing Cyberspace Among Top Technological Challenges of 21st Century, Panel Says
Network World (02/19/08)

Securing cyberspace was recently named one of the top 14 technical challenges of this century by a National Academy of Engineering panel. The panel said that electronic computing and communication pose some of the most complex challenges engineering has ever faced, including ensuring the confidentiality and integrity of transmitted information, deterring identity theft, and preventing electronic terrorist attacks that could disable transportation, communication, and power grids. The panel noted that serious security breaches in financial and military computer systems have already taken place and that identity theft is a growing problem, yet research and development in security systems has not progressed much beyond a strategy of fixing problems and cobbling together security patches after vulnerabilities are discovered. Other great technological challenges the panel identified include advancing health care informatics, improving virtual reality, engineering better medicine, and preventing nuclear terror. The National Science Foundation agrees with the panel and has cited cybersecurity as an area in which it wants the United States to invest more resources.


Can R&D Still Drive the Next Big Thing?
ThomasNet (02/19/08) Butcher, David R.

Plunkett Research estimates that global spending on research and development topped $1 trillion in 2006, while the latest Global R&D report from Battelle and R&D Magazine projects that worldwide R&D spending in 2008 will exceed 2007 figures by nearly 8 percent, for a total of $1.21 trillion. R&D has primarily been carried out and funded in North America, Europe, and Asia by the 30 developed member nations of the Organization for Economic Cooperation and Development. A Science and Engineering Indicators 2008 report from the National Science Board (NSB) estimates that R&D spending has climbed rapidly in selected Asian and Latin American economies and elsewhere for almost 10 years, with China showing the most rapid and sustained R&D expansion from 1995 to 2005. University R&D projects continue to be fed by government research dollars, while a greater portion of U.S. R&D funding is being channeled into company-owned or outsourced labs abroad. "The outsourcing and offshoring of research and engineering projects are skyrocketing" on the strength of low costs, immense talent pools, and the speed and quality of overseas innovation, especially in India and China, Plunkett says. Japan and the United States currently lead the world in R&D as a percentage of gross domestic product, but the two countries' combined share of global R&D fell from 56 percent to 48 percent between 1995 and 2005, according to the NSB report. Despite a record high of approximately $340 billion spent on R&D by the United States in 2006, the NSB report observes a multiyear decline in federal support for basic and applied research, and ACM concluded in December 2007 that Congress is sanctioning R&D spending increases that fail to keep pace with inflation while including allocations for construction projects that are extraneous to its basic research funding responsibilities.


Stanford Camera Chip Can See in 3D
CNet (02/21/08) Shankland, Stephen

Stanford University researchers have developed a multi-aperture image sensor that can judge the distance of subjects in a snapshot. The sensor works differently from the light detectors used in ordinary digital cameras. Instead of using the entire sensor to capture a single representation of the image, the 3-megapixel prototype breaks the scene into small, slightly overlapping 16x16-pixel patches called subarrays, each of which views the photo subject through its own lens. After a picture is taken, image-processing software analyzes the small differences in where the same scene element appears in different subarrays to determine the distances of objects. The result is a photograph accompanied by a "depth map" that records, for each pixel, its red, green, and blue light components and how far away it is. The researchers do not yet have a specific file format for the data, but the depth information can be attached to a JPEG file as metadata. The multi-aperture approach could also help reduce noise, the colored speckles that occur more often in digital photography at higher ISO sensitivity settings. Noise is reduced because multiple subarrays capture the same views, making it easier to distinguish the true color of an object from off-color noise. Each subarray can also be calibrated to record a specific color, which would reduce the "color crosstalk" of current image sensors. However, the sensor requires about 10 times as much processing power as conventional on-chip image processing, and the effective resolution is lower than the raw pixel count of the overall sensor because the same subject matter is captured by multiple pixels.
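The summary does not spell out the depth computation, but the underlying principle is the classic stereo relation: the apparent shift (disparity) of a scene element between two overlapping views shrinks as distance grows, with depth = focal length x baseline / disparity. The Python sketch below is a minimal, hypothetical illustration of that principle, not Stanford's actual pipeline; the block-matching scheme, function names, and camera parameters are all assumptions.

    import numpy as np

    def match_disparity(a, b, max_shift=8):
        """Estimate the pixel shift s that best aligns patch `a` with patch
        `b` shifted left by s, by minimizing mean absolute difference."""
        best_shift, best_err = 0, float("inf")
        for s in range(max_shift + 1):
            err = np.abs(a[:, :a.shape[1] - s].astype(float) -
                         b[:, s:].astype(float)).mean()
            if err < best_err:
                best_shift, best_err = s, err
        return best_shift

    def disparity_to_depth(disparity_px, focal_px, baseline_m):
        """Pinhole-stereo relation: depth = focal * baseline / disparity.
        Here `baseline_m` plays the role of the spacing between two
        subarray lenses."""
        return np.inf if disparity_px == 0 else focal_px * baseline_m / disparity_px

    # Demo: two 16x16 views of the same random scene, offset by 3 pixels.
    rng = np.random.default_rng(0)
    scene = rng.integers(0, 256, size=(16, 24))
    left, right = scene[:, 3:19], scene[:, 0:16]
    d = match_disparity(left, right)
    print(d, disparity_to_depth(d, focal_px=500, baseline_m=0.001))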


A Google Competition, With a Robotic Moon Landing as a Goal
New York Times (02/22/08) P. C3; Stone, Brad

Google is sponsoring the Google Lunar X Prize, a competition offering $30 million in prizes to the first two teams to land a robotic rover on the moon and send back images and other data. Ten teams have announced their intention to participate, including universities, open-source engineers, and aerospace startups. One team is led by renowned roboticist William L. Whittaker, a Carnegie Mellon University professor. Other entrants include an affiliation of four universities and two major aerospace companies in Italy, and a loose association of engineers coordinating their efforts online. "This is about developing a new generation of technology that is cheaper, can be used more often, and will enable a new wave of explorers," says X Prize Foundation Chairman Peter H. Diamandis. Google will award $20 million to the first team that lands on the moon, sends a packet of data back to Earth, travels at least 500 meters, and sends a second packet; the second team to accomplish those goals will receive $5 million. Bonuses are offered for completing tasks such as visiting a historic landing site and detecting lunar ice. The prize amounts start to decline if the goals are not accomplished by 2012. "The idea we can help spur the return to the moon and maybe even do it more quickly than some of the national plans is really exciting to me," says Google co-founder Sergey Brin.


Applying Theory at Microsoft
Technology Review (02/21/08) Naone, Erica

Microsoft Research's Jennifer Chayes was recently named managing director of the new Microsoft Research New England lab, which opens in Cambridge, Mass., in July. Her research has had wide-reaching applications on the Internet, including search, keyword advertising, recommendation systems, and social networks. Asked what has changed in the 11 years she has worked for Microsoft, Chayes recalled a colleague remarking that their work had moved much closer to applications; Chayes responded that applications had moved closer to their work. By working in industry instead of academia, Chayes says, she hears about problems in the field much more quickly, which has let her be among the first to model them. Chayes has recently been researching how to monetize social networks, focusing on balancing the collection of enough data to make appropriate recommendations against protecting people's private information. Her lab will work with sociologists, psychologists, and economists to gain new insights into how to approach its work. "With all the data that we've got and the kinds of things that we want to do, I think it's really time for mathematicians and computer scientists to start interacting with sociologists and psychologists," she says. "I'm not an expert in what people want. I can just model what people want."


Black Hat Conference: Experts Develop Cybersecurity Recommendations for Next President
InformationWeek (02/20/08) Hoover, J. Nicholas

Members of the recently convened Commission on Cyber Security for the 44th Presidency discussed the panel's goals during the recent Black Hat security conference in Washington, D.C. The commission will spend the next nine months developing recommendations for a national cybersecurity policy. "This is one of the central issues for national security and we want to make sure it doesn't go away," says James Lewis, director of the technology and public policy program at the Center for Strategic and International Studies, which supports the panel. The commission is not government-mandated, but its membership, which includes two sitting members of Congress and Jerry Dixon, former executive director of the National Cyber Security Division at the Department of Homeland Security, could give it some clout. The panel plans to focus on defining a clear command-and-control structure for federal cybersecurity, standardizing technology procurement procedures across federal agencies, and determining the research and development agenda.


Learning About Brains From Computers, and Vice Versa
MIT News (02/16/08) Chandler, David

MIT researcher Tomaso Poggio recently combined parallel research efforts that examined how computers and the brain perform various functions. One project used complex computational methods to understand how the brain works, while the other focused on improving computers' ability to perform difficult tasks that human brains excel at, such as understanding complex visual images. Poggio says the turning point came last year, when he and his team used a model vision system, designed as a theoretical analysis of how certain pathways in the brain work, to interpret a series of photographs. Although the model was not developed for that purpose, it turned out to be as good as, if not better than, the best existing computer vision systems at recognizing certain types of complex scenes. "This is the first time a model has been able to reproduce human behavior on that kind of task," Poggio says. The experiments involved a task that is easy for people but difficult for computers: showing a picture and asking whether there are any animals in the scene. The picture was shown for only a fraction of a second so that the information would be processed in the visual cortex, one of the brain's most complex areas. Understanding how the visual cortex works could be a significant step toward understanding the brain. "Computational models are beginning to provide powerful new insights into the key problems of how the brain works," Poggio says.


Military Aims to Seal Leaky Networks
Investor's Business Daily (02/20/08) P. A5; Riley, Sheila

To minimize future security leaks, military and academic researchers are looking for ways to make computer networks more secure, including techniques for understanding the thought processes of network intruders. These efforts have led to a rapidly emerging field of research called "intrusion prediction," which aims to anticipate the actions of intruders and minimize the damage if they succeed in infiltrating a network. Researchers are working to predict the actions of hackers by analyzing past breaches. The project involves researchers from the Rochester Institute of Technology, the University at Buffalo, Pennsylvania State University, the U.S. Air Force, and CUBRC, a nonprofit research and development company. Technology developed by the researchers can send alert messages and track and project the actions of multiple attackers simultaneously. Rochester Institute of Technology professor Shanchieh Jay Yang says intrusion prediction is intended to be one part of a larger security system. "We believe this work is quite pioneering," Yang says, but he notes that it is not a cure-all. "Very sophisticated, experienced, and good hackers will always find a way to get into your network," he says. "Networks are built to allow remote access." The Navy is convinced that the strategy is valuable and will help protect important information. "We are all over that stuff," says Jim Granger, a technical director for the U.S. Navy's Cyber Defense Operations Command. "You want to get from the reactive to the proactive to the predictive."


Wizkid Makes Its Debut at the Museum of Modern Art
Ecole Polytechnique Federale de Lausanne (02/19/08) Luy, Florence

The Museum of Modern Art in New York will offer visitors an opportunity to interact with Wizkid, a machine that is part computer and part robot and is designed to follow visitors and their movements. As part of MoMA's Design and the Elastic Mind exhibit, which runs from Feb. 24 to May 12, 2008, visitors will interact with Wizkid, which has a screen on a mobile neck that moves about like a head and can ask questions nonverbally, expressing interest, confusion, and pleasure with its body language. It also remembers visitors when they come back into range and continues their conversations. "Wizkid gets us AFK--away from keyboard--and back into the physical world," says Ecole Polytechnique Federale de Lausanne engineer Frederic Kaplan. "Unlike a personal computer, it doesn't force the human to accommodate, and it's fundamentally social and multi-user." Visitors will see themselves on Wizkid's screen surrounded by a "halo" of interactive elements that they can select by waving their hands. Wizkid also has a "creature of habit" function that learns preferences; in a home setting, it might know that a user returning home wants to listen to some light jazz. Kaplan developed Wizkid with Martino d'Esposito, an industrial designer who teaches at the University of Art and Design Lausanne.


University of Illinois Develops Free, Easy-to-Use Web Tool Kit for Archivists
University of Illinois at Urbana-Champaign (02/19/08) Lynn, Andrea

University of Illinois, Urbana-Champaign Library archivists have developed Archon, a new, freely available online collections management program. Archon was designed specifically for archivists with limited access to technological resources and expertise, says Scott Schwartz, one of Archon's developers and the archivist for music and fine arts at UIUC. Schwartz says the program is adaptable to any institutional setting. "We wanted our application to be particularly useful to small, one-person repositories that have been unable to take full advantage of current tools under development," he says. Assistant university archivist and co-project director Chris Prom says the software makes an archive's holdings far more accessible to its users, and notes that it automatically creates its own searchable Web site. Schwartz says the group's primary goal was to create a tool that provides immediate public access to information on the various collections of historical documents and records found in archives. "We took a minimalist approach, yet didn't sacrifice the standards of the profession," Schwartz says. "We recognized what people and researchers need to access collections of historical documents preserved in archives, and developed the tools to help put archivists and the public in the driver's seat to meet these important access needs."


Fast-Learning Computer Translates From Four Languages
ICT Results (02/18/08)

METIS II is a European machine-translation project that has demonstrated an inexpensive technique for translating documents from Dutch, German, Greek, or Spanish into English. Machine translation currently works best on formal texts in specialized areas with unambiguous vocabulary and limited sentence patterns. The European Union has supported research in this field since the large Eurotra project of the 1980s, which used a rules-based approach: teaching a computer the rules of syntax and applying them to translate texts from one language to another. Since the early 1990s, however, statistical translation has gained in popularity. Statistical translation replaces rules with statistical methods based on a text corpus--a large body of written material, up to tens of millions of words--that is intended to be representative of a language. Parallel corpora contain the same material in two or more languages; the computer compares the versions to learn how words and expressions in one language translate into another. But parallel corpora are expensive, rare, and available for only a few language pairs. METIS II researchers perform statistical machine translation without a parallel corpus by using a monolingual corpus in the target language, along with a dictionary for vocabulary and a way of modeling syntax. METIS II matches patterns at the "chunk" level, matching phrases or fragments of a sentence instead of entire sentences, which makes the pattern matching more efficient.
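As a toy illustration of the dictionary-plus-monolingual-corpus idea, the Python sketch below translates a source chunk word by word and then uses bigram counts from a tiny English corpus to pick the most fluent ordering. The corpus, dictionary, and function are invented for the example and greatly simplify METIS II, which works on tagged chunks rather than raw words.

    from collections import Counter
    from itertools import permutations

    # Toy monolingual target-language corpus: count its bigrams up front.
    corpus = "the red house is small . the house is red .".split()
    bigrams = Counter(zip(corpus, corpus[1:]))

    # Hypothetical bilingual dictionary (source -> English), one sense each.
    dictionary = {"das": "the", "haus": "house", "rote": "red"}

    def translate_chunk(source_words):
        """Translate word by word with the dictionary, then pick the word
        order whose bigrams are most frequent in the target corpus."""
        words = [dictionary[w] for w in source_words]
        def score(order):
            return sum(bigrams[b] for b in zip(order, order[1:]))
        return max(permutations(words), key=score)

    print(translate_chunk(["das", "rote", "haus"]))  # ('the', 'red', 'house')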


New Research Takes Aim at Oral Cancer
University of Minnesota News (02/19/08)

An interdisciplinary team of researchers at the University of Minnesota is combining medical research and computer science to advance medical technology and fight oral cancer. The goal is to identify proteins that lead to oral cancer and to create a technique that can fight the disease at its earliest stages. The researchers are working on a new three-step process that can detect three to four times more proteins than other methods. As part of the project, University of Minnesota computer science professor John Carlis and doctoral student Getiria Onsongo are using databases and data-modeling techniques to analyze the medical data and to extract and visualize the desired information. Meanwhile, public health professor and lead researcher Baolin Wu is using complex statistical models to determine how many proteins, out of the thousands found in human saliva, to pinpoint for the study. Wu is also analyzing data on how the proteins interact with one another in ways that could potentially lead to cancerous cells. Detecting early signs of oral cancer is critical to increasing the survival rate.


Japan: Google's Real-Life Lab
BusinessWeek (02/14/08) Hall, Kenji

Japan's widespread use of wireless broadband has made the country a sort of unofficial testing lab for Google as it tries to refine mobile search technology. Japan's 100 million cell phone users represent the most diverse group of mobile subscribers. Google tests in a variety of locations, but Japanese users are often the most critical because they are just as likely to access the Internet from a phone as from a PC, and at speeds that rival fixed-line broadband. Japanese carriers have offered such services for years, and many Web sites in Japan are formatted for cell phones. Google is working with the two top Japanese wireless operators, which have a combined subscriber base of 82 million. "Our fundamental strategy is to take ideas from Japan and apply them to other markets," says Google's Emmanuel Sauquet. Japan's influence is why Gmail users will soon be able to include "emoji," small animated cartoons and emoticons, in their messages. Google relies on user-experience groups to determine what mobile Web surfers like. Participants are given phones with Internet access and asked to complete simple tasks, either in Google's lab or on the streets of Tokyo. Google also conducts what it calls 1 percent tests, in which a small portion of users sees different layouts, fonts, and other features, to determine which changes make the service easier to use. For example, Google found that letting users choose a default neighborhood can make searching faster.


So an EECS Prof and an Undergrad Walk Into a Computer Lab...
University of California, Berkeley (02/13/08) Bergman, Barry

University of California, Berkeley professor Ken Goldberg and student Tavi Nathanson are developing Jester 4.0, a computer program that recommends jokes matched to each user's personal sense of humor. The program was first launched in 1998 and has been upgraded several times; Goldberg says Nathanson joined the project in 2006 and has overhauled it. Jester first asks the user to rate a series of eight jokes to establish their personal preferences, then uses collaborative filtering to make recommendations: jokes that other users with similar preferences found humorous are presented to the user, whose ratings of those jokes further refine the profile. "The computer doesn't know anything about the jokes--every joke is just a black box, the same as it would be for a movie or a film," Goldberg says. Goldberg and Nathanson are also working on a program tentatively called "Donation Dashboard" that will recommend portfolios of nonprofit organizations to potential supporters based on their interests, desired donation amounts, and other preferences.
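The summary names collaborative filtering but not a specific algorithm (Jester's published method is the Eigentaste algorithm from Goldberg's group). The Python sketch below shows the generic user-based nearest-neighbor idea on invented toy data: predict a user's rating of an unseen joke from the ratings of similar users, weighted by similarity.

    import numpy as np

    # Rows = users, columns = jokes, on Jester's -10..10 rating scale;
    # np.nan marks jokes a user has not rated.  Toy data, not Jester's.
    R = np.array([
        [ 8.0,  3.0, -5.0, np.nan],
        [ 7.0,  2.0, -4.0,  9.0 ],
        [-6.0, -1.0,  6.0, -8.0 ],
    ])

    def predict(user, joke, ratings):
        """Weight other users' ratings of `joke` by their Pearson
        correlation with `user` over the jokes both have rated."""
        target = ratings[user]
        scores, weights = 0.0, 0.0
        for other, row in enumerate(ratings):
            if other == user or np.isnan(row[joke]):
                continue
            shared = ~np.isnan(target) & ~np.isnan(row)
            if shared.sum() < 2:
                continue  # too little overlap to judge similarity
            sim = np.corrcoef(target[shared], row[shared])[0, 1]
            if np.isnan(sim):
                continue
            scores += sim * row[joke]
            weights += abs(sim)
        return scores / weights if weights else np.nan

    print(predict(user=0, joke=3, ratings=R))  # high: user 0 resembles user 1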


Looks Familiar
The Engineer (02/11/08)

University of York computer vision lecturer William Smith is trying to make the process of reconstructing a 3D image of a face from a single 2D image faster and more accurate. Smith's work combines the advantages of two approaches. The first, more sophisticated approach uses a morphable statistical model of facial appearance; the second uses classical shape-from-shading techniques. The statistical approach adjusts the parameters of a learned model to fit it to an image. Smith says the advantages of this technique are that it works well on real images and can recognize a variety of different faces. One weakness, however, is that the shape it re-creates is completely determined by what the system learned earlier, so it cannot reconstruct atypical features it has not seen before. The shape-from-shading techniques are intended to compensate for this shortcoming: they interpret changes in brightness and darkness to detect changes in the direction of the face's surface. "The issue of face expression is quite difficult with morphable model approaches--you would need hundreds of examples of every expression," Smith says. "So we are looking at using shading information to recover 3D shapes when they might be in a completely different expression, then we can feed that back into the model so the model will improve and learn to recognize these images."
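Shape-from-shading rests on a reflectance model, most simply the Lambertian one, in which a pixel's brightness is the dot product of the unit surface normal with the unit light direction. The forward model is a one-liner; inverting it at a single pixel is underconstrained, since a whole cone of normals yields the same brightness, which is why Smith couples shading with the statistical face model. A minimal Python sketch of the forward model, with illustrative names:

    import numpy as np

    def lambertian_intensity(normals, light):
        """Lambertian shading: brightness = max(0, n . l) for unit surface
        normals n and a unit light direction l.  Shape-from-shading inverts
        this map: given brightness, constrain the normals."""
        l = np.asarray(light, dtype=float)
        l /= np.linalg.norm(l)
        return np.clip(normals @ l, 0.0, None)

    # Two unit normals: one facing the light, one tilted away from it.
    normals = np.array([[0.0, 0.0, 1.0],
                        [0.6, 0.0, 0.8]])
    print(lambertian_intensity(normals, light=[0.0, 0.0, 1.0]))  # [1.0, 0.8]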


To submit feedback about ACM TechNews, contact: [email protected]

To be removed from future issues of TechNews, please submit the email address at which you receive TechNews alerts at:
http://optout.acm.org/listserv_index.cfm?ln=technews

To re-subscribe in the future, enter your email address at:
http://signup.acm.org/listserv_index.cfm?ln=technews

As an alternative, log in at myacm.acm.org with your ACM Web Account username and password, and follow the "Listservs" link to unsubscribe or to change the email address to which future issues should be sent.


News Abstracts © 2008 Information, Inc.


© 2008 ACM, Inc. All rights reserved. ACM Privacy Policy.