Welcome to the February 19, 2010 edition of ACM TechNews, providing timely information for IT professionals three times a week.
HEADLINES AT A GLANCE
Google PageRank-Like Algorithm Dates Back to 1941
PhysOrg.com (02/19/10) Zyga, Lisa
Iterative ranking methods predate Google's PageRank algorithm for ranking the importance of Web pages by nearly 60 years, according to "PageRank: Standing on the shoulders of giants," a new study by University of Udine computer scientist Massimo Franceschet. He says economist Wassily W. Leontief discussed an iterative method for ranking industries in a 1941 paper, and Leontief would receive the Nobel Prize for economics for his research in this area in 1973. In 1965, sociologist Charles Hubbell published an iterative method for ranking people, and scientists Gabriel Pinski and Francis Narin used a circular approach for ranking journals in 1976. In their own paper, Google's Sergey Brin and Larry Page referenced Cornell University computer scientist Jon Kleinberg, who developed the Hypertext Induced Topic Search algorithm for optimizing Web information retrieval. Google's search engine brings a "popularity contest" style to determining the quality of an item, which has created a debate in academic circles about the evaluation of research papers. "Expert evaluation, the judgment given by peer experts, is intrinsic, subjective, deep, slow and expensive," Franceschet writes. "By contrast, network evaluation, the assessment gauged [by] exploiting network topology, is extrinsic, democratic, superficial, fast and low-cost."
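The common thread in these methods is an iterative, power-iteration-style computation in which each item's score depends on the scores of the items pointing to it. As a rough illustration only, and not Franceschet's analysis or Google's production code, the following Python sketch computes PageRank-style scores for a toy four-page web; the link matrix and damping factor are illustrative assumptions.

import numpy as np

def pagerank(adjacency, damping=0.85, tol=1e-9, max_iter=100):
    """Rank nodes by repeatedly redistributing scores along incoming links."""
    n = adjacency.shape[0]
    out_degree = adjacency.sum(axis=0)
    out_degree[out_degree == 0] = 1  # crude guard against dangling nodes
    transition = adjacency / out_degree  # column-stochastic link matrix
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = (1 - damping) / n + damping * transition @ rank
        if np.abs(new_rank - rank).sum() < tol:
            rank = new_rank
            break
        rank = new_rank
    return rank

# Toy web of four pages: adjacency[i, j] = 1 means page j links to page i.
links = np.array([[0, 0, 1, 1],
                  [1, 0, 0, 0],
                  [1, 1, 0, 1],
                  [0, 1, 0, 0]], dtype=float)
print(pagerank(links))

Leontief's input-output rankings and Pinski and Narin's journal weights follow essentially the same fixed-point pattern, differing mainly in how the underlying matrix is constructed.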
Making Computer Science More Enticing
New York Times Online (02/18/10) Quinn, Michelle
Employment at the top 10 Silicon Valley companies declined for Hispanics, blacks, and women for the decade ending in 2005, according to a San Jose Mercury News review of federal data. And after the technology bust in the early 2000s, overall enrollment in computer science programs nationwide fell. Stanford University professor Mehran Sahami responded by revamping the computer science department's curriculum to make it more appealing to students. Stanford consolidated its required courses, allowed students to specialize in subfields such as artificial intelligence, and began to count classes such as human-computer interaction toward computer science requirements. Enrollment rose 40 percent in the first year, and continued with another 20 percent increase this year. Sahami believes perceptions of the high-tech economy, such as those about the outsourcing trend and the commodification of jobs, impact enrollment. And he notes that with the overall decline in enrollment, a larger drop occurred among women, who sense a shrinking community of female computer science students. "The accelerating effect happens and it creates even more isolation," Sahami says.
Dwarf Helicopters, Smart Subs, and Mining Robots to Automate Australia
Computerworld Australia (02/17/10) Pauli, Darren
Australian Research Council Centre of Excellence for Autonomous Systems (CAS) research director Hugh Durrant-Whyte has led the development of robots for use in a variety of industries, including mining, sea exploration, and agriculture. CAS' work is aimed at improving efficiency and safety rather than reducing workforce needs. For example, farmers can use an unmanned dwarf helicopter to seek and destroy two plant species, instead of covering the area with pesticides. Meanwhile, the government, research scientists, and oil firms are using small robot submarines to search for oil and gas fields. Ecologists and gas miners also use robotic submarines to map coral distribution. Durrant-Whyte says that artificial intelligence technology is being developed that will enable the robots to analyze information collected by sensors. He also is designing robot navigation systems with better real-time laser terrain sensors and radar. Another group is testing the use of robotic technology in health care, including an intelligent walking aid and an autonomous wheelchair.
DARPA Looks to Build Real-Life C3PO
Wired News (02/16/10) Drummond, Katie
The U.S. Pentagon is fast-tracking the development of a machine that can translate 20 different foreign languages with 98 percent accuracy through the U.S. Defense Advanced Research Projects Agency's (DARPA's) Robust Automatic Translation of Speech (RATS) program. DARPA's goal is to have RATS extract speech from noisy or degraded signals with 99 percent accuracy at distinguishing spoken words from background noise. The system's language identification component will have a special emphasis on the Arabic, Pashto, Farsi, Urdu, and Dari tongues. Voice recognition technology will be incorporated into the RATS software so that people on a military most-wanted list can be identified. In addition, the software will be capable of automatically detecting specific, preselected key words or phrases. DARPA hopes to have demos that translate 15 languages among 1,000 different speakers, and can recognize 100 words or phrases in Arabic, Pashto, and Farsi, within six months.
New Advance in the Study of Alzheimer's Disease
UAB Barcelona (02/17/10)
A computer model that simulates the protein malfunction in humans suffering from Alzheimer's disease has been developed by researchers at the Universitat Autonoma de Barcelona (UAB) and the University of Stockholm. The simulation supports experimental observations linking Apolipoprotein E (ApoE) to the Amyloid beta molecule, the main cause of the disease. The Amyloid beta molecule weakens the functional structure of the ApoE4 protein. The team created the computer model due to the difficulty of conducting in vitro experiments with the peptide Amyloid beta. The researchers say a three-dimensional model simulating the interaction of the ApoE4 protein and the peptide Amyloid beta should lead to a better understanding of Alzheimer's disease and new ways to fight it.
How to Make the Internet a Lot Faster
Technology Review (02/18/10) Naone, Erica
Google recently announced plans to build an experimental fiber network that would offer gigabit-per-second broadband speeds to up to 500,000 U.S. homes. The speeds proposed by Google are much faster than those offered by commercial U.S. Internet service providers, but some international systems have reached higher speeds. In addition, Internet2 offers 10-gigabit connections to university researchers. Many factors beyond raw bandwidth are involved in delivering very-high-speed connections, says Internet2's Gary Bachula. Internet2 has been researching technologies that could help find and resolve the performance issues that arise on high-speed connections. "If we're really going to realize the vision of some of these high-end applications, it does have to go beyond basic raw bandwidth," Bachula says. California Institute of Technology professor Steven Low adds that Internet protocols also need updating, noting, for example, that the transmission control protocol does not work well at gigabit-per-second speeds.
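Low's point about the transmission control protocol can be made concrete with a back-of-the-envelope calculation: classic TCP keeps at most one window of unacknowledged data in flight per round trip, so throughput is roughly capped at window size divided by round-trip time. The short Python sketch below works through illustrative numbers; the 50-millisecond round-trip time and the window sizes are assumptions for the example, not figures from the article.

def max_throughput_bps(window_bytes, rtt_seconds):
    """At most one window of data can be in flight per round trip."""
    return window_bytes * 8 / rtt_seconds

rtt = 0.050  # assumed 50 ms round-trip time on a long-haul path
for window in (64 * 1024, 1 * 1024 * 1024, 16 * 1024 * 1024):
    mbps = max_throughput_bps(window, rtt) / 1e6
    print(f"{window // 1024:>6} KB window -> {mbps:8.1f} Mbit/s")

# Filling a 1 Gbit/s link at 50 ms RTT requires covering the bandwidth-delay
# product: 1e9 bits/s * 0.05 s / 8 = 6.25 MB of data in flight, far more than
# the classic 64 KB window, which tops out near 10 Mbit/s in this scenario.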
Student Uses Artificial Intelligence to Understand Bee Behavior
University of Exeter (02/16/10)
A computer model of the foraging behavior of bumblebees could be used to determine future policies on genetically modified (GM) crops in the United Kingdom and Europe, says University of Exeter Ph.D. candidate Daniel Chalk. Chalk used artificial intelligence to study the movement of bees between fields. Although there is a concern about cross-pollination between GM and non-GM crops, the simulation suggests that bees are unlikely to affect crops through cross-pollination. "By creating a kind of 'virtual bee' I have been able to show for the first time how bees move over large areas, across and between fields," Chalk says. "My research has shown that bumblebees are very efficient foragers and will only travel long distances if they really need to." The model also could be used for bee conservation, as it could help identify landscapes that support bee activity.
PARC Works on Content-Centric Networking
InfoWorld (02/16/10) Krill, Paul
Palo Alto Research Center (PARC) CEO Mark Bernstein says its researchers are currently developing content-centric networking technology. The goal is to give each piece of content a unique identifier in the network so that users can access it wherever it resides. The research is being led by former Cisco chief science officer Van Jacobson. Bernstein says Jacobson "came to PARC about three years ago with a vision for overhauling how the Internet operates and moving it from a point-to-point plumbing problem to a more distributed content model." Content-centric networking enables higher performance by more closely associating the needs of the individual user with specific content, Bernstein says. "The goal there is to reduce all the overhead that right now is in the headers of messages and content flying around the network, to be able to strip all that out of the packets and really be able to allow more of the content to actually be delivered as opposed to all the overhead," he says.
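As a rough sketch of the idea, rather than PARC's actual protocol, the toy Python example below retrieves data by content name from whichever node happens to hold a copy, caching it along the return path; the node names and caching policy are illustrative assumptions.

class Node:
    def __init__(self, name, neighbors=None):
        self.name = name
        self.store = {}                  # content name -> data (local cache)
        self.neighbors = neighbors or []

    def publish(self, content_name, data):
        self.store[content_name] = data

    def request(self, content_name, seen=None):
        """Return data for a name from the nearest copy, caching on the way back."""
        seen = seen if seen is not None else set()
        seen.add(self.name)
        if content_name in self.store:
            return self.store[content_name]
        for peer in self.neighbors:
            if peer.name not in seen:
                data = peer.request(content_name, seen)
                if data is not None:
                    self.store[content_name] = data  # cache for later requests
                    return data
        return None

origin, edge, laptop = Node("origin"), Node("edge"), Node("laptop")
laptop.neighbors, edge.neighbors = [edge], [origin]
origin.publish("/acm/technews/2010-02-19", "newsletter text")
print(laptop.request("/acm/technews/2010-02-19"))  # found by name, now cached at edge and laptop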
Carnegie Mellon Joins Open Cirrus Test Bed for Advancing Cloud Computing Research
Carnegie Mellon News (02/15/10) Spice, Byron
Carnegie Mellon University (CMU) has joined Open Cirrus, an open source testbed established by Hewlett-Packard, Intel, and Yahoo! to advance cloud computing education and research. "Having a faculty like this and being able to participate in Open Cirrus will provide us with unprecedented opportunities for research and education," says Randal Bryant, dean of CMU's School of Computer Science. CMU will host an Open Cirrus computing cluster that will be used in conjunction with M45, its existing Yahoo!-provided Hadoop-based computing cluster. The new cluster has 2.4 terabytes of memory and almost 900 terabytes of storage. Much of CMU's research will be focused on how to make the cloud computing infrastructure faster, more reliable, and more energy efficient. "This site embodies our commitment to the collaborative, open source research environment that Open Cirrus promotes and to aggressively pursuing cloud computing research on this campus," says CMU professor Greg Ganger.
Cameras of the Future: Heart Researchers Create Revolutionary Photographic Technique
Biotechnology and Biological Sciences Research Council (02/14/10) Mendoza, Nancy
Scientists at the Biotechnology and Biological Sciences Research Council (BBSRC) and the University of Oxford have developed a way of capturing a high-resolution image and a very high-speed video on the same camera chip. The researchers say the tool will transform many forms of detailed scientific imaging and could provide inexpensive access to high-speed video with high-resolution still images from the same camera. They say the technology could have applications for everything from closed-circuit TV to sports photography. The researchers have developed "a really great idea to bring together high-resolution still images and high-speed video footage, at the same time and on the same camera chip," says BBSRC's Peter Kohl. The method works by dividing the camera's pixels into groups, each of which quickly captures its part of the bigger picture in a controlled succession during the time required to take a normal snapshot. The University of Nottingham's Mark Pitter is planning to compress the technology into an all-in-one sensor that can be built into cameras.
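A minimal sketch of that pixel-grouping idea, assuming a simple 2x2 interleaved layout rather than the researchers' actual design, appears below: each group of pixels samples the scene at a different sub-exposure time, so a single readout yields both a full-resolution still and a short quarter-resolution, high-frame-rate sequence.

import numpy as np

def capture(scene_frames):
    """scene_frames: four scene snapshots (H x W) taken during one exposure."""
    h, w = scene_frames[0].shape
    still = np.zeros((h, w))
    offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]  # each 2x2 position is one group
    for t, (dy, dx) in enumerate(offsets):
        # Group t records its interleaved pixels at sub-exposure time t.
        still[dy::2, dx::2] = scene_frames[t][dy::2, dx::2]
    # Each group on its own is a quarter-resolution frame in time order.
    video = [scene_frames[t][dy::2, dx::2] for t, (dy, dx) in enumerate(offsets)]
    return still, video

frames = [np.full((4, 4), float(t)) for t in range(4)]  # toy scene brightening over time
still, video = capture(frames)
print(still)                              # full-resolution composite of all four sub-times
print([frame.mean() for frame in video])  # four quarter-resolution frames, one per sub-time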
After Frustrations in Second Life, Colleges Look to New Virtual Worlds
Chronicle of Higher Education (02/14/10) Young, Jeff
Education through virtual worlds still appeals to some educators, although interest has waned due to the limitations of commercial environments such as Second Life. Undaunted, some colleges are constructing their own virtual worlds where they can assert more control. For example, Duke University researchers are leading Open Cobalt, a project to build an education-friendly virtual world system that operates with data stored on people's own computers. Another open source initiative is OpenSimulator, a clone of Second Life's server software. Any institution with a spare server and some additional staff time can use the OpenSimulator software and reshape the virtual environment as it sees fit, or it can rent system access from a company that has established servers with the software. Project founder and Boston College professor Aaron E. Walsh says more than 2,000 educators have set up accounts on OpenSimulator's Education Grid virtual world.
Boring Conversation? Let Your Computer Listen for You
New Scientist (02/12/10) Barras, Colin
Researchers are developing software that can make conversing with a computer more productive. Existing automatic speech recognition (ASR) technology is unreliable. "State-of-the-art ASR has an error rate of 30 to 35 percent, and that's just annoying," says Simon Tucker of the University of Sheffield, UK. The Massachusetts Institute of Technology's Alex Pentland says that even if ASR gets the words right, the results can be unsatisfactory because transcribing speech often makes for awkward reading. Tucker led a research team that developed Catchup, an intelligent ASR system that summarizes what has been said at a meeting. Catchup identifies the important words in an ASR transcript and edits out the unimportant ones, using a word's frequency to calculate its importance, and presents the results in audio form. The audio summary preserves some of the social signals embedded in speech, which could be lost in a simple transcription. Meanwhile, Pentland is leading a research effort to develop Meeting Mediator, a device that measures how much time four people participating in an audio conference spend talking.
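A simplified sketch of frequency-based importance scoring of the kind the article attributes to Catchup appears below; the stopword list, scoring rule, and keep ratio are assumptions, and the real system works on meeting audio rather than plain text.

from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "that", "we", "it"}

def compress_transcript(transcript, keep_ratio=0.5):
    """Keep only the most frequent (and therefore presumed important) words."""
    words = transcript.lower().split()
    counts = Counter(w for w in words if w not in STOPWORDS)
    ranked = [w for w, _ in counts.most_common()]
    keep = set(ranked[: max(1, int(len(ranked) * keep_ratio))])
    return " ".join(w for w in words if w in keep)

meeting = ("the budget review meeting covered the budget shortfall and "
           "the schedule and we agreed to revisit the budget next week")
print(compress_transcript(meeting))  # frequent words like "budget" survive the cut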
Researchers Find Huge Weakness in European Payment Cards
IDG News Service (02/12/10) Kirk, Jeremy
University of Cambridge researchers have pinpointed a major flaw in hundreds of millions of European payment cards that could enable criminals with a stolen card to complete transactions by entering any random personal identification number (PIN). The Cambridge researchers say that a vulnerability in the chip-and-PIN protocol can be exploited to fool a point-of-sale terminal into thinking that it has received the right PIN regardless of the number entered. Although such hacks require sophisticated knowledge of the chip-and-PIN system and some external hardware, "this flaw is really a whopper," says Cambridge professor Ross Anderson. A representative of U.K. Payments says that such an exploit is mostly implausible in a day-to-day environment, and that the Cambridge researchers' hack methodology is too "convoluted" for the average fraudster. However, Cambridge's Steven J. Murdoch warns that the actual exploitation process is very simple and could be carried out using much smaller equipment.
Abstract News © Copyright 2010 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]