Association for Computing Machinery
Welcome to the April 27, 2009 edition of ACM TechNews, providing timely information for IT professionals three times a week.

HEADLINES AT A GLANCE


G.E.'s Breakthrough Can Put 100 DVDs on a Disc
New York Times (04/27/09) P. B4; Lohr, Steve

General Electric (GE) holographic researchers have announced a digital storage breakthrough that will enable standard-sized discs to hold the equivalent of 100 DVDs. Although the advancement is currently only a laboratory success, experts say the technology could lead to a new generation of storage devices with a wide range of uses in commercial, scientific, and consumer markets. Holographic storage could potentially hold far more data in the same space than the conventional optical technology currently used in DVDs and Blu-ray discs. Holographic storage technology had not been on track for mainstream use, but the GE advancement could put it on that path. GE researchers took a different approach from previous research, relying on smaller, less complex holograms, a technique known as microholographic storage. A major challenge for the researchers was finding the materials and techniques needed to enable smaller holograms to reflect enough light for their data patterns to be detected and retrieved. The recent advancement led to a 200-fold increase in the reflective power of their holograms, putting the technology at the bottom of the range of light reflections readable by current Blu-ray machines. GE expects that when holographic discs are introduced, possibly in 2011 or 2012, the technology will cost less than 10 cents a gigabyte.
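As a rough check of those figures (assuming the roughly 4.7 GB capacity of a single-layer DVD, which the article does not state):

\[
100 \times 4.7\ \text{GB} \approx 470\ \text{GB},
\qquad
470\ \text{GB} \times \$0.10/\text{GB} \approx \$47\ \text{per disc, at most.}
\]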


Researchers Work on Website Credibility Tests
Agence France Presse (04/21/09)

Scientists at the Know-Center, a technology research center in Austria, are developing software capable of quickly determining if a Web site is a credible source with reliable information. The program automatically analyzes and ranks blogs as being of high, average, or little credibility. Blogs are ranked by comparing statistical properties, such as the distribution of certain words over time, with news articles on the same topic from mainstream news sources previously determined to be credible. Know-Center researcher Andreas Juffinger says the program has delivered promising results and appears to be on the right track. Similarly, Japanese researchers are developing a program that mines the Web for different viewpoints on an issue and presents them to Web users, along with supporting evidence, as part of a "statement map" that illustrates how different opinions are related. "We really believe that 'statement maps' can help users come to conclusions about the reliability of a Web site," says Nara Institute of Science and Technology researcher Koji Murakami. Meanwhile, researchers at Italy's University of Udine are developing an algorithm to assign quality scores to Wikipedia articles and contributors. "Preliminary results demonstrate that the proposed algorithm seems to appropriately identify high and low quality articles," the research team writes in a paper presented at a recent World Wide Web conference in Madrid.
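The general idea can be illustrated with a minimal sketch (not the Know-Center's actual system): score a blog post by how closely its word distribution matches reference news coverage of the same topic. The tokenization, similarity measure, and credibility thresholds below are assumptions for illustration only.

```python
# Illustrative sketch only, not the Know-Center's system: rank a blog's
# credibility by comparing its word distribution with reference news
# articles on the same topic. Thresholds below are hypothetical.
from collections import Counter
import math

def word_distribution(text):
    """Relative frequency of each word in a text."""
    words = [w.lower() for w in text.split() if w.isalpha()]
    counts = Counter(words)
    total = sum(counts.values()) or 1
    return {w: c / total for w, c in counts.items()}

def cosine_similarity(p, q):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(p[w] * q[w] for w in set(p) & set(q))
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0

def credibility_rank(blog_text, reference_articles):
    """Label a blog as high/average/little credibility by its average
    similarity to news articles already judged credible."""
    blog_dist = word_distribution(blog_text)
    scores = [cosine_similarity(blog_dist, word_distribution(a))
              for a in reference_articles]
    avg = sum(scores) / len(scores)
    if avg > 0.6:        # hypothetical cut-offs
        return "high"
    if avg > 0.3:
        return "average"
    return "little"
```

As the summary notes, the researchers' actual comparison looks at how word usage is distributed over time, not just at aggregate frequencies as in this sketch.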


Cornell Maps the World's Photos
Cornell University (04/23/09) Redfern, Paul

Computer science researchers at Cornell University are using a supercomputer to download and analyze almost 35 million photographs, taken by more than 300,000 photographers, posted on the photo-sharing Web site Flickr. The goal is to develop new methods for automatically organizing and labeling large-scale data collections. The Cornell team developed techniques to automatically identify thousands of locations that people want to photograph. "We developed classification methods for characterizing these locations from visual, textual, and temporal features," says Cornell professor Daniel Huttenlocher. "These methods reveal that both visual and temporal features improve the ability to estimate the location of a photo compared to using just textual tags." The scalability of Cornell's method could allow the system to be used for automatically mining the data in extremely large sets of images, possibly leading to an online travel guidebook that automatically identifies the best places to visit. The researchers used a mean shift procedure for the data analysis, and ran their program on the Hadoop Cluster, a 480-core Linux-based supercomputer at the Cornell Center for Advanced Computing. Hadoop uses a computation paradigm called Map/Reduce to divide applications into small segments of work, each of which can be executed on any node of the cluster.
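For readers unfamiliar with the technique, the following is a minimal sketch of mean shift clustering applied to geotagged photo coordinates, the kind of analysis described above. It uses scikit-learn and a few hypothetical coordinates for illustration; Cornell's actual pipeline ran as Map/Reduce jobs over the full Flickr collection on its Hadoop cluster.

```python
# Minimal mean shift sketch on hypothetical (latitude, longitude) photo data.
# Illustrates the clustering technique only, not Cornell's actual code.
import numpy as np
from sklearn.cluster import MeanShift

photo_coords = np.array([
    [40.6892, -74.0445], [40.6893, -74.0440], [40.6890, -74.0450],  # near the Statue of Liberty
    [48.8583,   2.2945], [48.8584,   2.2950],                       # near the Eiffel Tower
])

ms = MeanShift(bandwidth=0.01)        # bandwidth (in degrees) sets the cluster radius
labels = ms.fit_predict(photo_coords)

# Each cluster center approximates a frequently photographed location.
for center, count in zip(ms.cluster_centers_, np.bincount(labels)):
    print(f"hotspot at {center} with {count} photos")
```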


CSIRO Telescope Gets Bandwidth Boost, Turns Data Stream Into Torrent
Computerworld Australia (04/21/09) Edwards, Kathryn

Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) has almost completed an upgrade of its radio telescope. The $12 million project will significantly increase the bandwidth of the Australia Telescope Compact Array from 128 MHz to 2 GHz. "It's like going from dial-up access to the universe to having broadband," says CSIRO Australia Telescope National Facility (ATNF) assistant director Graeme Carrad. He says that 80 times more data will flow from the antennas to the rest of the system. CSIRO chief executive Megan Clark says the Compact Array Broadband Backend project, which makes use of 32 special processing boards, each with 26 layers and 4,000 components, will lead the way in next-generation signal processing. Almost every piece of equipment the signals pass through has been changed except the set of six radio dishes and receivers that perform the initial amplification of cosmic radio signals. The telescope will be four times more sensitive to faint signals, and will more quickly detect cosmic objects traveling at a wider range of velocities, says ATNF's Lewis Ball. "They'll also be able to pick up new features in objects that have never been studied before," Ball says.


I.B.M. Computer Program to Take on 'Jeopardy!'
New York Times (04/27/09) P. A11; Markoff, John

IBM is developing software designed to compete against human "Jeopardy!" contestants, which, if successful, could mark a major advancement in artificial intelligence. Beating "Jeopardy!" will require a program that can analyze an almost infinite range of relationships and make subtle comparisons and interpretations. The creators of the system, which has been named Watson after IBM founder Thomas J. Watson Sr., say they are not confident that it will be able to successfully compete on the show. Human "Jeopardy!" champions generally provide correct responses 85 percent of the time. "The big goal is to get computers to be able to converse in human terms," says IBM artificial intelligence researcher David A. Ferrucci. "And we're not there yet." The researchers say the goal is to develop software that understands human questions and responds to them correctly. In a match that IBM has negotiated with "Jeopardy!" producers, the program will receive questions as electronic text, while the human contestants will both see and hear the questions. The computer will answer and choose follow-up categories using a synthesized voice. To simulate the dimensions of the challenge faced by human contestants, the computer will not be connected to the Internet, but will have to draw on data that was processed and indexed before the show. Carnegie Mellon University computer scientist Eric Nyberg, who is working with IBM on computer systems capable of answering questions not limited to specific topics, says the difficulty is getting the computer to understand the question to be searched.


Wireless Networks Can Now Be Truly Wireless
Swedish Research Council (04/21/09)

Researchers at Karlstad University in Sweden are collaborating with researchers at Deutsche Telekom Laboratories to test technology that promises to maintain the capacity of wireless networks across an entire city or in sparsely populated areas. "We are researching entirely wireless connection points, or mesh nodes, that is, the points where users connect their computers to the Internet," says Karlstad professor Andreas Kassler. Under the new technology, the mesh nodes communicate with each other wirelessly rather than each requiring its own connection to the Internet. The team will test the approach in an experimental environment at Karlstad in which each node uses several network cards and communicates on different frequencies simultaneously, so the network's capacity does not drop as traffic is relayed between nodes. There also are plans to follow up with a test in a real urban environment in Berlin. "Telephone and Internet operators are interested in this technology since it makes it less costly to build networks," Kassler says.


Google Tries Jump-Starting 3D Web With O3D
CNet (04/21/09) Shankland, Stephen

Google has released O3D, software designed to bring accelerated three-dimensional (3D) graphics to Web browsers. The program is a browser plug-in for Internet Explorer, Firefox, Safari, and Chrome. It provides an interface that enables JavaScript programs to directly access a computer's graphics chips, potentially leading to better browser games and applications. Yahoo!, Microsoft, and other companies also are developing 3D Web-based applications. Meanwhile, Mozilla, Firefox's backer, and the Khronos Group, which oversees the OpenGL 3D interface standard, have announced an effort to build a 3D Web interface that takes a different approach. "OpenGL tends to be a lot of code to write, even for something simple, but OpenGL gives you a lot of control," says Google engineering director Matt Papakipos. "Ours is at a higher level. It takes fewer function calls, so it's easier to get stuff on the screen." Google believes that multiple 3D interfaces may one day be supported in Web browsers, and has been working on the O3D open-source plug-in for two years. The plug-in is designed for programmers exploring what is possible with 3D on the Web.


Mobile Drawings Become Context-Aware
Helsinki Institute for Information Technology (04/20/09) Noronen, Visa

Researchers at Finland's Helsinki Institute for Information Technology have developed software that enables artists to copy drawings from a digital paper notebook to the Web. The Atwink mobile application uses Bluetooth to send a drawing via a mobile phone, and it also delivers information on the picture's location, time, and artist. The program is designed to attach global positioning system data and other context-sensitive information to the drawing, as well as information about other pens within Bluetooth range. Users can attach a mobile phone photo to the original drawing and view it through an optimized user interface, manage drawings and mobile photos online for personal desktop use or for sharing on social media services, archive drawings, and compare the original drawing to others made in the same location.


The Grill: WikiScanner Creator Virgil Griffith
Computerworld (04/20/09) King, Julia

California Institute of Technology doctoral candidate in computation and neural systems Virgil Griffith created WikiScanner, software that reveals who has made edits to a Wikipedia entry. Griffith says he developed WikiScanner to create minor public disasters for organizations after seeing that congressmen were caught hiring staffers to monitor and clean their Wikipedia pages. Currently, Griffith is working on the Tor2Web project, which builds on Tor, a network that enables Internet users to access Web sites anonymously. Tor has a little-known feature that allows users to host a site within Tor itself, and Tor2Web makes such hidden sites reachable by people who are not using Tor. Griffith says the system is the online equivalent of meeting in a dark alley. Griffith also is working on developing conscious machines that can feel, which he says is the reason he is in graduate school. "We are machines that can feel," he says. "Machines that can feel technically are just like us, but better understood." Griffith says he spends most of his time doing theory work on trying to quantify the complexity of neural systems.


University of Utah Researchers Test New Blast Simulation Tool on Ranger
University of Texas at Austin (04/15/09) Dubrow, Aaron

The University of Utah's Center for the Simulation of Accidental Fires and Explosions (C-SAFE), funded by the Department of Energy and the National Science Foundation, is using advanced mechanical, chemical, and physical models to predict how different containers and devices will react when they combust. "The C-SAFE project started in 1997 with the goal of pushing the state-of-the-art in scientific simulation technology," says C-SAFE deputy director Charles Wight. "We wanted to demonstrate how you can bring a group of people together with expertise in various areas--computer science, mechanical engineering, chemical engineering, and chemistry--to make a code that reflects the underlying fundamentals of an energetic device embedded in a fire, and can predict behavior that you can't do experiments for." The C-SAFE project will use the Ranger supercomputer at the University of Texas at Austin's Texas Advanced Computing Center, with the goal of making the research available to the scientific community through TeraGrid, a nationwide network of academic high-performance computing centers. The C-SAFE researchers are using Ranger to create highly resolved simulations, exposing small-scale effects within the combustion process and demonstrating the robustness of the code for large-scale simulations. Explosions are particularly difficult to simulate because they involve several different branches of science, including chemistry, engineering, and physics, and the simulations must model phenomena across a wide range of length and time scales.


IBM's Grand Plan to Save the Planet
Fortune (04/23/09) O'Brien, Jeffrey M.

Power outages, disease epidemics, traffic congestion, and other seemingly intractable global problems can be attributed to a dearth of quality information, and IBM CEO Sam Palmisano envisions using smart networks and other technologies to track, store, and analyze such information for the betterment of the planet. "We can solve congestion and pollution," he says. "We can make the grids more efficient. And quite honestly, it creates a big business opportunity." Palmisano shifted IBM's business focus to global consulting and data analytics, and has spent $50 billion on acquisitions and research and development as part of IBM's smarter planet initiative. One example of this initiative was an effort to significantly reduce downtown traffic in Stockholm, Sweden, through a congestion-pricing scheme that would charge vehicles for passing through city limits at various times of day. Government officials were adamant that the system accurately identify more than 99 percent of all vehicles, and IBM deployed overhead cameras using optical character recognition software. IBM also is involved in a CenterPoint Energy project to create a commercially viable self-healing electric grid in order to minimize outages through the use of smart meters, power rerouting algorithms, and other technologies.


Beating the Back-Up Blues
University of Leeds (04/03/09)

Physicists at the University of Leeds and scientists at IBM Research's Zurich lab have gained a better understanding of racetrack memory, which uses spin transfer to switch the magnetism of millions of tiny regions and push them to a different location along a nanowire. Using a special electron microscope, the team imaged a wall between two magnetic domains lying in a notch in the side of the wire, and then measured the current needed to blow the wall out of differently shaped notches. The team was attempting to reduce the current, and thus the power, that would be needed to move the information along the wire. "The reason why the hard disk on your computer is likely to break is because it has moving parts which eventually wear out, but the racetrack method of storing information is much more reliable as all the parts are static," says Leeds professor Chris Marrows. Racetrack memory would be 100 times cheaper than flash memory, and would enable computers to boot up almost instantly. The researchers say the technology still needs better materials, but magnetic racetrack memory could be ready within 10 years.


The iCampus Technology-Enabled Active Learning Project at MIT: An Interview With Phillip Long
Innovate (05/09) Vol. 5, No. 4, Morrison, James L.; Long, Phillip

Former Massachusetts Institute of Technology (MIT) professor Phillip Long discusses the recently concluded iCampus project, a research collaboration between MIT and Microsoft Research focused on developing and implementing technologies to facilitate more effective learning. Driving the project were concerns about low attendance and a high drop-out/failure rate for certain courses. Long cites as an example the Technology-Enabled Active Learning (TEAL) initiative, a physicist's redesign of his introductory physics class to improve attendance and reduce attrition by adopting a more dynamic teaching strategy that includes peer instruction. The TEAL model organizes students into teams, each of which is given three computers, a projector, a ceiling-mounted camera, and a whiteboard so that students can share work that might be valuable to the rest of the class. Through the iCampus program, MIT established the iLabs project to put repeatable and sustainable experiments online using standardized Web services to meet common needs. Long says these needs include a way of authenticating and authorizing access to each experiment, a data storage function, and a scheduling service. The iLabs remote laboratory software architecture was developed to embed specific learning objectives into the interface. The long-term continuance of iLabs is the goal of a still-embryonic consortium of educational, governmental, and corporate institutions.


Abstract News © Copyright 2009 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]