Welcome to the November 17, 2008 edition of ACM TechNews, providing timely information for IT professionals three times a week.
HEADLINES AT A GLANCE
Burned Once, Intel Prepares New Chip Fortified by Constant Tests
The New York Times (11/17/08) P. B3; Markoff, John
Despite rigorous stress testing on dozens of computers, Intel's John Barton is still nervous about the upcoming release of Intel's new Core i7 microprocessor. Even after months of testing, Barton knows that it is impossible to predict exactly how the chip will function once it is installed in thousands of computers running tens of thousands of programs. The new chip, which has 731 million transistors, was designed for use in desktop computers, but the company hopes that it will eventually be used in everything from powerful servers to laptops. The design and testing of an advanced microprocessor is among the most complex endeavors humans have undertaken. Intel now spends $500 million annually to test its chips before selling them. Even so, it is impossible to test more than a fraction of the possible states the new Core i7 chip can be put into. "Now we are hitting systemic complexity," says Synopsys CEO Aart de Geus. "Things that came from different angles that used to be independent have become interdependent." In an effort to produce error-free chips, Intel in the 1990s turned to a group of mathematical theoreticians in computer science who had developed advanced techniques, known as formal methods, for evaluating hardware and software. In another effort to minimize chip errors, the Core i7 also contains software that can be changed after the microprocessors are shipped, giving Intel the ability to correct flaws after the product's release.
Fighting Traffic Jams With Data
The Wall Street Journal (11/17/08) P. B5; Cheng, Roger
Drivers may soon be able to avoid traffic jams through the use of light-emitting diodes, smart phones, GPS systems, and mobile sensors. University researchers are developing systems that would enable cars to communicate with each other and relay critical information such as traffic speed, weather, and road conditions to other cars and drivers. Such information could be used to find faster routes or provide drivers with live traffic feeds on their cell phones. "The interest has gone red hot in the last year as the auto industry realizes this is a component of improving safety," says Boston University (BU) professor Thomas Little. Massachusetts Institute of Technology professor Hari Balakrishnan is working on the CarTel project, which has been deployed on a fleet of taxis and limousines and uses mobile sensors to record real-time information on the location and speed of the vehicles and the condition of the roads. The sensors send data back to a central computer that calculates the traffic patterns and can predict the optimal route. As part of the CarTel project, Balakrishnan developed a method of connecting to a Wi-Fi network in 400 milliseconds, which is necessary for cars that pass by a network too quickly to connect using conventional methods. BU, the University of New Mexico, and the Rensselaer Polytechnic Institute also are conducting similar research. The joint project is focused on delivering traffic and car information through flashing headlights, brake lights, and traffic signals. Little says that giving cars the ability to communicate with each other will be crucial for managing traffic and avoiding accidents.
Supercomputing Conference Turns 20
HPC Wire (11/16/08)
Patricia J. Teller, general chair of the 20th annual Supercomputing Conference (SC08), cosponsored by ACM, describes high-performance computing as "a catalyst for advancing the state-of-the-art in science, engineering, and other disciplines." This year's conference will include Technology Thrusts that focus specifically on biomedical informatics and energy. Teller says the SC08 Technical Program will reflect the essential role that multicore processors are playing in the shifting nature of supercomputing, and notes that the size and scope of the conference have expanded enormously compared to years past. She says the event also will feature a museum-quality exhibit spotlighting the past two decades of the conference and related technological innovations. "The idea of thrust areas is to showcase scientific and technological advances that have been facilitated by high-performance computing systems and expertise," Teller says. She is a professor at the University of Texas at El Paso, and her current research areas include dynamic adaptation of operating systems and computer architectures, parallel and distributed computing, workload characterization, education, and performance evaluation, modeling, and enhancements, specifically in regard to I/O, checkpoint/restart, and multicore architectures.
Researchers Take a Step Ahead in Quantum Computing
IDG News Service (11/14/08) Shah, Agam
Researchers say they have made a discovery that could lead to a fully functioning quantum computer. The researchers have found a way to preserve the data-storing states of electrons for a longer period, which would enable a system to process data more coherently and run programs more effectively. Gavin Morley, a researcher at the London Centre for Nanotechnology, a joint venture between University College London and Imperial College London, worked with researchers from several institutions, including the University of Utah. The researchers' new technique uses the magnetic states of electrons to store data. Quantum bits run programs by spinning, but sometimes the quality of the electrons degrades, putting them into an undesirable state, called quantum noise, which can cause users to lose control of the program. By applying a particular magnetic field, the researchers used a current to determine the state of an electron without disturbing it, achieving a state lifetime 5,000 percent longer than in any similar experiment so far, Morley says. The researchers hope their work will enable them to build a quantum supercomputer within the next 15 to 20 years.
A New Congress, a New Approach to Technology?
CNet (11/13/08) Condon, Stephanie
Although presidential elections capture the attention of the public, the U.S. Congress may be more important in setting the country's technology policies. Congressional Democrats working with the new Obama administration might be more eager to approve legislation that stalled in the 110th Congress, including spyware regulations and a shield law that would protect some bloggers. Some changes in House and Senate committee leadership are expected, including the chairmanship of the Energy and Commerce Committee, which oversees green tech and Internet regulation. The House Judiciary Committee also is expected to reorganize a key subcommittee due to increased interest in intellectual property issues. Other issues expected to be addressed next year include Net neutrality, consumer privacy, regulation of electronic medical records, and patent reform. How Congress intends to address tech policy will become clearer once the Democratic caucus decides who its committee chairs will be. Senator Jay Rockefeller (D-W.Va.) is the logical choice to chair the Commerce, Science, and Transportation Committee, which will oversee the digital TV transition and has jurisdiction over a variety of technology issues. Rockefeller would probably push forward broadband deployment legislation, as he has tried to do for years, and would encourage public-private partnerships in scientific research.
Free Software Gets an Education
ICT Results (11/12/08)
The European Union-funded SELF project is working to educate teachers, software developers, researchers, IT managers, and citizens on using open source software. SELF researchers have created an online platform for developing and collaboratively distributing education materials on free software. The SELF team hopes that their platform will become a Wikipedia for anyone looking for a better understanding of open source software. The platform allows individuals to contribute and update educational materials on a variety of topics, from simple articles to structured courses on subjects such as the Linux operating system and complex scientific programs. SELF project coordinator Wouter Tebbens says the effort was launched when participants realized that a lack of education resources was holding back the adoption of free software in Europe and other places. "We identified four factors holding back the adoption of free software: Firstly, there is a lack of awareness about what free software is. Secondly, there is a perceived lack of technical support for free software products. Thirdly, teachers and trainers are mostly unprepared to teach it. And, fourthly, they didn't have the necessary resources," Tebbens says. The SELF team believes that providing free access to education materials will give potential users fewer excuses not to learn, teach, and adopt open source software.
Computer Science Outside the Box
Computing Community Consortium (11/12/08) Lazowska, Ed
Computing Community Consortium (CCC) chairman Ed Lazowska attended the "Computer Science Outside the Box" workshop hosted by several organizations, including the National Science Foundation and the CCC. One of the insights he took away from the event was that smart decisions can be aided by a model of computer science's evolution as an ever-expanding sphere. "Even when working inside the sphere, we've got to be looking outward," Lazowska writes. "And at the edges of the sphere, we've got to be embracing others, because that's how we reinvent ourselves." Lazowska observes that most computer science work is driven both by practical applications and by a desire to develop principles of lasting value. He contends that researchers may place too little worth on research without obvious utility, while also being too hesitant to turn away application-focused work that is unlikely to advance their own field. This applies both to the interfaces between computer science and other disciplines and to the intersections between sub-disciplines of computer science. "We've got to produce students who are comfortable at these interfaces," Lazowska says. A research project must be challenging enough to sustain the participant's interest yet tractable enough to be achievable, he says, and it must possess a long-term vision while being undertaken in increments.
Extreme Makeover: Computer Science Edition
Stanford University (11/12/08) Stober, Dan
Stanford University artificial intelligence researchers have developed ZunaVision, video-processing software that can remove an image on almost any planar surface in a video, such as a wall, floor, or ceiling, and replace that image with another image or video. ZunaVision could be used to create videos of users singing along with their favorite musician or preview a virtual copy of a painting before buying it. The technology is powered by an algorithm that first analyzes the video, paying special attention to the section of the scene where the new image will be placed. The new image has its color, texture, and lighting subtly altered to blend with the surroundings. Shadows present on the original image will be seen on the new image. The result is a photo or video that appears to be an integral part of the original scene, instead of an image pasted onto the video after it was shot. The algorithm, called 3D Surface Tracker Technology, also must manage occluding objects, such as someone walking in front of the original image. The algorithm handles most occluding objects by keeping track of which pixels belong to the photo and which belong to the person walking in the foreground. Camera motion also must be accounted for, as the geometry of the image may change as the angle and zoom change, which the software handles by building a pixel-by-pixel model of the area of interest in the video.
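The occlusion handling described above amounts to a per-pixel classification followed by selective compositing. The sketch below is purely illustrative and assumes a simple color-difference test; the function name, threshold, and interface are invented for this example and are not part of ZunaVision's actual 3D Surface Tracker code. Pixels inside the target region that still resemble the tracked surface receive the inserted image, while pixels that deviate strongly are assumed to belong to a foreground occluder and are left untouched.

```python
import numpy as np

def composite_with_occlusion(frame, warped_ad, surface_model, region_mask, thresh=30.0):
    """Blend a warped replacement image into a video frame, skipping
    pixels covered by an occluding object (e.g., a person walking by).

    frame, warped_ad, surface_model: HxWx3 float arrays.
    region_mask: HxW boolean array marking the tracked planar region.
    surface_model is the tracked appearance of the original surface.
    """
    # Pixels that still look like the tracked surface belong to the wall;
    # large color deviations indicate a foreground occluder.
    diff = np.linalg.norm(frame - surface_model, axis=2)
    visible = region_mask & (diff < thresh)

    out = frame.copy()
    out[visible] = warped_ad[visible]  # paste the new image only where the surface shows
    return out
```

A full system would also warp the replacement image with the tracked surface geometry and blend color, texture, and lighting, as the article describes; this sketch isolates only the occlusion-masking step.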
NASA: Future Space Missions to Rely on Human-Robot Partnership
Computerworld (11/12/08) Gaudin, Sharon
NASA is just beginning to scratch the surface of the role robots will play in future space missions, according to Carl Walz, director of advanced capabilities at NASA and a former astronaut. The Mars Lander used a robotic arm to collect soil for analysis in the spacecraft. The Mars rovers Opportunity and Spirit also have robotic parts, and the International Space Station has a robot. NASA is currently building robots that move on wheels or legs and have working arms and a cabin to carry human passengers, and it is conducting tests at Black Point Lava Flow, Ariz., with hopes of using the robots in space missions. The White House wants manned missions to Mars, but NASA is likely to make use of robots as well. The robots would be able to build a workstation or habitat structure before the astronauts arrive, and create rocket fuel out of gases in the atmosphere so that astronauts have fuel for their return to Earth. "We're planning to develop an outpost and maintain it, and we'll need to have machines to help humans and take on some of the overhead," says Walz. "When you're talking about really long trips, you're going to lead with your robots and follow [with humans] when the propulsion systems and life-support systems catch up."
The Next Step in Health Care: Telemedicine
Rochester Institute of Technology (11/12/08)
A multi-university partnership led by the University of Puerto Rico School of Medicine has tested the use of Internet2 to broadcast live surgeries, and presented its results at a meeting of the collaboration special interest group at the fall 2008 Internet2 member meeting in New Orleans. "This test demonstrates that by using the speed and advanced protocols support provided by the Internet2 network, we have the potential to develop real-time, remote consultation and diagnosis during surgery, taking telemedicine to the next level," says Gurcharan Khanna, a member of the research team and director of research computing at Rochester Institute of Technology. The partnership used Internet2 to broadcast an endoscopic surgery at the University of Puerto Rico to multiple locations in the United States. The research team also used a multipoint videoconference connected to the video stream, which allowed for live interaction between participants. The partnership is now focused on testing different surgical procedures and expanding to remote locations, with hopes of making the technology available for use in medical education and actual diagnostic applications.
Miniaturizing Memory--Taking Data Storage to the Molecular Level
University of Nottingham (11/11/08)
Computers are getting smaller, and as handheld devices become increasingly powerful, there is a need to develop memory formats that can satisfy the growing demand for information storage in small form factors. University of Nottingham researchers are exploring ways of using the unique properties of carbon nanotubes to create a cheap and compact memory cell that uses little power and writes information at high speed. Current memory technologies fall into three categories: dynamic random access memory (DRAM) is the cheapest and static random access memory (SRAM) is the fastest, but both require an external power supply to retain stored information; flash memory is non-volatile, retaining data without a power supply, but has slower read-write cycles than DRAM. Carbon nanotubes, made from rolled graphite sheets a single carbon atom thick, could lead to a new memory solution. If one nanotube sits inside a slightly larger tube, the inner tube will "float" inside the outer tube in response to electrostatic, van der Waals, and capillary forces. Applying power to the tubes pushes the inner tube in and out of the larger tube, connecting it to or disconnecting it from an electrode and creating the 0 and 1 states used to store information. When the power is switched off, the van der Waals force, which governs attraction between molecules, keeps the inner tube in contact with the electrode, yielding a non-volatile memory like flash.
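The bistable behavior described above can be expressed as a toy state machine. The sketch below is a hypothetical illustration only; the class name and interface are invented for this example and are not taken from the Nottingham work. Writing a bit models the electrostatic actuation that moves the inner tube into or out of contact with the electrode, and removing power leaves the state unchanged, mimicking the van der Waals adhesion that makes the cell non-volatile.

```python
class NanotubeMemoryCell:
    """Toy model of a telescoping-nanotube memory bit."""

    def __init__(self):
        # Inner tube starts retracted, out of contact with the electrode.
        self.in_contact = False

    def write(self, bit):
        # Electrostatic actuation pushes the inner tube against the
        # electrode (1) or retracts it (0).
        self.in_contact = bool(bit)

    def power_off(self):
        # With no power applied, van der Waals adhesion holds the inner
        # tube in its current position, so the stored state persists.
        pass

    def read(self):
        # Contact with the electrode closes the circuit: logical 1.
        return 1 if self.in_contact else 0
```

The key contrast with DRAM and SRAM is captured by `power_off` being a no-op: the mechanical position of the tube, not a maintained charge, encodes the bit.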
A Computing Pioneer Has a New Idea
The New York Times (11/17/08) P. B6; Markoff, John
Computing pioneer Steven J. Wallach has developed a new computer designed to solve many kinds of problems through adaptation. His company, Convey Computer, will offer a product, built around Intel microprocessors, that can reconfigure its hardware into a different "personality" to address different challenges. Supercomputers are typically designed to tackle one specific class of problems and are often composed of thousands or even tens of thousands of microprocessors, which consume enormous amounts of energy and pose frustrating programming challenges. The solution for many new supercomputers that must handle different classes of problems is to link different kinds of processors together like Legos. Wallach's concept taps field-programmable gate arrays, chips that offer both easy reprogrammability and the raw speed of dedicated hardware. His design tightly couples such chips to the microprocessor so that they appear to the programmer as a small set of additional instructions, giving a simple way to turbocharge a program. The Convey computer will initially focus on problems in industries that include financial services, oil and gas exploration, computer-aided design, and bioinformatics. Larry Smarr of the California Institute for Telecommunications and Information Technology at the University of California, San Diego, says the Convey computer's energy efficiency is its most important feature.
Future Phones to Read Your Voice, Gestures
Wired News (11/06/08) Ganapati, Priya
In the future, the buttons on cell phones and portable devices may be replaced with features that respond to voice commands and gestures. Such interfaces would enable users to speak to their phones instead of typing, point with a finger instead of clicking a button, and gesture instead of touching. Over the last few years, advances in display technology and processing power have turned smart phones into tiny yet highly capable computers, enabling phones to support a variety of multimedia and office applications. Experts believe the industry needs a new type of interface technology that will transform how users relate to phones, with traditional keypads and scroll wheels being replaced by haptics, advanced speech recognition, and motion sensors. The large single touch screens currently available could be replaced by smaller, multiple touch screens that allow a phone to fold closed, like most flip phones today, and users could interact with their phone simply by speaking. In a few years, mobile phones will likely come with embedded micro-projection devices that will enable them to project a screen or a keyboard onto any table or surface so users can navigate using the virtual interface.
Abstract News © Copyright 2008 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: email@example.com
Change your Email Address for TechNews (log into myACM)