ACM TechNews is published three times a week, on Mondays, Wednesdays, and Fridays.
ACM TechNews is intended as an objective news digest for busy IT professionals. Views expressed are not necessarily those of either HP or ACM.
To send comments, please write to email@example.com.
Volume 4, Issue 420: Wednesday, November 6, 2002
- "Is Our Infrastructure Open to Online Terror?"
Investor's Business Daily (11/05/02) P. A6; Howell, Donna
Sept. 11 and the threat of terrorist groups such as al-Qaida have spurred government and industry to push for better security for critical infrastructure, which relies more and more on electronic control. One particular area of emphasis is command-and-control systems such as Supervisory Control and Data Acquisition (SCADA) systems and Distributed Control Systems (DCS). Experts warn that a shortage of security and control-systems specialists, combined with a growing dependence on popular technologies and networking, is increasing vulnerability to cyberattack. For example, mi2g CEO D.K. Matai notes that Microsoft Windows products, which most digital attacks try to exploit, are being shipped with some of the latest command-and-control gear. He remarks that no large-scale command-and-control attacks have taken place yet, but government-sponsored simulations have demonstrated that a combined SCADA and physical attack could cause a cascade effect that inflicts significant damage. Control-system attacks are difficult to accomplish, but incidents of online reconnaissance of critical systems data by groups based in nations suspected of supporting terrorism are sparking concern. In response to such threats, the U.S. government and industry are trying to encourage companies to evaluate and bolster their security through initiatives such as the National Strategy to Secure Cyberspace. The value of systems that can detect anomalies remotely goes beyond the threat of cyberattacks: For example, a SCADA control system with the proper safeguards in place could have prevented the gas pipeline leak that caused a deadly explosion in Washington three years ago.
- "Tech Money Fueling Campaigns"
CNet (11/05/02); Bowman, Lisa M.; Kawamoto, Dawn
Although the tech industry has made fewer donations to political campaigns this year than last, it is contributing much more than it did 10 years ago: The Center for Responsive Politics reports that the tech sector gave $18.2 million to federal campaigns as of September, whereas a decade ago it contributed a mere $5.1 million. Tech interest in politics was galvanized in 1996, when a California proposal was put forward that would have allowed shareholders to more easily sue a company, a measure that the tech industry scrambled to defeat. The sector has also become politically active in response to other issues, such as Internet taxation, encryption export restrictions, and H-1B visa clampdowns. "Some companies have been through the political wringer a few times and have come out on the wrong end," notes former congressman and TechNet CEO Rick White. "So tech companies and their shareholders recognize it's important to have their story told to government." He notes that companies in regulated industries, such as telecoms, would do well to make campaign contributions, and adds that firms in industries that are not heavily regulated could also donate in order to net lucrative government contracts. Other factors that could be influencing the current round of tech campaign contributions include the promise of additional business from the new Homeland Security Department, campaign finance reforms that will eliminate soft-money donations after Election Day, accounting-practice changes for stock options that Congress may pass, and Hollywood's lobbying for legislation that would require tech companies to design products with anti-copying technology. The Center for Responsive Politics estimates that the communications and electronics industry made political contributions of $77.2 million in the 21 months ending in September.
- "Dreaming of a Digital Democracy"
Medill News Service (11/05/02); Madigan, Michelle
Internet voting, which would enable citizens to cast their votes electronically anytime and anywhere, would eliminate the need for polling booths, as well as lines and glitches, according to Mike Alvarez of the California Institute of Technology. Many states are gradually moving toward this goal by deploying touch-screen voting machines, while experts estimate that a complete transition to online voting will take between 10 and 15 years. Countries such as Switzerland and Great Britain have implemented or are planning to implement Internet voting systems, while private corporations are giving shareholders the option of voting online as well. In the 2000 general election, 87 votes in four states were cast online by military and government personnel and their families stationed overseas. A committee to study Internet voting will be set up under the election reform law President Bush signed last month, and MIT political scientist Stephen Ansolabehere urges that the committee establish standard security protocols. The security and privacy concerns are considerable, according to computer scientist David Jefferson: Hackers could stymie the voting process with Internet attacks, bogus Web sites, and viruses, while vote buying and selling would be even easier. Bryn Mawr College's Rebecca Mercuri thinks standalone Internet voting is unworkable, and that the addition of a "physical interface," such as a printed ballot, is the best solution. Still others believe that a receipt should be printed following an electronic vote in order to leave a paper trail, although Jefferson counters that this could encourage vote-selling.
- "Text Software Spots Intruders"
Technology Research News (11/06/02); Patch, Kimberly
Computer security researchers at the University of California at Davis are improving on anomaly detection schemes by using text characterization to categorize system calls. System calls are generated when software programs on the computer communicate with one another, which happens whenever a user enters commands. By assigning text tags to these system calls and creating a text document for each sequence of commands, computer science professor V. Rao Vemuri says text characterization can distinguish normal users from intruders fairly accurately. This type of protection would be more effective against new, poorly documented attacks than current schemes, which check for viruses based on activities logged in virus databases. Additionally, Vemuri says the technique can be made fast enough to catch intrusions in real time. Still, the researchers admit more work needs to be done to make the system more accurate, since administrators would likely turn it off if they received too many alerts in a day. At a 0.44 percent false-alarm rate, the scheme would produce 23 false alarms each day on a system handling 5,285 sessions a day, according to Bennet Yee, an assistant professor of computer science at the University of California at San Diego. Vemuri says his team is working on ways to improve accuracy, such as running three simultaneous text characterization programs based on different criteria and then flagging only the results they have in common.
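The idea can be illustrated with a toy sketch: treat each session's system-call sequence as a document, build a frequency profile from known-normal sessions, and flag sessions that look nothing like the profile. The call names, profiling method, and threshold below are invented for illustration; the article does not specify the team's actual features or classifier.

```python
from collections import Counter
from math import sqrt

def profile(sessions):
    """Build an average term-frequency vector over known-normal sessions."""
    total = Counter()
    for calls in sessions:
        total.update(calls)
    n = sum(total.values())
    return {call: count / n for call, count in total.items()}

def similarity(calls, prof):
    """Cosine similarity between a session's call counts and the normal profile."""
    counts = Counter(calls)
    dot = sum(counts[c] * prof.get(c, 0.0) for c in counts)
    norm_a = sqrt(sum(v * v for v in counts.values()))
    norm_b = sqrt(sum(v * v for v in prof.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

normal = [["open", "read", "write", "close"], ["open", "read", "close"]]
prof = profile(normal)
print(similarity(["open", "read", "write", "close"], prof))   # high (~0.97): looks normal
print(similarity(["socket", "connect", "exec", "exec"], prof))  # 0.0: nothing in common

# The article's false-alarm arithmetic: 0.44 percent of 5,285 daily sessions
print(round(0.0044 * 5285))  # about 23 alarms a day
```

The last line reproduces Yee's figure: at a 0.44 percent false-alarm rate, 5,285 sessions a day yield roughly 23 false alarms.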
- "SETI@home Yields to Pressure to Curb Cheating"
ZDNet Australia (11/04/02)
In response to complaints of rampant cheating in the SETI@home project, administrators have promised to clamp down on such practices. More than 800 of the project's major contributors signed a petition after participant Max Nealon revealed last week that the number of illegitimate work units being returned to SETI@home has climbed dramatically. Part of this revelation involved the disclosure by SETI@Netherlands team leader Rick Groenewegen that 41 percent of the roughly 8 million work units returned by his team were false, and he added that many teams have cheated since SETI@home was launched three years ago. Complaints of cheating from the SETI@home community were repeatedly ignored by the project's administrators at Berkeley, according to Groenewegen. Nealon stated that he believes most of the work returned by SETI@home's leading contributors is the result of cheating. SETI@home director David Anderson promised that the project would do its utmost to investigate suspicions of cheating and delete the accounts of any user found to have inflated their work-unit output. However, he admitted last month that manpower problems would likely take priority over cheating. Anderson is urging contributors to help make the Berkeley Open Infrastructure for Network Computing (BOINC), SETI@home's planned successor, cheat-proof.
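BOINC's core defense against result tampering is redundant computation: each work unit is sent to several independent hosts, and a result is accepted only when a quorum of returned answers agree. A minimal sketch of that idea (the host names, result values, and quorum size are invented for illustration, not BOINC's actual validator logic):

```python
from collections import Counter

def validate(results, quorum=2):
    """Accept a work unit's canonical result only if at least
    `quorum` independent hosts returned the same answer."""
    tally = Counter(results.values())
    answer, votes = tally.most_common(1)[0]
    if votes >= quorum:
        # Credit only the hosts that match the canonical result.
        credited = [host for host, r in results.items() if r == answer]
        return answer, credited
    return None, []  # no consensus: reissue the work unit

results = {"host_a": 0.731, "host_b": 0.731, "host_c": 0.992}  # host_c's result is bogus
answer, credited = validate(results)
print(answer)    # 0.731
print(credited)  # ['host_a', 'host_b'] -- host_c earns no credit
```

A cheater who fabricates output is outvoted by honest replicas, so inflating work-unit counts no longer pays.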
- "Hackers Fight China's Internet Curbs"
Wall Street Journal (11/06/02) P. B5F; Hiebert, Murray
Legislation recently introduced in Washington aims to battle Internet censorship in China by creating an Office of Global Internet Freedom, which would apportion $100 million over two years to support anti-censorship initiatives. The proposal was introduced by Rep. Christopher Cox (R-Calif.), who touts the Internet as "the most powerful engine for democratization and free exchange of ideas ever invented." China's Communist Party is worried that subversive organizations could use the Internet to undermine the government or expose state secrets to the public. A recent Rand study estimated that at least 25 people have been arrested in China over the past two years for online activities, while a crackdown on pornographic and "subversive" Web sites was initiated in early 2002 after a deadly incident in a Beijing Internet cafe. However, analysts doubt that the software being developed to help Chinese users bypass Internet blocks can prevent authorities from sniffing out users' activities. Furthermore, one senior U.S. administration official says he is "skeptical that any of the measures can be so effective that they can't be counteracted" by Chinese authorities. Nevertheless, hackers are forging ahead with programs designed to fight Chinese Internet censorship: One is SafeWeb's Triangle Boy, which enables users to access the World Wide Web via an encrypted channel; another is Dynaweb, which regularly changes its numerical Internet address so that the Chinese government will have a hard time blocking it. James Mulvenon, who co-authored the Rand study, doubts that the Internet will help bring about any significant changes in Chinese political power, at least in the short term.
- "Why Microchips Weigh Over a Kilogram"
Nature Online (11/02/02); Ball, Philip
A team led by Eric Williams of United Nations University in Tokyo has conducted a study concluding that the mass of materials and fossil fuels consumed in the manufacture of a typical microchip is hundreds of times greater than the mass of the final product. Using data collated from a United Nations Environment Program technical report, a Microelectronics and Computer Technology Corporation report, and an anonymous electronics company, Williams and his colleagues found that 1.6 kilograms of fossil fuel, 72 grams of chemicals, and 32 kilograms of water go into the production of an average 2-gram silicon chip with 32 MB of RAM. The team estimates that refining raw silicon into the high-grade silicon that chips require takes 160 times the energy needed to produce the raw silicon itself, and this step accounts for approximately 50 percent of the chip's total energy consumption. Fuels, solvents, and other secondary materials needed to precisely engineer chip components typically weigh 600 times the mass of the chip. Adding to the problem is the fact that some of the materials used in chip manufacture, such as polychlorinated biphenyls (PCBs), are toxic and environmentally harmful.
- "Texas Program Hopes to Fuse Nano and Manufacturing"
Nanotech Planet (11/05/02); Pastore, Michael
The University of Texas system is working to create a coalition of academia and industry that will make Texas a strong player in nanomanufacturing. The Integrated Nano Manufacturing Technology (INMT) program is a new extension of the university's existing Center for Nano- and Molecular Science and Technology (CNM). CNM and INMT director Paul Barbara says Texas is already home to many semiconductor and telecom companies, 19 of which have signed up to become industry affiliates of the INMT. The companies and the university, along with Rice University in Houston and Sandia National Laboratories, will work together to research techniques and products that are in line with commercial interests. Work already produced in the University of Texas system includes a nanotransfer printing (nTP) method developed by assistant professor Dr. Lynn Loo. She says that the INMT will help her better focus her research on industry needs. Her printing method is meant to avoid costly lithography processes and instead use stamp printing to mass-produce nanocircuitry for large, flexible displays. Under Dr. Grant Willson, the University of Texas is also researching step-and-flash imprint lithography, which is expected to produce features as small as 10 nanometers. Meanwhile, Dr. Brian Korgel is working on silicon- and germanium-based nanomaterials that would be used to create nanowires and nanoparticle electronics. Korgel says, "In just about every type of field you can think of, there's a potential application." The INMT expansion will also add several new faculty and a new course on nanomanufacturing. Barbara says, "We're committed to be a major force to create the resources to make nanomanufacturing a reality."
- "How Paper is Becoming Super Smart"
ZDNet AnchorDesk (11/03/02); Houston, Patrick
The Palo Alto Research Center (PARC) is attempting to commercialize an electronic paper product called SmartPaper, which consists of polymer beads 100 microns in diameter that are black on one side and white on the other, suspended in an oily medium and sandwiched between two layers of Mylar. Applying voltage to the paper causes the beads, whose black and white sides carry opposite charges, to rotate and form an image. SmartPaper is reflective and does not require backlighting, so images are visible at any angle and in bright sunlight; it is also bistable, so the image does not have to be refreshed repeatedly, which saves battery power. Furthermore, the electronic paper will cost much less than liquid crystal displays (LCDs). In addition, the paper can be folded, rolled up, or curved, which opens up even more potential applications. Gyricon Media, a PARC spinoff, intends to beta test displays based on SmartPaper in retail stores next year. The displays will be programmed to change using software and a wireless connection to the retailers' SKU databases. SmartPaper's drawbacks include poor resolution compared with ink-jet printers or computer displays, and an image-refresh speed too slow for word processing or video. SmartPaper is one of several display technologies that manufacturers hope will replace LCDs, at least in handheld devices such as e-books, cell phones, and PDAs.
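The bead behavior described above amounts to a bistable pixel: a field of one polarity rotates the black face up, the opposite polarity rotates the white face up, and with no field applied the bead simply keeps its last state, which is why the display holds an image without power. A toy model (the voltage threshold is an invented illustrative value, not a Gyricon specification):

```python
class GyriconPixel:
    """Toy model of a bichromal bead: bistable, draws power only to change."""
    THRESHOLD = 1.0  # volts needed to rotate the bead (illustrative value)

    def __init__(self):
        self.face = "white"

    def apply_voltage(self, volts):
        # A field of one polarity rotates the black face up, the opposite
        # polarity the white face; below the threshold the bead does not move.
        if volts >= self.THRESHOLD:
            self.face = "black"
        elif volts <= -self.THRESHOLD:
            self.face = "white"
        return self.face

pixel = GyriconPixel()
print(pixel.apply_voltage(1.5))   # black
print(pixel.apply_voltage(0.0))   # black -- the image persists with no power
print(pixel.apply_voltage(-1.5))  # white
```

The middle call is the bistability the article describes: removing the voltage leaves the image in place, unlike an LCD that must be continuously driven.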
- "Old Industry Legends Partner for Next-Generation Displays"
InternetNews.com (11/04/02); Pastore, Michael
DuPont Displays, Sarnoff, and Lucent Technologies' Bell Labs will team up to develop flexible organic thin-film transistor (TFT) technology that could be incorporated into the next generation of displays. The project will marry DuPont's expertise in organic light-emitting diode (OLED) displays with Sarnoff's experience in active-matrix TFTs and video display systems, while Bell Labs will devise new organic TFT materials and design processes. "Organic-TFTs...have the potential to transform the industry from a capital-intensive batch process on glass to a much lower cost, higher throughput process compatible with plastic substrates," notes Ray Camisa of Sarnoff. The partners' research will be partly funded by a grant from the National Institute of Standards and Technology's (NIST) Advanced Technology Program (ATP). The technology they develop could significantly cut the cost of display backplanes by using plastic instead of silicon, and promote the manufacture of low-cost flexible displays. A DisplaySearch report estimates that mobile phone, PDA, and camera applications will cause OLED sales revenues to skyrocket from $85 million to $3 billion between 2002 and 2007. OLEDs will also account for more than 4 percent of the flat-panel display market by 2007, according to the report. Meanwhile, DuPont forecasts that the new technology will be available by that time.
- "Dust-Sized Sensors Could Monitor Weather"
United Press International (10/30/02); Burnell, Scott
Researchers are looking at micro-electromechanical systems (MEMS) to act as airborne environmental sensors that could gather real-time data for meteorological and military purposes. By dispersing hundreds of networked, micron-sized sensors from miles up in the atmosphere, scientists could generate a 3D model that is not possible with current radar and satellite technologies. Ensco scientist John Manobianco says that military applications will likely come first, with applications for studying the weather coming in about a decade. Groups from NASA that are involved in planetary exploration are also researching these systems for future missions, he says. However, many obstacles remain, including finding innovative ways to keep components small enough that they can be produced with the same manufacturing process used to make computer chips. Options for power sources include small fuel cells or solar panels, while a GPS system as small as a computer chip would also need to be developed. Furthermore, the devices would have to emit a radio signal strong enough to maintain sensor-to-sensor transmissions in turbulent weather. About 1,000 sensors for systems integration and network communications tests will need to be built in order to proceed with development, according to Manobianco. Johns Hopkins University technology manager Paul Ostdiek says that setting such an ambitious and potentially powerful research goal is a good way to drive science and keep researchers motivated.
- "Speech Technology Loses its Kooky Luster"
EE Times Online (11/01/02); Mokhoff, Nicolas
Speech recognition technology is moving from a niche market to much wider applications, including voice authentication for a new generation of wireless devices; progress is also being made toward the goal of enabling computers to take dictation, although Xuedong Huang of Microsoft's .Net Technologies Group does not expect this target to be reached for about a decade. As handhelds and other devices get smaller and boast more features, traditional input interfaces such as keyboards are becoming cumbersome, so voice applications are seen as a critical means of access. One area of emphasis is the Speech Application Language Tags (Salt) standard, which is being used to build Web-enabled speech applications. Jim Larson of Intel reports that the World Wide Web Consortium (W3C) is planning to embed Salt specs into the next version of VoiceXML 2.0, while Intel and Microsoft announced their intentions to build enabling technologies and a reference design for Web-enabled, Salt-based speech applications at the Intel Communications Summit. Microsoft also intends to make Salt technologies available to enterprise application developers through a partnership it entered with SpeechWorks International. Keypad-free cell phones could be available as soon as 2003, and adding voice recognition to them is expected to benefit the mobile Internet. Such products, for example, could enable users to call up Web pages by mentioning their addresses. Security is another area that speech recognition could positively impact--back in February, Mitsubishi Electric demonstrated the Trium Mondo PDA, which features software that uses voice recognition to verify user identity.
- "Meet the New Silicon Speed Demon"
Business Week Online (11/05/02); Port, Otis
A 15-year development effort bore fruit recently as IBM announced the creation of the world's fastest silicon-based transistor, a device made from silicon germanium (SiGe) that operates at 350 GHz, four times as fast as current leading commercial transistors. The transistor is bipolar, thus lending itself to the processing of real-world analog signals endemic to telecom systems, rather than digital bits. The device is but the herald of a new generation of chips that will be characterized by faster speeds and smaller features--in 2003, circuit lines are expected to shrink from 130 nm to 90 nm, and then to 65 nm two years later. Intel predicts that almost every electronic device could have wireless connectivity thanks to microprocessors and memory chips with built-in radio circuits by 2005. One of the challenges in achieving such systems-on-a-chip is bridging the gap between digital and analog chip design. Silicon used in digital chips is too slow to process high-speed analog signals, while silicon-chip factories would have to get rid of a lot of costly gear in order to manufacture the compound semiconductors typical of analog chips. However, IBM Microelectronics chief technologist Bernard Myerson is confident that SiGe will be able to integrate analog and digital, and become "imperative" for future system-on-a-chip designs. On November 4, IBM announced that it will detail its 350 GHz transistor at the International Electron Devices Meeting (IEDM) in San Francisco in early December.
- "Piggybacking Creates Supercomputer"
Edmonton Journal Online (11/05/02); Rusnell, Charles
In an unprecedented move, the University of Alberta has created a virtual supercomputer that links over 1,200 machines across approximately 19 Canadian learning institutions, according to U of A computer science professor Paul Lu, who designed the software component with two graduate students. The supercomputer will run an experiment on a molecular chemistry research problem that would take an individual computer six years to solve. Lu says the project's implementation depended on the participation of C3.ca, a university-based nonprofit that focuses on computer resource sharing. The experiment was unable to tap all the available computing power because networking the computers took longer than expected, explains computer scientist Jonathan Schaeffer. "Our projections are that we will reach about 3.5 years of computing in one day, which is still a fantastic success," he notes. Other academic institutions participating in the experiment include McGill University and the universities of Saskatchewan, New Brunswick, and British Columbia. Canadian university researchers wish to carry out world-class research but lack sufficient computing power, which makes the virtual supercomputer critical. Should the trial run prove successful, Schaeffer hopes that other Canadian scientists will be able to use the network at least three days a month.
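The quoted figures imply near-linear scaling: delivering 3.5 years of computation per day of wall-clock time is a speedup of roughly 1,278, close to the size of the 1,200-machine pool. A back-of-the-envelope check:

```python
serial_years = 6      # time for one machine to solve the problem, per the researchers
years_per_day = 3.5   # Schaeffer's projected throughput of the virtual supercomputer

speedup = years_per_day * 365                 # 3.5 years of work per day of wall time
days_needed = serial_years * 365 / speedup    # wall-clock days for the whole problem

print(round(speedup))          # ~1278x a single machine, near the 1,200-node pool size
print(round(days_needed, 1))   # ~1.7 days to complete six years of serial work
```

That the per-machine speedup comes out slightly above 1 simply reflects the rough figures: the pool is "over 1,200" machines of varying power, not 1,200 identical copies of the reference computer.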
- "Distributed Computing: Power Grid"
Australian Technology & Business (10/28/02); Withers, Stephen
Grid computing, an offshoot of the distributed computing initiative, is breaking out of academia and expanding into the commercial sector. Notable grid computing efforts include the $53 million TeraGrid, which the U.S. National Science Foundation is financing; the U.S. National Grid, the North Carolina Bioinformatics Grid, and the University of Pennsylvania Grid, which IBM is participating in; the Globus Toolkit and the NASA Information Power Grid, which SGI is heavily involved in; and Australia's Grid and Next Generation Network (GrangeNet), which will provide a high-speed backbone network to serve as a base for grid services delivered to four major metropolitan areas. Argonne National Laboratory's Ian Foster says vendors such as IBM appear eager to use grid computing to allow organizations to share internal heterogeneous resources, while others are eyeing utility computing, although security issues need to be ironed out. In Australia, projects such as the Grid Computing and Distributed Systems (GRIDS) Laboratory's GridBus initiative aim to build a grid economy infrastructure as well as a scheme to manage grid computing accounting and payments. Other applications--data analysis and jet engine design, for instance--are focusing on clustering, and universities and other organizations are switching from building proprietary clusters to relying on vendors. "Grid computing technology allows you to coordinate and manage multiple distributed resources, and clustering supports a collection of resources under a single administrative domain," notes Foster. However, experts such as SGI CEO Bob Bishop take issue with clustering, claiming that administering software across multiple nodes and a lack of a single, large memory space are two major problems. 
Distra has created a business-oriented distributed computing application by building a Java-based distributed transaction processing platform that features an electronic payment switch, and Distra's Rod Dew says the product can help cut costs for hardware, maintenance, software licensing, and customization.
- "The Mobile Home of the 21st Century"
Red Herring (10/30/02); Bruno, Lee
The proliferation and increasing sophistication of wireless and voice networks are transforming living environments, particularly households, and several companies hope to capitalize on this transformation by studying how people interact with technology. Hewlett-Packard Labs engineer Gene Becker estimates that about half a terabyte of data is stored within the electronic devices of an average household. HP is one of several firms testing environments that support wireless networks so that users can access information and services; its effort, Cooltown, is a "location-aware" system featuring rooms equipped with beacons that broadcast URLs to handheld devices carried by people passing through. Meanwhile, Philips Electronics researchers have built a house on a campus in the Netherlands outfitted with cameras and microphones that monitor and document interactions between people and new technology as part of the company's "ambient intelligence" initiative. Philips believes such cheap electronic elements will become commonplace household components as distributed computing becomes available to consumers. Microsoft's EasyLiving project acts as a testbed for the signals that pass between people and computers in "intelligent rooms," while MIT and the Georgia Institute of Technology are pursuing similar goals. The connection of buildings to mobile handheld computers is progressing thanks to innovations in antennas, processors, and power sources. Multiple-antenna technologies are being developed by major companies such as Ericsson and startups such as ArrayComm, while Motorola, NEC, and others are working on alternative energy sources such as micro fuel cells.
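The Cooltown beacons described above amount to a very small protocol: a device in a room periodically announces a URL, and any handheld within range picks it up. A minimal sketch of the pattern (the port, interval, and URL are invented, and loopback stands in for a wireless link so the sketch runs on one machine; HP's actual beacons used their own infrared and wireless protocols):

```python
import socket
import threading
import time

def beacon(url, port, interval=0.2, count=3):
    """Periodically announce a room's URL to nearby listeners."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for _ in range(count):
        time.sleep(interval)  # give passersby a moment to start listening
        sock.sendto(url.encode(), ("127.0.0.1", port))
    sock.close()

def listen(port):
    """A handheld passing through the room picks up the beacon's URL."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    sock.settimeout(5)
    data, _ = sock.recvfrom(1024)
    sock.close()
    return data.decode()

threading.Thread(target=beacon,
                 args=("http://example.com/rooms/lobby", 9999),
                 daemon=True).start()
print(listen(9999))  # http://example.com/rooms/lobby
```

The handheld needs no prior configuration: whatever room it enters, the local beacon tells it which URL describes that place, which is the "location-aware" behavior the article describes.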
- "Making Life Better, Fuller, Safer, Longer"
Electronic Design (10/21/02) Vol. 50, No. 22, P. 163
Electronic advancements are likely to have a revolutionary impact on virtually every aspect of life, leading to dramatic improvements in comfort, entertainment, security, and health care. Carbon nanotube-based cathodes currently being employed in handheld X-ray devices for industrial use could be adopted for medical imaging; paralyzed limbs could be monitored and treated via the implantation of ligament strain sensors, while damaged bones could become self-healing with the addition of bone-like nano-fibers; and implantable chips that relay life signs and wearer location will aid professionals caring for mentally disabled or disoriented persons. Microsurgery will be improved with the advent of microrobotic mechanisms, while telemedicine is likely to enjoy a comeback. Home appliances that are more energy-efficient and consume less power will become commonplace, while sensor-laden robots that operate by voice command or at the push of a button will relieve homeowners of many chores. Affordable, theater-quality 3D films and multimedia presentations will become a regular part of home entertainment thanks to miniature digital micro-electromechanical system (MEMS) micromirrors, and the same digital hub that supports such content will be able to control home security systems and radio, TV, cable, and satellite transmissions. Other developments on the horizon include safe, automated cars that will eventually become self-driving and boast hands-free cell-phone usage via Bluetooth wireless technology. Security of individuals and personal property should be bolstered by the development of sensors and global positioning system (GPS)-based technologies, which could be used to thwart kidnappings, recover stolen items, and combat identity theft.
- "Structuring Stray Data"
InfoWorld (10/28/02) Vol. 24, No. 43, P. 1; Moore, Cathleen
Laura Ramos of Giga Information Group estimates that about 80 percent of all corporate data is unstructured, and tapping into this dormant information could give companies a strategic advantage, although the cost and work commitment are considerable. "If you can structure data and consume it [more] effectively, it makes an enormous contribution to the effectiveness of a company," says Ed Sketch, director of Ford Motor Company's Ford Learning Network. "This whole issue of structuring data to make it intelligible and useful is a major issue for companies, and most companies don't realize it yet." Groupware, content management (CM), knowledge management (KM), and taxonomy are just some of the technologies designed to help tackle unstructured data, and Delphi Group predicts that new information retrieval systems will enhance search with metadata creation, content and behavior profiling, and collaboration in order to manage unstructured data. Ramos believes a combination of user pattern recognition and classification and search technologies will be the most effective solution. Ford structures data about the 6.5 million vehicles it sells annually, but Sketch notes that navigating the torrent of information contained in myriad emails, memos, intranets, and documents is a formidable challenge. Verity CTO Prabhakar Raghavan explains that structured and unstructured data management must converge. Unstructured data management standards such as the W3C XQuery specification will be key to this development.
- "Holograms in Motion"
Technology Review (11/02) Vol. 105, No. 9, P. 48; Freedman, David H.
Initiatives are underway to make advancements in holography, the creation of three-dimensional images whose potential applications range from surgical planning to ultra-realistic video gaming and other forms of entertainment. A team led by Ken Perlin at New York University's Media Research Lab has developed a system that creates a hologram-like viewing experience without being truly holographic. The device includes a monitor with a transparent liquid-crystal display (LCD) that produces a stereoscopic sensation of depth, while cameras and infrared light-emitting diodes (LEDs) are used to track eye movements, allowing the system to alter the monitor's image according to the viewer's position, affording multi-angle perspective. The system suffers from tracking difficulties and artifacts, can be confused by rapid head movements, and cannot reproduce the crisp quality of true holograms. Still, Perlin expects commercial applications to emerge in a few years, given its relatively low computing power requirements; a few years after that, he predicts that mass-market applications such as dynamic window displays will appear. Meanwhile, Stephen Benton of the MIT Media Lab's Spatial Imaging Group is leading development of a truly holographic video system that provides detailed computer-generated holograms that can be viewed at multiple angles. The system involves computer algorithms that calculate the minuscule lines that compose holograms, and then convert them into sound waves; the waves are sent into a series of crystals that distort temporarily, forming the lines of the hologram's diffraction pattern, through which a laser beam passes to transmit the image to a viewscreen. Images can also be "sculpted" with a handheld stylus equipped with haptic technology. The system's drawbacks include enormous number-crunching requirements--the diffraction pattern from a single hologram can take over 1 TB of data.