Association for Computing Machinery
Timely Topics for IT Professionals

About ACM TechNews

ACM TechNews is published three times a week, on Monday, Wednesday, and Friday.


ACM TechNews is intended as an objective news digest for busy IT professionals. Views expressed are not necessarily those of either AutoChoice Advisor or ACM. To send comments, please write to technews@hq.acm.org.
Volume 7, Issue 840:  September 12, 2005

  • "Forecasting Katrina"
    Federal Computer Week (09/12/05); Sternstein, Aliya

    Researchers are employing a diverse array of technologies, including supercomputers, geographic information systems, and modeling programs, to anticipate the impact of hurricanes in terms of structural damage, property damage, and land erosion. The path of Hurricane Katrina was accurately predicted by the National Oceanic and Atmospheric Administration (NOAA) with supercomputer models. However, University of Tennessee computer science professor Jack Dongarra says supercomputers cannot generate more accurate hurricane models without the input of physicists, mathematicians, and other "creative people understanding what's happening to the physics of the sea and the land." Meanwhile, Kinetic Analysis founder Chuck Watson says additional satellite observations are needed to increase forecast accuracy. His firm operates a hurricane damage estimation Web site with the University of Central Florida that monitors storms around the world and lists projected damage statistics based on wind models, storm tracks, topography, and other information; a pair of 64-processor Beowulf clusters produces the statistics. Watson says accurately modeling economic losses by evaluating the scope of structural damage is difficult because "for most businesses and for residential damage, all we have is ZIP code-level georeferencing, and little details such as construction type, value, etc., and that introduces some biases and errors." The U.S. Geological Survey is collecting data on Katrina's impact on terrain to understand how coastlines respond to storms and how vulnerable different areas are, so that government officials can mitigate the effects of future storms.

  • "ORNL's Zacharia Aids the Efforts of Open Science"
    HPC Wire (09/09/05) Vol. 14, No. 36

    The Oak Ridge National Laboratory is devoting increasing attention to climate science and the impact computing could have on that field. The lab's Thomas Zacharia cites the increasing body of evidence linking energy consumption with the environment as the reason the Department of Energy lab turned its focus toward climate modeling. The lab has conducted global warming simulations involving high performance computers to arrive at a better understanding of the impact physical, chemical, and biological changes have on the Earth's climate. The lab's climate modeling activities also demand a great capacity for data storage. The decades' worth of data the lab has in storage inform its climate predictions, as the scientists first look backward to determine the actual effects of past climatic events before making future predictions. The lab's Leadership Computing Facility simulated 1,100 years of climate, treating it as a function of atmospheric carbon dioxide and other greenhouse gases. As the lab's storage needs approach a petabyte, Zacharia looks forward to advances in high performance computing leading to more accurate models that offer more precise grid resolution and the introduction of more variables to the climate models. The lab relies on Cray high performance computers, which Zacharia claims have functioned flawlessly.

  • "Robo-Justice"
    Boston Globe (09/11/05); Bennett, Drake

    Scientists and scholars believe the introduction of artificial intelligence software into the practice of law will add transparency, efficiency, and fairness to the legal system, although such measures are perceived as an enhancement rather than a replacement for human lawyers or judges. Northeastern University computer scientist Carole Hafner says many computer researchers are intrigued by the law because they see in it "a model that applies to human decision-making more broadly, that might be applied to understanding how people argue with themselves when they have to make a decision." This interest has already yielded software with practical applications: Australian computer scientists John Zeleznikow and Andrew Stranieri's SplitUp software can accurately calculate the probable results of divorce proceedings, and their GetAid program can assess applicants for legal aid based on various factors. Zeleznikow believes such software can boost the efficiency of legal counsel and unsnarl a lot of red tape. Others think AI-based legal software could stand in for lawyers in certain situations, such as when parties cannot afford legal advice. Hafner notes that programmers are currently interested in designing programs that can enforce as well as draft contracts, while Bar-Ilan University computer scientist Uri Schild has developed a software system that could help judges sentence criminals more fairly by weighing the importance of previous offenses. Capstone Practice Systems President Marc Lauritsen expects the disruption caused by AI's penetration of law to be positive.

  • "Mac Community Must Wake Up to Security"
    ZDNet Australia (09/09/05); Kotadia, Munir

    Many Mac users operate under the false assumption that they are immune to the security threats that plague Windows users, but although Macs are targeted less frequently than Windows-based machines, they nonetheless contain significant vulnerabilities. One problem is that the dogmatic faith many members of the Mac community have in the system can undermine an honest discussion of security concerns. There is also the belief that the system is secure by design, when the reality is that Mac users have historically enjoyed their security by accident. That could change, too, with the discovery of malware such as Renepo, which can disable firewalls, automatic updates, and system accounting. Sophos' Paul Ducklin said the Renepo threat should serve "as a salutary reminder that these things are not impossible" to Mac users. The comfort zone many Mac users enjoy is often a product of little more than the relative obscurity of the system, and as its popularity increases, new threats will continue to emerge. Due to the many public security breaches Microsoft's products have endured over the last several years, the company has initiated new testing procedures and has developed a patch system that some analysts believe puts the company a few years ahead of Apple on the security front. Apple maintains it has remained consistently vigilant about security, though Ducklin argues that Renepo poses a significant threat.

  • "Robot Cars Aim to Kick Up Dust"
    San Francisco Chronicle (09/12/05) P. G1; Abate, Tom

    Corporate sponsors are supporting participants in the Defense Advanced Research Projects Agency's Grand Challenge 2005--an Oct. 8 off-road race between autonomous ground vehicles--in hopes of gaining publicity as well as benefiting from spinoff technologies. Volkswagen is supporting Stanford's Grand Challenge entry out of interest in the electronic sensor/software technology the vehicle will employ to detect and avoid obstacles, which could be used in car-to-car communications and crash avoidance systems. Carnegie Mellon University's Red Team has Silicon Graphics in its corner, and Red Team leader William Whittaker recently noted that his team "is actually using the Grand Challenge to build the algorithms to take the technology out of the race world and port that into the farms and the mines." Another Red Team sponsor is Trimble Navigation, which believes the technologies employed in the race can be applied to farming via automated tractors. Silicon Valley entrepreneur David Hall's Grand Challenge effort is receiving sponsorship from Texas Instruments, whose semiconductors are being employed in Hall's vehicle. Texas Instruments' Gene Frantz says autonomous vehicle systems are a current focus of the "lunatic fringe" represented by people such as Hall. He says experiments by the lunatic fringe have led to devices in everyday use that generated huge markets.

  • "Big Debate Over Small Packets"
    SecurityFocus (09/07/05); Lemos, Robert

    Fernando Gont is submitting his research identifying vulnerabilities in the transmission of network-data packets to the Internet Engineering Task Force (IETF) after receiving a muted response from the corporate community. In assessing network connections, Gont developed four changes to the structure and handling of information that he feels could significantly improve security. Much of Gont's research centers on the absence of a standardized control that checks for corrupted data in Internet control message protocol (ICMP) packets. Although transmission of problematic data can compromise bandwidth or terminate a connection, the U.S. Computer Emergency Readiness Team (CERT) reports that few commercial vendors have made vulnerability statements about their products, as many do not consider the opportunities corrupted data present to potential hackers to be serious. Gont highlighted three "blind" attacks that could disrupt host-client connections: In the first, an attacker could reset a random TCP connection; the second threat permits an attacker to compromise TCP throughput to the point where only one packet could be sent at a time; and in the third, an attacker could overload the processor by reducing the connection's throughput. Though the attacks could disrupt Web applications, their most serious threat is to the unseen infrastructure that powers the Internet, such as the Border Gateway Protocol. The tepid reception to Gont's findings seems to be echoed by the IETF: The group's Mark Allman said of Gont's findings, "This has been so publicized that if there was a large-scale danger, someone would have exploited it and caused large-scale problems by now." Indeed, Gont has posted three attack tools on his Web site, but no apparent infrastructure attack has occurred.
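    The countermeasures Gont has advocated center on validating an ICMP error message against the state of the TCP connection it claims to concern before acting on it. A minimal sketch of that idea follows; the names and data structures are illustrative only, not drawn from any real TCP stack.

```python
# Sketch of ICMP-error validation (illustrative, not any real stack's
# code): before honoring an ICMP error, check that the TCP segment
# quoted in its payload matches the connection's ports and carries a
# sequence number the connection could actually have in flight.

from dataclasses import dataclass


@dataclass
class Connection:
    src_port: int
    dst_port: int
    snd_una: int   # oldest unacknowledged sequence number
    snd_nxt: int   # next sequence number to be sent


def seq_in_flight(conn: Connection, seq: int, modulus: int = 2**32) -> bool:
    """True if seq falls in [snd_una, snd_nxt), with 32-bit wraparound."""
    return (seq - conn.snd_una) % modulus < (conn.snd_nxt - conn.snd_una) % modulus


def accept_icmp_error(conn: Connection, quoted_src_port: int,
                      quoted_dst_port: int, quoted_seq: int) -> bool:
    """Reject forged ICMP errors whose quoted TCP header does not match
    the connection's ports or an in-flight sequence number."""
    if (quoted_src_port, quoted_dst_port) != (conn.src_port, conn.dst_port):
        return False
    return seq_in_flight(conn, quoted_seq)
```

    With such a check in place, a "blind" attacker who cannot observe the connection must also guess an in-flight sequence number, which sharply raises the cost of the attacks described above.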

  • "Ontologies for E-Business"
    IST Results (09/12/05)

    The open-source IST-funded OBELIX project combines computer science, artificial intelligence, economics, systems theory, and business practice to yield an ontology-based e-business system that provides integration and compatibility capabilities that are intelligent as well as scalable, according to project coordinator Inaki Laresgoiti. The OBELIX system employs ontologies to obtain richer profiles of products and services and interconnect their relationships, and represents the initial move toward the automation of e-business services in a Semantic Web setting. The project studied a trio of business scenarios that the OBELIX system was applied to: online event organization, energy e-trading, and digital music rights management. Laresgoiti says OBELIX could automate such business services, which have traditionally suffered from a scarcity of Internet support. In the digital music rights management case study, the OBELIX system is being used by the Dutch performance rights organization SENA to analyze models to better guard artists' work against piracy and illicit broadcasting.

  • "Duke Phones Installed With Voice Recognition"
    Chronicle (Duke University) (09/09/05); Joshi, Shivam

    Computer scientists at Duke University are in the middle of a 60-day trial phase involving a call forwarding system in which callers can dial a number, say a name, and reach any faculty or staff member. Computer science professor Alan Biermann, research associate Ashley McKenzie, and computer science graduate student Bryce Inouye have developed error-correction technology, and have combined it with a voice-recognition system to create the call-forwarding application. The technology makes use of a Nuance speech recognizer; if the system does not recognize a name, it asks the caller to spell out the name. "If the speech recognizer fails, then our system uses a complex statistical method to compute the most probable name," explains Biermann. The system has received good reviews, and its effectiveness and demand will determine if the service is brought back after the trial ends on Sept. 15.
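    Biermann does not describe the statistical method itself, but one plausible ingredient of such a spelling fallback--matching the caller's spelled-out letters against a staff directory by edit distance--can be sketched as follows. This is illustrative only, not Duke's actual system.

```python
# Illustrative sketch: match a (possibly misheard) spelled-out name
# against a directory by Levenshtein edit distance and pick the
# closest entry.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]


def most_probable_name(spelled: str, directory: list[str]) -> str:
    """Return the directory entry closest to the caller's spelling."""
    return min(directory, key=lambda name: edit_distance(spelled.lower(), name.lower()))
```

    For example, a caller who spells "beirman" would still be routed to "biermann," since that entry is closer in edit distance than any other name in the directory.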

  • "Self-Tuning Resource Aware Specialization for Prolog"
    University of Southampton (ECS) (09/07/05); Craig, Stephen-John; Leuschel, Michael

    The authors present a self-tuning, resource-aware offline specialization method for Prolog programs, and experimentation with this technique shows that the annotations of offline partial evaluation can serve as the foundation of a genetic algorithm. Assessing the annotations' fitness is possible through trial and error by employing a series of representative sample queries on certain target Prolog systems and hardware, and accounting for execution time, code size, and other properties. This strategy adds resource awareness and makes the approach tunable to new hardware or Prolog systems, and facilitates the mutation of annotations by toggling the individual annotations of clauses or predicates. Converting unsafe mutations into safe mutations involves the employment of fully automatic binding-time analysis (BTA), which also functions as a legitimate starting point for the genetic algorithm. The self-tuning algorithms can find improved specialized code in terms of speedup, code size, or both. The algorithms can also circumvent the hazards of ordinary partial evaluation, such as work duplication and loss of indexing.
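    The genetic-algorithm loop the paper describes can be illustrated with a toy sketch (in Python rather than Prolog). The fitness function here is a stand-in: in the real system, fitness would come from running the specialized program on sample queries and weighing execution time against code size.

```python
# Toy sketch of the self-tuning idea (not the authors' code): each
# clause annotation is one bit, mutation toggles a bit, and selection
# keeps the fitter half of the population each generation.

import random

random.seed(42)

N_ANNOTATIONS = 16
# Stand-in "ideal" annotation pattern; a real fitness function would
# instead benchmark the specialized Prolog program.
TARGET = [1, 0] * (N_ANNOTATIONS // 2)


def fitness(annotations: list[int]) -> int:
    """Lower is better: distance from the stand-in ideal pattern."""
    return sum(a != t for a, t in zip(annotations, TARGET))


def mutate(annotations: list[int]) -> list[int]:
    """Toggle one randomly chosen annotation, as the paper describes."""
    child = annotations[:]
    i = random.randrange(len(child))
    child[i] ^= 1
    return child


def evolve(pop_size: int = 20, generations: int = 80) -> list[int]:
    """Truncation-selection genetic algorithm over annotation vectors."""
    pop = [[random.randint(0, 1) for _ in range(N_ANNOTATIONS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]          # keep the fitter half
        offspring = [mutate(random.choice(survivors))
                     for _ in range(pop_size - len(survivors))]
        pop = survivors + offspring
    return min(pop, key=fitness)
```

    The paper's safety step--converting unsafe mutations into safe ones via binding-time analysis--has no analogue in this toy, where every bit vector is a valid candidate.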

  • "NASA Announces Software of the Year Award Winners"
    SpaceRef.com (09/06/05)

    The Autonomous Sciencecraft Experiment (ASE) software used on the Earth Observing One (EO-1) Mission and the Land Information System (LIS) software used for several orbiting satellites have won NASA's Software of the Year Award. NASA's Jet Propulsion Laboratory developed the ASE software, which enables a spacecraft to automatically detect and track major events such as volcanic eruptions, floods, and wildfires, without waiting for a command from ground operations. The LIS software is a high-powered land surface modeling and data assimilation system that is able to predict water and energy cycles, leading to observation-driven modeling that can transform weather and climate forecasting systems. LIS, developed at NASA's Goddard Space Flight Center, has been used for the Gravity Recovery and Climate Experiment (GRACE), the Tropical Rainfall Measurement Mission (TRMM), and the Aqua satellites. "This software is a great asset, as NASA pursues the Vision for Exploration," says Gregory Robinson, acting chief engineer at NASA. "As we return to the moon and on to Mars, these types of software will aid us in our exploration and scientific discoveries."

  • "Google Hacking"
    Network World (09/05/05) Vol. 22, No. 35, P. 1; McMillan, Robert

    The practice of Google hacking--the penetration of computer networks through Google search queries--owes its start to Computer Sciences Corp. researcher and author Johnny Long, who created the Google Hacking Database initially as a joke. The database now serves as a repository for about 1,500 queries, while the Google hacking community is composed of approximately 60,000 members. The search engine is used not only to unearth credit card numbers, passwords, and unguarded Web interfaces to Web sites, routers, and other devices, but also to perform hacker reconnaissance. "Nowadays, pretty much any hacking incident most likely begins with Google," says F-Secure chief research officer Mikko Hypponen. One method is for a hacker to await a security bulletin and then employ Google to find Web sites that use the vulnerable software. Google's database can also be employed to map out computer networks and thwart network administrators' attempts to hinder eavesdroppers. Long reasons that Google's greater involvement in the security community could present new business opportunities. Google could, for instance, create a Google Security Alerts system that notifies customers when their Web sites harbor bugs discovered by Long and other Google hackers.

  • "They've Got Diplomas, But Do They Have Skills?"
    SD Times (09/01/05) No. 133, P. 32; Schindler, Esther

    Professors argue that software companies often have unrealistic expectations of how a college education should prepare graduates for software development. The argument goes that college provides the theoretical or foundational underpinnings for graduates' careers, and it is their work experience that determines how skillful they become in their chosen field. Furthermore, fieldwork is not a major prerequisite for educators in the traditional university system. Most four-year programs currently focus on turning out computer science grads who are prepared to move into any part of the field or go on to research-oriented graduate programs, but University of Portland computer science professor Karen Ward thinks such an approach may be impractical. She says generalist education is giving real-world skills more prominence, and notes that "We're seeing more schools offering 'tracks' of upper-division electives that allow students to gain some additional depth in one part of the field at the expense of others, and we're seeing more students turn to non-thesis master's degrees for additional specialization." Companies can better prepare grads for programming jobs by participating in the educational process. The first step is to offer internships that can give students more realistic expectations of what the job entails. Outreach strategies such as college lectures by company experts, collaborative curriculum development, and faculty support through contributions of money, equipment, and software can also yield significant benefits.

  • "ID Revolution--Prepare to Meet the New You"
    New Scientist (09/10/05) Vol. 187, No. 2516, P. 26; Biever, Celeste

    Biometrics technologies are soon expected to become the primary form of personal identification, but this revolutionary development carries risks as well as rewards. For example, fingerprint scanners can eliminate PC users' need to memorize passwords and enable people without social security numbers to bank safely, but they can be fooled by duplicated or "spoofed" fingerprints and may not record a clear print from fingers that are dirty, bruised, cut, callused, or sweaty. Even more problematic is comparing prints to existing scans to confirm identity, as the massive volume of print databases can reduce the system's accuracy. Iris scanning, in which a near-infrared image of the eye is captured and converted into code by software algorithms, is generally expected to be more reliable than fingerprint scanning; however, subtle factors--the subject's distance from the scanner, drooping eyelids, blinking during scanning--can cause the technology to generate an unacceptable number of false negatives. Mark Nixon of the University of Southampton believes face-recognition technology will always be the biometric technology with the broadest appeal, but deployments analyzed by the ACLU showed its vulnerability to false positives and false negatives. Improvements are being made, while the National Institute of Standards and Technology will start assessing 19 face recognition technologies in October as part of its "Face Recognition Grand Challenge." A multi-modal solution that combines these various biometric ID technologies into a "super ID" is expected to ultimately emerge, but formidable technical and social challenges must be resolved first.

  • "What's Next for Nanotechnology"
    Futurist (08/05) Vol. 39, No. 4, P. 28; Hall, J. Storrs

    Molecular Engineering Research Institute Fellow J. Storrs Hall splits the development of nanotechnology into five stages, and comments that we are presently in the first stage of nanoscale science and technology, where atomic-scale structures can be imaged and manipulated in a limited capacity. The second stage, which currently exists only in laboratory research, involves atomically precise components being coaxed into forming planned, complex patterns through the phenomenon of molecular self-assembly. The third stage is to link the molecular components together into functioning machines so that large-scale nanotech systems become practical. In the fourth stage, such systems will be able to fabricate the nano-building blocks from simple molecules, at which point costs will start to decline. The final stage will be characterized by the standardization of the process developed in the fourth stage. Potential nanotech applications Hall foresees include super-strong nanoengineered towers hundreds of miles high, all-senses virtual reality, the conversion of waste products into edible foods, and household synthesizers or "matter printers" that can generate food as well as objects. Hall doubts that nanotech will be abused by evildoers or run amok, as long as it is developed evenly so that there is equity between the mainstream and rogue elements in terms of nanotech capability. Hall believes historians are better equipped than laboratory scientists to make accurate predictions of how technologies are likely to evolve, adding that many scientists fail to realize how many breakthroughs are achievable because they often do not consider other ways a result might be reached beyond their own limited areas of expertise.

  • "Of Modes & Men"
    IEEE Spectrum (08/05) Vol. 42, No. 8, P. 48; Perry, Tekla S.

    Larry Tesler has maintained a consistent presence in the field of computer usability since its inception by contributing to virtually every stage of the user interface's evolution. During his stint at the Stanford Artificial Intelligence Laboratory in the early 1970s, Tesler developed a "document compiler" capable of formatting text and producing footnotes, tables of contents, indexes, and bibliographies, which became a precursor of Dynamic HTML. Tesler's tenure at the Palo Alto Research Center was marked by achievements such as a paper he co-wrote in which he first brought up the concepts of cut-and-paste and the representation of content as icons. Another major accomplishment was the development of Gypsy, a modeless user interface for text editing that featured cut-and-paste, fill-in forms to enter search terms, bold and italic type styles, text selection via mouse, click-to-open files, and what-you-see-is-what-you-get printing. Tesler went from PARC to Apple Computer, where he was extensively involved in the development of Lisa, the first commercially sold PC to use windows, icons, and a mouse. He also worked on the Macintosh project in an unofficial capacity, providing constructive criticism on the Mac user interface that led to far-reaching design decisions, such as the placement of the menu bar on the top and folders and documents on the screen. As vice president of Apple's Advanced Technology Group, Tesler oversaw the initial development of a fully functional handheld Mac, a project he terminated to concentrate on the Newton handheld PC, which ultimately failed. He later spun off the Cocoa object-oriented programming language into a company where it was developed into software that enabled schoolchildren with no programming skills to easily create simulations and video games, which is still in use today. In May, Tesler joined Yahoo as vice president of user experience and design.

  • "Voice Over Wi-Fi: No Slam Dunk"
    Business Communications Review (08/05) Vol. 35, No. 8, P. 40; Wexler, Joanie

    Voice over IP (VoIP) delivered over Wi-Fi remains a formidable challenge, given the lack of mainstream enterprise deployments. Widespread wireless VoIP adoption will hinge on further Wi-Fi/telephony integration and perhaps mutually advantageous standards to support those IP-PBX calling features that encourage firms to select their PBX supplier. Those standards must also allow E-911 applications to operate from Wi-Fi handsets; fully support emerging applications for presence management; and permit the wireless VoIP network to facilitate call routing and capacity planning optimization. Industry alliances committed to the closer integration of Wi-Fi and telephony systems have begun to emerge. Session Initiation Protocol (SIP) is expected to allow features and handsets to be blended together across the LAN-WAN divide, and Bluesocket's Dave Juitt and SpectraLink's Ben Guderian are convinced that SIP will ultimately make a single handset functional across every communications environment. Standards organizations appear not to be developing a common location-information interface, which has spurred vendors to attempt to broaden their own application program interfaces. Trapeze Networks CTO Dan Simone says the IP-PBX must be intelligent enough to interact with mobile handsets properly, in accordance with policy. In addition, "It's pretty important that the information in the wireless infrastructure should be used to improve the performance and quality of calls," says Kamal Anand of Meru Networks.

  • "R&D 2005"
    Technology Review (09/05) Vol. 108, No. 9, P. 50; Talbot, David; Tristram, Claire; Cho, Dan

    IT companies' R&D outlays are generally lagging behind those of the life sciences, according to the 2005 edition of the Technology Review research and development scorecard, but there are indications of healthy investment in long-term "blue sky" research. Examples include Intel's effort to improve biological imaging by scattering laser light off a biological sample, enabling more fine-grained detection of DNA elements that could clear the way for new disease diagnosis methods. IBM, meanwhile, launched a project this year to create a practical 3D model of a neocortical column. The "Blue Brain" project will involve taking raw data collected from rat neurons at the Swiss Federal Institute of Technology and feeding it into an IBM supercomputer so that a 3D simulation of the neurons' interactions can be assembled and compared against laboratory data compiled by neuroscientist Henry Markram. He expects the experiment to yield new insights on the paths information travels, its representation, and its storage in neurons. Lucent Technologies' Bell Labs has shed much of its basic R&D, but a major research focus at the facility is quantum computing, which promises to vastly ramp up the speed at which solutions for certain problems can be calculated by tapping the unique properties of quantum bits (qubits). Bell Labs researchers are using ion traps to harness qubits for quantum computing, and are pursuing a silicon version of a large ion trap array, or multiplex system. Bell Labs' David Bishop believes such a breakthrough would make the basic technologies of quantum computing ready for application, though the majority of researchers in the field do not foresee the emergence of a functional quantum computer for at least another 10 years.