Association for Computing Machinery
Timely Topics for IT Professionals

About ACM TechNews

ACM TechNews is published three times a week, on Monday, Wednesday, and Friday.


ACM TechNews is intended as an objective news digest for busy IT professionals. Views expressed are not necessarily those of either AutoChoice Advisor or ACM. To send comments, please write to technews@hq.acm.org.
Volume 6, Issue 699:  Monday, September 27, 2004

  • "Antipiracy Bill Divides Studios and Tech Companies"
    Wall Street Journal (09/27/04) P. B1; McBride, Sarah

    Silicon Valley is at odds with Hollywood once again in the debate over piracy, which has spawned new antipiracy legislation that would allow movie studios, record companies, and other copyright holders to sue companies that make products that enable piracy. The broad scope the U.S. Copyright Office has taken toward the Induce Act has made opponents of the new legislation not only out of makers of peer-to-peer software, but of technology firms in general, consumer electronics companies, and financial services firms as well. The Copyright Office's version of the controversial bill would make people who distribute peer-to-peer software liable for copyright violations, along with people who distribute technology, devices, and components. For example, makers of CD and DVD burners could be liable if their products were used to make illegal copies of copyrighted material, and Apple Computer could be at fault if music were illegally downloaded onto an iPod. Verizon Communications is concerned that it could be liable as an Internet-access provider. The Consumer Electronics Association wants to scale the bill back to cover only software that is used primarily for "indiscriminate, mass infringement of copyrighted works" and is commercially viable for that reason. It is uncertain whether the Induce Act or a similar bill has a chance to become law this session, but the presidential candidates have taken notice of the issue. President Bush agrees with John Kerry on strong copyright law, but he does not favor banning peer-to-peer technology because it has legitimate uses; Kerry is concerned that incentives for creating new content are undermined by widespread media downloading.

  • "Computers Prove Weak at Faces"
    Baltimore Sun (09/27/04); O'Brien, Dennis

    Even after a decade of research, scientists are still far from producing a facial recognition system that could be used to identify criminals in airports or other public places. The International Biometric Group says the market for facial recognition systems will nonetheless double in the next year from $144 million this year, while the overall market for biometric systems is worth $1.2 billion this year. San Jose State University biometrics researcher James Wayman says the systems on the market now are "low-hanging fruit" because they are used in controlled settings, where computers examine close-up images of employees or others who need access and compare those images to high-resolution images on file. The National Institute of Standards and Technology (NIST), meanwhile, wants to see facial recognition systems used in public places where lighting is not always good, features can be covered by glasses or beards, and images are sometimes taken from a long distance. NIST recently announced a challenge to biometrics research groups to produce a system that would accurately identify people 98 percent of the time in such circumstances, and the challenge has drawn about 46 companies and universities as participants. According to tests done earlier this year, NIST says other biometric identification systems such as fingerprinting are still much more accurate than facial recognition, though they cannot be applied discreetly enough to capture some hard-core criminals. "When you're dealing with a terrorist, you might not have a fingerprint or even a clear photograph. All you might have is a grainy image taken from a distance," says consultant Amanda Goltz. Takeo Kanade, the director of the Robotics Institute at Carnegie Mellon University and a facial recognition researcher, says it could be 10 to 30 years before computers are as effective as humans at facial recognition.
    Click Here to View Full Article
    (Access to this site is free; however, first-time visitors must register.)

  • "When Bot Nets Attack"
    Technology Review (09/24/04); Hellweg, Eric

    Symantec says computers in zombie networks are often hired out to third parties for $100 per hour, on average. The cybersecurity firm also says these networks are growing by 30,000 machines per day, up from about 2,000 new bot machines added daily last year. Hackers cultivate their networks through code that is carefully crafted to avoid detection and distributed via email attachments, viruses, or through IRC and peer-to-peer file-sharing networks. Some high-profile cases involving zombie networks include two distributed denial-of-service attacks on SCO Group last year and this year, and an FBI indictment against the CEO of CIT/Foonet, who had hired hackers to launch bot-network attacks on rival ISPs that caused $2 million in damages. Most recently, credit-card-processing firm Authorize.net fell victim to a coordinated attack that left customers without service. SANS Institute Internet Storm Center chief technology officer Johannes Ullrich says bot networks are currently the most problematic issue facing the Internet, especially since their code is difficult to detect with anti-virus software. Hackers issue commands only their bot software can understand, and the proliferation of home and small office broadband connections has provided space for these networks to grow. Home and small office users often do not recognize bot activity on their computers, especially since broadband connections provide enough bandwidth for both illicit and legitimate Internet activity. Enterprises are more diligent in protecting against bot intrusion, partly because they have more to lose if infected. ISPs Earthlink, Cox Communications, MSN, United Online, and others have joined together in the Global Infrastructure Alliance for Internet Safety (GIAIS) to develop Internet infrastructure technology standards that will make it more difficult for bot networks to operate.
    Click Here to View Full Article

  • "Open Science Grid Consortium Declares Grid3 a Success"
    Grid Computing Planet (09/21/04); Shread, Paul

    Researchers in the Open Science Grid Consortium declared a nine-month trial of the prototype Grid3 data grid successful at a Harvard University workshop earlier this month. Grid3 integrates the computational muscle of 26 U.S. universities and laboratories to furnish processing power for over 10 research groups conducting experiments in astrophysics, particle physics, computer science, and bioinformatics. Grid3 was enlisted for data analysis by astrophysicists from the Sloan Digital Sky Survey project, while collaborators from Fermilab's proposed BTeV experiment employed the grid to model particle events. Upcoming challenges for Grid3 include analyzing massive amounts of data culled from experiments with the European Particle Physics Laboratory's Large Hadron Collider; the Open Science Grid Consortium was founded to augment Grid3 to meet this challenge by developing it into a production infrastructure capable of larger-scale operations with a greater pool of collaborators and resources. Grid3 is the product of a joint effort between about 30 researchers funded by member universities, the National Science Foundation (NSF), and the Energy Department's Office of Science. The grid uses Virtual Data Toolkit-based core technologies that encompass the NSF middleware software distributions. Scientists can only access Grid3 if they are with the member organizations that supply computing capacity. Meanwhile, the Enterprise Grid Alliance, which is dedicated to the standardization of enterprise grid solutions, announced the establishment of its Europe, Middle East, and Africa Regional Steering Committee to promote grid adoption at Global Grid Forum 12, while SAS announced its selection by Texas Tech University as a major enabling vehicle for campus-wide grid computing.
    Click Here to View Full Article

  • "How to Attract the Best Into IT"
    Computing (09/22/04); Watson, James

    A panel of experts convened for Computing's Agenda Setters initiative, studying the challenge of attracting the most talented people into IT, recommends investments to cultivate better IT leadership, development of more diverse career paths, promotion of cross-functional skills, a more multidisciplinary approach to education, and, perhaps most importantly, a greater emphasis on IT's exciting, challenging, and relevant aspects. Ernst & Young's Jan Babiak believes firms should not let investments in technology skills overshadow the importance of building stronger leadership skills, while IT leaders should be more farsighted, keeping an eye on future technology challenges in addition to present needs. Panelists agree that organizations should provide IT personnel with a variety of paths for advancement that they can choose according to their personal preferences; CIB Partners' Dana Gordon-Davis says a balance must be struck between immediate demands and future career growth, while London School of Economics research fellow Peter Sommer cautions that firms should not disregard the importance of corporate culture. The panel concurs that the best IT workers need to combine technical expertise with business acumen, but Sabre Travel Network senior VP Richard Adams notes that the ability to develop people with both skill sets is lacking in most IT leaders. The experts argue that university graduates must expand their horizons by blending computer science with other disciplines, such as life sciences or business, and that more people should be encouraged to obtain full degrees rather than just a general IT overview. A key obstacle to choosing an IT career is the perception of the field as uninteresting and immaterial, and IT leaders must show staff that not only is IT fun, but that it makes a difference. "If people fundamentally understand why their innovation and their enthusiasm matters, then you can attract people and you can retain them," posits BT CTO Matt Bross.
    Click Here to View Full Article

  • "Profs Patrol Cyberspace"
    University of Toronto (09/13/04); Kelly, Karen

    University of Toronto electrical and computer engineering professors David Lie and Ashvin Goel have taken a different approach to securing computers. Lie makes use of decoy computers, also known as honey pots, to lure unsuspecting cyber-criminals into hacking into the machines, giving him an opportunity to monitor their actions. With the honey pot, Lie is able to find a trail of clues that enables him to trace an attack back to the cyber-criminal's point of origin. Goel focuses on getting systems back up to speed after an attack; he suggests that saving new data and pinpointing only the intrusion could take system administrators about 10 minutes. Today, however, security experts take hours or days to correct problems because they perform a complete undo, discarding all of the current day's work and reverting to a snapshot of data stored the previous day. Goel believes technology for selective undos could be available in three months, but does not expect to see automated, self-recovery systems for some time. Lie says PC security issues must be addressed today, before the situation worsens. Lie says, "It's a completely different world today than when computers first came out. You find them in places you wouldn't normally expect them, like cars."
    Click Here to View Full Article

  • "Eavesdropping Call Centre Computers Cut Talk Time"
    New Scientist (09/22/04); Graham-Rowe, Duncan

    IBM researchers are using speech recognition and search technology to make call center representatives more productive. The artificial intelligence system listens to customers' spoken words, picks out keywords, and retrieves relevant information from the call center database so representatives do not have to waste time searching for it themselves. Since call centers often handle business from diverse clients, such as a bank, insurance company, or utility, representatives often have to be familiar with thousands of pages of product data and business rules. IBM project leader Johan Schuurmans says a prototype version of the technology cut call time by 20 percent, but the planned commercial system will be much more powerful. IBM is building the system in conjunction with researchers from the University of Twente in the Netherlands, and plans to deploy it with a Dutch bank in September. Whereas the prototype only recognizes a handful of words and operates as PC client software, the commercial version will contain 1,000 keywords in its list and will run on the network server. Besides speeding calls, the assistance software will remind representatives of disclaimers they are obligated to give for certain products; if certain words are not detected, prompts will remind representatives. Likewise, the system will also increase cross-selling opportunities for representatives by suggesting relevant products or services.
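    IBM has not detailed the system's design, but the keyword-spotting-plus-lookup idea it describes can be sketched in miniature; the keyword list and documents below are invented for illustration:

```python
# Minimal sketch of keyword-driven retrieval: scan a (transcribed)
# customer utterance for known keywords and pull the matching reference
# documents from an index, so the representative need not search by hand.
# The keywords and document names are hypothetical, not from IBM.

KEYWORD_INDEX = {
    "mortgage": "doc: mortgage product rules",
    "overdraft": "doc: overdraft fees and limits",
    "disclaimer": "doc: mandatory disclaimers",
}

def spot_keywords(transcript: str, index: dict) -> list:
    """Return the documents whose keywords appear in the spoken text."""
    words = transcript.lower().split()
    return [index[w] for w in words if w in index]

hits = spot_keywords("I have a question about my mortgage overdraft", KEYWORD_INDEX)
```

A production system would work from a streaming speech recognizer and a 1,000-word vocabulary, as the article notes, but the retrieval step reduces to this kind of lookup.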
    Click Here to View Full Article

  • "Green Electronics"
    e4Engineering (09/21/04); Halber, Deborah

    Researchers have integrated plants' ability to produce energy from light with solid-state electronics for the first time. The trick was a designer nanomaterial that enables organic proteins to maintain their stability even without the presence of water and salt. Another novel aspect of the photosynthetic solar cell was its use of complex proteins derived from spinach. At just 10 nm to 20 nm wide, the spinach-based protein systems are among the world's smallest electronic circuits, says Massachusetts Institute of Technology (MIT) researcher Marc Baldo. Researchers from MIT, the University of Tennessee, and the U.S. Naval Research Laboratory first ground up spinach and isolated proteins using a centrifuge. Further purified, the proteins were assembled with the help of a thin layer of gold and sandwiched in the semiconductor device. The bottom glass layer was coated with conductive material that carried away the generated electrical current. Even with the designer nanomaterial stabilizing the proteins, they only lasted three weeks. In addition, only a small amount of the light shone on the device was actually absorbed, since most passed completely through. Of the light that was absorbed, about 12 percent was converted to electrical charge, a highly efficient ratio. Plants are very efficient at generating energy considering their size and weight, and the new device capitalizes on that natural ability. The scientists expect to increase the conversion efficiency of the device by adding layers of photosynthetic protein or depositing them on rough or 3D surfaces.
    Click Here to View Full Article

  • "Web Tool May Banish Broken Links"
    BBC News (09/24/04); Twist, Jo

    A team of students from the United Kingdom interning at IBM has filed two patents for a new tool that promises to automatically update broken links on the Web. Existing tools only detect a broken link, while Peridot is able to determine where an original page has gone, which links have changed, and the extent of the change. The Web-based tool, which is being presented this week to top executives and engineers in Amsterdam, is designed to automatically map and store key features of Web pages, and then review Web links to determine when links have changed and replace old information with more timely documents and links. The ability to compare links to Web pages Peridot has already monitored will be beneficial to companies that review their Web sites manually. Web site administrators will be able to review changes made by Peridot, accept them, or ignore them. "Peridot could lead to a world where there are no more broken links," says James Bell, a computer science student at the University of Warwick. The researchers, working in IBM's UK labs in Hursley, say that unlike other programs that look for broken links, Peridot finds more substantial changes and offers various levels of autonomy.
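    IBM has not published Peridot's internals, but the map-store-and-compare idea can be sketched in miniature; the word-set "fingerprint" and the similarity thresholds below are invented for illustration:

```python
# Hypothetical sketch of link-change detection: store a fingerprint of
# each linked page's key features, then compare a re-fetched copy against
# the stored fingerprint to classify the extent of change. Here the
# "fingerprint" is simply the set of a page's most frequent words; the
# 0.8 and 0.3 cutoffs are arbitrary choices for the sketch.
from collections import Counter

def fingerprint(page_text: str, n: int = 20) -> frozenset:
    """Reduce a page to its n most frequent words."""
    counts = Counter(page_text.lower().split())
    return frozenset(word for word, _ in counts.most_common(n))

def link_status(old_fp: frozenset, new_fp: frozenset) -> str:
    """Classify how much a page changed between two snapshots."""
    if not new_fp:
        return "broken"
    overlap = len(old_fp & new_fp) / max(len(old_fp | new_fp), 1)
    if overlap > 0.8:
        return "unchanged"
    return "changed" if overlap > 0.3 else "moved or replaced"
```

Distinguishing "changed" from "moved or replaced" is what lets an administrator review and accept updates rather than simply being told a link is dead.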
    Click Here to View Full Article

  • "Microsoft Taps European Expertise in Research"
    IDG News Service (09/24/04); Taylor, Simon

    Microsoft's European research operations are an important part of the company's technological innovation, according to Microsoft Research senior vice president Rick Rashid at an innovation fair in Brussels. The company's Cambridge, England, facility was the first research center outside the United States and attracts many leading researchers to work on projects. Rashid says the work done at Cambridge reflects problems and trends in Microsoft's user base, but is not dictated by the company. "Don't bias the front-end of the innovation process," he says. European research draws on regional expertise in mathematics, medical applications, machine learning, systems and networks, and programming languages. In addition, European research is increasingly leveraging a cross-disciplinary approach and entrepreneurial spirit, as evidenced by the sixth iteration of the European Union's research framework program. The European Microsoft Innovation Center in Aachen, Germany, conducts some of this EU-coordinated research in conjunction with other private-sector interests and universities. Ongoing projects include AskIT technology that uses mobile and wireless technology to aid physically disabled users and e-health applications. Microsoft also operates a research center in Denmark that develops small-business solutions and a Dublin, Ireland, center that focuses on country-specific versions of popular Microsoft products. At the Brussels innovation fair, Microsoft showcased a new biometric ID system that is cheap, easy to use, and tamper-proof. An information retrieval system dubbed SIS IQ was also displayed and could be part of the upcoming Longhorn operating system.
    Click Here to View Full Article

  • "Rose-Hulman Conference to Feature World Wide Web Creators"
    Inside Indiana Business (09/23/2004)

    The Rose-Hulman Institute of Technology in Indiana will host a conference about the World Wide Web that will feature as speakers a number of people who participated in developing the online network. Titled "WWW@10: The Dream and the Reality," the conference marks the 10th anniversary of the public having access to the Web, and will include Robert Cailliau, who worked with Tim Berners-Lee in developing the software foundation of the Web, and Ted Nelson, who coined the word hypertext and shaped its development, as featured speakers. Cailliau, head of Intranet public communications at CERN, will give a keynote presentation titled, "Are We All Caught in the Web?" for the second annual Paustenbach Lecture. Nelson will give the keynote address, "The Metaphysics of Structure and the Future of Literature," on the first day of the conference, and will participate in several other panel discussions. Other speakers include French researcher Louis Pouzin, who invented the datagrams that allowed for the quick and inexpensive expansion of the Internet; Paul Kunz, known as America's first Webmaster; Doug Engelbart, inventor of the computer mouse and the graphical user interface used for e-mail; and Jean-Francois Abramatic, former chairman of the World Wide Web Consortium. The impact and future of the Web will be discussed, as will topics like privacy, ethics, Web-based course management, and language support for mobile Web browsers. The conference is scheduled for Sept. 30 through Oct. 2; further information can be found at http://www.www@10.cs.rose-hulman.edu.
    Click Here to View Full Article

  • "IMSC's Live Immersive Internet"
    USC Viterbi School of Engineering (09/21/04)

    University of Southern California (USC) researchers will use immersive audio and high definition video imagery to stream a performance by the Miro Quartet over the Internet on Sept. 28, during the annual meeting of the Internet2 organization at the University of Texas at Austin. The nationally-known chamber music group will perform for one audience, and researchers from the Integrated Media Systems Center (IMSC) at USC's Viterbi School of Engineering will deliver the performance in an immersive environment to a second audience in another auditorium. IMSC will project images of the Miro Quartet on four high definition screens (one for each performer), and will use its 10.2-channel immersive audio technology to capture and render audio. The audiences will switch auditoriums at intermission, and at the end of the show will participate in a survey on the first-ever live Internet immersive environment. IMSC is the National Science Foundation's engineering research center for multimedia and Internet research.
    Click Here to View Full Article

  • "Data You Can Virtually Touch"
    Economist (09/16/04) Vol. 372, No. 83931, P. S12

    Haptics researchers say the technology is ready for wider commercial deployment with the falling cost of computer processing power and the haptics hardware itself. Haptics is a favorite among researchers for its promise of letting users actually feel what is visually displayed by the computer, but to date it has been limited to niche markets. University of Reading haptics research leader William Harwin says the technology is ready for a more general audience. His group has produced hardware, software, and control mechanisms for a haptics glove system in which users can feel free-floating objects displayed in 3D. Hands are inserted into rubber cups mounted on robotic arms, allowing users to feel objects that would be impossible to create in real life, such as a so-called Klein bottle whose inside and outside surface are the same. Surgical training is currently the staple for haptics firms, with programs that allow researchers to practice inserting needles into veins with tremendous realism--even replicating the signature "pop" of a successful vein probe. But haptics firms are also delving into other areas, such as mobile phones. Immersion has partnered with Samsung to produce VibeTone technology that allows signature vibrations to indicate who is calling, in the same way different ring tones are used to differentiate between home and work calls. Video game systems also use force-feedback and other haptics technology to give players a more realistic experience. SensAble Technologies CEO Curt Rawley says haptics systems cost as much as one-tenth of what they did previously. SensAble demonstrated a touchable hologram system at last month's SIGGRAPH computer graphics conference.
    Click Here to View Full Article

  • "Is Open-Source Software a Solution to Spam?"
    Chronicle of Higher Education (09/24/04) Vol. 51, No. 5, P. B8; Kiernan, Vincent

    Many higher-education institutions are adopting open-source antispam software because it is free and easy to adapt and integrate with other software already in use. Analyst Fred Cohen observes that research-intensive institutions often opt for open-source software because they have the technical expertise to deploy and support the software, while larger institutions with less technically savvy staff are more inclined to purchase commercial antispam products. Some college officials prefer outsourcing the installation and maintenance of antispam software to third parties so that their own staff can focus on more important pursuits. Another attractive feature of commercial software is the technical support offered by vendors, according to LaGrange College director of information technology James Blackwood. Open-source software could be a core component in the Iowa Internet Annoyance Logging Protocol, which would be comprised of a central database of complaints about specific spam-sending Internet addresses; users who encounter spam would report the violation to their mail systems, which would subsequently notify the central database. A university's mail system could identify likely spam sources by checking incoming messages against a copy of that database, and then block any messages coming from those sites. Des Moines University-Osteopathic Medical Center network administrator George Davey, who devised the Iowa system, says the logging software for computer servers would be open source, thus enabling all Internet users to send spam complaints and boost the database's reliability. College officials have little evidence to determine whether open-source antispam software is more effective than proprietary solutions.
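    The report-and-check loop of the proposed Iowa protocol might look something like the following sketch; the complaint threshold and IP addresses are invented for illustration, not taken from the article:

```python
# Sketch of the central-complaint-database idea: mail systems forward the
# source address of each reported spam message to a shared tally, and a
# receiving mail server consults the tally before accepting new mail.
# The threshold of 3 complaints is an assumed cutoff, not from the article.
from collections import defaultdict

COMPLAINT_THRESHOLD = 3

complaints = defaultdict(int)

def report_spam(source_ip: str) -> None:
    """A user's mail system relays a spam complaint to the central database."""
    complaints[source_ip] += 1

def should_block(source_ip: str) -> bool:
    """A university mail server checks an incoming message's source."""
    return complaints[source_ip] >= COMPLAINT_THRESHOLD
```

As the article notes, the value of such a scheme depends on broad participation: the more mail systems report complaints, the more reliable the central tally becomes.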
    Click Here to View Full Article
    (Access to this article is available to paid subscribers only.)

  • "What's So Extreme About Extreme Programming?"
    SD Times (09/15/04) No. 110, P. 33; McCay, Larry

    Software development is often compared to manufacturing operations, but the Scientific Method actually provides a better conceptual framework for longer-lived, flexible, and understandable products, writes Larry McCay, a senior software engineer with Probaris Technologies. Extreme Programming (XP) basically adapts the analysis, testing, and re-checking of Scientific Method ideas to software programming. XP focuses on the code itself, making it a channel for communication and documentation of the methodology; in this way, XP should not be seen as some sort of risky approach to software development, but rather a way to ensure software product development is generally understood and adaptable to change. XP contains five basic steps where teamed programmers choose a story; write tests; run tests; refine, program, and refactor; and repeat the entire process until all stories are complete. The Scientific Method requires similar steps, with observation, hypothesizing, experimentation, modification of hypothesis, and finally the declaration of theory. Just as theories created through the Scientific Method are backed up by documented experiments and tested through peer review, software programs created with XP also have documentation and are created with two-person programming teams for verification. With this type of conceptual framework behind them, software programs are easily extendable and modified by future programmers, who can continue the methodology and further develop the architecture. Despite its name, XP makes software development less risky and simpler to understand and modify, McCay concludes.
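    The write-tests-first cycle McCay describes can be shown in miniature; the `word_count` "story" below is an invented example, not from the article:

```python
# XP's cycle in miniature: the tests are written first, from the story,
# and the implementation is then refined (and refactored) until they pass.
import unittest

def word_count(text: str) -> int:
    """Implementation written to satisfy the tests below."""
    return len(text.split())

class WordCountStory(unittest.TestCase):
    # In the XP cycle these tests exist before the function above does;
    # they are the "experiment" the code must survive.
    def test_counts_words(self):
        self.assertEqual(word_count("extreme programming works"), 3)

    def test_empty_text(self):
        self.assertEqual(word_count(""), 0)
```

The tests then remain as the documented, repeatable "experiment" that future programmers can rerun before modifying the code, which is the parallel to peer review that McCay draws.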
    Click Here to View Full Article

  • "Our Wireless World"
    U.S. News & World Report (09/27/04) Vol. 137, No. 10, P. 48; LaGesse, David

    Optimistic predictions of a wireless home with universally accessible digital media within five years are undercut by the reality of current wireless technologies, which are marked by implementation and maintenance difficulties, susceptibility to interference, and quality of service issues. Still, developing wireless technologies such as Wi-Fi and Ultra-Wideband (UWB) could resolve these problems. Consumers of home wireless networking products usually employ them to share high-speed Internet connections, and some are also using them to transfer digital files from hard drives to other devices. However, errors and information resending can slow down wireless data transmissions to 50 percent or less of their rated speed, while neighboring wireless networks can interfere with transmissions. Quality of service remains the biggest headache, and new iterations of current Wi-Fi standards aim to fix these deficiencies by modifying network operations: Existing 802.11 standards apportion bandwidth equally among all devices on the network, but developing standards promise to throttle down some kinds of data in favor of signals that should not be disrupted, or double current data rates to support wireless TV transmissions. Complicating the deployment of these standards are such factors as copyright issues, and the incompatibility of many media appliances wireless networks are supposed to link together. UWB, which avoids signal interference by distributing high-speed data across a broad range of radio frequencies, has run into trouble because of fears it could interfere with other devices in those spectrum bands. Another home networking approach uses traditional wires and cables for data transmission, but its rollout could be constrained by speed limitations as well as foreign governments' concerns about cluttered power grids.
    Click Here to View Full Article

  • "Targeting Unstructured Data"
    GeoWorld (09/04) Vol. 17, No. 9, P. 32; Sanderson, Bruce N.

    The growth of unstructured data within companies makes it harder to locate files and the information they contain, but a number of document-retrieval systems based on geographic information system (GIS) technologies can help. The "hot links" approach involves the association of a coordinate on a map with a document on a file system, with the link either serving as a direct connection to a file or as a Uniform Resource Locator; this technique allows information related to a feature to be recovered by clicking on that feature, but enterprise deployments are not amenable to the hot links scheme because the manual creation and updating of each link becomes less and less workable as the number of links increases. Cross-referenced file systems arrange or name files so they can be related to a specific feature on a map, with associations created by basing the file name on a particular map feature attribute. However, the approach entails an arduous administration and maintenance effort. In a geo-parsing system, documents are scanned for the occurrence of one or more established georeference points, whose coordinates are associated with documents if successfully located; the coordinate envelope of the existing map extent is then employed to return all documents whose coordinates are encompassed by the extent, and clicking on any document launches the document in its native viewer. Spatial search-bridge technology is rated as the most sensible choice for companies that already possess GIS and other search measures. In addition to boasting easy configuration and self-maintenance, spatial search-bridge systems, which use attribute information from a map to find documents with such attributes and implement queries by appropriate search engines, permit users to refine searches, look for information wherever it resides, and launch searches from any map feature.
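    Once a geo-parsing pass has tagged documents with coordinates, the retrieval step reduces to a bounding-box test against the current map extent; a minimal sketch, with invented document names and coordinates:

```python
# Sketch of the geo-parsing retrieval step: each document carries a
# coordinate assigned during scanning, and a query returns every document
# whose coordinate falls inside the (xmin, ymin, xmax, ymax) envelope of
# the current map extent. Names and coordinates are hypothetical.

documents = [
    ("well log A", (-93.6, 41.6)),
    ("pipeline survey", (-87.9, 43.0)),
    ("offshore report", (-70.2, 42.3)),
]

def in_extent(point, extent):
    """True if point (x, y) lies inside the envelope."""
    x, y = point
    xmin, ymin, xmax, ymax = extent
    return xmin <= x <= xmax and ymin <= y <= ymax

def docs_in_extent(docs, extent):
    """Return the names of all documents whose coordinates fall in the extent."""
    return [name for name, point in docs if in_extent(point, extent)]
```

A real system would index documents spatially rather than scanning a list, but the envelope test is the core of "zoom the map, see the documents."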
    Click Here to View Full Article

  • "Music Everywhere"
    IEEE Spectrum (09/04) Vol. 41, No. 9, P. 42; Mock, Thomas

    Compression algorithms are driving a transformation in the enjoyment and distribution of music, paving the way for massive audio libraries consumers can carry in a portable device about the size of a deck of cards. A vast number of algorithms are vying for the marketplace, with MP3 in the lead, but the most successful algorithms must strike a balance between a variety of factors, including sound quality, the degree of compression, losslessness (how well the original sound data files can be reassembled from the compressed data), support for digital rights management, and backward compatibility with existing hardware and players. Compression algorithms take advantage of unique aspects of human hearing and the brain's processing of audio input, such as frequency and temporal masking, to maintain the fidelity of sound while keeping the file as small as possible. The algorithms first analyze the digitized sound's mathematical patterns and match them against psychoacoustic models to determine which portions of the signal should be de-emphasized or jettisoned, and then scan the signal again to find and remove redundancies. The ideal format will have ample lossless compression and easy streaming, an element that hinges on whether the algorithm can be executed in real time and the connection speed is rapid enough to keep pace. MP3, the most popular existing compression algorithm, supports streaming and reduces CD music to one-tenth its original size, while next-generation mp3PRO can compress music to one-twentieth its original size and is backward compatible. Advanced Audio Coding (AAC) can support improved and more consistent quality than MP3 at equal or slightly lower bit rates. The most successful compression format will likely come from the Moving Picture Experts Group, which developed MP3, mp3PRO, and AAC.
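    The two-pass structure described above (perceptual masking, then redundancy removal) can be caricatured as follows; the flat masking threshold and the text encoding are invented simplifications of what real psychoacoustic codecs do:

```python
# Toy illustration of the two-pass idea: first drop spectral components a
# crude "masking" rule deems inaudible (the lossy pass), then squeeze out
# redundancy with a generic lossless coder (the second pass). Real codecs
# use frequency- and time-dependent psychoacoustic models, not the flat
# relative threshold assumed here.
import zlib

def mask(spectrum, threshold=0.05):
    """Zero out components too quiet relative to the loudest one."""
    peak = max(abs(s) for s in spectrum)
    return [s if abs(s) >= threshold * peak else 0 for s in spectrum]

def compress(spectrum):
    """Lossy masking pass followed by a lossless redundancy pass."""
    kept = mask(spectrum)
    raw = ",".join(f"{s:.3f}" for s in kept).encode()
    return zlib.compress(raw)
```

The zeroed-out components are what make the second pass effective: runs of identical values are exactly the redundancy a lossless coder removes well.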
    Click Here to View Full Article