HP is the premier source for computing services, products and solutions. Responding to customers' requirements for quality and reliability at aggressive prices, HP offers performance-packed products and comprehensive services.

ACM TechNews is intended as an objective news digest for busy IT Professionals. Views expressed are not necessarily those of either HP or ACM.

To send comments, please write to [email protected].

Volume 5, Issue 525:  Monday, July 28, 2003

  • "Totaling Up the Bill for Spam"
    New York Times (07/28/03) P. C1; Hansell, Saul

    Measuring the cost of spam, which takes into account such factors as lost productivity and wasted time, is an imprecise science, and estimates of its total toll vary: Ferris Research reckons that spam will cost the United States $10 billion in 2003, while Nucleus Research pegs the U.S. cost at around $87 billion. Nucleus Research director Rebecca Wettermann says the average worker spends 1.4 percent of his or her productive time dealing with spam, while Wharton School marketing professor Peter S. Fader claims that spam is actually helping the email infrastructure mature, spurring developers to deploy computers and networks that will ultimately become essential to processing valid email. Less optimistic is Indiana University's Brian D. Voss, who notes that spam can incur additional costs beyond lost productivity, such as the money spent to build an effective spam blocker, not to mention fees for legal services to avoid stepping on spammers' First Amendment rights. Though spam filtering software is relatively cheap, Nortel security architect Chris Lewis estimates that the amount of spam that still gets through can add up to $1 million in lost productivity for his company. He reckons deleting a spam message takes an average of 5 to 10 seconds, but more time is consumed by clever messages that are not obviously spam. Even worse, a senior manager who receives spam could make dealing with it a company-wide priority, which means even more lost time. MCI, meanwhile, is losing money in its effort to handle customer complaints about spam and evict spammers from its network, especially when the jettisoned emailers refuse to pay their bills. Also difficult to measure are the losses sustained by email users defrauded by spammers, as well as instances in which legitimate email is mistaken for spam.
    Click Here to View Full Article
    (Access to this site is free; however, first-time visitors must register.)
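    The article's per-message figures lend themselves to a back-of-envelope productivity estimate. The sketch below uses the 5-to-10-second deletion time Lewis cites; the headcount, spam volume, and wage figures are illustrative assumptions, not numbers from the article.

    ```python
    # Rough annual cost of spam-deletion time for a company, following the
    # article's estimate of 5-10 seconds to delete each message. Employee
    # count, daily spam volume, and hourly wage below are invented examples.
    def annual_spam_cost(employees, spam_per_day, seconds_per_msg,
                         hourly_wage, workdays=250):
        hours_lost = employees * spam_per_day * seconds_per_msg / 3600 * workdays
        return hours_lost * hourly_wage

    # e.g. 10,000 employees, 20 spam messages a day, 7.5 s each, $30/hour
    cost = annual_spam_cost(10_000, 20, 7.5, 30)
    print(f"${cost:,.0f} per year")  # on the order of a few million dollars
    ```

    Even with modest assumptions, the total lands in the millions, which is consistent with Lewis's $1 million figure for Nortel.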

  • "Scientists Say 'Nay' to Computerized Voting"
    Baltimore Sun (07/27/03); Shane, Scott

    Computerized voting machines continue their onward march in the face of growing dissent from computer experts, who warn that any computer system used to collect and count ballots needs to be backed up by a paper trail. Most recently, a study from Johns Hopkins and Rice universities found serious vulnerabilities in Diebold Election Systems' touch-screen machines. Maryland, home state of Johns Hopkins, has a contract for 11,000 Diebold touch-screen voting stations for $55.6 million, which state election administrator Linda H. Lamone said she had "absolute confidence" in despite the report. However, in 2001 the State Board of Elections' five-member assessment panel decided 4 to 1 that touch-screen voting technology was not tested well enough, according to Baltimore County IT director Tom Iler, who served on the panel. Additionally, the MIT/CalTech-sponsored Voting Technology Project found that paper ballots counted by hand were the most reliable system, followed by optical scanning machines reading ink marks, and lever machines. Computerized systems came in second to last, just ahead of punch-card systems of the type used in Florida's notorious 2000 presidential election. Touch-screen voting machines have the advantages of supporting multiple languages for non-English speakers and audio capabilities for blind voters. A coalition of over 900 computer experts, headed by Stanford University computer scientist David L. Dill, is not specifically against electronic voting, but is pushing for paper receipts that can be used to verify results. Peter Neumann, principal scientist at SRI International's Computer Science Laboratory, says, "The real problem is that the federal election system standards stink. They allow totally unsecure voting systems to be certified."
    Click Here to View Full Article
    To read more about e-voting concerns, visit http://www.acm.org/usacm/Issues/EVoting.htm
    Also, the August 2003 issue of Communications of the ACM features an e-voting commentary by Dill, Bruce Schneier, and Barbara Simons, co-chair of ACM's U.S. Public Policy Committee.

  • "Are Military Computers Safe?"
    IDG News Service (07/24/03); Gross, Grant

    At a hearing to gauge the Department of Defense's cybersecurity plans, witnesses such as Purdue University's Eugene Spafford warned the House Armed Services Committee's Subcommittee on Terrorism, Unconventional Threats, and Capabilities that the security of military computer systems could be compromised by an overreliance on commercial software and outsourcing. Spafford told the subcommittee on July 24 that off-the-shelf applications have saved money, but testified that the reliability and durability of such applications cannot be sustained against cyberattacks. He added that too many military systems support a homogeneous software environment, which allows new security flaws to resonate throughout the entire network. Spafford argued that this may force system operators to apply up to five security patches a week for every system they coordinate, a situation he called "unacceptable." Microsoft chief security strategist Scott Charney countered that homogeneous software environments boast easier training and manageability, and allow security vulnerabilities to be patched rapidly. Another worry of Spafford's was the DoD using foreign workers to code the software deployed throughout military systems; Spafford attested that "An increasing amount of this software is being written by individuals we would not allow into the environments where it's operating." Robert Dacey of the General Accounting Office noted that the DoD is concerned about how much time is taken up by patch deployment, worker training, and the distribution of security policies, as well as inadequate policy testing and the need to promote the use of authentication certificates. Other concerns aired at the hearing focused on whether all U.S. military computer systems could be effectively knocked out, and the existence of cyberterrorism training camps.

  • "ACM's SIGGRAPH Addresses Immersive Technology"
    Videography (07/03); Zappier, Alicia

    SIGGRAPH conference chairman Alyn Rockwood of the Colorado School of Mines reports that this year's event taking place this week in San Diego features new sessions on immersive technology, which "expands the experience beyond the standard visual in one direction, such as wearing optics that allow full range of motion or using haptics for touch and dynamic directional sound." A day-long course taught by Spitz's Ed Lantz will focus on how immersive theater technologies and graphics production are being used for hemispherical screens. Lantz explains that such technology is an offshoot of military simulation, and points out that government video professionals should attend ACM's SIGGRAPH because many military technologies embrace components widely used in the entertainment industry. A half-day session organized by Hank Kaczmarski of the University of Illinois' Beckman Institute will revolve around the use of commodity clusters for virtual reality through the presentation of hardware expertise and software tools. Kaczmarski explains that the session is geared toward "anyone who wants to visualize large data sets or create large visualization displays using relatively inexpensive technologies," as well as people who want to develop command and control facilities, and federal and military professionals who want to experiment with augmented reality. SIGGRAPH 03 is also featuring the Academic Village pavilion, where attendees will be able to study the latest research from top U.S. universities, and where these institutions' programmers, engineers, scientists, animators, designers, and artists can showcase and discuss their work. SIGGRAPH is expected to draw approximately 25,000 attendees.
    For more information on SIGGRAPH, visit the conference Web site.

  • "Signs of Life in Silicon Valley"
    Wired News (07/28/03); Glasner, Joanna

    Silicon Valley-based job recruiters and search firms are experiencing a rise in business, indicative of growing confidence among regional employers that the economy may be starting to bounce back. Technology Search recruiter Alan Hattman reports that his company has recently found work for people in such areas as network security, software engineering, file systems development, interface development, and kernel and device driver engineering, and he adds that demand is high for Linux operations and object-oriented programming specialists. Complimate CEO Edward Doyle observes that area technology companies are gradually increasing their research and development budgets, which is usually the first step toward a recovery of the job market. Doyle's argument was strengthened by a July 24 announcement that Microsoft intends to boost its global workforce by as many as 5,000 jobs and raise its fiscal 2003 R&D budget by up to 8 percent. However, many recruiters note that entry-level candidates are not qualified for most available jobs; "[Employers are] really more interested in people who can hit the ground running so they don't have to invest a lot of time into their training," explains Taos technical recruiter Misha Castro. Furthermore, the increase in business for IT recruiters has not been accompanied by a halt in layoffs. Foxhunt CEO Mary Voss attributes the slight surge in demand for recruiters to the wide availability of qualified talent, which is a result of the downturn. The Information Technology Association of America reported in May that the size of the IT workforce has been slowly but steadily growing.

  • "Everything Is Watching You"
    Salon.com (07/24/03); Manjoo, Farhad

    Robotics scientists have come full circle on the best way to get a computer to understand the physical world around it, from giving a robot "eyes" to direct rays of light into a "mental image" of its surroundings, to having objects in a room identify themselves to a robot. But the new approach to having machines emulate human beings is also being hailed as a way to revolutionize retail sales in that manufacturers could embed their products with identifying technology, which would allow companies to collect information on buyers. Researchers at MIT are working to create an inexpensive, industry-standard product-tagging system using radio frequency identification (RFID) technology. However, the RFID technology is generating some concern from consumer privacy advocates who believe companies will use it to monitor every aspect of the lives of consumers, in real time. A published report in the trade publication Smart Labels Analyst reveals that in a Tesco store in Cambridge, England, a surveillance camera focused on shoppers every time they removed a package of tagged Gillette razors from an RFID "smart shelf." Retailers see electronic tagging as offering greater benefits of controlling inventory and the supply chain, and add that RFID technology could very easily be turned off once the product leaves a store. The technology could allow shopping carts to calculate the nutritional value of food placed in them and automatically check shoppers out of stores, enable packages to sort themselves for recycling, allow products to alert buyers when they are recalled, and enable washing machines to tell users that colors are being washed with whites, among other things. RFID technology would create an intelligent network of objects all around us, an "Internet of things," explains MIT's Auto-ID Center.

  • "A Gadget Geek's Dream Come True: Punch 'Print' for Anything You Want"
    Small Times (07/25/03); Pescovitz, David

    Desktop manufacturing could allow people to construct new devices on-demand, using 3D printers that build mechanical and electronic parts out of organic polymers. If someone's blender broke, for instance, they could simply look up the design for a blender on the Internet and press print, and their desktop 3D printer would go to work building a new one. The technical foundations for this scenario are being worked out at the University of California, Berkeley, where organic electronics, 3D printers, and polymer actuators are being fused. Product designers already commonly use Fused Deposition Modeling machines to build passive functional parts out of ABS plastic. UC Berkeley's organic electronics researchers are working on ink containing gold nanocrystals that allow electronic circuits to be printed on flexible substrates. Combined with 3D printers, electronics could be built into a device's plastic chassis. A third technology, electromechanical actuators, would enable these devices to be turned on and off. The 3D printer could embed electroactive polymers that generate a small amount of voltage when flexed, such as when pressed on a button or in a switch. The UC Berkeley researchers are also starting a library of basic component designs for grippers, joints, and transmission systems that can be used in future devices manufactured on demand.

  • "Computer Language Translation System Romances the Rosetta Stone"
    USC Information Sciences Institute (07/24/03)

    University of Southern California computer researcher Franz Josef Och has created a computer language translation system that uses statistical models to find the most probable translation for given inputs. "Instead of telling the computer how to translate, we let it figure it out by itself," Och explains. The first step is to feed the system with a "Rosetta Stone" of foreign-language texts and their English equivalents, which the computer uses to refine the criteria of a statistical model of the translation process; to translate new text, the system follows the model to determine the most likely English version of the foreign input. This breakthrough would not have been possible without technological advancements allowing computers to use phrases, rather than words, as basic language units, and the use of multiple English translations to enable the computer to more accurately score its renderings. Och notes that most of the elements the statistical approach encompasses are independent of language. Och's system proved its superiority over other Arabic-to-English and Chinese-to-English machine translation systems--both commercial and experimental--in the 2003 Benchmark Tests held this past May and June by the National Institute of Standards and Technology. "Give me enough parallel data, and you can have a translation system for any two languages in a matter of hours," Och declared after his system scored highest. The system was also put through its paces in June when teams of U.S. researchers competed to build machine translation tools to accommodate Hindi texts in an event hosted by the Defense Advanced Research Projects Agency.
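    The "most probable translation" idea the article describes is usually framed as a noisy-channel model: pick the English rendering e that maximizes P(f|e) * P(e), where P(f|e) is learned from the parallel "Rosetta Stone" texts and P(e) from an English language model. The toy sketch below illustrates only the scoring step; the probability tables are invented for illustration, not taken from Och's system.

    ```python
    import math

    # Toy phrase tables, invented for illustration. A real system would
    # estimate these from millions of sentence pairs.
    translation_logprob = {          # log P(foreign phrase | english phrase)
        ("casa", "house"): math.log(0.7),
        ("casa", "home"): math.log(0.3),
    }
    language_logprob = {             # log P(english phrase), language model
        "house": math.log(0.4),
        "home": math.log(0.6),
    }

    def best_translation(foreign, candidates):
        """Score each candidate by log P(f|e) + log P(e), keep the best."""
        return max(candidates,
                   key=lambda e: translation_logprob[(foreign, e)]
                                 + language_logprob[e])

    print(best_translation("casa", ["house", "home"]))  # prints "house"
    ```

    Note how the two models trade off: "home" is the more probable English phrase on its own, but the translation model's stronger preference for "house" wins out, which is exactly the balancing act the statistical approach automates.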

  • "Tech Giants Team on Secure-Computing Standards"
    TechNewsWorld (07/25/03); Lyman, Jay

    IBM, Hewlett-Packard, and Sun Microsystems have announced a joint effort with smaller security companies such as RSA Security, Tripwire, and InstallShield to develop non-proprietary secure-computing standards that are easier to integrate than proprietary standards. "The whole concept of security, by the black-box approach where people can't get under the hood, leads to insecurity in a lot of ways," notes Forrester research director Michael Rasmussen, who explains that multi-vendor consensus on application program interfaces (APIs), for instance, and the integration of disparate security techniques and measures will improve industry-wide security robustness. IBM, HP, and Sun have a competitive relationship, but Sun's Mark Thacker says it would serve the interests of all three vendors to shield the computer systems their clients use. He also observes that the vendors share customers with each other. Aberdeen Group research director Eric Hemmendinger asserts that standards are key to establishing interoperability between vendors, citing technologies such as Secure Sockets Layer (SSL), X.509, and IPsec as just a few examples. Hemmendinger says that supporters and opponents of open-source security have both made strong arguments, but choosing a proprietary or non-proprietary security approach will be situation-specific and determined by the potential of making money selling open-source products. Rasmussen argues that the participation of Tripwire and other small vendors in the initiative demonstrates that the companies wish to give Symantec, Computer Associates, and other major security firms a run for their money.

  • "WiFi Is Open, Free, and Vulnerable to Hackers"
    Washington Post (07/27/03) P. A1; Krim, Jonathan

    Unprotected Wi-Fi access is epidemic, leaves the door open to data thieves, and could serve as an anonymous launching pad for hack attacks, according to experts. The problem will only grow as Wi-Fi spreads, especially as hardware vendors continue to refuse to change insecure default settings. The current standard protection, Wired Equivalent Privacy, provides basic encryption that can be defeated, but in most cases it is never even turned on by complacent users. Vendors, for their part, say users do not want added complexity, but are likely shirking the increased support costs of default security, which is time-consuming to set up and can cause network problems. T-Mobile, which operates Wi-Fi networks in Starbucks Coffee shops, says it does not employ encryption on its network despite the threat to users and instead encourages people to put up firewalls. International Data says the number of U.S. households regularly using Wi-Fi networks will double this year, up from 3.1 million last year. Over the last few years, National Defense University instructor Lt. Col. Clifton H. Poole has taken an informal, ongoing census of wireless networks along his 23-mile route from home to work in suburban Washington, D.C. Although his survey shows the percentage of protected networks has increased over the last few years along with the overall number of access points, still about 40 percent are left completely vulnerable; Poole and other computer security experts warn this type of open access could be exploited by criminals who could unleash devastating viruses or hack into systems without having to cover their Internet point of origin.
    Click Here to View Full Article
    (Access to this site is free; however, first-time visitors must register.)

  • "Privacy: For Every Attack, a Defense"
    Business Week (07/22/03); Black, Jane

    Americans' privacy is under threat from government and corporate abuse of technology, but at the same time is being protected by groups savvy to the danger. E-Loan CEO Chris Larsen is one of those protecting privacy, and has bankrolled a California ballot initiative that would prohibit financial services firms in that state from sharing customers' personal data without their consent. Experts say increased awareness has spurred important changes in the area of business marketing, yet government surveillance and online threats continue to grow. Internet expert and author David Weinberger says one of the most serious threats to privacy is a convergence of three computer technologies: Digital IDs, digital rights management (DRM), and Microsoft's Trusted Computing program. In conjunction, these three technologies would not only stop a consumer from copying music without authorization, but could conceivably identify that person to the music industry. Weinberger says that, despite earlier expectations, the Internet is becoming more closely monitored and restricted than the real world. The FTC's Do Not Call list is one example of the type of institutionalized privacy protections needed in other areas, say privacy advocates. Still, businesses continue to collect and share information that could be used to profile high-risk insurance accounts, for example. Because of the Sept. 11 attacks, Americans are willing to give the government significantly more leeway in tracking citizens, according to a Harris poll conducted in February; that survey found 44 percent of people accepted government monitoring of their email and cell phone conversations in order to combat terrorism, down from 54 percent who agreed immediately after the 2001 attacks.
    Click Here to View Full Article

  • "The Tortoise, the Hare and the Internet"
    Toronto Star (07/28/03); Geist, Michael

    As the Internet developed rapidly in the mid 1990s, governments around the world generally agreed to leave Internet regulation to the private sector in the belief that doing so would allow rapid adjustment and accommodation of changing needs. However, the experience so far has shown that the private sector and government can be equally ineffective at crafting informed, responsive Internet policy. The Internet Corporation for Assigned Names and Numbers, the California nonprofit given governance over Internet matters, has been repeatedly overhauled since its inception five years ago and has had its by-laws repeatedly amended. Its continuing evolution is unprecedented, but shows that such systems can be riddled with complications despite not directly involving traditional government bureaucracy. Governments had previously decided that the obvious institution for dealing with international Internet matters, the United Nations' International Telecommunication Union, would have been too stodgy. In the United States, the effect of quick government technology policy can be seen in the history of the 1998 Digital Millennium Copyright Act, which has stirred controversy by jailing a Russian software programmer and stifling university research. In Canada, the slower approach to digital copyright has already yielded a balanced Canadian Supreme Court decision and nationwide consensus-gathering efforts.
    Click Here to View Full Article

  • "US Passports to Carry Digitally Signed Images"
    New Scientist (07/23/03); Knight, Will

    The U.S. government plans to issue "smart" passports, featuring embedded microchips that store a compressed image of the owner's face, to U.S. citizens in October 2004. Designed to prevent tampering, the digital passports will include cryptographically signed digital images to guarantee their authenticity. Although civil liberties groups have expressed concerns about the government using the new passports as a monitoring tool, Frank Moss, deputy assistant secretary for Passport Services at the U.S. Department of State, maintains that information will only be forwarded to centralized databases if there is a query over the authenticity of a passport. What is more, Moss says the passports will only include basic passport information. Some technical experts have also warned that smart passports do not guarantee safety, adding that the new passports will only help to identify known suspects or people who have forged passports. Richard Clayton, a hardware security expert at Cambridge University in the United Kingdom, adds that everyone involved in the Sept. 11 terrorist attacks had a photo ID. Meanwhile, the European Union plans to spend 140 million euros to develop an interoperable biometric system, which would enable passports to carry fingerprint and iris scan biometric data. Such biometric information would be much easier to cross-reference than photographs of an individual with different hair styles and facial hair.
    Click Here to View Full Article

  • "State Laws Fail to Suppress Spam"
    Stateline.org (07/24/03); Scavongelli, Sara

    State laws have not been able to rein in spam, which continues to grow at astronomical rates. Of the 35 states that have laws regulating spam, Delaware may have the toughest statute, in that residents are required to sign up to receive unsolicited commercial email. However, spam is flourishing in Delaware and the state has not prosecuted any cases against spammers. Ray Everett-Church, counsel for the Coalition Against Unsolicited Commercial E-mail, which supports legislation to curb spam, says states generally adopt similar tactics (opt-out rules, labeling requirements for advertising subject lines, and valid return addresses) in the hope that one strategy will work. "What most of the state statutes do is legitimize spam as long as you don't also commit fraud," explains David Sorkin, a professor who teaches a spam law course at the John Marshall Law School in Chicago. A No-Spam registry, which would be similar to the Do Not Call list monitored by the FTC, is the strategy of choice for some antispam advocates, but Microsoft and the high-tech trade group AeA are among the parties that oppose such an idea. Microsoft is concerned that bulk spammers could find a way to get the list, and AeA says a No-Spam registry would hurt legitimate marketers. Some antispam advocates argue that a No-Spam registry, an idea the Philadelphia-based ePrivacy Group says 74 percent of consumers support, is the only workable solution on the national level.

  • "Engineers' Forecasts for Technology"
    Futurist (07/03) Vol. 37, No. 4, P. 8

    The 2003 technology survey of Institute of Electrical and Electronics Engineers (IEEE) fellows asked participants about what would be significant technology issues over the next 10 years. Two-thirds predicted that open-source computing would have a large impact on the computer sector. According to one respondent, open-source computing will allow the most gifted scientists to improve people's lives despite the effects of a sluggish economy. The engineers also forecast that an "information appliance" would take the place of telephones and Internet access, and perhaps also replace the functions of PDAs, televisions, and PCs. Individual appliances would be rendered obsolete as a result. Another trend predicted by the engineers is that high-speed Internet access would become commonplace in industrialized nations within 10 years, spurred by such services as videoconferencing and video on demand. The respondents also believed that Moore's Law, the theory that transistor capacity doubles every 18 months, would still be valid for at least five more years, but could be challenged by such upcoming technologies as nanotech, photonics, biological-centric computing, and quantum computing.
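    The doubling rule cited in the survey compounds quickly: a quick calculation shows what "valid for at least five more years" implies for transistor capacity.

    ```python
    # Moore's Law as stated in the survey: transistor capacity doubles
    # every 18 months. Over 60 months (five years) that compounds to
    # 2 ** (60 / 18), roughly a tenfold increase.
    def growth_factor(months, doubling_period=18):
        return 2 ** (months / doubling_period)

    print(round(growth_factor(60), 1))  # prints 10.1
    ```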

  • "Sensitive Sensors"
    CIO (07/15/03) Vol. 16, No. 19, P. 74; Edwards, John

    A pair of scientists at the State University of New York at Buffalo's mechanical and aerospace engineering department has developed a powerful new magnetic sensor for PC hard drives. The nanoscale sensor creates large electrical resistance changes, which are used by hard drives to distinguish between active and dormant bits on magnetic material. Associate professor Harsh Deep Chopra and professor Susan Hua placed tiny filaments of nickel between two electrodes to create the sensor's magnetic fields. The sensors are enhanced by a phenomenon called BMR (ballistic magnetoresistance), which takes place when electrons can travel through a constricted conduit without spreading out. "Using our manufacturing method, we are able to get a change in resistance of up to 100,000 percent" over existing systems, says Chopra. He envisions the new approach could increase disk storage densities from today's 10 to 30 gigabits per square inch up to terabits per square inch. And since the sensor can function at room temperature, the technology could be readily added to current hard drive systems without much extra cost, says Chopra. Such technology could one day allow computers to have as much storage capacity as a supercomputer, but be small enough to wear on the wrist, he says.

  • "The Sky's the Limit"
    Fast Company (07/03) No. 72, P. 90; Fishman, Charles

    Global positioning system (GPS) technology, which is the basis of "location-aware" products, has progressed far beyond the precise military targeting applications it was originally developed for, and is finding use in both commercial and non-commercial sectors. GPS users rely on 28 satellites managed and maintained by the Air Force--at least 24 are needed to constantly cover the entire earth, while at least three satellites are required to triangulate one earthly position with astonishing accuracy. That accuracy is provided by each satellite's knowledge of its exact orbital position and the time. The $9 billion GPS satellite network supports a $4 billion a year industry, according to Frost & Sullivan; the U.S. government is spurring growth by setting a late 2005 deadline for all U.S.-sold cell phones to be location-aware so that 911 operators can accurately pinpoint callers in distress. GPS enables trucking companies such as Michigan-based Con-Way NOW to keep track of all their trucks in order to facilitate "time-definite delivery." Con-Way NOW customer-service representatives can retrieve a wealth of data about trucks in operation, including current latitude, longitude, map location, speed, and direction, via GPS. Archeologists are using GPS to trace their way back to hard-to-find excavation sites, while geologists employ the technology to measure the movements of tectonic plates. Meanwhile, the telephone system, Wall Street, and many other businesses value the precise time measurement of GPS even more highly than its location-finding capabilities.
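    The triangulation step the article mentions can be sketched in two dimensions: each satellite's known position and measured range defines a circle, and intersecting the circles fixes the receiver's position. The coordinates and ranges below are invented for illustration; real GPS solves in three dimensions and also for receiver clock error, which is why four satellites are typically used in practice.

    ```python
    # Minimal 2-D trilateration sketch: three satellites at known positions,
    # each reporting its distance (range) to the receiver.
    def trilaterate(sats, ranges):
        """Solve for (x, y) from three (xi, yi) positions and ranges ri."""
        (x1, y1), (x2, y2), (x3, y3) = sats
        r1, r2, r3 = ranges
        # Subtracting the circle equations pairwise cancels the quadratic
        # terms, leaving a linear 2x2 system a*x + b*y = c.
        a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
        c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
        a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
        c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
        det = a1 * b2 - a2 * b1
        return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

    # Receiver actually at (3, 4); ranges computed to match.
    sats = [(0, 0), (10, 0), (0, 10)]
    ranges = [5.0, ((3 - 10)**2 + 4**2) ** 0.5, (3**2 + (4 - 10)**2) ** 0.5]
    print(trilaterate(sats, ranges))  # prints (3.0, 4.0)
    ```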

  • "Antennas Get Smart"
    Scientific American (07/03) Vol. 289, No. 1, P. 48; Cooper, Martin

    Adaptive antenna arrays aim to lower the cost and upgrade the quality of wireless communications by transmitting their signals directly to mobile users and enhancing connections with individual cell phones via signal manipulation, while keeping interference from other users to a minimum. Moving the radio beam electronically is less of a headache than physically redirecting the array to point at the intended recipients; one method, beam switching, can be easily adapted to wireless networks, but is susceptible to multipath reflections and signal deterioration along the edge of the beam. An adaptive array's key component is a digital processor that can tweak incoming signals, selectively amplifying desired transmissions while excluding all others. The processor achieves selective signal transmission and reception by solving a series of simultaneous equations, while maintaining a link to users in motion requires the processor to solve the equations repeatedly using continuously updated data from the array. The array can locate individual signals more precisely and boost signal gain with additional antennas. The most sophisticated arrays can harness multipath reflections by incorporating them into the equations the processor must solve, increasing the accuracy of the signal's path and the cell phone user's location even further. Wireless networks that use adaptive antenna arrays rely on fewer base stations, which reduces network deployment and maintenance costs; and they support roughly six times as many users for voice communications and up to 40 times as many for data transmission, adding up to less interference and better service. Wireless data networks can also benefit from adaptive arrays, which can send and receive more data to users in a given block of frequency spectrum. This capability could make smart antennas an anchoring technology for the wireless Internet.
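    The simultaneous equations the article alludes to can be made concrete with the smallest possible case: a two-element array has two complex weights, so the processor can impose two constraints, for example unit gain toward a desired user and a null toward an interferer. The angles, half-wavelength spacing, and two-antenna geometry below are illustrative assumptions; real arrays use many more elements and re-solve continuously as users move.

    ```python
    import cmath
    import math

    def steering_vector(angle_deg, spacing=0.5):
        """Response of a 2-element array (spacing in wavelengths) to a
        plane wave arriving angle_deg off broadside."""
        phase = 2 * math.pi * spacing * math.sin(math.radians(angle_deg))
        return (1 + 0j, cmath.exp(1j * phase))

    def null_steering_weights(desired_deg, interferer_deg):
        """Solve the 2x2 system w.a_d = 1, w.a_i = 0 for complex weights."""
        a_d = steering_vector(desired_deg)
        a_i = steering_vector(interferer_deg)
        det = a_d[0] * a_i[1] - a_d[1] * a_i[0]
        return (a_i[1] / det, -a_i[0] / det)   # Cramer's rule

    def response(weights, angle_deg):
        """Array gain toward a given direction under the chosen weights."""
        a = steering_vector(angle_deg)
        return weights[0] * a[0] + weights[1] * a[1]

    w = null_steering_weights(0, 40)
    print(abs(response(w, 0)), abs(response(w, 40)))  # ~1 toward user, ~0 toward interferer
    ```

    With more antennas the same idea scales to more constraints (more users nulled or served), which is where the article's capacity gains come from.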
