Association for Computing Machinery
Timely Topics for IT Professionals

About ACM TechNews

ACM TechNews is published every week on Monday, Wednesday, and Friday.


ACM TechNews is intended as an objective news digest for busy IT professionals. Views expressed are not necessarily those of either AutoChoice Advisor or ACM. To send comments, please write to technews@hq.acm.org.
Volume 6, Issue 631:  Wednesday, April 14, 2004

  • "Sneak Peeks at Tomorrow's Office"
    Business Week (04/13/04); Kharif, Olga

    New communications tools and mobile devices have improved office productivity so dramatically that many companies do not expect drastic new innovations, but new office technology is already working in laboratories, and technological advances will make it affordable in the near future. At that time, perhaps more people will have a 4,000-pixel screen wrapping around their desk, like human-machine interaction researcher Greg Welch does at the University of North Carolina at Chapel Hill. Welch can view numerous documents simultaneously on the huge display, which is about as wide as three 17-inch monitors. He is working on 3D projection technology that will be useful in conference presentations. Besides lower costs, new office technology will be pushed forward by an aging workforce, growing job complexity, and a decentralized office concept. Mark Greiner of office furniture firm Steelcase says, "We're moving into an era when we'll have more technology, but it won't be as apparent." Ideo designer Fred Dust says offices themselves are becoming less office-like, with larger and more comfortable common areas for workers who increasingly have to collaborate to finish complex projects. Greiner says RFID-equipped badges could automatically boot up an employee's PC upon arrival at the office front door, while office lighting could signal to coworkers whether someone is available. Computer software will also aid workers, summarizing conference calls and allowing users to retrieve files using criteria other than the file name, says Sandia National Laboratories technical staff member Chris Forsythe, who is currently using some of the prototype technology. Microsoft is also working to make office life easier by improving videoconferencing and desktop security. Instead of using passwords to secure computers, users may be required to click on different points in a familiar image in sequence, for example.
    Click Here to View Full Article

  • "Machine Rage Is Dead...Long Live Emotional Computing"
    Observer (UK) (04/11/04); McKie, Robin

    Emotionless decision-making is not really so effective, according to British researchers who are building computers sensitive to human feelings. "The cold, unemotional Mr. Spock on Star Trek simply could not have evolved," explains Salford University artificial intelligence expert Ruth Aylett. Humans depend on emotions to communicate and make decisions with more nuance, and allowing computers to tap into those emotions and respond will make them more useful. Automated voice response systems could sense a caller's frustration and patch in a live customer service agent before the caller hangs up, or a computer game could slow its tempo and play soothing music before an agitated player loses their temper. These types of solutions are being developed as part of the European Union's Humaine program, meant to give the region a lead in emotional computing. Humaine projects include the Face Station software that detects user emotion based on Web cam images, humanoid robots used to teach autistic children how to read other people's emotions, and a virtual reality system that helps schoolchildren learn not to bully others. Much of the program's work is based on new psychological research that links emotion and physiological responses, such as quickened heartbeats, short breaths, blood pressure, and even patterns in keyboard use. Measuring these human responses to emotion is critical to enabling more useful machines. Project collaborator Dylan Evans says that while the computers themselves may not feel emotions, they will be able to sense and respond to emotions in the same way humans do. Evans is currently investigating whether emotionally sensitive robots would be effective in the care of the elderly.
    Click Here to View Full Article

  • "Australian Societies Adopt ACM/IEEE-CS Code of Ethics"
    Age (AU) (04/13/04); Adams, David

    Two leading software engineering groups in Australia have adopted the Software Engineering Code of Ethics and Professional Practice, an internationally recognized code of ethics developed by the ACM and IEEE-CS. By embracing the code, the Australian Computer Society (ACS) and Engineers Australia have agreed to follow eight principles, which include acting in the public interest and in the best interest of clients, making products of the highest standard, and being fair to and supportive of colleagues. The software engineering groups view the international standard as supplementing their existing code, with ACS Chief Executive Dennis Furini saying the new code reinforces standards that are expected in Australia. Professor Donald Gotterbarn of the Software Engineering Ethics Research Institute, part of the Department of Computer and Information Sciences at East Tennessee State University, says the adoption of the code by software engineers in Australia is "a vital piece in the jigsaw puzzle for the international professional software engineering community." Gotterbarn headed the taskforce that developed the Code.
    Click Here to View Full Article

    To read the ACM/IEEE-CS Code of Ethics and Professional Practices, visit http://www.acm.org/serving/se/code.htm.

  • "Concern Grows Over Browser Security"
    CNet (04/12/04); Reardon, Marguerite

    The Computing Technology Industry Association's second annual report on IT security and the work force indicates 36.8 percent of respondents experienced one or more browser-based attacks during the last six months, up from 25 percent the year before. Browser-based attacks occur when users view a Web page and hidden code is used to compromise security. Sometimes all that happens is the browser crashes, but hackers can also use browser attacks to steal information. Emails are often used as carriers for the attacks; the emails contain a link to a malicious Web server, and the attack is generally launched when the user clicks on the link. Because most firewall products do not inspect outgoing traffic, this type of attack often goes unblocked when the user unwittingly initiates it. Products are available to monitor and control corporate Web usage and some firewall vendors have added protections, but these will not eliminate the problem, according to association director Randall Palm. Palm says, "Browser-based attacks are a logical evolution. The better we get at stopping attacks, the more creative hackers get at writing new ones." Browser vendors are trying to add protections as well, but companies still consider viruses and worms to be a bigger security risk. However, there are fewer worm and virus attacks than a year ago, the survey says, and network intrusion issues are also less common. The association reports that 95.5 percent of organizations use antivirus technology, with firewalls and proxy servers in use by 90.8 percent of respondents.
    Click Here to View Full Article

  • "Gopher: Underground Technology"
    Wired News (04/12/04); Sjoberg, Lore

    Before the Web came along and stole its thunder, the "gopher" protocol from the University of Minnesota allowed nontechnical users to view Internet data in a standard format and through a simple visual interface. Named after the university's mascot and created in 1992, gopher has a committed following. There are more than 250 active gopher servers around the world, with nearly half of these sites associated with American universities and colleges, according to Floodgap.com. The largest actively maintained gopher server is Quux.org, belonging to 24-year-old Kansas programmer John Goerzen, who also maintains the original gopher code from the University of Minnesota. He says one of the greatest appeals of gopher is its simplicity; an experienced network programmer can create a robust gopher server in a matter of hours. Quux.org is also about preserving the esoteric, uncommercialized information that exists only in gopherspace before it disappears, he says. While maintaining gopher is important in terms of preserving Internet history, some developers are also using it to push new boundaries forward. Port-a-Goph is a gopher browser under development for the Palm OS, something that gopher enthusiast Cameron Kaiser created and set up on his own Palm handheld. Goerzen says the ease of developing Port-a-Goph contrasts with today's handheld Web browsers, many of which still cannot properly display Web sites. The Apache Web server also recently received a gopher module, allowing it to serve up both gopher and Web pages. Mozilla-based browsers, Netscape, and some Internet Explorer browsers also support the gopher protocol to some extent. Even Web users can access gopher pages through a proxy at Floodgap.com.
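    Goerzen's point about simplicity is easy to illustrate: a gopher menu is nothing more than lines beginning with a one-character item type, followed by tab-separated fields. The short Python sketch below round-trips that menu format; the host name is a placeholder, not a real server, and the code is an illustration of RFC 1436's line layout rather than a full server.

    ```python
    # Minimal sketch of the gopher menu-line format: one type character,
    # then display string, selector, host, and port, separated by tabs.

    def make_menu_line(item_type, display, selector, host, port=70):
        """Build one gopher menu line, terminated with CRLF."""
        return f"{item_type}{display}\t{selector}\t{host}\t{port}\r\n"

    def parse_menu_line(line):
        """Split a gopher menu line back into its fields."""
        item_type, rest = line[0], line.rstrip("\r\n")[1:]
        display, selector, host, port = rest.split("\t")
        return {"type": item_type, "display": display,
                "selector": selector, "host": host, "port": int(port)}

    line = make_menu_line("1", "About this server", "/about", "gopher.example.org")
    print(parse_menu_line(line)["display"])  # -> About this server
    ```

    A complete session is barely more involved: the client sends one selector string plus CRLF, and the server answers with lines like these, which is why a working server can be written in an afternoon.
    
    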
    Click Here to View Full Article

  • "Radio Tags May Give Consumers More Power"
    SiliconValley.com (04/11/04); Gillmor, Dan

    Along with the privacy concerns and benefits to corporate business, radio-frequency identification (RFID) technology also promises greater amounts of useful information to consumers when and where they need it. In the future, people will be able to use handheld devices to scan product RFID tags, query a local server, and receive information specific to that product, according to University of Tokyo professor Ken Sakamura, who runs the Ubiquitous Networking Laboratory. The laboratory has created demonstration mock-ups, such as a bar where a "Ubiquitous Communicator" device pulls up a specific multimedia advertisement when passed near a whiskey bottle. The Web already provides tremendous amounts of product information from other users, critics, and companies. But handheld scanners--possibly integrated into mobile phones--coupled with pervasive wireless networks, would give consumers unprecedented amounts of product information on the spot. The Ubiquitous Networking Laboratory envisions several other scenarios, including RFID information about when grocery produce was harvested and whether it is organic, and whether pharmaceuticals will produce a toxic reaction when taken together. Microsoft is also working on consumer-oriented product ID applications that can be effected now using traditional bar codes. Using a handheld scanner linked wirelessly to a remote server, researcher Marc Smith is able to scan items in the grocery store and receive information collected from Web sites such as Google and Amazon.com. The Aura technology translates the scanned bar code into product descriptions, collects Web information on the product, then sends it back to the handheld device. More advanced RFID technology would also enable consumers to make annotations on products, further empowering buyers.
    Click Here to View Full Article

  • "A 'Free' Boost for Multimedia"
    IST Results (04/13/04)

    A free software project for multimedia production, backed by the European Commission, is gaining momentum and rivals the quality of proprietary offerings, according to proponents. AGNULA is basically GNU/Linux-based audio software for home musicians and audio enthusiasts. The AGNULA technology comes in two GNU/Linux flavors, the Debian packaging system (DeMuDi) and the Red Hat packaging system. Project manager Nicola Bernardini emphasizes AGNULA's free software distinction from open source technology, which he says does not provide the same clear and specific licensing policies. Free software addresses users' rights and freedoms in a way the term "open source" does not, he says. Several years ago, a group of dedicated GNU/Linux users who were also musicians and audiophiles came up with AGNULA as a way around the laborious recompilation of audio software for other operating systems. AGNULA supports access to hard-disk recorders and CD players, and provides mod suites, MIDI file players, and top-of-the-line software synthesis languages. Bernardini says AGNULA was not conceived solely as competition for Microsoft's Windows Media Player, but that Microsoft had targeted the multimedia market because it was open to standard monopolistic tactics. He points out that AGNULA dovetails with the current efforts by European regulators. AGNULA proponents will be evangelizing their project at open source developer events such as the upcoming Linux Audio Conference in Germany this May, and the group is actively looking for technology partnerships with like-minded development groups. Despite the cessation of European Commission funding this March, AGNULA will likely continue as a volunteer-based project and receive some technology and funding support from academic institutions, says Bernardini.
    Click Here to View Full Article

  • "Turning Robots Into a Well-Oiled Machine"
    Newswise (04/12/04)

    Researchers from three U.S. universities are working to create autonomous teams of robots that can assist at disaster sites. Emergency response personnel say controlling robots is a time-consuming task given the amount of data human operators have to deal with. If teams of robots could be controlled as one, then about a hundred robots could be deployed and controlled by just a few human operators. The University of Minnesota, University of Pennsylvania, and Caltech are each contributing their special expertise to the project, which is supported with $2.6 million from the National Science Foundation. Getting robots to work together as a team is difficult, not only because of the complexity but also because of the physical limitations of the robots themselves, says project leader Nikos Papanikolopoulos, who directs the University of Minnesota's Distributed Robotics Lab and developed the Scout robots used in the current project. Scouts are about the size and shape of the cardboard tube inside a roll of toilet paper, are made from off-the-shelf components, and can withstand a 100-foot toss or six-story drop. Equipped with a video camera, infrared detection, heat sensors, and other sensors, and led by a larger MegaScout, teams of these robots would be able to perform more complicated tasks than if working independently. The MegaScout, also developed at Minnesota, has a mechanical arm in addition to larger sensors, and can open doors or lift other Scout robots over obstacles. Human operators would send commands to the lead robot, and could operate about three teams of robots at one time. With robotics expertise coming from Minnesota, the project also benefits from control theory and robotic vision expertise at the University of Pennsylvania and exploration and mapping technology from Caltech.
    Click Here to View Full Article

  • "Robot Guided By Its Voice"
    Technology Research News (04/14/04); Patch, Kimberley

    The University of Toronto, which uses a robot to guide visitors through its Artificial Perception Lab, recently enhanced the machine with sound source localization. As the robot makes its prerecorded remarks about different parts of the facility, microphone arrays embedded in the walls determine the location of the robot to within seven centimeters. Researcher Parham Aarabi says the system is cheap and simple, though at the expense of some accuracy. The system requires just a two-second snippet of sound in order to determine the robot's location, which it then feeds back to the robot for more detailed navigation planning. Previously, the robot tour guide was plagued by navigational inaccuracies. The motorized robot also uses touch-sensitive whiskers to prevent collision with obstacles. When faced with an obstacle, the robot backs up, reorients itself, determines a new course, and stores the information for future reference. Aarabi says integrating the different components was the most difficult part about building the robot, especially processing the sound source localization data in real time. The sound-based approach to navigation is markedly different from sight-based methods, which mimic human functions. Aarabi says future robots will incorporate many different navigational tools, including sound and sight. The University of Toronto is also developing intelligent software for the robot so that it can understand and respond to impromptu questions.
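    The microphone arrays presumably rely on time-difference-of-arrival estimation, a standard technique in sound source localization: the delay between two microphones' recordings constrains where the source can be. As an illustrative sketch only (toy pulse signals, not the lab's actual algorithm), the delay can be estimated as the lag that maximizes the cross-correlation of the two signals:

    ```python
    # Toy time-difference-of-arrival estimate between two microphones:
    # find the lag (in samples) at which the signals line up best.

    def corr_at_lag(a, b, lag):
        """Correlation of a[n] with b[n + lag] over the overlapping samples."""
        if lag >= 0:
            return sum(x * y for x, y in zip(a, b[lag:]))
        return sum(x * y for x, y in zip(a[-lag:], b))

    def estimate_delay(a, b, max_lag):
        """Lag at which b best matches a delayed copy of a."""
        return max(range(-max_lag, max_lag + 1),
                   key=lambda lag: corr_at_lag(a, b, lag))

    pulse = [0, 0, 1, 2, 1, 0, 0, 0]
    delayed = [0, 0, 0, 0, 1, 2, 1, 0]        # same pulse, two samples later
    print(estimate_delay(pulse, delayed, 3))  # -> 2
    ```

    With the delay known and the array geometry fixed in the walls, each microphone pair yields a curve of possible robot positions; intersecting several pairs pins the robot down to the few-centimeter accuracy the article describes.
    
    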
    Click Here to View Full Article

  • "Blind People to 'See' Color By Touch"
    Australian Broadcasting Corp. News (04/14/04); Catchpole, Heather

    Artur Rataj, from the Institute of Theoretical and Applied Computer Science at the Polish Academy of Sciences, has created computer software for translating color images into tactile form, allowing blind people to discern color information in images. Several techniques are used to translate images into forms blind people can understand, including using Braille dots in different densities and an expensive process that involves vacuum-treated plastic and 3D sculptures. Rataj improves on a third method, which uses tactile graphics made from raised lines and dashes. These had previously been only in black and white, but Rataj's software adds color information by presenting lines at different rotational degrees in order to differentiate color. Yellow is represented by vertical lines while horizontal lines denote the presence of blue. Colors that combine primaries, such as orange, are communicated by angling the lines between the two primary colors. The software communicates color intensity by placing the raised lines closer together. Rataj says the system will allow blind people to recognize images more quickly and that he is testing it now with users. Quantum Technology's Tim Connell, whose firm builds a tactile graphic printer, said people who have been blind for life find little meaning for color. Unless color carries some meaning, it is just an aesthetic property, he said. Previously, color images had been translated into tactile graphics by human interpreters, but never by a computer.
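    The mapping the article describes, hue to line angle and intensity to line spacing, can be sketched in a few lines of Python. The specific angles and spacings below are illustrative assumptions, not Rataj's actual parameters:

    ```python
    # Hypothetical color-to-texture mapping: two primaries set the line
    # angle (pure yellow vertical, pure blue horizontal, mixtures between),
    # and intensity sets how tightly the raised lines are packed.

    def color_to_texture(yellow, blue, intensity):
        """Map a yellow/blue mix (each 0..1) and an intensity (0..1)
        to a line angle in degrees and a line spacing in millimetres."""
        total = yellow + blue
        if total == 0:
            return None  # no color information to render
        # Pure yellow -> 90 degrees (vertical), pure blue -> 0 (horizontal);
        # mixtures fall proportionally in between.
        angle = 90.0 * yellow / total
        # Higher intensity -> lines packed closer together.
        spacing_mm = 5.0 - 4.0 * intensity  # 5 mm (faint) down to 1 mm (vivid)
        return angle, spacing_mm

    print(color_to_texture(0.5, 0.5, 1.0))  # a vivid mixture: 45-degree lines
    ```

    The appeal of such a scheme is that angle and spacing are independently distinguishable by touch, so hue and intensity can be read off the same patch of raised lines.
    
    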
    Click Here to View Full Article

  • "The Porous Internet and How to Defend It"
    E-Commerce Times (04/10/04); Millard, Elizabeth

    Network researchers say the open TCP/IP Internet protocols mean criminals have easy access to their targets, and that there is no simple way to change Internet design. Transmission Control Protocol/Internet Protocol (TCP/IP) was developed to be as open and transparent as possible. Internet designers had no idea the network would become so large and actively tried to lower barriers, not create them. As a result, Internet data packets are not easily traceable if the sender wants to obscure their origin, and hackers can probe remote networks with impunity, says Columbia University computer science assistant professor Angelos Keromytis. AT&T Labs research fellow Steve Bellovin says many of his colleagues think TCP/IP is flawed, but he believes the technology receives undue blame for the current state of Internet security. Bellovin explains that roads and highways are not blamed for bank robberies--bank security takes the blame. Similarly, open Internet design should not be blamed for faulty Internet security; instead, local defenses for each private network need to be set up. And while open network protocols often facilitate security breaches, they also provide for easy patch application and scalable security management. Bellovin says the problem of network security will only grow in the future, with ubiquitous wireless and ad hoc networks. At that time, cryptography will play an even greater role. Changing the fundamental structure of the Internet is beyond the influence of any single institution, since the Internet is composed of so many stakeholders and effecting a fix would mean replacing so much software.
    Click Here to View Full Article

  • "Computers Learn to Understand Sefrican"
    Sunday Times (South Africa) (04/11/04); Moodie, Gill

    Researchers in South Africa are testing a voice-recognition system that is designed to understand the various languages and accents spoken by South Africans. Professor Justus Roux, director of the Research Unit for Experimental Phonology at the University of Stellenbosch, says the voice-recognition system at times has sounded natural in its responses to test subjects, but there also are times when the system was unable to understand the speaker. Roux is heading the team of computer scientists, electrical engineers, and linguists from across the country that has spent the past four years recording the languages and accents spoken by South Africans and transcribing telephone calls phonetically for the voice-recognition system. "Age, gender and the tempo--how fast or slow you say words--all have an influence, so your system must allow for lots of variations, not just accents," says Roux. The researchers still have to convert the speech-recognition system into a translation system. While voice-recognition systems are being used by the U.S. military and commercially in Europe, efforts to adapt the technology to the accents of South Africans have been unsuccessful. Ultimately, the voice-recognition system would enable South Africans to talk to machines connected to the Internet to conduct business with their bank, or to book flights or hotels. The voice-recognition system could even work with the electronic gateway the government is building to allow South Africans to access government services.
    Click Here to View Full Article

  • "Spaced Out on the Interplanetary Internet"
    TheFeature (04/09/04); Pescovitz, David

    NASA is working on space networking technologies that could produce significant benefits for Earth applications. The Interplanetary Internet was launched in 1998 with funding from the Defense Department, and signed on Internet pioneer Vinton Cerf as its research head. Cerf describes the goal of the project as the establishment of data "post offices" throughout the solar system. The Interplanetary Internet will not allow the type of instantaneous, two-way communication Earth users enjoy, says NASA Jet Propulsion Laboratory principal engineer Adrian Hooke. Light-speed messages take about 20 minutes to travel from Earth to Mars, for example. Hooke says the best scenario will be a standardized data protocol that will allow separate space-based networks to link together as needed. There is even a standards body for coordinating different space agencies' networking protocols: The Consultative Committee for Space Data Systems. The Interplanetary Internet depends on a delay-tolerant network (DTN) protocol overlay that allows data packets to be stored at intermediary locations when there is no other connection route, then to be passed on when a route opens up. Intel Research and the University of California at Berkeley are collaborating to further DTN technology, enabling the network to more accurately predict the reliability of message routing. NASA's Hooke says various space networking technologies will be deployed with new missions, culminating in the 2009 launch of the Mars Telecommunications Orbiter, which will act as a dedicated space networking hub for Mars-Earth communications. Intel and Berkeley DTN researchers see other uses, such as improving Internet use in developing nations, environmental sensor networks, and military applications.
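    The store-and-forward behavior at the heart of a delay-tolerant network can be sketched in a few lines: a node holds bundles while no contact is available and drains its queue when a communication window opens. This is a toy model for illustration; the names and structure are assumptions, not the actual DTN protocol stack:

    ```python
    from collections import deque

    # Toy delay-tolerant node: bundles are stored until a contact appears,
    # then forwarded one hop toward their destination.

    class DTNNode:
        def __init__(self, name):
            self.name = name
            self.queue = deque()  # bundles held until a contact opens

        def receive(self, bundle):
            self.queue.append(bundle)

        def contact(self, next_hop):
            """A communication window opens: drain stored bundles to next_hop."""
            delivered = []
            while self.queue:
                bundle = self.queue.popleft()
                next_hop.receive(bundle)
                delivered.append(bundle)
            return delivered

    earth = DTNNode("earth-relay")
    orbiter = DTNNode("mars-orbiter")
    earth.receive("telemetry-request")  # no link yet: bundle is stored
    sent = earth.contact(orbiter)       # window opens: bundle moves one hop
    print(sent)  # -> ['telemetry-request']
    ```

    The contrast with ordinary TCP/IP is the point: an end-to-end connection never needs to exist at any single moment, which is exactly what a 20-minute one-way light delay demands.
    
    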
    Click Here to View Full Article

  • "Drive-By-Wire Closer Than You Think"
    IST Results (04/07/04)

    European researchers are developing drive-by-wire capabilities that should reduce traffic accidents on the Continent by half. Researchers working on the PEIT project, scheduled for completion this summer, have created an electronic control unit (ECU) that is capable of taking over driving functions should sensors indicate major driver error. Dangerous situations that might induce the ECU to take over include an unexpected obstacle in the road or too much speed on a curve. Drive-by-wire technology used to correct human error is based on the idea that only one motion vector is correct for the vehicle at any one time. If a human driver positions the car so that it exceeds the safe limits, then the ECU will take corrective action. PEIT implemented the ECU in a large truck and a small car in order to demonstrate the system's applicability. Participants including DaimlerChrysler say the PEIT program will help boost road safety, improve traffic flow, and enhance vehicle usability. Eventually, researchers hope to create virtual driver systems that not only react to human operator error, but also predict dangerous situations and prevent them from happening. Such systems would rely on a diverse number of sensor inputs to determine vehicle position, speed, and road conditions. The PEIT unit is based on a dual-duplex architecture already used in the ECUs of the Airbus A380 aircraft; the aerospace industry has long made use of fly-by-wire technology. With four processors separately carrying out the same instructions, the system sets an incredibly high fail-safe standard. The European Commission's Information Societies Technology program funded PEIT, and a follow-up project called SPARC is aimed at developing a command level for the system.
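    In its simplest form, the one-correct-motion-vector idea reduces to comparing the driver's request against a physical safe limit and clamping it when exceeded. The sketch below uses textbook curve physics as an assumed stand-in for the PEIT control law, which the article does not detail:

    ```python
    import math

    # Illustrative safe-limit check for the "too much speed on a curve" case:
    # the ECU passes the driver's request through unless it exceeds the
    # physical limit for the current curve, then clamps it.

    def safe_curve_speed(radius_m, friction=0.7, g=9.81):
        """Maximum speed (m/s) before a flat curve of given radius is unsafe
        (v = sqrt(mu * g * r), basic circular-motion physics)."""
        return math.sqrt(friction * g * radius_m)

    def ecu_command(requested_speed, radius_m):
        """Clamp the driver's requested speed to the curve's safe limit."""
        return min(requested_speed, safe_curve_speed(radius_m))

    # Driver asks for 30 m/s (~108 km/h) on a 50 m radius curve:
    print(round(ecu_command(30.0, 50.0), 1))  # clamped to about 18.5 m/s
    ```

    A real system must also fuse noisy sensor estimates of radius, friction, and speed, which is why the redundant dual-duplex processor architecture matters: the clamp is only as trustworthy as the inputs and the hardware computing it.
    
    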
    Click Here to View Full Article

  • "Internet Congress Convenes"
    IDG News Service (04/06/04); Gross, Grant

    The first Internet Commons Congress (ICC) was held recently in Rockville, Md., drawing some 150 participants, including privacy advocates, free-speech activists, and members of the free-software community. The aim of the event was to foster greater communication and solidarity among the leaders of various grassroots "save the Internet" movements, said telecommunications analyst Daniel Berninger. Berninger and the New Yorkers for Fair Use created the ICC. The event tackled several issues, including VoIP regulation, the free software movement, e-voting, Internet architecture, and media concentration. Internet guru Ian Peter chose to pass up attending a United Nations summit on Internet governance in favor of attending the ICC. "What goes on in this room is far more important to the future of the Internet" than the U.N. summit, Peter said. A follow-up event to the ICC will be held in Washington in May, to address the issue of an Internet commons treaty. Berninger expects to hold several events similar to the ICC every couple of months.
    Click Here to View Full Article

  • "Quantum Computing: Bit by Bit"
    Economist (04/09/04) Vol. 37, No. 8369, P. 81

    In the quest to create usable quantum computers, scientists are tackling basic questions about how to define the boundary between the quantum and macroscopic worlds. In the mid-1990s, researchers at AT&T Bell Laboratories brought quantum computing from hypothesis to established fact. Since then, physicists have built a number of basic quantum computers using ions super-cooled by lasers, pulsed laser light, nuclear-magnetic resonance, superconducting junctions, and quantum dots. These machines create qubits, the basis of quantum computing, by getting atomic nuclei or the electrons orbiting them into superposition. The laws of quantum mechanics allow qubits to possess both 1 and 0 properties simultaneously, though eventually the electron or nucleus will have to choose only one value. The process of choosing between values is called decoherence, and was the subject of discussion at the American Physical Society meeting last month. Decoherence happens in just a fraction of a millisecond, but can be investigated. Los Alamos National Laboratory researcher Wojciech Zurek reported at the conference on his group's work to understand decoherence using the Loschmidt echo, which measures changes in a quantum system's energy. While it is generally held that outside interference with qubits, such as the simple act of measurement, disrupts qubits' superposition and entanglement, Japanese researchers from Yamanashi University have discovered that the application of short pulses can prolong a qubit's superposition and entanglement. The bang-bang pulses, as they are known, are applied in either the electric or magnetic fields, depending on the type of quantum machine being used. University of Toronto scientist Kaveh Khodjasteh reported using just one extra qubit for error correction in quantum computing; previous error-correction systems have required much more qubit overhead.
    Click Here to View Full Article

  • "Defender of U.S. Cyberspace"
    SC Magazine (03/04) Vol. 15, No. 3, P. 20; Savage, Marcia

    InfraGard started as an FBI pilot project, but is now a national entity that shares information between the federal government and private industry, and has some 10,700 volunteer members and 79 chapters. National chair Phyllis Schneck says that InfraGard receives analysis information from the Homeland Security Department and is meeting with Information Analysis and Infrastructure Protection leaders to find the best way for InfraGard to fit within the forming infrastructure protection architecture. She adds that the transfer of the FBI's National Infrastructure Protection Center to the Homeland Security Department has offered InfraGard new opportunities to reach more people and rethink its model. InfraGard works with other agencies as well, and Homeland Security assistant director Patrick Morrissey says that such cooperation and information sharing is a big part of the National Strategy to Secure Cyberspace. "Our membership demographics span all critical infrastructure sectors and all company types and sizes, making InfraGard an excellent information gathering and dissemination mechanism," Schneck says. Observers note that InfraGard draws strength from its local chapters, some of which are very well organized and active, while others have disintegrated. Members are drawn to the organization out of a sense of patriotic duty. Don Withers, founding president of the Maryland InfraGard chapter, notes that seemingly small incidents may, when put together, reveal a larger picture. InfraGard is seeking funding from both the private and public sectors; the non-profit gets FBI support already, and local chapters get support from private firms, but the organization as a whole is looking for corporate sponsorship.
    Click Here to View Full Article

  • "Where's My Job?"
    Technology Review (04/04) Vol. 107, No. 3, P. 74; Lok, Corie

    Deborah Wince-Smith, president of the Council on Competitiveness, says companies in the United States are not losing their dominance in IT because they are moving jobs overseas. Wince-Smith says high-tech companies such as IBM are often outsourcing their back-office operations, such as customer support and call center positions. Wince-Smith represents a nonpartisan coalition of industrial, academic, and labor leaders that has embarked on a National Innovation Initiative to create a strategy that will keep the United States at the forefront of technological innovation. However, she is concerned about the outsourcing trend because software programmers and electrical engineers are losing their jobs, which could affect the decisions of young people who are considering careers in computer science and engineering. Although India has not become a hub for outsourcing advanced technology programming and services, China is becoming a haven for very advanced systems in the areas of semiconductor design, engineering, and manufacturing. The long-term impact of such outsourcing could make China, India, and other emerging nations first-tier competitors with the United States in the highest level of economic activity. Wince-Smith has some concerns about the overseas manufacturing of very complex systems such as microprocessors, because they are connected to the innovation process. She believes that the United States will need to continue to encourage young people to pursue math, science, and engineering, as well as create a regulatory environment down to the state level that encourages entrepreneurship and innovation. She says the U.S. will need productivity growth to maintain its standard of living and security, and that requires innovation capacity.
    Click Here to View Full Article
    (Access to this article is available to paid subscribers only.)