
ACM TechNews sponsored by AutoChoice Advisor -- Looking for a NEW vehicle? Discover which ones are right for you from over 250 different makes and models. Your unbiased list of vehicles is based on your preferences and years of consumer input.
ACM TechNews is intended as an objective news digest for busy IT Professionals. Views expressed are not necessarily those of either AutoChoice Advisor or ACM. To send comments, please write to [email protected].
Volume 6, Issue 724:  Monday, November 29, 2004

  • "Synthesizing Human Emotions"
    Baltimore Sun (11/29/04) P. 1A; Stroh, Michael

    Researchers are incorporating emotional capabilities into speech synthesis programs, hoping to enable computers that can communicate emotionally with users through expressive vocal signals such as laughter, sighing, or sad tones of voice. IBM is set to release a new Expressive Text-to-Speech Engine for commercial use that will deliver spoken information in the appropriate tone, and also include lifelike capabilities such as the ability to clear its throat, cough, and pause for breath. AT&T Labs is developing the opposite technology--software that can detect users' emotional state; voice-response systems equipped with this software would be able to prioritize calls according to the person's state of agitation, for example. Columbia University researcher Julia Hirschberg is working on a similar system that would improve tutoring software by allowing computers to respond to students' frustration or boredom. Underlying most of this speech synthesis technology is a "concatenative synthesis" technique that was commercialized in the 1990s and vastly improved speech synthesis programs; the method uses databanks of readers' voices to put words together using very short vocal elements. Japan-based Advanced Telecommunications Research Institute scientist Nick Campbell is trying to improve on that tack by recording about 5,000 hours of people's everyday speech--not actors reading in a studio. He records mundane and important conversations in his subjects' lives in hopes that future computer-generated speech will use more human terms to communicate. Shiva Sundaram at the University of Southern California is also trying to improve on concatenative synthesis by building computer speech from the ground up using mathematical algorithms derived from real human speech.
    Click Here to View Full Article
    (Access to this site is free; however, first-time visitors must register.)
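    To illustrate the concatenative approach the article describes--stitching very short recorded vocal elements together--here is a minimal Python sketch. The unit databank, pitch fields, and join cost are invented for illustration and do not reflect any vendor's engine.

# Minimal sketch of unit-selection concatenative synthesis (illustrative only).
# For each target phoneme, pick the recorded unit that joins most smoothly onto
# the previous one, then concatenate the audio snippets.

def join_cost(prev_unit, unit):
    """Toy concatenation cost: penalize a pitch mismatch at the join point."""
    if prev_unit is None:
        return 0.0
    return abs(prev_unit["end_pitch"] - unit["start_pitch"])

def select_units(target_phonemes, unit_database):
    """Greedy unit selection: for each phoneme, take the lowest-cost candidate."""
    chosen, prev = [], None
    for ph in target_phonemes:
        candidates = unit_database.get(ph, [])
        if not candidates:
            continue  # no recording of this phoneme in the databank
        best = min(candidates, key=lambda u: join_cost(prev, u))
        chosen.append(best)
        prev = best
    return chosen

def synthesize(units):
    """Concatenate the raw waveform snippets of the selected units."""
    return b"".join(u["samples"] for u in units)

# Example with a toy databank holding one candidate per phoneme.
db = {
    "h": [{"start_pitch": 120, "end_pitch": 125, "samples": b"\x01\x02"}],
    "i": [{"start_pitch": 124, "end_pitch": 130, "samples": b"\x03\x04"}],
}
print(synthesize(select_units(["h", "i"], db)))  # -> b'\x01\x02\x03\x04'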

  • "Sprawling Systems Teeter on IT Chaos"
    New Scientist (11/27/04); Graham-Rowe, Duncan

    The linkage of critical European Union IT networks to the Internet, coupled with the increasing complexity of those systems, raises the danger of "emergent behavior" that could result in devastating system failures. The British government is planning to spend 10 million pounds to establish a national center focusing on IT complexity in the hopes that the research conducted there will help avert such a scenario; the facility will be managed by the Engineering and Physical Sciences Research Council. U.K. government chief scientist David King explains that last year's severe power grid outages in the United States and Italy "show that patterns of unexpected and negative behaviors can arise, and when they do they are often disastrous." David Cliff with Hewlett-Packard's Bristol Laboratory and the University of Leeds' Seth Bullock furnished a report that spurred the U.K. government to launch the project. Cliff notes that a computer program's increase in size is accompanied by an exponential increase in debugging difficulty, making it financially prohibitive to fully test the program. The traditional process for building computer systems involves segmenting problems into smaller components and assuming they will function as intended when integrated, but it becomes tougher to predict how these components will interact as they increase in number. A large distributed system's behavior cannot be explained in terms of the sum of its parts, given the mathematics of complexity; this is why emergent behaviors or system failures can be triggered by security threats such as viruses and denial-of-service attacks. The risk of this happening is increasing as all EU government departments, educational systems, and health care services are interconnected via the Internet.
    Click Here to View Full Article

  • "New Technologies Bring the Sense of Touch to Computers"
    Wall Street Journal (11/26/04) P. B1; Brown, Ken

    Haptic technology aims to enhance human/computer interaction by imparting tactile sensations through a combination of physical mechanisms, hardware, and software. Johns Hopkins University professor Allison Okamura says this is a difficult task: "You can display something visually without affecting it, but if you want to display something through touch you have to interact with it, so it makes it inherently more complex," she notes. Increasingly powerful computer systems have given the haptics field a shot in the arm, enabling software to mimic physical sensations such as pressure and resistance. Haptic products on the horizon include Samsung Electronics' planned 2005 rollout of cell phones that vibrate in tune with common ring tones, while BMW is incorporating haptic technology in its Series 7 luxury automobiles so that drivers can control multiple systems--climate, navigation, stereo, etc.--with a single knob equipped with a computer-driven motor that produces a distinctive "feel" for each system. Meanwhile, several science museums have set up the Arm Wrestling Challenge, a system that enables geographically distant participants to remotely engage in arm-wrestling bouts by gripping aluminum arms and hands connected to Internet-linked servers. The sense of touch is essential for numerous industries, and researchers envision a day in which people can "touch" fabrics and other products using a computer. Okamura and other scientists are working on haptic devices that give doctors a tactile sense of a patient's condition as they perform surgical operations.
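    As a rough illustration of how software mimics pressure and resistance, the sketch below renders a simple virtual wall: each cycle it reads the stylus position and commands a spring-like reaction force when the stylus penetrates the wall. The device API and stiffness value are hypothetical; real haptic systems run such loops at roughly 1 kHz.

# Minimal sketch of haptic rendering for a virtual wall (hypothetical device API).

WALL_X = 0.0        # wall located at x = 0 meters
STIFFNESS = 800.0   # spring constant in N/m, chosen arbitrarily

def reaction_force(stylus_x):
    """Hooke's-law force pushing the stylus back out of the wall."""
    penetration = WALL_X - stylus_x
    return STIFFNESS * penetration if penetration > 0 else 0.0

def haptic_loop(device, cycles=1000):
    """High-rate update loop: read position, command the reaction force."""
    for _ in range(cycles):
        x = device.read_position()              # hypothetical device call
        device.apply_force(reaction_force(x))   # hypothetical device call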

  • "Dumbing Down a Smartwatch"
    Wired News (11/29/04); Bradbury, Michael

    A team of University of Washington researchers led by Gaetano Borriello has developed a functional prototype of a smartwatch that can help people keep track of easily lost items by using radio frequency identification (RFID) tags. The device consists of passive 915 MHz RFID tags, an RFID reader, a reader antenna network, and a personal server, but the researchers intend to reduce the size of the parts so that the personal server and the user interface can be integrated into a single gadget such as a wristwatch or cell phone. Once a tagged item passes the watch's reader, the reader identifies the object and transmits a radio signal to the server, which checks it off the list of items present; if the item is not detected and is part of a group of items programmed to be at a given location, then the watch beeps to inform the user that the item should be retrieved. The data collected on the personal server, which runs Linux and supports communications interfaces for Bluetooth, Wi-Fi, and radio sensor motes, resides locally so no databases can track personal information. "A value-sensitive and privacy-confident design makes sure the information is accessible to the end user and under the user's control at all times," Borriello explains. Cameron Tangney and Waylon Brunette at Borriello's lab are developing the watch's user interface: The interface's display incorporates graphic symbols for commonly forgotten items, and the prototype can detect the presence of tagged objects within a range of five to 10 meters. To run real-world trials of the prototype, the University of Washington plans to convert the Paul G. Allen Center for Computer Science and Engineering into an RFID test center. In addition, Borriello's team is collaborating with Joshua Smith on the Wireless Identification Sensing Project, which seeks to create a tri-axial RFID tag that can detect an item's location, possible intent of use, and even which way the tagged item is pointing.
    Click Here to View Full Article
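    The check-off logic the researchers describe--mark detected tags as present and alert when a required item for the current location is missing--can be sketched in a few lines of Python. The locations, item lists, and beep routine below are hypothetical placeholders, not the Washington team's code.

# Sketch of the watch's check-off logic (item lists and beep are placeholders).

REQUIRED_AT = {
    "leaving_home": {"keys", "wallet", "badge"},
    "leaving_office": {"keys", "laptop"},
}

def missing_items(location, detected_tags):
    """Required items for this location that the RFID reader did not detect."""
    return REQUIRED_AT.get(location, set()) - set(detected_tags)

def on_departure(location, detected_tags, beep):
    """Beep and name the forgotten items, if any."""
    missing = missing_items(location, detected_tags)
    if missing:
        beep("Missing: " + ", ".join(sorted(missing)))

# Example: the reader sees only the keys and wallet when the user leaves home.
on_departure("leaving_home", ["keys", "wallet"], beep=print)  # -> Missing: badge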

  • "Blow a Fuse, Computer Chip, and Heal Thyself"
    New York Times (11/25/04) P. E5; Eisenberg, Anne

    IBM researchers have designed computer chips that are fault-tolerant through their ability to blow their own fuses if there is a malfunction and reroute operations to other areas. The chips employ a phenomenon known as electromigration to burn or open fuses, and also use algorithms and processors preprogrammed with repair instructions. Northeastern University professor Miriam Leeser says the on-chip fuses' primary advantage is to help enable self-managing electronic systems: "Basically, with this technology, you can build self-repairing circuits," she notes. It is also possible that people could blow the chips' fuses wirelessly, to disable a vehicle fleeing from a crime scene, for instance. Leeser remarks that circuit rerouting through electromigration had not been considered until now because the metal atoms in a conducting wire may start moving if an electric current is run in one direction, causing material gaps that can lead to reliability problems. "[IBM] used a high current and high temperature to speed up electromigration for a short time and cause the fuse to work," she points out. Subramanian Iyer of the IBM Semiconductor Research and Development Center says the new fuses allow the chips to be customized after fabrication, while IBM materials physicist Chandresekharan Kothandaraman points out that chips can also be remotely adjusted in the field with the fuses. Cornell University computer science professor Kenneth Birman notes that autonomic or self-managing computer systems will be vital as such systems' operational and maintenance costs become too expensive. Birman says IBM's new technology is only part of the answer to creating autonomic solutions. He says, "The problem of management of computer systems has grown incrementally. Maybe the solution will be incremental, too."
    Click Here to View Full Article
    (Articles published within 7 days can be accessed free of charge on this site. After 7 days, a pay-per-article option is available. First-time visitors will need to register.)
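    The repair strategy can be pictured as a table that maps fused-off units to spares. The Python sketch below is a software analogy only, not IBM's on-chip mechanism; the unit names and repair table are invented.

# Software analogy of fuse-based repair: fuse off a failed unit, reroute to a spare.

class SelfRepairingArray:
    def __init__(self, primary_units, spare_units):
        self.status = {u: "ok" for u in primary_units}
        self.spares = list(spare_units)
        self.reroute = {}                      # failed unit -> replacement spare

    def report_failure(self, unit):
        """Blow the unit's fuse and assign a spare, if one remains."""
        self.status[unit] = "fused_off"
        if self.spares:
            self.reroute[unit] = self.spares.pop(0)

    def resolve(self, unit):
        """Return the unit that actually services a request."""
        return self.reroute.get(unit, unit)

array = SelfRepairingArray(primary_units=["row0", "row1"], spare_units=["spare0"])
array.report_failure("row1")
print(array.resolve("row1"))   # -> spare0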

  • "Text With an Edge"
    Pittsburgh Post-Gazette (11/29/04); Spice, Byron

    Solving the problems people with motor and situational impairments face when writing text on a personal digital assistant (PDA) or handheld is one of the goals of EdgeWrite, a technology from Carnegie Mellon University's Human-Computer Interaction Institute that incorporates a square overlay into the handheld to guide a stylus over the touchscreen. Inputting text with PDAs is difficult for people whose hands jitter either from degenerative conditions such as Parkinson's or situational conditions such as being in a moving vehicle, and EdgeWrite's designers developed an alphabet that could be produced by straight lines, either along the square overlay's edges or diagonally across the cutout. EdgeWrite recognizes letters according to the sequence of corners that are hit when forming them, rather than the shape of the strokes, so users do not have to draw precisely straight lines. EdgeWrite can also be used with other devices, such as videogame or wheelchair joysticks; in fact, EdgeWrite co-developer and CMU graduate student Jacob Wobbrock tested a system adapted to a commercial wheelchair joystick for United Cerebral Palsy clients last spring. A newer version of the system replaces the stylus and overlay with a series of raised bumps on a finger touchpad, enabling users to enter letters by rubbing their fingertips over the bumps in the proper sequence. Wobbrock also envisions EdgeWrite being adapted for MP3 players and wristwatch computers. He reports that users can learn to use the system about as rapidly as it takes to learn the Palm Pilot's Graffiti system, though he admits that the input speed needs to be improved.
    Click Here to View Full Article
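    Because EdgeWrite identifies a letter from the ordered sequence of corners the stylus visits rather than from stroke shape, recognition reduces to a lookup. The sketch below shows that idea; the corner sequences are invented placeholders, not the actual EdgeWrite alphabet.

# Sketch of corner-sequence recognition; the sequences below are invented,
# not the real EdgeWrite alphabet. Corners of the square: UL, UR, LL, LR.

ALPHABET = {
    ("UL", "LL", "LR"): "l",
    ("UL", "UR", "LR", "LL"): "o",
    ("UR", "UL", "LL", "LR"): "z",
}

def recognize(corner_sequence):
    """Map the ordered corners the stylus visited to a letter, if known."""
    return ALPHABET.get(tuple(corner_sequence))

print(recognize(["UL", "LL", "LR"]))  # -> l
print(recognize(["LR", "UL"]))        # -> None (unrecognized gesture)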

  • "Striking Up Digital Video Search"
    CNet (11/29/04); Olsen, Stefanie

    Google, Microsoft, and Yahoo! are developing Web multimedia search applications that will leverage the Internet's growing popularity as a media aggregator. Search capability is the glue that melds different broadcast and print media platforms, and Google is working on an ultra-secret multimedia scheme that not only involves patent applications for a "method to search media," but also potential tie-ups with major cable networks and TV broadcasters. Sources say Google is currently working with TV media companies to hammer out a business model as well as ways to deal with the myriad copyright issues surrounding broadcast archives; "Google's trying to bring TV to the Web the same way they're bringing books to the Web," according to an anonymous media executive familiar with the Google plan. Microsoft, meanwhile, is planning to lock in users' entertainment search with technology built into its Microsoft Media Center PC software. That technology will allow people to use interactive TV technology to search for video clips sourced from broadcast archives, the Internet, and video-on-demand networks; Microsoft is set to reveal the technology at the Consumer Electronics Show in January, according to sources. Other online players are also interested in becoming portals to TV and video clips. America Online bought audio-search firm Singingfish earlier this year, and Yahoo! plans a less ambitious cooperative effort with broadcasters to archive video content that would be presented along with TV-like advertisements. Experts say search technology needs to be much more sophisticated for it to be used with multimedia files. Google's technology, for example, reportedly involves thumbnail pictures coupled with captions that lead to a page similar to a film reel offering a series of still images and associated text; clicking on a still image brings up a multimedia file.
    Click Here to View Full Article

  • "Pictures Tell Their Own Tale, Computers Narrate Them"
    Star of Mysore (11/28/04) Vol. 27, No. 278

    In a Nov. 27 lecture hosted by the Computer Society of India's Mysore chapter and the Institution of Engineers' Mysore Center, University of Mysore computer science professor Dr. D.S. Guru detailed recent breakthroughs in image processing and its possible applications. He reported that there are two major goals for image processing--the improvement of visual quality and the analysis of the image's contents for autonomous image perception. Guru deconstructed image processing into a series of steps: Image acquisition, pre-processing, knowledge base creation, segmentation into constituent elements, representation, description, recognition, and interpretation. He singled out machine learning as the key operation in image processing, constituting the crux of machine perception. Guru noted that a low-contrast image offers too little pixel variation to distinguish the objects it represents; the solution is image enhancement, which employs image processing to generate high contrast. The lecturer detailed a series of image processing applications, including reliable, cost-effective quality assurance and defect identification. In the civil engineering sector, image processing can be used to facilitate automated tile inspection at faster-than-manual speeds, while image binarization can help automate x-ray image analysis. Furthermore, image processing can be employed to recognize license plates to ensure that only authorized vehicles are allowed in secure areas; automate the movement of robots so they are not impeded by obstacles en route; find specific images in old letters via exact match and similarity match retrieval; scan newspapers automatically for matters of interest to specific readers; enable a keyboard-free computer interface; and recognize sign language.
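    The contrast-enhancement step Guru describes can be illustrated with a simple min-max stretch that remaps a dull image's pixel range onto the full 0-255 scale. This is a generic technique sketched for illustration, not the specific methods presented in the lecture.

# Generic min-max contrast stretch, for illustration only.
import numpy as np

def stretch_contrast(image):
    """Linearly remap pixel intensities so they span the full 0..255 range."""
    img = image.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                 # flat image: nothing to stretch
        return image.copy()
    return ((img - lo) * 255.0 / (hi - lo)).astype(np.uint8)

# Example: a dull image whose pixels all sit between 100 and 140.
dull = np.random.randint(100, 141, size=(4, 4), dtype=np.uint8)
print(stretch_contrast(dull))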

  • "New Decision Software Hailed 'Internationally Leading'"
    EurekAlert (11/23/04)

    The Engineering and Physical Sciences Research Council (EPSRC) says new decision assistance software developed by researchers at the University of Manchester is an international leader in its potential impact on science and the greater society. EPSRC funded the research into the methodology of the computer program, which is designed to assist in making intelligent decisions. The decision software is seen as having applications in a wide range of fields, from measuring the performance of organizations to comparing the productivity of nations. Professor Jian-Bo Yang, who led the team that developed the software's methodology, says indicators such as price, reliability, performance, and fuel economy are considered when a car is being purchased. "This program can help you make a decision based on judgments as well as statistics, so if you're rich and price is not that important to you but reliability is, it will weigh these factors into the equation," says Yang. For companies and organizations that must justify their decisions, the software is a key development. Current statistical analysis programs are limited, maintains Yang. "This software is able to make use of such judgmental information in the decision-making process--that is what makes it unique," says Yang.
    Click Here to View Full Article
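    As a simplified illustration of weighing indicators by how much they matter to the decision maker, the sketch below ranks two hypothetical cars with a plain weighted sum. The Manchester software rests on a more sophisticated methodology; the scores and weights here are invented, and the weighted sum stands in only as a simpler substitute.

# Toy weighted-sum ranking; scores in 0..1 and weights are invented.

CARS = {
    "car_a": {"price": 0.9, "reliability": 0.6, "performance": 0.7, "fuel_economy": 0.8},
    "car_b": {"price": 0.5, "reliability": 0.9, "performance": 0.8, "fuel_economy": 0.6},
}

# A buyer for whom reliability matters far more than price.
WEIGHTS = {"price": 0.1, "reliability": 0.5, "performance": 0.2, "fuel_economy": 0.2}

def rank(options, weights):
    """Order options by their weighted score across all indicators."""
    def score(attrs):
        return sum(weights[k] * attrs[k] for k in weights)
    return sorted(options, key=lambda name: score(options[name]), reverse=True)

print(rank(CARS, WEIGHTS))   # -> ['car_b', 'car_a'] under this weighting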

  • "Toward a More Human Robot"
    Business Week (11/24/04); Edwards, Cliff

    Takeo Kanade, former director of Carnegie Mellon University's Robotics Institute and part-time director of Advanced Industrial Science & Technology's Digital Human Research Center in Japan, believes the key stumbling block in the development of robotics and artificial intelligence is a lack of knowledge about ourselves and how we function. He contends that such an understanding could be applied to the design of more intelligent and user-friendly AI systems. Kanade subscribes to the notion of invisible or environmental robotics; one such application of this concept is "virtualized reality," the 3D reconstruction of the actions transpiring within an environment monitored by numerous cameras. Kanade muses that the wider application of virtualized reality could give an entirely new meaning to historical re-creations and event documentation, for example. He says the U.S.'s problems in sustaining innovation in the field of robotics stem from experts' tendency to restrict their thinking based on what they believe to be difficult. "We, who are supposed to come up with the better idea, need to think as if we're a complete novice or amateur and do with the abilities we have," Kanade argues. He also says an overemphasis on results could be eroding U.S. investors' "playfulness," or willingness to invest in blue-sky research. Regarding the difficulty the country has in attracting students, Kanade cites Rensselaer Polytechnic Institute President Shirley Jackson's conclusion that the problem is the result of more restrictive visa policies, aging workers, and a population shift toward so-called minorities. Among the fields Kanade thinks are overdue for innovation are biotech, bionics, and technology that addresses quality-of-life issues as the global population ages.
    Click Here to View Full Article

  • "California Consumers Set to Pay as State Tackles E-Waste Problem"
    Investor's Business Daily (11/24/04) P. A4; Riley, Sheila

    Consumers in California will be required to pay a $6 to $10 e-waste recycling fee starting Jan. 1, in accordance with state legislation. Other states have introduced or are debating e-waste bills, although electronics manufacturers and vendors are concerned that such developments will elevate prices and bog them down in administration; an EPA representative says they do not wish to contend with disparate state financing rules for recycling. California consumers have few options for handling obsolete electronics: Their choices are to mothball the equipment or to have it picked up as garbage, passing disposal responsibilities onto local jurisdictions, whose recourses are limited by a state ban on hazardous materials in landfills. The new law, however, allows local governments or private companies to establish e-waste collection points, from which recyclers would take charge. The legislation also promises significantly higher revenues for recycling firms. Sheila Davis with the Silicon Valley Toxics Coalition's Clean Computer Campaign thinks recycling costs should be embedded within the product rather than kept separate, arguing that it will motivate manufacturers to redesign their products. She also notes that take-back requirements mandated by the European Union could easily be adopted by the U.S. government. Dell's Bryant Hilton says his company would prefer that the government stay uninvolved in e-waste management, arguing that "industry- and manufacturer-led and voluntary efforts are probably going to do a lot more than government-mandated ones."

  • "University Leads Data Mine Plan"
    Australian IT (11/23/04); Foreshew, Jennifer

    The University of Technology Sydney has secured $19.26 million of in-kind support for a $38 million data mining center, and expects to hear from the Australian government next year on whether it will receive $12.8 million over a period of five years. Professor Tharam Dillon, dean of the Information Technology faculty, says the center will focus on helping solve deep data mining and business processing problems, which could help improve decision-making in areas including marketing, finance, and health. The center will also help industry and government better understand data mining and apply knowledge discovery to improve productivity. "At the moment, there is a disconnect between technology people and those doing data mining and this will help to bring them together and ensure there is a flow of information, techniques, and concepts," says Dillon. Many of the public and private organizations that are supporting the project are providing databases for research and training, which gives the center the ability to perform multi-database mining on sizeable multimedia datasets. The center will focus on business modeling and mining, including customer behavior analysis, modeling market processes, and learning in a market environment.
    Click Here to View Full Article

  • "NARA Conference Demonstrates Emulation Technologies"
    Government Computer News (11/18/04); Jackson, Joab

    The recent National Archives and Records Administration symposium included demonstrations of technologies that promise to let future electronic devices read outdated formats and file systems. One technology on display was a program called Multivalent, software that can serve as the foundation for a universal document viewer. The Java-based program, developed by University of California, Berkeley professor of computer science Robert Wilensky and his team, is capable of displaying or modifying any document, regardless of format. The empty shell program can hold modules to read formats, and adapters can be written to enable the platform to read new formats. Multivalent was developed under a Digital Libraries Initiative Phase II award, a $1 million National Science Foundation program. Meanwhile, James Myers, a chief scientist at the Energy Department's Pacific Northwest National Laboratory, demonstrated the Data Format Description Language (DFDL), an XML-formatted specification designed to detail how to pull data from a binary file without help from the program that formatted the file. The DFDL description can include archived documents as attachments, which a software parser can read to pull the desired information from the file. The description can be used to pull all information from a document, or only certain fields.
    Click Here to View Full Article
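    The "empty shell plus pluggable format modules" design can be sketched with a small adapter registry: the core viewer knows nothing about formats, and each format contributes a reader that is registered with the shell. The formats and reader functions below are hypothetical stand-ins, not Multivalent's actual module interface.

# Sketch of an empty-shell viewer with pluggable, per-format reader adapters.

READERS = {}

def register_reader(fmt):
    """Decorator that plugs a format adapter into the shell."""
    def wrap(fn):
        READERS[fmt] = fn
        return fn
    return wrap

@register_reader("txt")
def read_txt(data: bytes) -> str:
    return data.decode("utf-8", errors="replace")

@register_reader("hex")          # stand-in for some legacy binary format
def read_hex(data: bytes) -> str:
    return data.hex()

def view(fmt, data):
    """The shell itself knows nothing about formats; it only dispatches."""
    reader = READERS.get(fmt)
    if reader is None:
        raise ValueError("no adapter registered for format " + repr(fmt))
    return reader(data)

print(view("txt", b"archived memo"))   # -> archived memo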

  • "Cybersecurity and the Question of Leadership"
    CNet (11/18/04); Cochetti, Roger

    The recent departure of Amit Yoran from the Department of Homeland Security's cybersecurity post has stirred controversy over whether the federal government gives enough clout to the person in that position, writes CompTIA's U.S. public policy director Roger Cochetti. He says it is important to remember that the position of cybersecurity czar was considered well out of the mainstream before the Sept. 11 attacks, but now has the attention of not only the federal government, but also of Congress and the media. The recent succession of cybersecurity czars--Richard Clarke, Howard Schmidt, Yoran, and now Andy Purdy--has succeeded in raising the profile of cybersecurity and generating critical links between government agencies and private industry. The stage is now set to confront a growing array of cybersecurity threats, including technical threats such as viruses, worms, and spam, and tactical threats such as hackers and state-sponsored military hackers whose job is to steal intelligence and create havoc in Western economies. The most important aspect of the cybersecurity position is the leadership provided to a multitude of government bodies and private-sector firms. The cybersecurity czar should continue to resist the impulse for regulation--which often is a poor solution for IT problems--and work to press local and federal governments to make their systems more secure, thereby setting a good example for private industry. Meanwhile, IT vendors must improve the security of their products, best practices should be honed, and the federal cybersecurity infrastructure continually improved.
    Click Here to View Full Article

  • "World V Web"
    Economist (11/20/04) Vol. 373, No. 8402, P. 65

    The 40 appointed delegates of the United Nations Working Group on Internet Governance met for the first time this week to discuss the definition of Internet governance, the role of government and international organizations in the administration of the Internet, and other issues ranging from cyber-crime to the cost of bandwidth. The establishment of the working group, which is made up of government representatives and members of "civil society," reflects the dissatisfaction many countries have with the current state of Internet governance under ICANN, a private, non-profit organization with ties to the U.S. government. Many countries have expressed concern over ICANN's industry-led rule, noting that even when it breaks all formal ties with the U.S. government in 2006, it is still likely to be dominated by U.S. interests. Business leaders, however, favor the private-sector organization over a United Nations-led alternative, since the bureaucracy and politics of the latter could hinder the pace of innovation. The debate is likely to remain heated, as there is much at stake in the governance of the Internet, including control over the billion-dollar domain name registration business. For now, the United States is officially expressing its support of the UN working group's efforts, while keeping its distance by declining to take part.

  • "Standards: High-Stakes Game"
    Computerworld (11/22/04) Vol. 32, No. 47, P. 25; Mitchell, Robert L.

    Big business is playing a larger role in the technology standards-setting process as the market for IT products becomes larger and standards more influential. Increasingly, the goal of vendors participating in the standards process is not to establish a standard, but to inject some proprietary technology they can generate revenue from. "You now have compromises that are not just mathematical compromises or technical compromises but have major marketing compromises behind them," says IEEE Standards Association President Jim Carlo. At the same time, technology buyers are pressing for smoother interoperability, which is leading to more rigorous testing and verification in the standards process, notes MasterCard International engineering services vice president Jim Hull. Standards setting is also now done by a wider variety of groups beyond those vetted by the American National Standards Institute, including vendor-sponsored efforts that are meant to push standardization along faster; sometimes those efforts fail, as with Opsware's DCML.org consortium that was recently folded into the Organization for the Advancement of Structured Information Standards for lack of industry support. But though more vendors may be jockeying for position in the standards process, user input is more important than ever to add outside viewpoints, says Internet Engineering Task Force Chairman Harald Alvestrand. IEEE discussion on the ultrawideband (UWB) specification has been deadlocked for two years because of irreconcilable vendor interests; groups led by Intel and Motorola's Freescale Semiconductor have both invested significant resources into their respective technologies and are not able to make compromises. Fibre Channel Industry Association Speed Forum Chairman Skip Jones says it is best to let the marketplace sort out differences such as the battle over UWB.
    Click Here to View Full Article

  • "Why WiMax?"
    Technology Review (11/04) Vol. 107, No. 9, P. 20; Roush, Wade

    The forthcoming Worldwide Interoperability for Microwave Access (WiMax) metropolitan-area wireless communication standard is expected to put Wi-Fi in the shade. Wi-Fi can transmit signals across up to 100 meters indoors and 400 meters outdoors, but WiMax boasts a maximum transmission range of 50 kilometers at a peak data transfer rate of 70 Mbps. Furthermore, once industry consensus is reached on such details as WiMax data encryption, frequency allowances, and multiple-user frequency access, companies will be able to mass-produce WiMax-enabled chips and make WiMax receivers affordable to consumers; the end result could be WiMax-based broadband Internet connections supplanting those of current ISPs. WiMax promises to facilitate wireless communication for new small and mid-sized businesses, the construction of mobile-computing hot spots in areas that lack phone lines, and the expansion of broadband Internet access to impoverished regions. The instigator of the WiMax movement is Intel, which saw a need for Wi-Fi to develop into a carrier-like technology as well as to use additional, as-yet untapped frequencies. In addition to designing communications processors to exploit these frequencies and delivering the chips as samples to manufacturers, Intel is promoting the WiMax Forum as an industry organization for certifying WiMax-compliant equipment, and is making investments designed to demonstrate WiMax's profit potential through Intel Capital. The high cost of building a WiMax transmitter network could complicate the technology's rollout. In addition, WiMax equipment manufacturers must address the challenge of achieving the economies of scale necessary for enabling WiMax hardware in the consumer price range.

  • "Coping With Human Error in IT Systems"
    Queue (11/04) Vol. 2, No. 8, P. 34; Brown, Aaron B.

    Those responsible for designing, building, deploying, and managing IT infrastructures must incorporate safeguards to mitigate the consequences of human error, which is unavoidable and can lead to corporate imbalance, communication disruptions, and financial unease if left unchecked, writes Aaron B. Brown, a research staff member in IBM Research's Adaptive Systems department. Brown says human error coping mechanisms fall into four general categories--error prevention, spatial replication, temporal replication, and temporal replication with re-execution--and systems that use several of these strategies are often the most error-tolerant. The two subcategories of error prevention, error avoidance and error interception, operate on the principle that types of potential errors must be anticipated. Avoidance--usually facilitated via user interface design or training--is less effective than interception, but the latter technique does not work in scenarios where the system's state changes rapidly. Spatial replication, which involves creating multiple copies of a system that each support synchronized replicas of the system's key data, can take up the slack when error prevention fails; but the approach is ineffective against critical system-wide operational errors, since mistakes that affect the majority of the copies are erroneously interpreted as the correct system state. Systems with temporal replication can recover from errors by implementing historical replicas representing past states, although any recent data will be irretrievably lost. Temporal replication with re-execution deals with state-affecting errors without losing recent data by keeping a separate history log that the system can reference to update the historical model, although this approach is tough to deploy appropriately, in addition to being time-consuming and resource-intensive. An ideal system for coping with human error combines several defensive layers, including error avoidance and error interception, spatial replication, and temporal replication with re-execution as a last resort.
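    Brown's "temporal replication with re-execution" can be pictured as a snapshot plus an operation log: after an erroneous operation, the system rolls back to the snapshot and replays the log minus the bad step, recovering recent data instead of discarding it. The store and operations below are hypothetical illustrations, not code from the article.

# Sketch of temporal replication with re-execution: snapshot + operation log.
import copy

class RecoverableStore:
    def __init__(self, state):
        self.state = state
        self.snapshot = copy.deepcopy(state)   # historical replica of past state
        self.log = []                          # operations applied since then

    def apply(self, op):
        """Run an operation (a function that mutates state) and log it."""
        self.log.append(op)
        op(self.state)

    def recover(self, bad_op):
        """Roll back to the snapshot, then replay the log minus the bad step."""
        self.state = copy.deepcopy(self.snapshot)
        self.log = [op for op in self.log if op is not bad_op]
        for op in self.log:
            op(self.state)

# Example: an operator mistakenly deletes a record; recovery keeps the other
# recent update while skipping the erroneous delete.
store = RecoverableStore({"accounts": {"alice": 100}})
add_bob = lambda s: s["accounts"].update(bob=50)
oops_delete = lambda s: s["accounts"].pop("alice")
store.apply(add_bob)
store.apply(oops_delete)
store.recover(oops_delete)
print(store.state)   # -> {'accounts': {'alice': 100, 'bob': 50}}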

 