
ACM TechNews sponsored by AutoChoice Advisor: Looking for a NEW vehicle? Discover which ones are right for you from over 250 different makes and models. Your unbiased list of vehicles is based on your preferences and years of consumer input.
ACM TechNews is intended as an objective news digest for busy IT Professionals. Views expressed are not necessarily those of either AutoChoice Advisor or ACM. To send comments, please write to [email protected].
Volume 7, Issue 786:  Monday, May 2, 2005

  • "Foreign-Student Enrollment Declines"
    SiliconValley.com (05/01/05); Schoenberger, Karl

    Foreign student enrollment at U.S. institutions fell 2.4 percent in the 2003-2004 academic year, and Chinese enrollment declined for the first time, by 2.6 percent. The number of Indian students, by contrast, is rising and has surpassed the Chinese contingent. These trends are reflected in foreign student enrollments at Silicon Valley universities: Stanford's Indian student population has grown 23 percent in the last two years, to 461 in the current academic year, while the number of Chinese students dipped from 545 to 543 over the same period. Peggy Bloomenthal of the Institute of International Education in New York attributes the increase in Indian students to several factors, including the New Delhi government's loosening of restrictions on capital flows and, more importantly, fewer problems with security checks on visa applications in the wake of Sept. 11, 2001. Stanford computer science master's student Aditya Krishnan says setting up an appointment at the U.S. Embassy or a consulate for a visa application interview can take months, though the processing of the visa itself is quick by comparison. Stories of long delays are discouraging Chinese students from applying for U.S. visas even though the U.S. State Department has instituted reforms to speed up processing. Moreover, the Chinese government has moved in recent years to increase the number of domestic master's and doctoral graduates by opening new schools, while American institutions' China-based campuses offer Chinese students a less expensive route to a degree than going abroad. Some American educators and tech industry leaders warn that the decline in Chinese enrollment at U.S. schools could cut the nation off from a vital source of brainpower and tech industry labor that America needs to remain competitive in the global economy.
    Click Here to View Full Article

  • "How 'Search' Is Redefining the Web--And Our Lives"
    Seattle Times (05/01/05); Peterson, Kim

    Web search engines have become the center of people's online activity, with about 120 million U.S. Internet users searching an average of 38 times per month. The popularity of Web search has dramatically influenced numerous markets, serving as an affordable advertising model for small businesses and generating $4 billion in revenues last year. As search becomes more important to business, search companies are building more detailed and useful profiles of individuals based on their search activity; moreover, search technology is still in its infancy, having been in widespread use for only a few years. Experts say today's search approaches will seem rudimentary compared to what will be available years from now. The first search engine was created by a Montreal college student in 1990 to comb public file-exchange sites; an MIT student created the first software robot to crawl the Web three years later, and Stanford University students launched the Excite and Yahoo! Web directories soon after. Perhaps the most important development for Web search was Idealab creator Bill Gross' GoTo search venture, which offered not outstanding search technology but an innovative advertising model. That relevancy-based advertising model, combined with excellent search technology and a clean interface, helped Google pull in more than $1 billion in quarterly revenue last year. Search companies are adding content to their offerings, and experts predict search will soon debut on cellular and TV platforms; Wired magazine co-founder John Battelle says search companies and TV firms have a shared interest in combining resources.
    Click Here to View Full Article

  • "Picture This--Automatic Image Categorization"
    IST Results (05/02/05)

    The IST-funded LAVA project has found a way to automatically categorize or classify visual images without additional metadata by bringing together researchers in the fields of machine learning, computer vision, and cognitive science, according to LAVA participant Gabriela Csurka of Xerox Research Center Europe. "We began our approach by grouping together similar types of objects...and trying to find a way of categorizing those that were common to a group," she explains. "We applied machine-learning techniques to find the distinctions between images by focusing on sections of images that were similar--sections that were common to other images with the same content." Another challenge for the LAVA team was to categorize image content accurately even when objects are viewed from different distances or perspectives. The researchers have devised a technique for capturing visual images and automatically assigning them to the proper category, and they believe this advance dramatically improves the ability to create reliable vision-based detectors for common events and objects. The LAVA technologies' potential applications include image browsing within documents, consumer photo archival and image management, Web-based image searches, human-computer interaction, video surveillance, robotics, and medical imaging. The LAVA team won 14 of the 18 detection, localization, and classification contests in the PASCAL network's Visual Object Classes Challenge, an effort to amass a standardized group of object recognition databases and supply a common set of tools for accessing and managing annotations in those databases.
    Click Here to View Full Article
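
    For illustration only, here is a minimal Python sketch (using scikit-learn, with synthetic vectors standing in for real image descriptors) of the general bag-of-keypoints idea behind work like LAVA's: local descriptors are clustered into a visual vocabulary, each image becomes a histogram of visual-word counts, and a conventional classifier is trained on those histograms. Nothing below is the project's actual code.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)

        def fake_descriptors(center, n=50, dim=8):
            """Stand-in for the local descriptors (e.g., SIFT) extracted from one image."""
            return center + rng.normal(scale=0.5, size=(n, dim))

        # Two image "categories," each with a characteristic descriptor distribution.
        images, labels = [], []
        for label, center in [(0, np.zeros(8)), (1, np.full(8, 3.0))]:
            for _ in range(20):
                images.append(fake_descriptors(center))
                labels.append(label)

        # 1) Build a visual vocabulary by clustering all descriptors.
        vocab = KMeans(n_clusters=16, n_init=10, random_state=0).fit(np.vstack(images))

        # 2) Represent each image as a normalized histogram of visual-word counts.
        def bag_of_words(descriptors):
            hist = np.bincount(vocab.predict(descriptors), minlength=16).astype(float)
            return hist / hist.sum()

        X = np.array([bag_of_words(d) for d in images])

        # 3) Train a standard classifier on the histograms.
        clf = LinearSVC().fit(X, labels)
        print("training accuracy:", clf.score(X, labels))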

  • "CAREER Researchers Merge Game Theory With Wireless Networks, Create 'Smarter' Garments"
    Virginia Tech News (04/25/05); Crumbley, Liz

    The National Science Foundation has awarded two five-year, $400,000 Early Career Development Program (CAREER) awards to Virginia Tech professors Allen MacKenzie and Tom Martin. MacKenzie's award is funding a project to apply game theory to the creation of an analytical framework for wireless communications networks so that network-wide goals can be reached more effectively. Such a breakthrough could help bolster wireless networks against interference and facilitate more efficient power control, as well as supply analytical tools to other researchers attempting to build more powerful networks. "Game theory is an analysis tool that has been widely used to explain complex economic systems," notes MacKenzie, adding that wireless networks are complex systems whose dynamic interactions complicate the analysis and prediction of system performance. Martin's CAREER project focuses on improving the design of electronic textiles, specifically by making e-textiles aware of their shape, the wearer's movements, and the locations of their sensing components. In addition, Martin is collaborating with textile manufacturer Dan River to see whether e-textiles can be fabricated by traditional methods, with the aim of revitalizing the Southside Virginia textile industry. Meanwhile, Martin and fellow Virginia Tech professor Mark Jones are working with the Locomotion Research Laboratory to study human gait with e-textiles. Both CAREER projects feature an educational element: MacKenzie's contribution consists of university educational modules delivered over wireless communications, while Martin's is a learning module for computer engineering that applies inductive learning to make students more skilled in hardware and software debugging.
    Click Here to View Full Article
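
    As a purely illustrative sketch (not MacKenzie's framework, and with made-up channel gains, noise, and cost parameters), the Python snippet below models power control as a game: each transmitter repeatedly picks the power level that maximizes its own rate-minus-power-cost utility given the interference the others create, and the best-response iteration settles toward an equilibrium.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 4                                   # transmitters
        gain = rng.uniform(0.1, 1.0, (n, n))    # gain[i][j]: j's power as seen at i's receiver
        np.fill_diagonal(gain, 1.0)             # direct links are strongest
        noise, cost = 0.1, 0.5                  # receiver noise, price per unit of power
        levels = np.linspace(0.0, 2.0, 41)      # discrete power choices

        def utility(i, p_i, p):
            """Rate minus power cost for transmitter i, given everyone else's power."""
            interference = noise + sum(gain[i, j] * p[j] for j in range(n) if j != i)
            sinr = gain[i, i] * p_i / interference
            return np.log2(1.0 + sinr) - cost * p_i

        p = np.ones(n)                          # start everyone near full power
        for _ in range(50):                     # best-response dynamics
            for i in range(n):
                p[i] = max(levels, key=lambda x: utility(i, x, p))

        print("equilibrium powers:", np.round(p, 2))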

  • "University Professors Use Brain Research to Make 'Smart' Technology"
    Daily Illini (05/02/05); Krenn, Janet

    University of Illinois at Urbana-Champaign professors have applied brain research to the development of a self-aiming video camera that captures images through infrared and motion detection. Professor of neurological sciences Thomas Anastasio, electrical and computer engineering professor Thomas Huang, and computer science professor Sylvian Ray spent five years developing the camera, which is based on mathematical models of the mammalian brain's superior colliculus, the region responsible for subconscious, reflex-like responses to stimuli in the environment. Ray devised a software program that lets the camera pinpoint and follow people in an environment, using Anastasio's models of how the superior colliculus processes sensory data. The camera initially was capable of only horizontal rotation and motion detection based on visual and audio information, but the device was upgraded to rotate vertically as well as horizontally, and to detect moving objects more accurately with heat-sensitive infrared and a face-detection program developed four years ago by Compaq researchers Paul Viola and Michael Jones. The program determines whether an object could be a face by analyzing dark, shadowy areas that abut brighter areas of the same size. Although the program is very accurate, Viola cautions that it really recognizes face-like patterns of shadows rather than faces, and the determining factor for pinpointing faces--the shadows on the eyes--cannot be detected unless the person is facing the camera. Anastasio says the upgraded camera could be used for video surveillance, and as a testing tool for neural functions such as memory, cognition, or image selection. "The robot, which was designed on the basis of our ideas about the brain, could actually help us test new ideas about brain function," he concludes.
    Click Here to View Full Article
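
    A descendant of the Viola-Jones approach ships with the open-source OpenCV library as Haar-cascade classifiers. The short sketch below (assuming a modern OpenCV Python install; the input file name is hypothetical) shows typical usage, and works best on frontal faces, consistent with the article's caveat that the method keys on shadow patterns around the eyes.

        import cv2

        # Load the bundled frontal-face Haar cascade, a trained Viola-Jones-style detector.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        image = cv2.imread("people.jpg")                      # hypothetical input image
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

        # Scan the image at multiple scales and mark each detected face-like region.
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

        cv2.imwrite("people_faces.jpg", image)
        print(f"detected {len(faces)} face-like regions")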

  • "Quantum Leap to Secure Web Video"
    BBC News (04/29/05)

    Toshiba's Quantum Key Server system delivers an unbreakable quantum encryption scheme for voice and video files streamed over the Internet, although Dr. Andrew Shields of Toshiba's Cambridge labs says secure quantum cryptography applications are several years off. The technology enables each frame in a video file to be encrypted using separate keys, so that decrypting one frame would be worthless without the decryption of all the other frames. Quantum cryptography guarantees that any attempt to intercept the keys en route will be obvious, as the very act of looking at a quantum particle changes its state. The Quantum Key Server technology enables keys to be refreshed often, thus thwarting thieves. The self-managing system can function around the clock, which Shields says is the technology's major breakthrough. He also says the system could be extended to securely encrypt other high-bandwidth files, as well as supply connections between separate corporate sites to facilitate secure file transmissions via fiber-optic networks. Toshiba says government and financial institutions were suitably impressed by demonstrations of the technology, and now the project is concentrating on extending the system's physical distance of operation. The teleportation of photons over a distance of 600 meters was successfully carried out last August, and Shields believes repeaters could allow transmissions to be sent across even longer distances.
    Click Here to View Full Article
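
    A toy, classical illustration of the per-frame keying idea (not Toshiba's system, which would distribute its keys by quantum key distribution): each frame is encrypted with its own fresh, frame-length random key, so recovering one key exposes at most one frame.

        import os

        def xor_bytes(data: bytes, key: bytes) -> bytes:
            """One-time-pad style XOR of data with an equal-length key."""
            return bytes(d ^ k for d, k in zip(data, key))

        frames = [f"frame-{i} pixel data".encode() for i in range(5)]   # stand-in video frames

        keys = [os.urandom(len(f)) for f in frames]          # one fresh key per frame
        ciphertexts = [xor_bytes(f, k) for f, k in zip(frames, keys)]

        # Decrypting frame 2 requires key 2 and nothing else; the other frames stay opaque.
        print(xor_bytes(ciphertexts[2], keys[2]).decode())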

  • "Ants Help Researchers Control Systems"
    Electronics Weekly (UK) (04/28/05); Yeates, Harry

    Researchers at the Universities of York, Kent, and Surrey in the United Kingdom are using simulations, cellular automata models, and formal mathematical models to study how large numbers of relatively simple agents, such as the ants in a colony, interact to produce complex collective behavior. As part of this work on so-called emergent properties, the researchers expect to find predictable and useful patterns, which could lead to the manipulation of such systems. Their two-year EPSRC feasibility study, called TUNA, could ultimately yield a computing device. The researchers are testing their ideas about modeling and simulating emergent activity by demonstrating a system of artificial blood platelets, in which nanoscale agents collectively produce the macroscopic effect of stopping bleeding. "We're coming to some interesting ideas about how you might both model and reason about emergent properties, and having different layers of description," says Susan Stepney, professor of computer science at York.
    Click Here to View Full Article
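
    A generic taste of the kind of emergence the TUNA team studies (this is elementary cellular automaton rule 110, not the project's platelet model): each cell follows a trivial local rule based only on its immediate neighbors, yet the row as a whole develops complex, hard-to-predict structure.

        WIDTH, STEPS, RULE = 64, 32, 110

        row = [0] * WIDTH
        row[WIDTH // 2] = 1                       # single seed cell

        for _ in range(STEPS):
            print("".join("#" if c else "." for c in row))
            # Each cell's next state is read out of the 8-bit rule number, indexed
            # by the 3-bit pattern formed by its left neighbor, itself, and its right neighbor.
            row = [
                (RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
                for i in range(WIDTH)
            ]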

  • "Grid Computing Meets Data Flow Challenge"
    Register (UK) (04/27/05)

    Scientists working on the CERN computer grid effort have sustained the highest-capacity continuous data flow ever achieved between multiple sites. The researchers sent an average flow of 600 megabytes per second among eight sites worldwide for 10 days without interruption, for a cumulative transfer of 500 terabytes. The experiment is the first in a series of four service challenges for the CERN network, which is being constructed to handle the massive data output of the Large Hadron Collider (LHC), set to come online in 2007. "When the LHC starts operating in 2007, it will be the most data-intensive physics instrument on the planet, producing more than 1,500 megabytes of data every second for over a decade," said CERN service challenges manager Jamie Shiers. The CERN grid will be the largest grid network in the world, used by more than 6,000 scientists working on the LHC experiments. Fermilab computing division head Vicky White, whose group was one of the eight institutions participating in the recent test, said physics researchers have long sent large amounts of data across distances, but the new effort was unprecedented in that it involved multiple partners and a sustained data flow. The next service challenge will take place this summer, extend to more nodes, and aim to keep data flowing continuously for a three-month period.
    Click Here to View Full Article
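
    A quick back-of-the-envelope check of the figures above (decimal units assumed):

        rate_mb_s = 600                       # sustained transfer rate, MB per second
        seconds = 10 * 24 * 3600              # ten days
        total_tb = rate_mb_s * seconds / 1e6  # MB -> TB
        print(f"{total_tb:.0f} TB moved in ten days")     # ~518 TB, i.e. roughly 500 TB

        lhc_mb_s = 1500                       # projected LHC output, MB per second
        year_pb = lhc_mb_s * 365 * 24 * 3600 / 1e9
        print(f"~{year_pb:.0f} PB of LHC data per year")  # tens of petabytes annually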

  • "NASA Center Develops 'Hot' New Technology"
    Diamondback (04/27/05); Howell Jr., Tom

    The University of Maryland's Center for Satellite and Hybrid Communications Networks (CSHCN) is at the forefront of sensor technology. CSHCN receives $8.6 million annually to research wireless satellite networking for NASA and the Defense Department, and has created a sensor the size of two AA batteries that can be easily deployed on the battlefield or by future moon exploration teams. Electrical engineering professor and center director John Baras says the sensor could be thrown onto the battlefield and relay situational information back to laptop-equipped soldiers, or used in space missions to communicate environmental details or astronauts' health data to mission control. The center is currently developing an even smaller version the size of a quarter. CSHCN researchers have also created secure wireless networking setups where connected computers monitor each other's performance and can detect anomalies. Such systems could improve security on current wireless networks, which leave data open to hackers. Baras says other work done at CSHCN involves devising algorithms based on phenomena observed in nature, such as how insects interact or how particles work together.
    Click Here to View Full Article

  • "Geocollaboration Using Peer-Peer GIS"
    Directions (04/26/05); Judd, Damon D.

    As with other areas of general computing, geographic information system (GIS) technology is moving toward collaborative environments in which co-located or distributed teams can share information asynchronously or in real time. This new "geocollaboration" technology is also getting a boost from architectures such as peer-to-peer, which facilitates resource sharing among heterogeneous systems. Collaboration itself is an ill-defined term; in the context of computing it can refer to anything from loosely coordinated open communities, such as Wikipedia, to more tightly focused working groups. Penn State University researchers are investigating geocollaboration and helping to define its basic parameters. Applications that enable geocollaboration for natural disaster or terrorist attack response are especially popular now. Software makers have recognized the need for more sophisticated computer-supported cooperative work and have developed technologies such as .NET and JXTA with distributed systems in mind; XML and peer-to-peer are other enabling technologies. Public geocollaboration efforts include the Canadian GeoConnections project for making that country's geospatial information widely available online, and the Geospatial Collaboration and Awareness System (GeoCAS) component of the Pacific Disaster Center's Integrated Decision Support System, which provides data-sharing and decision-support capabilities to humanitarian efforts.
    Click Here to View Full Article

  • "The Geography of Internet Addressing"
    CircleID (04/25/05); Wilson, Paul

    The ITU-T proposal for a new system in which countries receive and manage separate IPv6 allocations could lead to excessive fragmentation in the routing space, writes APNIC director general Paul Wilson. Figuratively, the Internet landscape consists of a geography of interconnected networks, but the ITU is proposing country-based IP address allocations that would give countries more self-determination. Nations have borders in place, but the Internet is known for its flexibility and utility, with every point on the network being open to every other point, even though there are downsides such as spam and hacking. The technical risks of realigning the Internet along national boundaries include the development of up to 200 different policy regimes, excessive consumption and subdivision of address space, and a large number of additional address prefixes in the IPv6 routing tables. Such a system may also demand more involvement at the national level, such as the creation of a management system for address space, infrastructure support for carrying and managing Internet traffic, and inter-provider settlement schemes. The global phone network runs such a settlement system with hundreds of providers, but the Internet, with its tens of thousands of network operators, would demand hundreds of millions of bilateral agreements. The true benefit of such a system must be weighed against its potential cost, management demands, and the degradation of the Internet as a cohesive network.
    Click Here to View Full Article
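
    Wilson's scale argument, made concrete: full bilateral interconnection among n networks requires n*(n-1)/2 agreements, so tens of thousands of operators already imply hundreds of millions of pairwise relationships. The operator counts below are illustrative, not figures from the article.

        def bilateral_agreements(n: int) -> int:
            """Number of distinct pairs among n networks: n choose 2."""
            return n * (n - 1) // 2

        for operators in (1_000, 10_000, 40_000):
            print(f"{operators:>6} operators -> {bilateral_agreements(operators):>13,} agreements")
        # 40,000 operators alone would imply roughly 800 million bilateral agreements.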

  • "Skeletons on Your Hard Drive"
    CNet (04/20/05); Hines, Matt

    Experts say it is inordinately difficult to completely erase data on unwanted hard drives, even using commercial wiping software to overwrite the data. The National Association for Information Destruction (NAID) said it could not endorse the use of wiping software alone because studies have shown such software is not enough to ensure data deletion. Instead, the group says users should combine wiping software with physical destruction of the media to make sure hackers cannot pull sensitive information, such as login data, off the drives. NAID executive director Bob Johnson also says professional services that claim to wipe large numbers of computer hard drives for organizations lack adequate testing measures to verify that data is inaccessible. Studies have shown the majority of resold hard drives still contain some information. The U.S. Department of Defense requires seven passes with wiping software for hard drives that do not require physical destruction, says Acronis' director Stephen Lawton, whose company sells such software; a single pass is not enough even for home users, he says. Stronger protection is afforded by crushing services or by degaussing, a magnetic erasure process usually applied to large collections of machines. Hewlett-Packard's John Frey says PC data is difficult to erase because, going back to the DOS era, hardware and software makers designed systems to ensure users did not accidentally delete information.
    Click Here to View Full Article
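
    For illustration, a minimal Python sketch of what multi-pass wiping software does: overwrite the same bytes with random data several times before deleting the file. As the article stresses, software overwriting alone is not considered sufficient (filesystems can cache or remap data), so this shows the idea rather than guaranteeing erasure; the file name in the usage example is hypothetical.

        import os

        def wipe_file(path: str, passes: int = 7) -> None:
            """Overwrite a file's contents with random bytes several times, then delete it."""
            size = os.path.getsize(path)
            with open(path, "r+b") as f:
                for _ in range(passes):
                    f.seek(0)
                    f.write(os.urandom(size))   # overwrite in place with random bytes
                    f.flush()
                    os.fsync(f.fileno())        # push this pass to disk before the next one
            os.remove(path)

        # Example: wipe_file("old_tax_records.csv")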

  • "Security Stalls Mobile Multimedia"
    EE Times (04/25/05) No. 1368, P. 45; Yoshida, Junko

    The technology is available to enable mobile multimedia such as downloaded MP3 files, movie clips, video games, and streaming TV broadcasts, but the lack of security is hampering the market, according to experts. Enabling more network-enabled multimedia features on mobile phones requires digital rights management (DRM) frameworks, reliable software to prevent systems from crashing, and protections against viruses and other hacker activity. Mobile technology vendors are split into three camps concerning mobile multimedia security: SIM card vendors that want to see security implemented on SIM cards; chip vendors that advocate a hardware-centric approach with DRM on the baseband or applications processor and a hardwired security engine; and other chip makers that propose combined software and hardware, processor-based DRM solutions. Infineon's Dominik Bilo says there will not be any single security implementation on future mobile phones, but rather a number of competing solutions. DRM today is often implemented in the baseband processor or coprocessor, but Texas Instruments' Jay Srage warns that software-based security on a processor is easily hacked, and recommends that key-management systems be hardwired and usage rules dealt with in software. Phones with hardwired encryption engines benefit from faster security functions and lower power requirements, says Philips Software's Bart Van Rijnsoever. SIM card vendors, meanwhile, say larger SIM cards would allow network operators to easily update DRM as well as provide for device and multimedia portability. Critics of the SIM card approach note that security is vulnerable at the interface; the European Telecommunications Standards Institute is working on a standardized secure interface.
    Click Here to View Full Article

  • "Avoid the Rush: Worry About 2006 Elections Now"
    Government Computer News (04/25/05) Vol. 24, No. 9; Jackson, William

    Concerns over the reliability and security of electronic voting have prompted the federal government to get involved in the inner workings of casting ballots and counting votes for the first time. In April, the National Institute of Standards and Technology (NIST) was scheduled to draft federal guidelines for voting systems as part of the Help America Vote Act, the law passed following the controversy surrounding the 2000 presidential election. Opponents of electronic voting systems point out that even the least sophisticated technology has flaws and can be attacked, and add that a paper trail would help facilitate recounts. In May, NIST will begin a one-year study of the voting systems used in 2004 and set benchmarks for the next generation of guidelines. Although they will be voluntary, the guidelines are a key first step toward standards that would eventually be adopted by states.
    Click Here to View Full Article

  • "Trickle-Down Business Intelligence"
    InfoWorld (04/25/05) Vol. 27, No. 17, P. 47; Gincel, Richard

    Business intelligence (BI) is penetrating the enterprise more deeply as the data captured by various BI applications trickles down from executives and financial managers to the lower echelons. "It's a shift toward directed BI, where you're guiding people through decisions," says Gartner analyst Bill Gassman. The InfoWorld Business Intelligence Report 2005 estimates that 45 percent of surveyed companies use BI solutions as decision guidance tools for workers, while 32 percent cite prepackaged integration with existing enterprise applications as the most critical element of their companies' BI solutions. In addition, 70 percent of respondents intend to extend the availability of BI solutions to more employees in the next year. "The major trend for BI [applications] is to drive select information to individuals when they need it, be sure that it's already relevant to what they have to accomplish, and then guide them toward what action to take," says Siebel Business Analytics' Paul Rodwick. Work cultures throughout the enterprise are changing as BI broadens its scope and delivers timely information to frontline workers, usually via customized dashboards, according to industry observers. Forrester Research analyst Keith Gile, author of a recent report on best practices in BI application development, says IT should separate technical functionality (OLAP, visualization, or predictive analytics, for instance) from a BI application's functional requirements (forecasting, supply-chain optimization, customer segmentation, and so on), which makes it easier to target the appropriate organizational tier.
    Click Here to View Full Article

  • "Sensor Data Are Spatial Data"
    GeoWorld (04/05) Vol. 18, No. 4, P. 22; Reichardt, Mark

    Open Geospatial Consortium (OGC) President Mark Reichardt writes that all sensor data constitute spatial data because every sensor has a physical location, and this reasoning is a core tenet of OGC's Sensor Web Enablement (SWE) effort. SWE is a key element of the OWS-3 Interoperability Initiative for advancing OpenGIS Specifications through "hands-on" prototyping and testing. SWE's objective is to facilitate interoperable access to distributed, dissimilar sensors and sensor networks so that applications can be built to discover, access, and combine sensor data from a wide variety of technologies and databases. Reichardt says standardized descriptions of sensor capabilities, location, interfaces, and observations can be published as XML-based text schemas, which Web brokers, clients, and servers can use to automatically discover sensors on the Web and evaluate their properties. The XML schema provides sensor control interface information that enables communication with the sensor system, and offers a way to automatically produce comprehensive standard-schema metadata for sensor-generated data, allowing data in distributed archives to be discovered and interpreted. Oak Ridge National Laboratory's SensorNet requirements and the needs of other OGC members are helping to drive the maturation of the SWE specifications, reconcile OGC's Sensor Model Language with the IEEE 1451 "plug-and-play" sensor standard, and mature the Sensor Alert Service through use of the OASIS Common Alerting Protocol. Reichardt says these examples illustrate how closer relationships between OGC and other standards bodies are helping the consortium reach its goals.
    Click Here to View Full Article
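
    A schematic illustration of the SWE idea, with invented element names rather than actual OGC SensorML: a sensor publishes an XML self-description covering what it is, where it is, and what it observes, which brokers and clients can discover and parse automatically.

        import xml.etree.ElementTree as ET

        # Build a toy sensor description; every field here is illustrative, not a real schema.
        sensor = ET.Element("sensorDescription", id="thermometer-17")
        ET.SubElement(sensor, "location", lat="35.93", lon="-84.31")   # every sensor has a position
        ET.SubElement(sensor, "observedProperty").text = "air_temperature"
        ET.SubElement(sensor, "units").text = "celsius"
        ET.SubElement(sensor, "controlInterface", protocol="http",
                      endpoint="http://example.org/sensors/17")        # hypothetical endpoint

        print(ET.tostring(sensor, encoding="unicode"))

        # A broker that harvests such documents can answer spatial queries like
        # "all temperature sensors within this bounding box" without per-sensor code.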

  • "Voice Over the Future"
    Military Information Technology (04/24/05) Vol. 9, No. 3; Miller, William L.

    The U.S. military's deployment of Voice over Internet Protocol (VoIP) promises to boost the efficiency and effectiveness of military planning and battlefield operations; Cisco's Ed Carney says VoIP's value lies "in creating a common, standards-based, converged network, to enable the right information to go to the right people, not just one time, but all the time." VoIP implementation is the key enabling technology in the Marine Corps' effort to develop unit operations centers (UOC), including the Combat Operations Center set up by the 1st Marine Corps Regiment in the battle of Fallujah. Mike Fallon of General Dynamics C4 Systems says the UOC includes local area network-equipped command centers that each employ a VoIP solution, while Kevin Holt of Marine Corps Systems Command's Marine Air/Ground Task Force Command and Control Centers says VoIP "supports the need to gain more detailed situational awareness with the ability to send voice, data, and video across the intercom system." The proliferation of VoIP in the Army is being fueled by the need to support networking at lower echelons, and Secure Enroute Communications Package-Improved (SECOMP-I) program manager Susan Johnson says VoIP is the program's chief communication enabler for internal and external voice communications. General Dynamics' SECOMP-I system is a mission planning and rehearsal tool for troops in transit or between missions. Permanent force installations are also being prepared for their inevitable transition to VoIP: Combat Information Transport System (CITS) program manager Lt. Col. Michael Horn oversees the effort to install, maintain, and shield voice and data networks at over 100 fixed Air Force bases. Horn says the primary issue in proving the VoIP technology is quality of service, but CITS also expects to streamline its personnel and skill set.
    Click Here to View Full Article

  • "Patenting the Internet"
    Information Today (04/05) Vol. 22, No. 4, P. 1; Pike, George H.

    Debate has erupted over the patentability of the Internet and related technologies with the issuance of patents covering media streaming, hyperlinks, and other common online techniques and processes. Patent examiners determine whether an invention or process is worthy of patenting by studying its novelty (newness) and/or non-obviousness, both of which are ascertained through comparison to "prior art," the publicly available body of knowledge at the time of invention. Novelty means something previously unknown to the general marketplace, while a non-obvious innovation is one with a markedly dissimilar form and function from existing or previous sources. The U.S. Patent Office recently awarded WebFeat a patent for its federated search engine technology, which is described as "a method and system for retrieving search results concurrently from multiple disparate databases, whether such databases be available through the Web, or other proprietary internal networks." Process patents such as WebFeat's, which cover a series of steps used to produce a result, have drawn controversy--as have their close relatives, business method patents, which encompass unique techniques for carrying out a specific business activity. These two patent categories are primarily responsible for the boom in Internet patent controls. However, many critics and competitors question the viability of such patents, whose uniqueness is harder and harder to determine given the speed of Internet growth, the sophistication of core technologies, and the subtleties of patent law.
    Click Here to View Full Article

  • "Is It Human or Computer? Defending E-Commerce With Captchas"
    IT Professional (04/05) Vol. 17, No. 2, P. 43; Pope, Clark; Kaur, Khushpreet

    Captchas, which are puzzles or problems that humans can easily decipher but computers cannot, are becoming a key defense for e-commerce systems against spammers and bots. Captcha stands for Completely Automated Public Turing test to tell Computers and Humans Apart; it is modeled after the famous Turing test for distinguishing between humans and machines, although the Captcha test is multisensory while the classical Turing test is conversation-based. The most generic type of Captcha consists of an image of seemingly random numbers and letters that are distorted to thwart optical character recognition. Ideally, a Captcha is resistant to brute-force attacks; compatible with completely automated processes for generating and evaluating tests; built from public code, data, and algorithms; and reliant on a totally random generation scheme that selects files from an archive of numerous images, names, and other data. Captchas are used by many free email service providers to prevent automated scripts sent by spammers from creating accounts, and as a replacement for user accounts and passwords on pseudopublic files. Captchas can also make it difficult for Web spiders to index sites for search engines. Captchas come in several varieties, among them: Gimpy, a puzzle in which words are taken from a dictionary and displayed in a corrupted image; Pix, which requires users to associate images of mundane objects with a single category or phrase; Baffle Text, which employs small, pseudorandom, pronounceable words; and Sound Captchas, in which a random sequence drawn from recordings of simple words or numbers is combined, corrupted by noise and distortion, and must be correctly identified by users. Problems associated with Captchas include inaccessibility for disabled and visually impaired users, significant hardware and software requirements, and workarounds that are only moderately difficult for determined attackers.
    Click Here to View Full Article
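
    A minimal sketch of the generic text Captcha the article describes, using Python and the Pillow imaging library (an assumption, not something the article specifies): render a random string with jittered character positions, then add lines and speckle noise to frustrate OCR while keeping the text readable to people.

        import random
        import string
        from PIL import Image, ImageDraw

        def make_captcha(path: str, length: int = 6) -> str:
            """Render a random challenge string as a noisy image and return the answer."""
            text = "".join(random.choices(string.ascii_uppercase + string.digits, k=length))
            img = Image.new("RGB", (40 * length, 60), "white")
            draw = ImageDraw.Draw(img)

            # Draw each character at a slightly random position to break uniform spacing.
            for i, ch in enumerate(text):
                x = 10 + 40 * i + random.randint(-4, 4)
                y = 15 + random.randint(-8, 8)
                draw.text((x, y), ch, fill="black")

            # Add crossing lines and speckle noise to frustrate optical character recognition.
            for _ in range(6):
                draw.line([(random.randint(0, img.width), random.randint(0, img.height)),
                           (random.randint(0, img.width), random.randint(0, img.height))],
                          fill="gray", width=1)
            for _ in range(300):
                draw.point((random.randint(0, img.width), random.randint(0, img.height)), fill="gray")

            img.save(path)
            return text   # the answer the server keeps in order to check the user's response

        # Example: answer = make_captcha("captcha.png")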


 