Association for Computing Machinery
Timely Topics for IT Professionals

About ACM TechNews

ACM TechNews is published every week on Monday, Wednesday, and Friday.


ACM TechNews is intended as an objective news digest for busy IT professionals. Views expressed are not necessarily those of either AutoChoice Advisor or ACM. To send comments, please write to technews@hq.acm.org.
Volume 6, Issue 626: Friday, April 2, 2004

  • "Electronic Voting Alternative Offered"
    SiliconValley.com (04/01/04); Ackerman, Elise

    A coalition of volunteer programmers and professors known as the Open Voting Consortium has devised an electronic voting scheme that is "technically sound, accurate, secure, inexpensive, uniform and open." In addition to providing a voter-verified paper ballot that many critics of touch-screen e-voting machines are clamoring for, the system is cheaper than current e-voting systems on the market because it uses open-source software and commodity hardware. A user employs an ordinary PC screen to view a ballot and makes his selections through a touch-screen, mouse, or keyboard interface; the software then transforms the choices into a bar code and prints out a copy of the ballot. The bar code lets the computerized counting system rapidly ascertain election results and enables voters to verify that their selections were recorded correctly by scanning the code into a separate computer. Visually impaired voters can confirm their choices without compromising their privacy by listening to an automated voice that reads back their selections over earphones. Additional security is provided by the storage of electronic backup ballots in XML files, and the placement of paper ballots in privacy folders until they are deposited in a ballot box. Open Voting Consortium founder and software developer Alan Dechert believes the system's use of open-source software will help build public trust in e-voting systems, and satisfy critics who are concerned about e-voting firms' refusal to disclose the code for recording and counting votes. The consortium posts the system's software at SourceForge.net. MIT computer scientist Ted Selker calls the consortium's initiative an important step toward raising public awareness of the need for publicly funded voting system research.
    Click Here to View Full Article
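
    The verify-by-scanning step described above can be pictured in a few lines of code. This is a hypothetical illustration, not the consortium's published format (its actual software is posted at SourceForge.net): the printed bar code carries an encoded copy of the ballot, and a separate machine decodes it so the voter can confirm the recorded selections.

```python
def encode_ballot(selections):
    """Serialize contest/choice pairs into the string a bar code would carry."""
    return "|".join("%s=%s" % (contest, choice)
                    for contest, choice in sorted(selections.items()))

def decode_ballot(payload):
    """Recover the selections from a scanned bar code for voter verification."""
    return dict(field.split("=", 1) for field in payload.split("|"))

# A voter's choices survive the round trip through the printed bar code.
ballot = {"President": "Candidate A", "Proposition 12": "Yes"}
assert decode_ballot(encode_ballot(ballot)) == ballot
```

    Because the bar code is derived deterministically from the selections, any machine with the decoder can audit a ballot without access to the machine that printed it.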

  • "Industry Urges Tech Security Upgrades"
    Associated Press (03/31/04); Bridis, Ted

    Major software companies recommend in an April 1 report that the Homeland Security Department seriously consider whether "tailored government action" is needed to improve software design and thereby bolster the security of U.S. computer networks, an admission that the market may be unable to adequately secure the most sensitive networks. This position marks a dramatic reversal for the software industry, which has traditionally balked at proposals to make improved network security a federal mandate. The companies, including Microsoft and Computer Associates, urge the government to impose security-improvement mandates only if market forces fail, and argue that any federal action to secure the most sensitive networks should disrupt market-driven security innovation as little as possible. The report calls for the government to produce a formal study of the issue in fiscal 2005 with the help of the private sector. Its recommendations include investing at least $12 million over the next 19 months to fund 12 new academic fellowships that teach safer software design techniques to future computer engineers; offering incentives to companies that improve buggy software; establishing a government lab to track and test the effectiveness of software patches; putting a bounty on hackers and virus authors to encourage their apprehension and conviction; and setting up a cybersecurity report card for vital computer network operators. The industry recommendations were solicited by the Homeland Security Department's cybersecurity division last December.
    Click Here to View Full Article

  • "Russian Schools Sweep Programming Contest"
    Toronto Globe & Mail (03/31/04); Kapica, Jack

    The St. Petersburg Institute of Fine Mechanics and Optics won the top prize in ACM's 28th International Collegiate Programming Contest, held in Prague, Czech Republic, and Somers, NY. Teams from KTH-Royal Institute of Technology in Stockholm, Belarusian State University in Minsk, and Perm State University in Perm, Russia, all won gold medals. A group from the Massachusetts Institute of Technology was the highest North American finisher, garnering a silver medal for fifth place. The three-person teams competed to complete the most problems in the shortest amount of time during the five-hour event; this year's contest focused on the Eclipse and Linux open source systems. Winners received scholarships, prizes, and the right to claim the "world's smartest trophy." Some 3,150 teams from 75 countries competed in the contest at the regional level, while 73 teams from 31 countries advanced to the World Finals. "This contest features the best and brightest problem solvers from campuses spanning the globe," says Gabby Silberman of IBM Centers for Advanced Studies in Hawthorne, NY. IBM sponsored the competition. Next year's ACM International Collegiate Programming Contest will be held in Shanghai, and in addition to the standard format, the contest will incorporate a challenge using IBM's Power technology.
    Click Here to View Full Article

  • "Mass Resignation for French Scientists"
    Cordis News Service (03/10/04)

    Some 976 French laboratory directors and 1,100 specialist team leaders resigned on March 9 in a demonstration against what they perceive as inertia on the part of the French government to address growing concerns among researchers, including low wages, a lack of funds for new equipment, little coordination across different research organizations, and poor job conditions. The protesters were not satisfied by Minister for Research and New Technologies Claudie Haignere's offer to provide 294 million additional euros for research funding and 300 more jobs, nor were they moved by an appeal from French Prime Minister Jean-Pierre Raffarin. A group of expatriate scientists recently sent an open letter to President Jacques Chirac warning, "Unless the present crisis is transformed into a springboard for activating research in our country, future technological breakthroughs will put us on the wrong side of the divide vis-a-vis Asia and the U.S." Dr. Alex Kahn of Paris' Cochin Institute reported that French-speaking people's first choice for a place to work is Switzerland, Canada, or the United States rather than France. French Ph.D. graduates earn less than 2,000 euros a month on average, while the nearly 11,000 potential researchers the universities churn out annually face limited job opportunities outside of state institutes. Demonstrators say research budgets have barely kept pace with inflation, while an outdated scientific organization with poor industry ties, inflexible bureaucracy, and a civil service status for all personnel lies at the core of many of France's difficulties. An official report about the management of the National Center for Scientific Research concluded that its co-management with trade unions has led to shoddy self-assessment, perpetuating poor or obsolete research projects. 
The French government has proposed better-paying, project-based jobs for young scientists, while Raffarin wants tax incentives established for companies that invest in private research.
    Click Here to View Full Article

  • "Tech Study Provides 'Solid State' Stats"
    Washington Post (03/31/04); Webb, Cynthia L.

    The Milken Institute's 2004 State Technology and Science Index has the same states at the top and bottom of its list as the previous edition, though in slightly different order, and links states' technology savvy directly to per-capita income. Massachusetts remains No. 1 in the pound-for-pound ranking of states' use of technology to improve economic development, while California edged out Colorado for second place. Colorado and Texas are still suffering from the recent economic downturn and the crash of the telecommunications and Internet industries. Home to many high-tech companies and university laboratories, Texas ranked No. 23, down from No. 14 two years ago; Milken Institute regional studies director Ross DeVol says the fall was due not to structural problems but to the lack of new venture capital and startup activity, coupled with the IT industry's subdued performance. An article in The San Francisco Chronicle, commenting on the Milken study, pointed out that the Bay Area would trounce all other competitors if it were its own state, given the presence of Silicon Valley and university research institutions, but that its performance was diluted by the much larger and less tech-focused Southern California economy. California also suffered in terms of educational investment and university research, a consequence of a serious budget crisis, while DeVol says the state is also threatened by its dependence on imported human capital, especially skilled technology workers from countries such as India and China who now have more incentive to stay home. Former Netscape Communications CEO Jim Barksdale used the Milken study to give a frank assessment to Mississippians recently, saying their standing at the bottom of the rankings was due not to a lack of venture capital or economic development, but to poor basic education in public schools that hurts their image nationally.
    Click Here to View Full Article
    (Access to this site is free; however, first-time visitors must register.)

  • "New EU Grid Project Has International Aspirations"
    IDG News Service (04/01/04); Pruitt, Scarlet

    The goal of the European Union's Enabling Grids for E-Science in Europe (EGEE) project is to construct a dependable and secure grid that is available 24 hours a day, according to the European Organization for Nuclear Research (CERN). EGEE is envisioned as the successor to the European DataGrid (EDG) project and will be built using software developed from EDG, says EGEE project leader Fabrizio Gagliardi, who likens the project to the World Wide Web. EGEE, which was launched on April 1, is being funded by the EU along with 70 partner organizations. The EU has made a two-year commitment of about 32 million euros for EGEE, while total annual funding for the first two years of the project's life is expected to be between 30 million euros and 40 million euros. "This is the EU's attempt to build a distributed computing infrastructure strategy in Europe with major funding from the EU but the dimension is truly international," declares Gagliardi. In addition to European participants such as France's National Center of Scientific Research, EGEE also has the support of the University of Chicago and other major American research organizations. Gagliardi says the initiative is also looking for future support in Southeast Asia and Japan. The project's two-year goal is to build a grid that boasts the processing power of 15,000 to 20,000 accumulated processors to be employed by some 3,000 users.
    Click Here to View Full Article

  • "New Marking Process Traces Spammers, Pirates, and Hackers"
    EurekAlert (03/31/04)

    Penn State researchers have proposed a new process to make it impossible for hackers, spammers, and digital pirates to spoof source addresses in order to thwart attempts to trace them. The method involves using border routers to mark each message or data packet with an identifying number. The marks are formed from the border router's 32-bit IP address and would reside in obsolete fields in the IP packet headers; should the available obsolete field be less than 32 bits long, the researchers suggest partitioning the border router's IP address into overlapping segments, each of which would be employed by the router as a potential mark. Fragments from packets that have been labeled as malevolent are combined to form the names of the border routers that tagged and forwarded them to the victim's computer, while false positives can be reduced because the overlapping fields permit the victim to compare fragments from the same router. The marking scheme generated fewer than 1% false positives per 1,000 attacking addresses in simulated distributed denial of service attacks, and had a 100% success rate in tracing addresses transferring copyrighted content in another simulation. "The technique offers Internet access providers a real-time, cost-effective way to conduct forensics and improve security for the Internet," notes Penn State's Dr. George Kesidis, who developed the process with Ihab Hamadeh. "In addition, the approach will be demonstrably effective during an incremental deployment phase, thereby, creating incentives for broader deployment to satisfy the cyber security concerns of the Internet services industry and government regulators."
    Click Here to View Full Article
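
    The overlapping-fragment idea is the heart of the scheme: a 32-bit border-router address is split into overlapping segments so that each mark fits a short, reusable IP-header field, and the victim reassembles the router's address from marks collected across many packets, using the overlap to reject fragments that came from different routers. The field width and overlap below are assumptions for illustration; the article does not give the Penn State design's exact layout.

```python
FRAG_BITS = 16   # assumed width of the reusable IP-header field
STEP = 8         # overlap: consecutive fragments share 8 bits

def fragments(addr32):
    """Split a 32-bit router address into overlapping (offset, value) marks."""
    return [(off, (addr32 >> off) & ((1 << FRAG_BITS) - 1))
            for off in range(0, 32 - FRAG_BITS + 1, STEP)]

def reassemble(marks):
    """Rebuild the address; overlapping bits that disagree expose mixed marks."""
    addr, covered = 0, 0
    for off, val in marks:
        mask = ((1 << FRAG_BITS) - 1) << off
        piece = (val << off) & mask
        overlap = covered & mask          # bits already set by earlier marks
        if (addr & overlap) != (piece & overlap):
            raise ValueError("overlapping bits disagree; marks mix routers")
        addr |= piece
        covered |= mask
    return addr

# Marks from a single router reassemble into that router's address.
router = 0xC0A80001  # 192.168.0.1
assert reassemble(fragments(router)) == router
```

    The overlap is what cuts false positives: fragments from two different routers will almost always disagree somewhere in their shared bits, so mismatched combinations can be discarded rather than reported as attackers.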

  • "Yoran Rejects Claims of Slow Progress in Securing Key IT Systems"
    InformationWeek (03/30/04); Hulme, George V.

    Amit Yoran, director of the Department of Homeland Security's National Cyber Security Division (NCSD), refutes recent claims by Sen. Joseph Lieberman (D-Conn.) of the Senate Government Affairs Committee that the White House's efforts to secure the United States' critical infrastructure IT systems have been sluggish and unfocused. He lists significant accomplishments his division has achieved since its inception last June: the creation of the U.S. Computer Emergency Readiness Team (US-CERT) to coordinate cooperation between federal and non-federal cybersecurity entities, examine and reduce cyberthreats and security holes, issue cyberthreat warnings to affected parties, and coordinate incident-response operations; the establishment of the National Cyber Alert System, which currently disseminates cybersecurity data to 1 million Americans with technical and non-technical backgrounds; and the co-hosting of the National Cyber Security Summit, where the government and the private sector began working on an architecture for corporate security governance. Another notable achievement was the Homeland Security Department's participation in the Livewire cyberattack simulation, which demonstrated the need to improve the public dissemination of cyberprotection data and two-way information exchange with private companies, and also spurred Yoran's department to form the Cyber Interagency Incident Management Group. That group, which enables law enforcement, defense, and intelligence officers to leverage federal resources for the most effective response to intragovernmental cyberthreats, was accompanied by the organization of the Chief Information Security Officers Forum and the Government Forum of Incident Response Teams. Yoran says the department is deeply involved in securing digital control systems and in developing germane, rational metrics to evaluate how effective its initiatives are.
    Click Here to View Full Article

  • "Conversational Interface Aids Robot Navigation"
    EE Times (03/31/04); Brown, Chappell

    University of Missouri-Columbia computer scientist Marjorie Skubic, together with fuzzy-logic-based pattern recognition expert James Kelley and Pascal Matsakis of the University of Guelph's computing and information science department, has demonstrated a prototype robot capable of navigating a room by following sketches on a personal digital assistant. The device is part of Skubic and her partners' efforts to create robots whose movements or actions can be directed via an intuitive conversational interface that leverages the machines' understanding of spatial relationships. "It turns out that both maps and everyday conversations share a simple set of spatial elements and relationships that are used to navigate around obstacles," notes Skubic, whose team is working on an AI system that can comprehend those fundamental terms, enabling operators to direct the robots' movements through the conversational interface. In one scenario, an operator might tell the robot that a pillar stands in front of a doorway and then instruct it to move around the pillar: The robot would parse the operator's sentences to ascertain spatial relationships, then employ its sensors to identify the pillar and circumnavigate it, obviating the need for a cognitive processor to assess the entire surrounding environment. The elemental spatial terminology is based on building histograms that symbolize the distance relationships between objects, and a fuzzy logic system is used to process the histograms so that objects and their relationships can be identified and mapped onto conversational phrases. Systems and environments that Skubic thinks could benefit from the conversational scheme include homes, offices, and video surveillance; she is also working on a system that will enable robots designed by her students to function as a group.
    Click Here to View Full Article
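
    The histogram-and-fuzzy-logic mapping can be illustrated with a toy version: compute the direction from the robot to an object and score it against fuzzy directional categories. The real system builds histograms over whole object boundaries (Matsakis' histogram techniques) rather than the single centroid angle and triangular membership functions assumed here.

```python
import math

def direction_phrase(dx, dy):
    """Map a robot-to-object displacement onto a conversational direction."""
    angle = math.degrees(math.atan2(dy, dx)) % 360
    labels = {"to the right": 0, "in front": 90, "to the left": 180, "behind": 270}

    def membership(center):
        # Triangular fuzzy membership, peaking at the category's angle
        # and falling to zero 90 degrees away.
        d = min(abs(angle - center), 360 - abs(angle - center))
        return max(0.0, 1 - d / 90)

    # Pick the phrase whose fuzzy membership is strongest.
    return max(labels, key=lambda name: membership(labels[name]))
```

    An object straight ahead (dx=0, dy=1) yields "in front"; one directly left (dx=-1, dy=0) yields "to the left". Intermediate angles still get the nearest phrase, which is the point of using fuzzy memberships rather than hard angular bins.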

  • "'Reality Mining' the Organization"
    Technology Review (03/31/04); Pentland, Alex

    When taken by itself, data mining of email, Web pages, and other digital media is not an effective method for determining a company's de facto organization. Instead, MIT Media Lab's Human Design Research Group is focusing on "reality mining," in which widely available wearable gadgets define and chart real-world, face-to-face employee interactions, writes Alex Pentland, the research group's founder and director. Reality-mined data allow people to be clustered according to profiles built from accumulated conversation, email, location, and Web data, permitting the presence or absence of collaboration to be recognized. Using machine-learning advances, the MIT Media Lab group can construct computational models that simulate how existing social networks are affected by organizational disturbances, such as the integration of several departments. Pentland notes that his group is pursuing several reality-mining approaches to improving organizational performance. In one strategy, knowledge management is enhanced with wearable sensors that evaluate employees' tone of voice, prosody, and other factors so managers can ascertain relationships between colleagues; this in turn can support a worker-profile database, updated by changes in email, oral-conversation behavior, and content, that helps managers better understand the origins of in-house expertise. Another strategy involves building employee profiles from their conversational vocabulary via speech-recognition technology; querying these profiles would let managers organize workers into efficient, balanced teams. Pentland thinks reality mining can improve the management of complex organizations, but he acknowledges that privacy protection and transparency must be supported.
    Click Here to View Full Article

  • "Snapshot Chat Creates Automatic Captions"
    New Scientist (03/31/04); Anathaswamy, Anil

    Hewlett-Packard researcher Margaret Fleck has devised a system in which a computer can caption and index digital photos by recording commentary and conversations to identify keywords that describe the pictures. She has developed software that records these conversations onto a hard disk, employs a speech-recognition program to convert the audio into text in real time, and then deduces appropriate keywords for captioning and indexing the photos, such as place names, events, and when the pictures were taken. Fleck's prototype runs on a microphone-outfitted PC that automatically starts recording when a digital photo album is opened and stops after 30 seconds of silence. The HP researcher wants to build a system that can record "open-air" conversations between people discussing the photos, and the system should be able to produce lengthy descriptive captions as the accuracy of speech-recognition software improves. "It's a really clever way of annotating pictures," says Stanford University's Mor Naaman, who is developing a system for annotating digital photos by location using digital cameras equipped with global positioning system devices. Fleck thinks an effective technique will combine several unique approaches, and notes that researchers at the University of California, Berkeley have created software that indexes pictures by extracting key elements within the photos.
    Click Here to View Full Article
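
    The keyword-deduction step can be pictured as follows. This sketch only removes common stop words and keeps the most frequent remaining terms; Fleck's software does considerably more (recognizing place names, events, and dates), and the stop-word list here is invented for illustration.

```python
from collections import Counter

# Assumed stop-word list; a real captioner would use a much larger one.
STOPWORDS = {"the", "a", "an", "and", "is", "was", "this", "that",
             "of", "in", "we", "at", "it", "on"}

def caption_keywords(transcript, top_n=5):
    """Extract candidate index terms from a speech-recognition transcript."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return [w for w, _ in counts.most_common(top_n)]
```

    Given a recognized transcript like "This is the beach at Santa Cruz and the beach was sunny", the most repeated content word ("beach") surfaces first, ready to be attached to the open photo as a caption keyword.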

  • "GNOME 2.6 Out to Up Linux Desktop Stakes"
    InternetNews.com (03/31/04); Kerner, Sean Michael

    The GNOME Linux desktop version 2.6 is an evolutionary improvement on its predecessor and is equal to Mac OS X in terms of usability, according to Ximian co-founder and Novell product technology vice president Miguel de Icaza. GNU Network Object Model Environment (GNOME) and KDE are the two main Linux desktop efforts; neither has a marked advantage over the other, although they have recently diverged in features and structure. GNOME is similar to Windows and runs on Unix and related systems, and version 2.6 introduces "spatial browsing" similar to the Finder on Mac OS X: spatial browsing layers windows on top of one another and maps each directory to a specific window so users can tunnel down into directory hierarchies. GNOME's FileRoller file-compression application is improved, as are the integrated PDF viewer and help application. De Icaza says a simpler interface is the basic design focus for GNOME, but he no longer has any hopes for the Linux desktop in the consumer market since so much software only works on Windows. GNOME has taken off with governments, however, including a 400,000-user deployment in Spain; it will take about five years for Linux to make significant headway in the consumer space, estimates de Icaza. Meanwhile, open source desktop applications such as the OpenOffice productivity suite and the Mozilla browser are having tremendous success as cross-platform applications. RedMonk analyst Stephen O'Grady notes that Novell also owns SuSE, which has chosen KDE as its preferred Linux desktop, though GNOME is available on SuSE Linux as an option. Novell has not yet shown interest in dropping one Linux desktop in favor of the other, and O'Grady says that as long as both systems steadily improve, they will gain more and more ground in both the corporate and consumer markets.
    Click Here to View Full Article

  • "Mathematicians, Computer Scientists Play Key Role in Analysis of Lab Rat Genome"
    UC Berkeley News (03/31/04); Sanders, Robert

    University of California, Berkeley, computer scientists and mathematicians have developed important computational models and analysis software for comparing the genomes of rats, mice, and humans, the only three vertebrates to have their entire genetic code documented. Besides providing important insight into the relations between the three species, including common genetic defenses against disease, the new data will aid drug researchers in their use of rat models to create human drugs. A computer program called MAVID was created by UC Berkeley Professor Lior Pachter and graduate students to compare the genomes; MAVID was previously used to compare the mouse and human genomes, but Pachter says the addition of the brown Norway rat genome would yield important new insights not easily perceivable when comparing just two genomes. The brown Norway rat is believed to have originated in central Asia and spread throughout the world with the human migration, providing researchers valuable insights into common genetic responses to disease. Other computer tools used to examine the genomes include a visualization tool called K-BROWSER and a technique for searching for evolutionary hot spots in the sequences, or areas of genetic code thought to have changed rapidly through evolution. Computational software called SLAM was also developed to predict novel human genes by analyzing mouse and rat genomes. Pachter says numerous other vertebrate animal genomes will be completed this year, including the chicken, chimpanzee, frog, dog, and three types of fish; those genomes will similarly be compared to the human genome and other animals' genomes. Each animal will provide different insight into human health and biology, especially the vastly different chicken genome and the similar chimpanzee genome, which will highlight unique human gene characteristics.
    Click Here to View Full Article

  • "All the World's a Soundstage as Audio Formats Evolve"
    New York Times (04/01/04) P. E5; Captain, Sean

    Surround sound is coming to MP3 audio compression, bringing the popular format alongside new proprietary technology from Microsoft and RealNetworks: The Fraunhofer Institute says MP3 Surround will play on normal stereo devices such as portable music players, but will enable 5.1 channels similar to the Dolby Digital 5.1 soundtracks shipped with most DVDs, and the new compressed format requires little more storage or processing than standard MP3. Experts see surround sound coming to the Web, downloaded movies, and other applications. Compressed digital music has long languished in dual-channel stereo, a throwback to the vinyl records of the 1950s. MP3 has been the favorite of file-sharers and home users, but not of the music industry; none of the online music services use the MP3 format because it does not come with digital rights management capabilities, though those can be added. However, MP3 Surround is unlikely to bring surround sound to file-sharing networks, because the Super Audio CD and DVD Audio discs that carry surround sound use technical safeguards to prevent users from making digital copies. Jurgen Herre of the Fraunhofer Institute says the research center has approached recording firms about using MP3 Surround for legal downloading services. Even if MP3 Surround gains wide acceptance, however, portable music players are still largely limited by two-speaker headphones. Dolby Laboratories' new Dolby Headphone can create the effect of multichannel sound with just two speakers and will be used to bring simulated surround sound to DVD-playing laptops. Creative Labs' Phil O'Shaughnessy says many people still prefer stereo when listening to recorded music, but almost everyone prefers surround sound when watching movies.
    Click Here to View Full Article
    (Access to this site is free; however, first-time visitors must register.)

  • "Seeing-Eye Computer Guides Blind"
    Wired News (03/30/04); Knapp, Louise

    Visually handicapped people could be guided by iCare, a portable, computerized "seeing" assistant developed by researchers at Arizona State and Wright State universities. The iCare hardware includes a pair of glasses outfitted with cameras, which record images that are converted to verbal messages by a laptop the user carries in a backpack; the user wears a headset to receive the messages and a microphone to make queries. Wright State's Nikolaos Bourbakis says the images captured by the camera are processed by computer algorithms that extract data to give the user information about the object or person he or she is facing. So far, iCare's greatest strength is its ability to convert text into a synthesized voice via optical character recognition software and other tools that can account for variant illumination and viewing angles; "It's as fast as a sighted person could read a book," boasts blind ASU computer science student David Paul. The iCare-Reader can translate menus and labels as well as books, but its ability to translate handwritten text needs improvement. Web site navigation is another iCare application, although its usability is limited if users cannot access the pages they want because they cannot see the mouse cursor, notes ASU disability research specialist Terri Hedgpeth; to address this problem, the ASU researchers devised a software interface with keyboard shortcuts that bridge the gap between the screen-reader software and Blackboard software. Bourbakis says the iCare-Human Recognizer module can identify people by comparing their eye color, hair color, and facial features to those of individuals stored in its database, although this ability only works under specific lighting conditions and at a particular viewing angle. 
Finally, the iCare-Scene Analyzer examines the user's surroundings and can help the user navigate by providing data about key objects--exits, impediments, and so on--that are identified through database comparisons.
    Click Here to View Full Article

  • "Stop the Presses! Roll Out the E-Papers"
    Financial Times-IT Review (03/31/04) P. 1; Perkin, Julian

    A major breakthrough in the field of electronic paper was announced last week with Philips Electronics' declaration that it will provide the world's first commercial e-paper display module to Sony for use in an "e-book" device that will become available in Japan in late April; the display's resolution will be comparable to newsprint, and the device will boast small, long-lived batteries. E-paper's promised advantages over traditional newspapers include instant electronic updates that eliminate the need to print copies and transport them to retailers and subscribers, while advancements in the technology such as screens with no need for backlighting and rollup displays will boost its viability for consumers. Also in e-paper's favor is the roster of major companies pursuing practical e-paper applications, which includes Philips, IBM, Siemens, and Fujitsu. E-paper is not only lightweight and portable, but is easy to read and can be read in natural reflected light. Furthermore, the product only needs to consume power when it updates its content. Different e-paper technologies can serve different functions: Low-power, rollable, large-format screens for newspapers, rigid and more dynamic screens for mobile phones and other consumer electronic products, and detachable rollable screens that can be plugged into small portable devices for computing on the go. It is unlikely that e-paper will eliminate printed newspapers altogether, as newspapers still have certain advantages--for instance, newspapers are discardable, whereas a consumer would have to keep their e-paper device on their person all day. Still, experts believe the advent of commercial e-paper products will lead to a sea change in the newspaper publishing industry: Publishers may have to accelerate their migration toward new models of more distributed production and distribution, and may also face increased competition.

  • "Face-Off: Is Patch Management the Best Defense Against Vulnerabilities?"
    Network World (03/29/04) Vol. 21, No. 13, P. 44; Schultze, Eric; Hofmeyr, Steven

    Shavlik Technologies chief security architect Eric Schultze contends that intrusion-prevention systems (IPSes), anti-virus software, and firewalls alone cannot shield computers against known software flaws, and that patch management is the key ingredient for ensuring network security. Schultze likens a software patch to medicine in that it attacks the disease, the flaw itself, rather than the symptoms. He explains that it is not always known whether a patch for one bug also remedies another error elsewhere in the operating system, which is why applying a firewall or an IPS to block one specific bug may leave other susceptible portions of the code unprotected; Schultze argues that the operating system or application vendor is optimally positioned to fix the flaw because it truly understands the nature and breadth of the error. He adds that patches not only contain the latest version of the formerly buggy code, but often also roll up all known security fixes, so applying a patch guarantees that the user is running the latest iteration of the vendor's code, correcting public and non-public vulnerabilities alike. Sana Security founder Steven Hofmeyr calls patch management a miserable failure: He explains that faulty patches can carry more organizational cost than a security breach by bringing down vital servers, and cautions that vendors must conduct thorough regression testing before deployment. Hofmeyr also points out that misconfiguration and certain other vulnerabilities cannot be remedied by patching, while vendors sometimes fail to develop a patch because they lack the time and resources, or ascribe no importance to a bug. In addition, hackers are adding new tools to their arsenal to accelerate the reverse-engineering of patches to determine flaws, speeding up the race between hacker exploitation and patch deployment. 
Hofmeyr believes host-based IPSes are a more effective solution, because they block attacks against unpatched flaws and furnish immediate protection.
    Click Here to View Full Article
    (Access to this article is available to paid subscribers only.)

  • "Public ENUM--How Important Is It?"
    Business Communications Review (03/04) Vol. 34, No. 3, P. 26; Borthick, Sandra L.

    The FCC's deliberations on whether to regulate public Voice over IP (VOIP) mask another policy issue: how involved the U.S. should get in ENUM, the electronic number mapping between Public Switched Telephone Network (PSTN) and IP addresses. Thus far, the federal government has removed itself from addressing how "calls" will be facilitated among the endpoints served by different IP voice and PSTN service providers, yet the FCC and the departments of Commerce and State demand that any ENUM deployment comply with eight "principles": the preservation of national sovereignty; minimal regulation; protection of users' privacy; assurance of the stability and security of the PSTN and the Internet; support of competition; promotion of innovation; support of interoperability; and the retention of opportunity for alternate implementations. Because there is no direct incentive to adopt ENUM in the U.S., VOIP and PSTN service providers have elected to circumvent it, but this could backfire if private databases and mapping workarounds become so numerous that customers become uncertain whether their calls are reaching their intended numbers or addresses. Rather than assuming direct responsibility for ENUM, government regulators want an arm's-length contract with an administrative body, and they asked the ENUM Forum to devise specifications for the "components" the contractor would deliver. The Network Reliability and Interoperability Council advises that the U.S. officially opt in to ENUM, and that alternative providers synchronize their databases with "public ENUM and the LNP database, as well as with other applicable Number Portability databases in the PSTN." 
VeriSign VP Bob Wienski thinks that public ENUM deployments must be prefaced by "private ENUM" initiatives in which carriers and service providers share telephone numbers and IP addresses, while Neustar's Steve Granek believes the spread of IP voice endpoints will spark a gradual transition toward public ENUM. Zultys Technologies' Patrick Ferriter contends that the urge to deploy ENUM is more likely to originate from overseas, where PSTN long distance rates are higher. Public ENUM is not expected to emerge until 2006 at least, according to one optimistic forecast.
    Click Here to View Full Article
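
    The mapping at the center of the debate is itself simple, and is defined in RFC 3761: an E.164 telephone number becomes a DNS domain by reversing its digits, separating them with dots, and appending e164.arpa; NAPTR records under that domain then point callers to SIP or other URIs. Only the well-known number-to-domain step is shown here.

```python
def enum_domain(e164_number):
    """Convert an E.164 phone number to its ENUM domain (RFC 3761)."""
    digits = [c for c in e164_number if c.isdigit()]  # drop '+', '-', spaces
    return ".".join(reversed(digits)) + ".e164.arpa"

# +1-202-555-0100 maps to a domain where NAPTR lookups would find the
# subscriber's registered URIs.
assert enum_domain("+1-202-555-0100") == "0.0.1.0.5.5.5.2.0.2.1.e164.arpa"
```

    The digit reversal mirrors DNS delegation: country codes sit nearest the e164.arpa root, so each national authority can administer its own branch of the tree, which is why opt-in decisions like the one debated above are made country by country.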