Association for Computing Machinery
Timely Topics for IT Professionals

About ACM TechNews

ACM TechNews is published every week on Monday, Wednesday, and Friday.


ACM TechNews is intended as an objective news digest for busy IT Professionals. Views expressed are not necessarily those of either Thunderstone or ACM. To send comments, please write to technews@hq.acm.org.
Volume 7, Issue 811: Friday, July 1, 2005

  • "U.S. Won't Cede Control of Net Computers"
    Associated Press (06/30/05); Jesdanun, Anick

    The U.S. Commerce Department, in what some view as a policy reversal, says it will retain oversight indefinitely of the 13 "root" servers that contain the government-approved list of roughly 260 domain suffixes and tell Web browsers and email programs how to route Internet traffic. In 1998, the department chose ICANN to decide which domain endings go on the list of approved suffixes, keeping veto power for itself. Commerce indicated it would cede control once ICANN met certain criteria. In a four-paragraph online posting of its decision, Commerce indicated that security threats and increased reliance on the Web for communications and commerce were behind its decision. "The signals and words and intentions and policies need to be clear so all of us benefiting in the world from the Internet and in the U.S. economy can have confidence there will be continued stewardship," says Commerce assistant secretary for communications and information Michael Gallagher. The U.S. has historically overseen the root servers because it funded much of the Internet's early development. Gallagher notes that Commerce would gladly see foreign governments take control of their own country-code suffixes. But some foreign countries are calling on the U.S. to relinquish its role and hand oversight of the servers to an international organization such as the U.N. International Telecommunication Union. Some countries may withdraw support for ICANN due to Commerce's announcement. The worst-case scenario would have countries establishing their own separate Domain Name Systems, meaning two users keying the same domain name could reach different Web sites, depending on their location. The U.N. World Summit on the Information Society, scheduled for November in Tunisia, will likely focus on the issue.
    Click Here to View Full Article

  • "Hewlett Cites Progress on Quantum Computer"
    New York Times (07/01/05) P. C6; Markoff, John

    Hewlett-Packard scientists Bill Munro and Tim Spiller have devised a new strategy for developing the quantum computer. Quantum computing, a technology with debatable potential, departs from today's transistor-based electronics and encodes information in "qubits," units that quantum principles allow to stand for 1s and 0s simultaneously. Proponents of the technology cite the expansive power that this flexibility offers, particularly when qubit machines are linked. Hewlett-Packard will receive a grant of up to $10 million from the Defense Advanced Research Projects Agency to support the research, and will provide $7.5 million of its own money. The theory the researchers are pursuing involves laser pulses inducing interaction among photons carrying quantum data. While intrigued, some scientists remain skeptical. In response to Munro and Spiller's paper, Umesh Vazirani, co-director of the Berkeley Quantum Information Center, cautions that while their findings are interesting, "optical quantum computing schemes are not regarded as the most practical." The leading approach in the field of quantum computing involves trapped ions. Through its grant, Hewlett-Packard will also explore potential applications for quantum computing, particularly in computer security.
    Click Here to View Full Article
    (Articles published within 7 days can be accessed free of charge on this site. After 7 days, a pay-per-article option is available. First-time visitors will need to register.)

  • "Summer Science Researchers Developing System to Synchronize Data on Cell Phone"
    Hamilton College (06/30/05); Lemanczyk, Emily

    Three students at Hamilton College are working with professor Mark Bailey on a project dubbed "Data Synchronization Between Workstations via Bluetooth-Enabled Cell Phones" that aims to develop synchronization technology for cell phones. Aram Kudurshian, Mike Gruen, and Erik Goulding are attempting to create a method for Bluetooth-enabled phones to automatically update a file from one workstation to another. Their research, funded by grants from McLouth and the National Science Foundation, aims to improve data portability for office workers who might access the same file both at home and at work. Their work builds on an earlier project that drew together 40 ACM research papers. Goulding cites the principal obstacles the team has encountered as "wrestling with poorly-written code designed by other people, trying to make it run on hardware it was never designed to work on, and expanding it to do things its creators never meant it to do."
    Click Here to View Full Article

  • "Female Interest in Technology Fields Crucial, Say Universities and Corporations"
    Kansas City infoZine (06/29/05); Nester, Janet

    Amid an overall decline in students' interest in IT and engineering, women especially are looking elsewhere for their careers. This falloff in interest is ill-timed, as the Department of Labor estimates that IT jobs will be among the fastest growing through 2012. IT has declined in stature since the dot-com bust, and for women, "there are no role models" anymore, said Microsoft's Revi Sterling, a fact that discourages women, who frequently are attracted to careers that make a difference. Sterling spoke at a congressional hearing on Tuesday examining high school education. In attempting to foster female interest in IT, the National Center for Women and Information Technology has partnered with the Girl Scouts of America to help overcome the stigma that IT is the province of men. Corporations and universities are joining the effort to showcase women's importance to the IT industry. Other initiatives have sought grant funding, such as the North Carolina New School Project, which seeks to establish between 40 and 50 schools with revamped curriculum standards. As women constituted 56 percent of all college students in 2003, reaching them is critical to the future of the IT industry. "Women won't say or think a field is attractive not because they think they can't do it but because they are not encouraged to go into higher level math or science," said Kathleen Joyce, undergraduate recruitment director for the College of Engineering and Computer Science at Syracuse University.
    Click Here to View Full Article

    For information regarding ACM's Committee on Women and Computing, visit http://www.acm.org/women.

  • "Net Pioneer Wants New Internet"
    Wired News (06/29/05); Baard, Mark

    David Clark, one of the chief architects of the Internet, has enlisted the National Science Foundation to help develop a "clean slate" Internet framework. The new network, which could be tested on the National LambdaRail, aims to rethink from the ground up how Internet users around the world are linked. Clark believes the current architecture is too old to permit new innovation, while the next generation would offer greater security and smoother commerce at much faster speeds. The current Internet's limitations dim the prospects of exciting new applications, such as tele-immersion, where 3-D stages are created for users to interact with each other virtually over the network. "Systems rigidify over time," Clark said of the difficulty of trying to adapt the current Internet. "Each of those incremental changes has interactions with the others. And each is harder to add than the last one. After a while, the effort-to-success ratio" becomes untenable. While some believe the current Internet can be salvaged through gradual change, Clark says the problem exists at the architectural level. Clark will begin using his $200,000 grant to hold a workshop this summer for security experts and network architects. His two principal concerns are enhancing security and solidifying the economic position of service providers. Clark emphasizes the imperative of dealing with security concerns at a fundamental level, rather than with the software patches he sees as ineffectual Band-Aids. "Look at phishing and spam, and zombies. Show me how six incremental changes are going to make them go away."
    Click Here to View Full Article

  • "Antispam Proposals Advance"
    CNet (06/29/05); Festa, Paul

    The Internet Engineering Steering Group (IESG) announced that it has adopted two competing antispam technologies, citing both as still "experimental." Microsoft, AOL, and others have been competing for control of the antispam market, which now appears to be divided between the Sender Policy Framework (SPF) and Sender ID. Microsoft backs Sender ID, which it sees as a more sophisticated version of SPF. Microsoft's Samantha McManus says, "We're glad to see Sender ID's experimental status, and we think email authentication is very important for addressing spam and phishing. That said, we definitely have more to do." Both technologies have been accepted by email providers, though the IESG, a division of the Internet Engineering Task Force (IETF), believes the experimental trial is necessary to solidify standards. As an alternative, Cisco backs Yahoo's DomainKeys as its authentication and antispam application. The IESG said, "Given the importance of the worldwide email and DNS systems, it is critical that future standards support their continued stability and smooth operation."
    Click Here to View Full Article
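
    As a rough illustration (the article does not show one), an SPF policy is published as a DNS TXT record for the sending domain, and receiving mail servers check whether the connecting server's IP address is authorized by it; the domain and addresses below are examples only:

    ```
    example.com.  IN  TXT  "v=spf1 ip4:192.0.2.0/24 include:_spf.example.net -all"
    ```

    Here example.com authorizes mail from the 192.0.2.0/24 address block (plus any hosts authorized by example.net's policy) and asks receivers to reject everything else via "-all". Sender ID reuses this record syntax but can additionally validate the purported responsible address taken from the message headers.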

  • "Stargazing, Internet-Style"
    Government Technology (06/27/05); Taylor, Paul W.

    The landscape of the Internet will change over the next 10 years; the only question is how. A recent report titled "The Future of the Internet" announced the findings of a Pew Internet & American Life Project and Elon University survey of 1,300 technology leaders, scholars, and industry members that tried to answer that question. A majority of respondents believe the Internet will blur the boundaries of education, work, and family life, offer students more choices, and alter significantly the availability and format of entertainment. Many also anticipate greater security issues and express concern for the network's safety as government, health care, and other sensitive arenas move inevitably toward the Internet. In fact, two-thirds of respondents said they expect the Internet will suffer at least one major attack in the next 10 years that will devastate the network's critical infrastructure. In light of the increased use of the Internet as an advocacy tool for political and religious extremism, more comprehensive monitoring was also cited as a necessary development. Due to public security concerns, participants believe online voting will be slow to catch on, with just one in three respondents agreeing that half of all votes will be cast online in 10 years. However, 42 percent feel the Internet will "substantially increase civic engagement in the next 10 years." The survey also warned of the Internet's potential to alienate the public that it is intended to serve, predicting that "institutions that have forgotten how to sound human" will die on the vine.
    Click Here to View Full Article

  • "Car System Lets Voice Drive the Web"
    Discovery Channel (06/30/05); Staedter, Tracy

    Faculty member Meirav Taieb-Maimon of Ben-Gurion University in Israel has designed a voice-activated search engine that could be used by drivers to navigate the Web. The system consists of two off-the-shelf Microsoft speech recognition components and custom software called Maestro that directs the movement of speech to text, text to search, and results to speech. If, for example, a driver wishes to find a restaurant in New York City, he would start by saying "Restaurants in New York City." The speech is converted to text and delivered to a "query builder" that puts the search into terms a search engine such as Google can comprehend. The query builder returns the search to Maestro, which sends it on to the search engine. Results are sent by Maestro to a speech component for the driver to hear. Results are organized into menu choices, such as price, location, and reviews, for the driver to choose from until a desired Web site is found. Taieb-Maimon is preparing a study to determine the amount of driver distraction while using the system to gauge the feasibility of installing it in automobiles.
    Click Here to View Full Article
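
    The pipeline just described (speech to text, text to query, query to search, results to speech) can be sketched in a few lines; the function names below are illustrative stand-ins, since the article does not publish Maestro's actual API, and the speech and search components are stubbed out:

    ```python
    # Illustrative sketch of a Maestro-style pipeline; the real system wires
    # off-the-shelf speech components to a web search engine, stubbed here.

    def recognize_speech(audio_phrase):
        # Stand-in for the speech-to-text component: assume perfect recognition.
        return audio_phrase.strip()

    def build_query(utterance):
        # "Query builder": reduce a spoken phrase to search-engine terms.
        stopwords = {"in", "a", "an", "the", "find", "me"}
        return "+".join(w for w in utterance.lower().split() if w not in stopwords)

    def search(query):
        # Stand-in for the search engine; returns results grouped into the
        # menu facets the article mentions (price, location, reviews).
        return ["%s: results for %s" % (facet, query)
                for facet in ("price", "location", "reviews")]

    def speak(results):
        # Stand-in for text-to-speech: render the result menu as one utterance.
        return "; ".join(results)

    def maestro(audio_phrase):
        # Maestro's role: route data through the four stages in order.
        return speak(search(build_query(recognize_speech(audio_phrase))))

    print(maestro("Restaurants in New York City"))
    ```

    The design point the article implies is that Maestro itself does no speech or search work; it only moves data between the components, so any stage can be swapped out independently.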

  • "CMU Puts Words in Ben Franklin's Mouth"
    Pittsburgh Post-Gazette (06/30/05); Spice, Byron

    Carnegie Mellon's Synthetic Interview technology powers a new exhibit in Philadelphia's Lights of Liberty Show where patrons can ask Ben Franklin questions, either from a list of 160 that are pre-prepared, or by typing their own using a list of keywords. Software searches the computer's database of 800 answers and replies with the most appropriate one. A projection of Franklin, who is actually actor Ralph Archbald, hovers ghostlike in the air and replies. CMU's Entertainment Technology Center, which developed the Synthetic Interview, says the technology could pave the way for future projects that seek to combine tourism and education. Medrespond, a health information company, has applied the technology to produce medical advice on the Web. The Franklin exhibit involved recording more than 100 hours of Archbald portraying Franklin answering all manner of questions; the footage is projected onto an angled pane of glass, creating the illusion of a floating specter, an effect known as the "Pepper's Ghost" illusion. In the event that someone types in a question that is off the wall or nonsensical, the exhibit has ample material to provide some type of answer. Lights of Liberty President Ann Meredith said, "It's the closest people will come to being able to speak to Benjamin Franklin." Synthetic Interview was invented in 1999 at Carnegie Mellon's Human-Computer Interaction Institute by senior systems scientist Scott Stevens and Michael Christel, a senior systems scientist in the School of Computer Science.
    Click Here to View Full Article

  • "The $100 Computer Is Key to India's Tech Fortunes"
    CNet (06/29/05); Kanellos, Michael

    India is winning the race to reach the 5 billion people who still do not use the Internet. The critical ingredient is cost, which the Indian company Novatium will emphasize as it unveils a basic home computer available for around $70. The price doubles with a monitor included, though Indian companies are addressing that by offering secondhand monitors. Low-cost technology has become a driving engine of India's technology economy. A researcher at the Indian Institute of Technology has developed a $1,000 automatic teller machine that doubles as an Internet kiosk for villages. In an effort to cut energy consumption, Jitendra Shah of the Centre for Development of Advanced Computing is studying ways to establish solar-powered computers linked with battery-powered PCs that work as servers. Computer prices remain a function of raw materials; although metal and plastic have become much cheaper, some say it is still untenable to offer a fully functioning PC for under $200, with the monitor and hard drive the components most resistant to price cuts. Intel seeks to defray the cost of a computer by spreading the price throughout a village in a cooperative program. India is home to several low-cost initiatives, such as Xenitis' Linux-powered $250 model and Via's Terra PC, and some see its market potentially embracing the otherwise unpopular thin client, which communicates through a server that stores all of its data and performs its calculations. PCs with limited capabilities could make inroads in India's economy, as a survey of bank employees revealed that 95 percent rely on their computer to perform only one function. Still, Novatium founder Rajesh Jain cautions: "Just because we are an emerging market doesn't mean we want an inferior product."
    Click Here to View Full Article

  • "Keeping an Eye on Domestic Appliances"
    Financial Times-IT Review (06/29/05) P. 7; King, Ben

    Innovators have long sought areas in the home that technology could improve. With advances in microprocessors and connectivity, companies such as Control4 are making those visions a reality. Control4 and the South Korean telecom outfit SK Telecom have been using ZigBee technology to facilitate wireless control of automated household items, such as a television or an iron. Ember CEO Jeff Gramer says lower prices and the flexibility of ZigBee's "mesh radio" system, which automatically reconfigures the network when a device is added or removed, have boosted its popularity. Gramer says, "The reason that it is really starting to take off in the home is that the cost of a ZigBee solution is under $5." However, competition has come from abroad, as the Danish company Zensys has developed a rival technology called Z-Wave, which the company offers for roughly half of ZigBee's price. Each company plans to ship 1 million units this year. Aside from internal competition in the market, promoters of household automation will have to demonstrate the relevance of their technology. Analysts have identified security, health care, and energy conservation as the three most likely avenues to mainstream wireless household automation. Non-invasive monitoring of the elderly and infirm could enable them to live independently for longer, and alerts of wasted energy would clearly save consumers money on their utility bills. Despite lingering concerns over the ease with which the automated household could be managed and whether existing nodes are strong enough for certain complex security applications, developers are aiming for the sky. "If every household did have 100 or so of these devices, then you get up to the billions very quickly," says Gartner's Nick Jones.
    Click Here to View Full Article

  • "Why Linux Needs Rexx"
    NewsForge (06/28/05); Fosdick, Howard

    Some Linux proponents see in Rexx the potential to give Linux the boost it needs to overtake the Windows desktop, writes Howard Fosdick, author of "Rexx Programmers Reference." IBM invented the scripting language many years ago, and its advocates cite numerous advantages, including free access, portability, standardization, and a widely established community of users. Rexx enjoys a preeminent position on mainframes, and is still widely remembered for having powered notable early desktops such as the Amiga OS and OS/2. Rexx also carries the important benefit over Perl of being an easy language to learn, while remaining powerful enough to be relevant in today's climate. Unlike Python, Rexx appeals to the casual user who might not naturally approach a problem on an object-oriented basis. Tcl/Tk falls short on the compatibility front, and it lacks Rexx's proven history. Two varieties of Rexx are available: "classic" Rexx and Open Object Rexx, which is completely object-oriented, offering programmers a choice based on their needs and level of comfort with the language. For Linux to compete, it needs a language such as Rexx that reaches out to a broad spectrum of users, rather than one confined to those with a high degree of technical expertise.
    Click Here to View Full Article

  • "USC Voice-to-Voice Translation Machine Perfects Bedside Manner"
    USC Viterbi School of Engineering (06/28/05); Mankin, Eric

    Researchers at the USC Information Sciences Institute unveiled the Transonics Spoken Dialog Translator, a natural-language-based spoken-word translation device, at the recent Association for Computational Linguistics conference. Transonics, developed by a multi-disciplinary USC team of scientists and engineers, translates speech between English and Persian. The technology, though still prone to many grammatical errors, is expected to have its greatest impact in the medical field, as developers anticipate Transonics breaking down language barriers in ambulances and emergency rooms. Recognizing spoken words is challenging for a machine, which in translation must interpret accents and overcome background noise. Transonics' lead developer, Srikanth Narayanan, an associate professor of electrical engineering, computer science and linguistics at the USC Viterbi School of Engineering and director of the Speech Analysis and Interpretation Laboratory (SAIL), says, "Fluent two-way machine voice translation is one of the holy grails of engineering." The Linux-powered system links a doctor and patient through a laptop, headphones, and a keypad. After a doctor asks a question, the computer's best guesses appear for the doctor to choose from, and the patient then hears the chosen question through the headphones. While he is mindful of the system's flaws, Narayanan is optimistic about Transonics' future. "We are years away from perfecting it, but we think the choices we have made about how to go about creating such a system are working. We hope to have something that will be useful in emergency rooms or ambulances within two years or so."
    Click Here to View Full Article

  • "Teacher's Little Helpers"
    University of California, San Diego (06/22/05); Kiderra, Inga

    Researchers at the University of California, San Diego, in partnership with Sony Intelligence Dynamics Laboratories, are exploring the educational value of social robots through the Robot Using Bayesian Inference (RUBI) Project. RUBI, modeled after Sony's QRIO robot, runs on four non-motorized wheels. It has two arms and a head. Two cameras serve as its eyes, with a third attached to the back for peripheral vision. The five CPUs that constitute its body are linked to another 24 at the lab that control RUBI's experiments. RUBI has a touch screen on its stomach that enables interaction with children as the robot teaches them songs and games. "Our team is working on understanding what it takes to have a natural interaction between robots and humans," said Javier Movellan, director of the Machine Perception Lab that is conducting the RUBI Project. Movellan believes the hardest problem roboticists face is how to simulate the most basic human interactions that create relationships, such as a well-timed smile or interpreting another's emotions. RUBI and QRIO are put to work daily in a classroom where their interactions with children reveal areas in which the robots need to improve. Currently RUBI, who serves as an assistant teacher, is enjoying greater popularity with the children than QRIO, who functions as their peer. "The next step is to improve the interactivity of the dance and figure out how to re-attract the attention of the children," said Sony's Fumihide Tanaka. Findings from the first phase of the project will be presented at the IEEE International Conference on Development and Learning in Osaka, Japan next month.
    Click Here to View Full Article

  • "Introducing SKOS"
    XML.com (06/22/05); Mikhalenko, Peter

    The W3C recently introduced the Simple Knowledge Organization System (SKOS), a method by which machines can understand knowledge organization schemes such as thesauri, terminologies, and glossaries. The SKOS Core Vocabulary, an RDF application, can be joined with similar data through Semantic Web applications. Because its data carries machine-readable schema definitions, the Semantic Web has the ability to interpret information. Its next great advance will likely come from the Web Ontology Language (OWL), which uses Description Logic to formally classify the semantics of Web documents. OWL can also be helpful in generating metadata, though its use demands a time-consuming level of human specialization that can be costly. SKOS supports lookup by abbreviation or acronym, and enables symbolic labeling, wherein a concept is labeled with an image. SKOS also provides for the referencing of a document through its URI. Further, SKOS permits the classification of data according to its scope, as it can express the relationship between narrower and broader concepts. Among the issues still facing SKOS are clarifying its relationship to RDFS and OWL ontologies, its mapping vocabulary, and concept-scheme versioning, wherein multiple URIs can be used among separate namespaces that deal with differing vocabularies.
    Click Here to View Full Article
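
    As a toy illustration of the relations SKOS defines, the sketch below models a tiny concept scheme with plain Python dictionaries; a real SKOS scheme would be serialized as RDF and queried with an RDF toolkit, and every URI and label here is invented:

    ```python
    # Toy model of SKOS-style properties (prefLabel, altLabel, broader/narrower)
    # using plain dictionaries; real SKOS data is RDF, not Python structures.

    concepts = {
        "http://example.org/scheme#animals": {
            "prefLabel": "animals",
            "narrower": ["http://example.org/scheme#cats"],
        },
        "http://example.org/scheme#cats": {
            "prefLabel": "cats",
            "altLabel": ["felines"],   # alternative label, e.g. a synonym or acronym
            "broader": ["http://example.org/scheme#animals"],
        },
    }

    def find_by_label(label):
        # Look a concept up by its preferred or any alternative label.
        for uri, props in concepts.items():
            if props.get("prefLabel") == label or label in props.get("altLabel", []):
                return uri
        return None

    def broader_of(uri):
        # Follow broader links toward more general concepts, as a search
        # application might when widening a query's scope.
        return concepts[uri].get("broader", [])
    ```

    The narrower/broader pair is what lets an application widen or narrow a search along the scale the article describes, and altLabel is what makes lookup by abbreviation or acronym possible.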

  • "E-Government Run Amok!"
    Government Computer News (06/27/05) Vol. 24, No. 16; Jackson, Joab

    The National Science Foundation is studying ways in which computer and information sciences can enhance government participation through its Digital Government Research Program. The NSF showcased its research and technologies earlier in the year in Atlanta during the Digital Government Research conference. Stuart Shulman, an assistant professor of information sciences and public administration at the University of Pittsburgh, used an NSF grant to study the 540,000 email comments the Environmental Protection Agency received in the first half of 2004 regarding a proposed rule; of the 1,000 messages he sampled, only 174 contained original material rather than unaltered text supplied, for example, by an advocacy group. Meanwhile, software designed to catch form, or near-duplicate, emails was on display from Carnegie Mellon University professor Jamie Callan and graduate student Hui Yang. Other research includes software developed by Stanford University researchers that can sort and categorize email by the provision in a bill to which each message relates. Stanford researcher Gloria Lau says the program could help agency workers analyze public feedback by quickly determining the controversial aspects of a proposed bill or policy. The NSF initially saw the Internet as a one-way tool in which Web sites would serve as electronic brochures, but now acknowledges its emergence as a conduit that citizens can use to interact with government. The challenge is to determine how to deal with the vast volume of email governments receive, and whether it is a good thing. Meanwhile, an NSF-backed study by researchers at the University of Pennsylvania is evaluating whether online forums can enable private citizens to become informed and active participants in policy formation. Although reluctant at first, through group interaction project volunteers grew more confident about their ability to analyze issues.
    Click Here to View Full Article

  • "Group Rethink"
    Technology Review (06/05) Vol. 108, No. 6, P. 80; Fitzgerald, Michael

    New York writer James Surowiecki argues in "The Wisdom of Crowds" that the collective intelligence of large groups can outsmart the most knowledgeable experts. As an example, Surowiecki cites the stock market's reaction against booster-rocket manufacturer Morton Thiokol months before the federal government went public with who was largely at fault for the Challenger shuttle disaster, and he adds that such a crowd needs diversity of opinion, independence of opinion, decentralization, and a mechanism for aggregating opinions to reach a decision. Groups can serve as parallel-processing decision engines for solving problems in public policy and other areas, and although Surowiecki does not offer a reliable way of building those decision engines, technologists, entrepreneurs, and venture capitalists have. Communities are responsible for open-source software, a movement that has spurred more interest in technology that could lead to greater connectedness; and Google and eBay are thriving off the collective behavior of large groups of people. Online social networks, Web logs, and wikis are connecting technologies that can yield a high group IQ. Thomas W. Malone of MIT's Sloan School of Management, author of "The Future of Work," maintains that the future of industry lies in flat management ranks, which companies such as Google achieve by using internal blogs. Meanwhile, the success and failure of Howard Dean's use of the Internet has been explored in "The Revolution Will Not Be Televised," by his former campaign manager Joe Trippi.
    Click Here to View Full Article

  • "Are We There Yet?"
    Software Development (06/05) Vol. 13, No. 6, P. 26; Wiegers, Karl

    "Software Requirements" author Karl Wiegers recommends that software organizations develop SMART (specific, measurable, attainable, relevant, and trackable) product release criteria to help ensure that products will not come up short when they are rolled out. It must be determined what can be measured to rate how close developers are to satisfying each criterion. Project stakeholders responsible for each release decision must be identified, and the process each group uses must be described. Wiegers suggests that monitoring defects uncovered during development and testing can serve as a quality indicator, while historical data on defect densities from earlier projects can be used to estimate the number of bugs likely still buried in the next product. Testing-related release criteria recommended by the author include 100 percent successful integration and system test cases; successful system and user acceptance tests for specific functionality; fulfillment of predetermined testing coverage targets for code or requirements; and a mean time between failures of at least 100 hours. Satisfaction of requirements for critical quality attributes (reliability, security, integrity, portability, usability, efficiency, interoperability, etc.) is also important. All products demand a minimal set of functionality that delivers appropriate customer value, and prioritizing requirements thoughtfully enables developers to roll out useful products rapidly and reserve less crucial needs for subsequent releases. Wiegers advises that configuration issues such as product reproducibility and proper installation across all target platforms be settled prior to rollout, and that software release and support policies and procedures be disseminated to and understood by stakeholders.
    Click Here to View Full Article
    (Access to this article is available to paid subscribers only.)
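
    The idea of measurable release criteria can be made concrete with a small sketch; the metric names and the "no open critical defects" threshold below are illustrative inventions, while the 100-percent test pass rate and the 100-hour mean time between failures come from the criteria the article lists:

    ```python
    # Sketch of SMART release criteria expressed as measurable checks over a
    # metrics dictionary; each criterion is a named predicate.

    criteria = {
        "all system tests pass": lambda m: m["tests_passed"] == m["tests_run"],
        "MTBF of at least 100 hours": lambda m: m["mtbf_hours"] >= 100,
        "no open critical defects": lambda m: m["open_critical_defects"] == 0,  # illustrative
    }

    def release_readiness(metrics):
        # Return (ready, unmet) so stakeholders can see exactly which
        # criteria block the release, not just a yes/no answer.
        unmet = [name for name, check in criteria.items() if not check(metrics)]
        return (not unmet, unmet)

    ready, unmet = release_readiness({
        "tests_passed": 512,
        "tests_run": 512,
        "mtbf_hours": 140,
        "open_critical_defects": 0,
    })
    ```

    Listing the unmet criteria by name, rather than returning a bare pass/fail, is one way to keep the "trackable" property of SMART criteria visible to every stakeholder group.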

  • "The Answer Is 42 of Course"
    Queue (06/05) Vol. 3, No. 5, P. 34; Wadlow, Thomas

    Independent security consultant Thomas Wadlow writes that the role people play in online security makes absolutes irrelevant, and he advises companies to base the defense of their security systems on the fundamental question of how the network can be designed so that it is "safe enough." Many cases of successful network intrusion stem from either lax design or highly motivated hackers, leading Wadlow to formulate a two-pronged strategy to defend against intruders with sufficient skill, motivation, and opportunity: The first goal is to design the network to demand a very high level of skill and motivation from an attacker while presenting as little opportunity as possible for successful attacks; the second goal is to determine where and how much effort to devote to the process. In the category of skill, questions to be asked include how hackers build their skills with off-the-shelf software; how companies can maximize the amount of skill hackers need to breach networks and minimize the amount of skill needed to operate network defenses; how the acquisition of network knowledge by attackers can be prevented; and how to tell that a network is under attack. Questions to be raised on the subject of motivation include how or why people are provoked to attack the network; whether the company's defensive actions encourage or discourage an attacker's motivation; and what would motivate people not to attempt intrusions. To keep a hacker's opportunities for a break-in as low as possible, the company should clearly identify those opportunities, and confirm through constant measurement that all network entrances and exits are known and that the network is built in accordance with company assumptions.
Because the most skilled, motivated, and opportunistic hackers often work for the company, care must be taken to establish which employees or ex-employees are trustworthy and which are not, who the most potentially dangerous insiders are, and how to keep the people who could cause a security problem happy, engaged, and mindful of both the potential for trouble and the fallout from an intrusion.