Association for Computing Machinery
Timely Topics for IT Professionals

About ACM TechNews

ACM TechNews is published three times a week, on Monday, Wednesday, and Friday.


ACM TechNews is intended as an objective news digest for busy IT Professionals. Views expressed are not necessarily those of ACM. To send comments, please write to technews@hq.acm.org.
Volume 7, Issue 781:  Wednesday, April 20, 2005

  • "Can Johnny Still Program?"
    CNet (04/19/05); Frauenheim, Ed

    ACM President David Patterson says the United States' poor showing in the ACM International Collegiate Programming Contest could be partly attributable to the fact that programming skill is not really a matter of national pride in America. He notes the contest's past foreign winners have received personal commendations from their countries' leaders, and thinks that a similar reward system in the U.S. would be beneficial to American programmers and to programming in general. Patterson also says America's long-standing status as the world's computer and software industry leader has bred complacency. He says other countries may have performed better in the competition because they teach programming in a different way than it is taught in the U.S., and wonders whether U.S. programmers' skills might improve if American colleges and universities broadened their programming focus. Patterson also perceives a link between declining or flat levels of IT research funding in the U.S. computing industry and American schools' under-emphasis on math and science. "We've concentrated so much in removing margin out of everything that there's not the money around that there used to be to be able to do the research--that funded the Bell Labs and the Xerox PARCs in the past," he says. Patterson believes that cutbacks in federal IT research funding are also having an adverse effect. And he adds that the trend of offshoring IT work, although not as dire as many people think, is discouraging U.S. students from pursuing careers in programming and other IT fields.
    Click Here to View Full Article

  • "Carter-Baker Commission Weighs U.S. Voting Changes"
    Bloomberg (04/18/05); Arnold, Laurence

    Underfunding and a lack of paper ballots were some of the problems with America's voting system cited by witnesses appearing before a commission headed by former president Jimmy Carter and former Secretary of State James A. Baker III at an April 18 hearing. "The lack of money is the single most compelling explanation for the incompetence that might exist in election administration," stated witness and Santa Monica election law attorney Colleen McAndrews. Stanford University computer science professor David Dill told the commission that electronic voting does not allow voters to confirm that their votes were accurately recorded, nor does it assure them that their votes are unalterable once they are cast. A vote-by-vote paper trail advocated by Dill and other computer scientists and voting-rights activists is currently mandatory in a dozen U.S. states, while supporters of paperless voting systems cite the machines' ability to automatically detect over-voting and their adaptability for handicapped or non-English-reading voters. Carter said during a post-hearing press conference that the commission could conceivably recommend an e-voting system that leaves a paper trail, although no specific technology will likely be endorsed. Another proposal presented to the commission was the designation of Election Day as a national holiday, which McAndrews claimed would help improve voter turnout, reduce crowding, and eliminate understaffing at polling places. The 2002 Help America Vote Act allocated $300 million for 30 states' election modernization efforts, but witnesses at the hearing said the legislation should merely serve as a jumping-off point for initiatives to increase turnout and lessen the likelihood of election fraud.
    Click Here to View Full Article

    To read about ACM's e-voting activities, visit http://www.acm.org/usacm.

  • "Summarizer Ranks Sentences"
    Technology Research News (04/27/05); Patch, Kimberly

    A University of Michigan research project funded by the National Science Foundation has yielded LexRank, a new method for summarizing multiple documents on the same topic by ranking sentences according to their importance. The technique, which will be incorporated into the researchers' NewsInEssence Web site, uses a lexical centrality algorithm that measures how lexically similar sentences are. "Lexical similarity can be thought of as a measure of the word overlap between two sentences," explains University of Michigan professor of information, electrical engineering and computer science, and linguistics Dragomir Radev. The algorithm makes it possible to select the boundary at which two sentences are considered to resemble each other, and Radev says each word's individual contribution is measured by its "relative informativeness." The system assigns importance to a sentence if it bears a similarity to many other sentences that are themselves important. "The sentences with the highest scores...are considered to contain the gist of the document and are presented as the multi-document summary," reports Radev. The professor says the research uncovered a resemblance between the language patterns in the LexRank technique and apparently unconnected natural occurrences such as link patterns among Web pages, electrical components, and social exchanges. Radev says the lexical centrality algorithm could potentially be applied to automatic translation and question answering.
    Click Here to View Full Article
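    The ranking idea described above -- score a sentence highly if it overlaps with many other high-scoring sentences -- can be sketched in a few lines of Python. This is an illustrative simplification, not the researchers' code: it uses plain bag-of-words cosine similarity where LexRank uses tf-idf weighting, and the threshold and damping values are arbitrary choices.

    ```python
    # Minimal sketch of LexRank-style sentence ranking (illustrative, not the
    # authors' implementation). Similarity is cosine over raw word counts;
    # sentences are scored by power iteration over the similarity graph.
    import math
    from collections import Counter

    def similarity(s1, s2):
        """Cosine similarity over bag-of-words counts of two sentences."""
        w1, w2 = Counter(s1.lower().split()), Counter(s2.lower().split())
        overlap = sum(w1[w] * w2[w] for w in w1 if w in w2)
        norm = (math.sqrt(sum(c * c for c in w1.values()))
                * math.sqrt(sum(c * c for c in w2.values())))
        return overlap / norm if norm else 0.0

    def lexrank(sentences, threshold=0.1, iters=50, damping=0.85):
        """Return sentence indices, most central first."""
        n = len(sentences)
        # Keep only edges whose similarity clears the chosen threshold.
        adj = [[1.0 if i != j and similarity(sentences[i], sentences[j]) > threshold
                else 0.0 for j in range(n)] for i in range(n)]
        scores = [1.0 / n] * n
        for _ in range(iters):  # PageRank-style power iteration
            new = []
            for i in range(n):
                rank = sum(scores[j] * adj[j][i] / max(sum(adj[j]), 1)
                           for j in range(n))
                new.append((1 - damping) / n + damping * rank)
            scores = new
        return sorted(range(n), key=lambda i: -scores[i])
    ```

    Under this sketch, a sentence with no lexical overlap with the rest of the document sinks to the bottom of the ranking, while mutually similar sentences reinforce one another's scores.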

  • "Next Gen Weighs a 'Secure' Future"
    Wired News (04/18/05); Zetter, Kim

    Four Seattle teenagers discussed topics such as privacy, blogging, and First Amendment rights in a panel moderated by UC Berkeley graduate student Danah Boyd and Electronic Frontier Foundation attorney Kevin Bankston at ACM's Computers, Freedom, and Privacy conference last week. Seventeen-year-old Morgan said he does not like the idea of companies keeping tabs on his location through his cell phone, which in his opinion constitutes a breach of privacy, while Elisabeth, 17, argued that location tracking should be a feature the user controls. Morgan spoke out against the idea of schools filtering the online content students can access, arguing that "schools should not be making moral decisions for their students." Most of the panelists agreed that blogging allows teens to safely vent their feelings; in response to Boyd's comment that teenage blogging is frowned upon because of the risk of exposure to online predators, Elisabeth and Steve, 17, countered that there is little risk as long as bloggers practice common sense, such as posting bogus information about their identities to protect themselves. Cathy, 17, said the First Amendment is not fully understood or valued by teenagers, who have really never had to live with any restrictions. The panelists were divided on how their generation values privacy: Morgan said privacy is an important concern among his age group, while Cathy claimed that apathy is rampant because teens are out of touch with politics and government; Max, 13, said his peer group also ascribes little relevance to privacy, as the issue has never had much of an impact on them. Steve and Morgan argued that parents should discuss Internet privacy with their kids more. Elisabeth remarked, "it's really hard for parents and educators to talk to us like they understand [technology] because it doesn't seem like adults are using these things in the same way that we are."
    Click Here to View Full Article

  • "Making Video Easier to Search and Find"
    IST Results (04/19/05)

    The IST BUSMAN project, considered one of the most outstanding research and development projects underwritten by the European Union's Fifth Framework program, has produced the basic components of a system for searching, retrieving, and delivering video content from PCs and mobile devices. BUSMAN, which was completed last December, yielded an MPEG-7-based content management system and toolkit. Project participant Simon Waddington of Motorola Labs says end-user input was courted at the very beginning to determine the system's requirements; he says BUSMAN's ideal users are smaller content providers, while average users can employ BUSMAN to easily access video content via a standard PC or mobile phone. Waddington notes that BUSMAN's novelty is derived from the diverse ways the system can be used to tag and retrieve video content, which include semantic keyword- or free text-based searching and query-by-example. The BUSMAN system also assists with intellectual property rights management: Waddington says content creators can use advanced watermarking methods to add almost invisible labels to the video content. "When the user views the video, the system can extract the Digital Item Identifier and thus provide a link to a rights management page," he explains. An important element of the BUSMAN system is the "relevance feedback" method, which allows users to supply feedback on how relevant retrieved images are to their original query. BUSMAN could be used to search through video content such as football matches, city guides, music videos, instructional videos, or history simulations for architectural sites. BUSMAN project partner Queen Mary University in London has incorporated the technology into its educational programs.
    Click Here to View Full Article

  • "Studies Recharge Computer Science"
    Yale Daily News (04/20/05); Poppick, Susie

    Yale's Computer Science Department is developing a wide range of new technologies all focused on delivering practical applications. Computer science professors Holly Rushmeier and Julie Dorsey head a lab working on interrelated projects for devising new computer graphics applications, among them new techniques for creating and editing geometric models from 2D sketches, and the accurate virtual reconstruction of historical sites and objects. "Our overall goal is to make graphics easier for people to use and create," says Rushmeier. Meanwhile, computer science professor Brian Scassellati's lab is developing robots such as AK Watson, a humanoid machine that can determine a person's emotional state from the tone of their voice, recognize itself in a mirror as well as other people, and imitate an infant's movements with basic hand-eye coordination. Scassellati also is working on a joint venture between his lab and the Yale Child Study Center that seeks to collect more qualitative clinical data about autism in children through the use of robots such as Watson. Another Yale Computer Science Department project is FLINT, which is coordinated by department director of undergraduate studies Zhong Shao. FLINT's goal is the development of technology that guarantees bug-free, secure, and reliable software, with an eye toward commercialization. "Instead of fixing buggy software after the fact...we develop new technologies that allow people to write what we call 'certified software,'" Shao says; certified software would feature a logical proof that could be verified by a "proof-checker" program.
    Click Here to View Full Article

  • "EU Task Force to Study IT Critical Infrastructure"
    IDG News Service (04/18/05); Blau, John

    The European Union has organized a task force to identify the 25 member states' efforts to protect the critical infrastructure against cyberthreats, as well as determine the needs of telecom operators, power companies, and other critical-infrastructure providers. The task force is part of the EU's two-year Critical Information Infrastructure Research Coordination (CI2RCO) project announced on April 15. "We want to bring together experts across the European Union, learn more about their programs and how we can cooperate in curbing what we view as a global problem," says Fraunhofer Institute for Secure Information Technology (SIT) director Paul Friessem. He says the task force also intends to work with experts from America, Australia, Canada, and other nations outside the EU. SIT's fellow CI2RCO collaborators include the German Aerospace Center, Industrieanlagen-Betriebsgesellschaft mbH, Ecole Nationale Superieure des Telecommunications, the Netherlands Organization for Applied Scientific Research, Ernst Basler+Partner, and the Italian National Agency for New Technologies, Energy, and the Environment. Friessem says the task force intends to give the European Commission a precis of the critical infrastructure security research situation over the next several months so that officials in Brussels can resolve the issue in the impending Seventh Framework Program.
    Click Here to View Full Article

  • "Collaboration to Create Better Computer Systems"
    Vanderbilt Hustler (04/18/2005); Barber, Hedda

    Computer science professors and students from Vanderbilt University will work with power grid and telecommunications companies to build computer systems that are more reliable. The university's Institute for Software Integrated Systems (ISIS) will work with Oak Ridge Laboratory, the Tennessee Valley Authority, and BellSouth to build more trustworthy systems. ISIS will participate in the Team for Research and Ubiquitous Secure Technology (TRUST), thanks in part to a $3 million NSF grant over the next five years; eight other schools are also involved. "TRUST is a highly visible national center which brings together the best scientists and research groups to solve a problem of national importance," says ISIS director Janos Sztipanovits. In addition to building test beds that assess system risks and models that analyze failures, ISIS creates backups and new technologies for responding to problems. "It will lead to a broader interpretation of the engineering curricula by using a more holistic approach to systems design incorporating elements of economic and public policy and social issues," Sztipanovits notes.
    Click Here to View Full Article

  • "Supercomputing Power Made Real"
    BBC News (04/17/05); Twist, Jo

    Making supercomputing applicable to everyday real-world problems is the goal of IBM's new Center for Business Optimization, which is directed by professor Bill Pulleyblank, former leader of IBM's Blue Gene project. Supercomputers are currently being used in such areas as climate forecasting, tsunami prediction, and protein structure modeling, but the technology's maturation will expand supercomputing's presence to postal services, car safety simulations, the special effects industry, and other sectors. Pulleyblank is particularly enthused about the impact supercomputing power could have in the health care industry, and foresees surgical simulations through manipulation of real-time 3D imagery, more targeted and precise treatments resulting from the digitization and integration of patient medical histories, and DNA modeling and analysis. The IBM Blue Gene system is the world's fastest supercomputer with a peak performance of 135.5 teraflops, which is expected to increase to 360 teraflops in 2005. The rise in chip performance that made Blue Gene possible stems from Moore's Law, the computing industry's performance benchmark. Moore's Law author and Intel co-founder Dr. Gordon Moore predicts that the physical limits of silicon-based chip technology will be reached within one to two decades, and researchers around the world are investigating alternatives such as nanotechnology and quantum computing. Pulleyblank, however, maintains that silicon's reliability will make it the technology of choice for high-performance supercomputers for the foreseeable future.
    Click Here to View Full Article

  • "The Art of Mobile Technology"
    Boston Globe (04/18/05) P. D1; Perlman, Stacey M.

    The expansion of practical mobile phone applications is helping spawn new forms of creative expression and communication. One example is Yellow Arrow, a New York City-based public space art project in which participants post yellow, arrow-shaped stickers around the metropolis to mark locations of personal significance; each sticker sports a unique code that can be transmitted to a cellular phone through text messaging, allowing others to access the poster's message. A similar psychogeographic project based in Canada features ear-shaped signposts encoded with site-related anecdotes that cell phone users can listen to by calling a telephone number and entering the sign's particular code. The project's creative director, Shawn Micallef, says the goal is to add mystique to locations: "People ignore a lot of stuff in our surroundings, but once you lay a narrative on it, it becomes a place," he explains. The field of psychogeography was started about five decades ago, but Glowlab.com creative director Christina Ray notes that psychogeographic projects have only recently expanded into public spaces. "Sometimes the more you need people to participate, the harder it is to start them," she says. Another project of note is Grafedia, in which cell phone users can send blue, underlined words posted anywhere to a Web address and receive an associated image in return. Grafedia creator John Geraci, a New York University grad student, says the project demonstrates the ubiquity of the Internet.
    Click Here to View Full Article

  • "New Sensors Detect Speech Without Sound"
    ABCNews.com (04/19/05); Eng, Paul

    The Defense Advanced Research Projects Agency's (DARPA) Advanced Speech Encoding project is developing non-acoustic sensor technologies that could have uses outside of the military communication applications they are designed for. The initial phase of the project yielded an "active noise cancellation" headset that California-based AliphCom is marketing for consumer cell phones; the device is equipped with a conventional microphone and a sensor that measures vibrations from a person's jaw, and speech is distinguished from background noise via processors that compare the electric signals produced by the mike and the sensor and generate a signal that filters out the noise. Another technology being developed under the aegis of the Advanced Speech Encoding project is the Tuned Electromagnetic Resonance Collar (TERC), a capacitor worn around the neck that is sensitive to tiny movements in the wearer's vocal cords. The capacitance changes are measured and processed by microchips and converted into synthesized human speech by computers running custom-made software. However, the TERC technology is still unable to detect plosives and fricatives, and the computers needed to translate the digital signals into speech are cumbersome. TERC principal investigator Donald Brown of Worcester Polytechnic Institute notes that although DARPA has ceased funding for further research, the technology's potential applications beyond clear communications could renew interest. DARPA's Jan Walker says the planning of the Advanced Speech Encoding project's second phase is underway. Prospective technologies include sensors from NASA's Ames Laboratory that use electrodes for detecting sub-vocal or even silent speech.
    Click Here to View Full Article

  • "Some Say ICANN Too Heavy-Handed"
    United Press International (04/18/05)

    ICANN's recent decision to approve two new top-level domains--.travel and .jobs--has once again sparked debate over the organization's domain-approval process. The Progress and Freedom Foundation (PFF), a non-profit think tank that supports free markets and limited government intervention, has criticized ICANN for blocking the efforts of some companies to introduce new domains and services. "If there are companies that want to run those domains and they meet some sort of minimal qualifications, these things should be approved as a matter of course," says PFF vice president of research Tom Lenard. But while the PFF supports the approval of more domains and services such as VeriSign's SiteFinder service, which last year sparked a lawsuit between the domain operator and ICANN, the National Research Council and ICANN officials say the Internet's security and operations could be compromised if new domains were approved too quickly. Both the National Research Council and ICANN recently released reports cautioning that the rapid addition of new generic domains could overload root servers. However, this concern has also been disputed among Internet experts. Karl Manheim, communications law professor at Loyola Law School in Los Angeles, says some parties, "including the engineering community, feel that the pace of expansion could pick up substantially without jeopardizing the root or DNS."
    Click Here to View Full Article

  • "Robot Walks, Balances Like a Human"
    CNN (04/18/05)

    University of Michigan scientists say they have created a robot called "Rabbit" that is the first to resemble a human in the way it walks and balances. Instead of feet, Rabbit has stilts that can pivot on a point, and if the robot is pushed, it can step forward and regain its balance. Rabbit's locomotion is based on a theory described in a recent issue of the International Journal of Robotics Research. "It's a matter of understanding enough about the dynamics of walking and balance so that you can express with mathematical formulas how you want the robot to move, and then automatically produce the control algorithm that will induce the desired walking motion on the very first try," says Jessy Grizzle, a professor of electrical engineering and computer science at the university. Grizzle believes more affordable human prosthetics and rehabilitative walking aids for spinal injury patients could result from the development. The concept also has potential applications in stair-climbing machines for the home, and in robots for navigating difficult terrain.
    Click Here to View Full Article

  • "Captcha the Puzzle"
    Science News (04/16/05) Vol. 167, No. 16; Peterson, Ivars

    Over the last few years, computer scientists have developed CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart), a security measure built on computer programs that automatically generate and grade puzzles that most people can solve without difficulty, but that current programs cannot. One type of CAPTCHA puzzle presents distorted text that users must decipher, while others present pattern recognition problems, distorted imagery, or even sound puzzles. CAPTCHAs were originally developed by IBM's John Langford and Carnegie Mellon University's Luis von Ahn, Manuel Blum, and Nicholas Hopper as a solution to the problem of spammers using bots to automatically sign up for scores of free email accounts for the purpose of distributing junk mail. Yahoo! and other companies now employ CAPTCHAs to confirm that real people are participating in Internet transactions, email account registrations, online voting, and other activities. Von Ahn, Blum, and Langford write in Communications of the ACM's February 2004 issue that image- and sound-based CAPTCHAs are inaccessible to visually and hearing impaired Web users. Meanwhile, a recent paper published in the College Mathematics Journal reports that certain kinds of text-based CAPTCHAs can be cracked using fairly basic mathematical methods, and the authors recommend the use of nonstandard fonts to eliminate such a vulnerability. Von Ahn and his collaborators think the arms race between CAPTCHA authors and CAPTCHA hackers is positive no matter what the outcome. "Either the CAPTCHA is not broken and there is a way to differentiate humans from computers, or the CAPTCHA is broken and a useful [artificial intelligence] problem is solved," they write.
    Click Here to View Full Article
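    The generate-and-grade loop that the article describes can be sketched as follows. This is a hedged illustration, not any deployed system: the token scheme, the in-memory store, and the five-minute expiry are assumptions, and the visual distortion step that actually defeats OCR bots is elided (a real system would render the answer as a warped image).

    ```python
    # Illustrative sketch of a CAPTCHA server's generate-and-grade cycle.
    # The distortion that makes the text hard for programs to read is not
    # shown; `answer` would be rendered as a warped image for the user.
    import random
    import string
    import time

    CHALLENGES = {}  # token -> (answer, expiry); a real server would persist this

    def generate_challenge(length=6, ttl=300):
        """Create a random challenge and a token the client echoes back."""
        answer = "".join(random.choices(string.ascii_uppercase + string.digits,
                                        k=length))
        token = "".join(random.choices(string.ascii_lowercase, k=16))
        CHALLENGES[token] = (answer, time.time() + ttl)
        return token, answer

    def grade(token, user_input):
        """Accept a response only once, case-insensitively, before expiry."""
        entry = CHALLENGES.pop(token, None)  # one attempt per token
        if entry is None:
            return False
        answer, expires = entry
        return time.time() < expires and user_input.strip().upper() == answer
    ```

    Consuming the token on the first grading attempt is what stops a bot from brute-forcing a single puzzle with repeated guesses.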

  • "What IT Women Want"
    Computerworld (04/18/05) P. 33; Melymuka, Kathleen

    A virtual roundtable of successful businesswomen moderated by Kathleen Melymuka discussed the challenges faced by women in IT and what recruiters and employers should do to attract and retain them. Scites Associates President Jan Scites said "the fundamental issue for women is that very few are going into IT," while Walk & Associates President Mary Anne Walk warned that a lack of sufficient development of women in all professional sectors will lead to a 35 million-person labor shortfall by 2031. Walk added that IT organizations are still male-centric and not quick to accept women's views, while consultant Kim Shand said that many IT organizations are in the dark about IT women's requirements and are not actively trying to learn what those requirements are. Analyst Dorie Culp said most women's problems in IT are derived from the prevailing business culture, which marginalizes women and either lacks flexible work policies or discourages women from taking advantage of such policies. The panelists recommended strategies that IT organizations could and should follow to hire and retain women, including mentoring, training programs that emphasize functional and leadership skills, flexibility, community development, the establishment of female role models, and active research into women's needs. The forum advised IT-career-minded women to understand the business and what it demands, and acquire the training to be able to satisfy those demands; to be flexible and create flexibility via technology; to network, learn to communicate effectively, and practice solidarity with fellow women in IT; and to deliver results. Sylvia Weaver said IT managers should realize that women can play a key role in understanding and translating business needs into technology, while Culp recommended that managers court women's input.
    Click Here to View Full Article

    For information on ACM's Committee on Women in Computing, visit http://www.acm.org/women.

  • "Why George Bush Needs a Technology Czar"
    CIO (04/15/05) Vol. 18, No. 13, P. 50; Worthen, Ben

    Despite rumblings from CIOs and technology policy experts that the United States is in danger of losing its global lead in tech innovation to Asia, the federal government still has not placed a high priority on devising a comprehensive tech innovation agenda that sets specific milestones for issues ranging from Internet access to spam to cybersecurity. Many agree that appointing a tech czar is necessary if the country wishes to retain its innovation crown, and the office's responsibilities would include outlining a strategic U.S. tech plan, managing a portfolio of government-funded R&D projects, and coordinating tech policy across federal agencies such as the FCC, the FTC, and the Homeland Security Department. The establishment of a central agency to coordinate tech issues government-wide is favored by 63 percent of 402 CIOs polled by the CIO Executive Council. They believe such an agency would make the government more capable of arriving at smart IT policy decisions as well as more responsive to the concerns of American companies that are increasingly dependent on IT. As R&D project portfolio manager, the tech czar would not affect the decentralized nature of the current system, which permits established experts to focus on what they do best. In addition, the existence of the White House Office of Science and Technology Policy provides the bureaucratic structure to create a tech czar. A 1993 executive order authorizing the presidential appointment of a Cabinet-level presidential science and technology assistant to lead the National Science and Technology Council would enable the czar to coordinate tech issues across agencies. For the tech czar to influence agency R&D budgets, the appointee must be allocated a budget large enough to encourage the appropriate agencies to pour more money into underfunded projects.
    Click Here to View Full Article

  • "Guide to Speech Standards"
    Speech Technology (04/05) Vol. 10, No. 2, P. 16; Dahl, Deborah

    Speech technology standards offer a sturdy infrastructure for platforms and applications because they can augment interoperability, lower technical risk, and reduce costs. Standards bodies such as the World Wide Web Consortium (W3C) and the Internet Engineering Task Force have developed or are developing authoring standards such as speech interaction, speech input, speech output, and speaker authentication, as well as communication standards such as distributed speech functions and distributed speech recognition. The VoiceXML speech interaction standard, the oldest and most established standard of its kind, supports speech-only applications; among the newer open specifications, XHTML + Voice is designed to support multimodal applications, Speech Application Language Tags (SALT) supports both speech-only and multimodal applications, and the eXtensible Human-Machine Interface supports VoiceXML and SALT. Speech input standards such as Speech Recognition Grammar Specification and Semantic Interpretation for Speech Recognition allow systems to understand and act on the intentions of the user who talks to the system, while speech output standards such as Speech Synthesis Markup Language and Pronunciation Lexicon dictate how the system renders output by either a text-to-speech engine or audio files. Speaker authentication standards such as the W3C Speech Interface Framework allow systems to determine and verify the identity of the person speaking. Communication standards are employed in communication among software components, examples of which include Extensible Multi-Modal Annotation for representing both a user's utterance and additional data about the utterance; the Media Resource Information Protocol for decoupling speech functions from their platforms by supplying standard protocols for component interaction; and the Aurora standard for supporting the division of the speech recognition function into local and remote processes.
    Click Here to View Full Article

  • "Web Service References"
    Internet Computing (04/05) Vol. 9, No. 2, P. 94; Vinoski, Steve

    Registering service references in notification systems is often necessary for receiving notification messages, and IONA Technologies' Steve Vinoski, a member of the WS-Addressing working group, details several issues that must be addressed in order to transform the endpoint reference (EPR) specified by the WS-Addressing specification into a flexible and practical Web service reference. He writes that technology stovepipes can be avoided through multi-technology services, and it follows that references for such services must carry all means of access that a service wishes to apprise its customers of. The WS-Addressing EPR does not permit multi-port access to a service, and Vinoski indicates four strategies for addressing this problem: EPR structure enhancement using additional port data; the standardization of a policy assertion that could contain additional optional port data within the reference; employment of WS-MetadataExchange (WS-MEX); and the establishment of an independent structure that can hold multiple EPRs. Drawbacks to the WS-MEX strategy include the need for fairly costly network operations to support the retrieval of additional port data, and the strain the approach places on legacy services. Vinoski notes that opinion is split among proponents as to whether the WS-Addressing spec should support only Simple Object Access Protocol (SOAP) or Web Services Description Language (WSDL). He reports that "As underlying technologies like SOAP inevitably come and go, WSDL's extensibility can accommodate them, and the end result is a strong and clean separation between business application logic and the technologies that implement it." Vinoski believes the WS-Addressing EPR can satisfy both camps and serve as a standard Web service reference, once it is properly enhanced to permit multi-port services.
    Click Here to View Full Article

  • "Advancing Sensor Web Interoperability"
    Sensors (04/05) Vol. 22, No. 4, P. 14; Gorman, Bryan L.; Shankar, Mallikarjun; Smith, Cyrus M.

    The goal of the SensorNet project is to develop the components of a nationwide system for real-time detection, identification, and evaluation of various threats through an interoperable framework that collects and integrates sensor data from all over the country. SensorNet is a joint project of the Oak Ridge National Laboratory's (ORNL) Computational Sciences and Engineering Division, the National Oceanic and Atmospheric Administration, the Open Geospatial Consortium, the U.S. Defense Department, the National Institute for Standards and Technology, and academic and private-sector collaborators. The creation of an open standards framework for compatible sensor networks requires a standard technique for linking transducer interfaces and application interfaces; SensorNet's solution for transducer interfaces is to adopt the scheme supported by IEEE 1451 working groups devising plug-and-play standards for smart transducers, while its solution for application interfaces is based on Web services. Once standards are developed, customers must step up and create a market for SensorNet technologies. Both program managers and sensor manufacturers require a clear adoption road map and must discern market indicators signifying the standards framework will satisfy future requirements and expand the market. ORNL has developed prototype SensorNet nodes and test beds that employ off-the-shelf hardware and software. These test beds execute various types of queries for diverse types of applications and help address issues such as what services must exist, data storage, and the parsing of SensorML documents to extract real-time performance data.
    Click Here to View Full Article