Association for Computing Machinery
Timely Topics for IT Professionals

About ACM TechNews

ACM TechNews is published three times a week, on Monday, Wednesday, and Friday.


ACM TechNews is intended as an objective news digest for busy IT professionals. Views expressed are not necessarily those of either AutoChoice Advisor or ACM. To send comments, please write to technews@hq.acm.org.
Volume 7, Issue 762:  Monday, March 7, 2005

  • "Vote for Change"
    Federal Computer Week (03/07/05); Hardy, Michael

    New federal rules for voting machines are on track to be enacted before the 2006 mid-term elections, while electronic voting machine companies are preparing products that will be certified under 2002 federal standards. However, whether e-voting systems will be more secure and reliable by the time the next round of elections comes up is a matter of conjecture: Security Innovation's Herbert Thompson says "the systemic problems...are going to be really difficult to remedy in a two-year period." Rep. Rush Holt (D-N.J.) has reintroduced a bill calling for the provision of voter-verifiable paper trails and other measures for boosting voting accuracy and security; he says the success of systems with voter-verified paper trails in Nevada during the November 2004 election should raise the proposal's chances of garnering bipartisan support. Officials of e-voting vendors say their companies are adding paper-trail generation capability to their products even in the absence of a federal mandate for paper records, so that they can provide it in places where paper records are required. Election Systems & Software and Diebold Election Systems are working to increase the security of their touch-screen machines, and have introduced electronic poll books that allow poll workers to look up data about voters who show up at the wrong polling place and direct them to the proper location rather than issuing a provisional ballot. Diebold's Mark Radtke says the company has incorporated dynamic passwords for elections administrators as well as dynamic encryption keys into its products. Populex President Sanford Morganstein says the paper jams, added costs, and other problems some vendors cited as arguments against paper trails have turned out to be baseless.
    Click Here to View Full Article

    For information on ACM's e-voting activities, please visit http://www.acm.org/usacm.

  • "The Bleeding Edge of Computing"
    NewsFactor Network (03/04/05); Baker, Pam

    Computing technology is expected to make radical leaps in the future, thanks to progress in many areas of research. Thad Starner with the Georgia Institute of Technology notes that artificial intelligence systems have begun to perceive the world much like humans do, using cameras and microphones on wearable devices, and predicts that computers may soon be able to monitor users' activities, understand the things that matter to them, and serve as virtual aides. He expects wearable technology to advance to such a degree that everyone will own a versatile virtual personal assistant, and he foresees an expansion in the functionality of MP3 players and their possible incorporation into a general-purpose wearable device. Another significant AI breakthrough is GTMax, an unmanned, autonomous mini-helicopter that can learn as it flies, execute aggressive maneuvers, and automatically map out a path through obstacles with the help of an Open Control Platform (OCP) system, a real-time, object-oriented operating software architecture; the device has adaptive flight control and can identify flight control faults and reconfigure itself in real time. Meanwhile, Guosong Liu with MIT's Picower Center for Learning and Memory has discovered that neurons process information in trinary code rather than the binary code computers use. Analysts expect trinary processing to be incorporated into future hardware and software. Liu's breakthrough supports the theory that the basic unit of computing may be something even smaller than the cell, perhaps a "natural nanotechnology" that chipmakers are currently trying to tap. Georgia Tech researcher Doug Blough anticipates a commercial rollout of quantum computing technology within a decade.
    Click Here to View Full Article

  • "EU Patent Law Could Impede Open Source"
    eWeek (03/04/05); Broersma, Matthew

    Attorneys warn that the European Union's proposed directive on "computer-implemented inventions" legitimizes software patenting, which will enable patent holders to hinder the progress of smaller software companies and open-source developers. For any developer doing business in the EU, the proposal's adoption could either bolster the protection of their inventions, or raise the risk that their inventions could infringe on the patents of rivals. In addition, organizations could also be liable if they use patent-infringing software without an indemnity agreement with the software supplier. "The danger is that smaller software companies who can't afford high patenting costs will be swamped by large companies who go around patenting everything in sight," reports attorney Martin Hann. The European Commission has stated its intention to move the directive forward to the next step of the legislative process, despite the European Parliament's opposition; the proposal's formal adoption by the EU Council, which could come as soon as March 7, would be a major upset for critics, who have been pushing the EC to restart the process. European law currently bans "pure software" patents while permitting software patents that demonstrate a real-world effect, but attorney Olivier Hugot says the European Patent Office (EPO) has been relatively liberal with its interpretation of the law and granted scores of patents for inventions "embodied in computer programs." Supporters claim the proposal will unify the EU's fractured patent system, while opponents contend it will legalize EPO practices that highly favor the issuance of software patents. Hann describes the measure as "a retrenching of EU law" that would actually make software patents harder to get.
    Click Here to View Full Article

  • "Laying Foundations for Component-Based Software Markets"
    IST Results (03/07/05)

    The European Commission is paving the way for component-based software engineering with several Information Society Technologies projects that address technical, economic, and organizational aspects of commercialized component-based software development. The COMPONENT+ project uses built-in test capability to ensure that commercial off-the-shelf (COTS) components are compatible with the rest of the application. Component-based software development is especially promising for smaller companies because it promises time savings, but component incompatibility can negate those benefits. COMPONENT+ built-in test components check run-time and interface issues, and promise time savings of up to 50 percent once built-in test components are integrated; European organizations such as Volvo and the Swedish National Testing and Research Institute continue COMPONENT+ research through the AGILE TESTS project. The Pervasive Component Systems (PECOS) project aims to iron out deficiencies in component-based software development for embedded systems, and takes into account non-functional requirements for more complete behavioral testing in the design phase. The PECOS development environment helps embedded systems developers check that their real-time systems work as intended. Finally, the European COTS User Working Group (ECUA) has advanced discussion of how to use COTS components among European companies and other users worldwide; topics discussed include component specification, management of component-based development, legal issues, regulation, standards, and strategic partnerships. ECUA now includes more than 160 organizations, and the ECUA workshop has become an annual event.
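    The built-in test idea can be sketched as a component that ships with its own conformance checks, run at integration time. Everything below--class names, the check methods, the documented value range--is an illustrative assumption, not the actual COMPONENT+ API:

```python
# Minimal sketch of "built-in test" (BIT) components in the COMPONENT+ style.
# All class and method names here are illustrative, not the project's API.

class TemperatureSensor:
    """An off-the-shelf component that carries its own test interface."""

    def read_celsius(self):
        return 21.5  # stand-in for real hardware access

    def built_in_test(self):
        """Check the component's own contract: interface and run-time behavior."""
        errors = []
        # Interface check: the deployment environment expects this method.
        if not callable(getattr(self, "read_celsius", None)):
            errors.append("missing read_celsius()")
        # Run-time check: values must fall in the component's documented range.
        value = self.read_celsius()
        if not isinstance(value, float) or not (-40.0 <= value <= 125.0):
            errors.append(f"out-of-range reading: {value!r}")
        return errors


def integrate(components):
    """Run each component's built-in test before wiring it into the system."""
    failures = {c.__class__.__name__: errs
                for c in components if (errs := c.built_in_test())}
    return failures  # empty dict means every component passed


print(integrate([TemperatureSensor()]))  # -> {}
```

    Running the tests at integration time, rather than shipping a separate test suite, is what lets a COTS consumer catch interface and run-time mismatches before deployment.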
    Click Here to View Full Article

  • "Space Snakes and Scorpions"
    Wired News (03/07/05); Asaravala, Amit

    Scientists in NASA's Autonomy and Robotics group are working on improved intelligence and efficiency in new generations of robot vehicles for interplanetary exploration. Experimentation is proceeding on new software algorithms to enable vehicles to automate chores that are currently ground-based. It takes several days for NASA controllers operating the Spirit and Opportunity rovers on Mars to choose the best course of action for something as simple as moving toward a rock; the Autonomy and Robotics group wants to reduce this process to a matter of hours by making the rover primarily responsible for the analytical portion of the operation. The technology could be implemented in real space missions as early as 2009 or 2011, when NASA launches one or two more rovers to Mars. Other projects in the group focus on robots modeled after nature so they can traverse terrain that may be inaccessible to current vehicles. Researcher Silvano Colombano has devised two such robots: The two-foot-long Snakebot, a machine composed of a dozen metal links that wriggles in a caterpillar- or worm-like fashion; and the Scorpion bot, an eight-legged device that can crawl over rocks, scurry under ledges, and carry packages. Blimp-like robots that could, for example, float in the atmosphere of Saturn's moon Titan for years could be another area of experimentation for the Autonomy and Robotics group, according to technical area lead James Crawford. He says the team's long-term goal is to create truly intelligent robots that can understand people's activities and predict their actions.
    Click Here to View Full Article

  • "Sender Authentication Hops Off the Standards Track"
    Computer Business Review (03/04/05)

    Reconciliation between Microsoft's Sender ID Framework (SIDF) and the Sender Policy Framework (SPF) has been sidelined by technical disagreements between the two specifications, and Microsoft and the SPF community are for now taking diverging paths within the Internet Engineering Task Force's (IETF) standards process. Both parties are reportedly requesting that the IETF designate their specs as "experimental" protocols, which basically would acknowledge them as pre-standards. Both proposals would have email senders place a list of authorized mail servers in their domain name system (DNS) records, allowing recipients to authenticate the senders' legitimacy in the hopes of mitigating address spoofing. The DNS lookup differs for each spec: Microsoft's Harry Katz says SPF scans the "bounce headers" in email, while the Purported Responsible Address (PRA) lookup in SIDF scans the "from headers." The SPF community seems to be split among people who are fiercely anti-Microsoft, technology purists who believe that SPF is simply better than SIDF, and those who want to reconcile the two specs by making technical concessions to Microsoft; independent developer Wayne Schlitt, who co-authored a revised SPF draft that addresses IETF concerns, opposes this last strategy, arguing that SPF is the superior spec because it causes fewer problems than SIDF. He also says that though neither spec is particularly good at managing forwarded email, SPF's management of mailing lists is substantially better than SIDF's. Schlitt estimates that 740,000 .com, .net, and .org domains publish SPF records, while a mere 7,000 domains explicitly support SIDF. By that reckoning, SPF appears to have the advantage if both specs are forced to fight in the marketplace for recognition as the de facto sender authentication standard.
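    The shared core of both proposals--publish authorized senders in DNS, check the connecting IP against them--can be sketched with a toy evaluator. This handles only the "ip4:" mechanism and a trailing "all" qualifier; the real SPF specification defines many more mechanisms, and the record below is a hypothetical example:

```python
# Toy SPF-style check: a domain publishes authorized mail servers in a DNS
# TXT record, and the receiver tests the connecting IP against that list.
# Only "ip4:" and the "-all"/"~all" qualifiers are handled here.
import ipaddress

def check_spf(record, sender_ip):
    """Return 'pass', 'fail', 'softfail', or 'neutral' for sender_ip."""
    ip = ipaddress.ip_address(sender_ip)
    for term in record.split()[1:]:           # skip the "v=spf1" version tag
        if term.startswith("ip4:"):
            if ip in ipaddress.ip_network(term[4:]):
                return "pass"
        elif term in ("-all", "~all"):
            return "fail" if term == "-all" else "softfail"
    return "neutral"

# A hypothetical record authorizing one mail-server subnet:
record = "v=spf1 ip4:192.0.2.0/24 -all"
print(check_spf(record, "192.0.2.25"))   # -> pass
print(check_spf(record, "203.0.113.9"))  # -> fail
```

    The SPF/SIDF dispute is not over this check itself but over which message identity (bounce address vs. PRA) the record is evaluated against.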
    Click Here to View Full Article

  • "Tracking PCs Anywhere on the Net"
    CNet (03/04/05); LeMay, Renai

    University of California, San Diego, Ph.D. student Tadayoshi Kohno has published a paper indicating that he has devised techniques for fingerprinting computer hardware remotely, potentially allowing any physical device to be tracked wherever it is on the Internet, even without the device's cooperation. His research is expected to be detailed at the Institute of Electrical and Electronics Engineers Symposium on Security and Privacy in three months. Kohno appears to acknowledge that surveillance groups are interested in his methods, while also noting potential applications for computer forensics; they could also be used to determine whether one physical device may be trying to pass itself off as two devices on the Internet. The technique works by estimating clock skew--the tiny, characteristic drift of a device's hardware clock--which a fingerprinter can determine using timestamps within the TCP headers. "Our techniques report consistent measurements when the measurer is thousands of miles, multiple hops, and tens of milliseconds away from the fingerprinted device, and when the fingerprinted device is connected to the Internet from different locations and via different access technologies," reports Kohno, adding that these methods work even when the fingerprinted device is protected by a firewall or network address translation. Windows XP, Red Hat and Debian Linux, OpenBSD, and FreeBSD were just some of the many operating systems Kohno and his team tested the fingerprinting techniques on. "In all cases, we found that we could use at least one of our techniques to estimate clock skews on the machines and that we required only a small amount of data, although the exact data requirements depended on the operating system in question," the paper says.
    Click Here to View Full Article

  • "No Redress Route in Terror Screening Plan"
    United Press International (03/03/05); Waterman, Shaun

    The Homeland Security Department is creating a new office to handle terrorist screening programs, such as those that check domestic airline passengers, cargo shipments, port employees, and hazardous materials transport. The new Screening and Coordination Office is still more than a year away from operation, but plans so far do not detail a process for clearing cases of misidentification--a point that observers say will be crucial to the office's success. The now-defunct CAPPS II failed largely because of a lack of effective redress in cases of misidentification, including several cases that caught members of Congress such as Sen. Edward Kennedy (D-Mass.). The Screening and Coordination Office's aviation application, Secure Flight, will have to be vetted for privacy and civil liberties protection by the Government Accountability Office; in the aviation sector, the system will rely on the FBI's Terrorist Watch List to conduct name checks instead of relying on airline personnel. That has caused concern at the Terrorist Screening Center because it would dramatically increase the number of queries to the watch list, which previously was used to investigate captured illegal immigrants or other specific persons. The limited complexity of the queries, however, will help compensate for the increased workload, estimated to be up to 1.8 million queries per day. The Screening and Coordination Office is also positioned higher up in Homeland Security, and consolidates more than a dozen screening programs while paring the mission scope that made CAPPS II such a worry. Proposals for a redress mechanism for Secure Flight include a 24-hour call center, but a senior official familiar with the plans says any redress mechanism will rely on the FBI Terrorist Watch List, making that database more important than the chosen mechanism.
    Click Here to View Full Article

  • "AI Expert Calls for E-Defense for the UK"
    Electronics and Computer Science (03/04/05); Lewis, Joyce

    The United Kingdom needs to become more of an information power, says Nigel Shadbolt, professor of artificial intelligence at the School of Electronics and Computer Science at the University of Southampton. Shadbolt, who believes the UK needs to focus more on e-defense, is scheduled to talk about the impact of artificial intelligence on the Web and what that means for the military when he delivers the British Computer Society (BCS)/Royal Signals Institution (RSI) annual lecture on Web Intelligence in London on March 9. "The military will have few options but to take advantage of the huge investment that the commercial and research sectors have made in Web service solutions and architectures," says Shadbolt. He intends to demonstrate how the evolving Semantic Web could provide Web services for the military, and the potential impact on its operations. For example, emerging information sets could provide units with instant access to a wealth of geology, geography, customs, cultural, and religious information before entering locations. Also, Web services eventually will support high-quality speech and text translation, and the Web can accommodate diagnosis, image recognition, planning, and scheduling services.
    Click Here to View Full Article

  • "IPod 'Squeaks' Betray Software Secrets"
    New Scientist (03/01/05); Knight, Will

    A 17-year-old computer science student from Germany has worked out the bootloader code that allows Apple's iPod music player to start up by listening to the "squeaks" the device generates when scrolling through the on-screen menu. Nils Schneider's impetus for tinkering with his Christmas present was to install Linux on the device, which he was unable to do because the player includes new hardware that he could not control without knowing how the unit starts up. Rather than work out the code by trial and error, Schneider decided to use the iPod's piezoelectric clicker--control of which had already been worked out by U.K. software engineer Bernard Leach--to play the bootloader program as sound. Schneider encoded the bootloader data as audio, then recorded the sounds onto another PC programmed to reconvert them into computer code, in a process that took more than 20 hours. The iPod Linux project has already run Linux and compatible software such as games on the iPod, but Leach sees more potential in the exercise. Leach says the new OS transforms the iPod "into a general purpose device" that potentially could be used for such applications as GIS and mapping, drawing, or a calculator.
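    The general data-over-audio trick can be illustrated with a toy frequency-shift-keying scheme: one tone per bit, decoded by counting zero crossings. This is only a sketch of the idea under invented parameters (sample rate, tone frequencies), not Schneider's actual piezo-click protocol:

```python
# Toy data-over-audio transfer: encode each bit of each byte as one of two
# square-wave tones, then decode by counting zero crossings per bit period.

RATE = 8000          # samples per second (assumed)
SYMBOL = 160         # samples per bit
F0, F1 = 500, 1000   # tone frequencies for bit 0 and bit 1 (Hz)

def encode(data: bytes):
    """Turn bytes into a list of audio samples, one tone per bit, MSB first."""
    samples = []
    for byte in data:
        for i in range(8):
            bit = (byte >> (7 - i)) & 1
            period = RATE // (F1 if bit else F0)     # samples per wave cycle
            for n in range(SYMBOL):
                samples.append(1.0 if (n // (period // 2)) % 2 == 0 else -1.0)
    return samples

def decode(samples):
    """Recover bytes by counting zero crossings in each bit-long block."""
    bits = []
    for s in range(0, len(samples), SYMBOL):
        block = samples[s:s + SYMBOL]
        crossings = sum(1 for a, b in zip(block, block[1:]) if a * b < 0)
        # Threshold halfway between the crossing counts of the two tones:
        bits.append(1 if crossings > (F0 + F1) * SYMBOL // RATE else 0)
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

assert decode(encode(b"bootloader")) == b"bootloader"
```

    A real acoustic transfer additionally has to survive speaker and microphone distortion and framing loss, which is part of why Schneider's run took over 20 hours.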
    Click Here to View Full Article

  • "Linux Security Rough Around the Edges, But Improving"
    InformationWeek (03/03/05); Greenemeier, Larry

    The National Security Agency's Security Enhanced Linux (SELinux) provides greater security for the country's computer infrastructure, and although the technology has won the support of the Linux developer community, SELinux's complexity threatens its widespread adoption by government agencies and U.S. companies. Dickie George, technical director of the NSA's Information Assurance Directorate, says, "Quality of [software] code is crucial to the security of this nation." George, speaking at a recent SELinux symposium, says the directorate exists to provide needed cybersecurity research and development for industry to use to protect the cyberinfrastructure. The NSA's mandatory access control technology, reworked for Linux, was recently added to version 2.6 of the Linux kernel; SELinux systems can partition domains, improving security by confining the damage from a virus or exploit to a single domain. Debian, Novell, and Red Hat are currently offering customers SELinux features, but Novell says more needs to be done to make it easier to implement. Novell research and development vice president Chris Schlaeger admits that SELinux is a significant advancement in security, but says it is just too hard for the average customer to control. Nevertheless, excitement surrounding SELinux technology is growing as more agencies and enterprises adopt the technology, recognize its limitations, and work on improvements.
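    The domain-partitioning idea behind SELinux's mandatory access control can be illustrated with a toy default-deny policy check; the domain and type names below are invented for illustration and are not real SELinux policy syntax:

```python
# Toy mandatory access control: every process runs in a domain, every
# resource has a type, and access is denied unless a policy rule explicitly
# allows it -- so a compromised process stays confined to its own domain.

POLICY = {
    ("httpd_t", "web_content_t", "read"),
    ("httpd_t", "httpd_log_t", "write"),
}

def allowed(domain, target_type, perm):
    """Default-deny check: only explicit policy rules grant access."""
    return (domain, target_type, perm) in POLICY

# The web-server domain can read its own content...
assert allowed("httpd_t", "web_content_t", "read")
# ...but even if compromised, it cannot touch user home directories:
assert not allowed("httpd_t", "user_home_t", "read")
```

    The complexity Novell complains about comes from writing and maintaining the real policy rule set, which for a full distribution runs to many thousands of such rules.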
    Click Here to View Full Article

  • "Domain Owners Lose Privacy"
    Wired News (03/04/05); Zetter, Kim

    Last month, the U.S. Commerce Department's National Telecommunications and Information Administration (NTIA) ordered domain registrars that sell .us domain names, including Network Solutions, eNom, and Go Daddy, to stop allowing registrants to hide their contact information via proxy services. Such services have previously given owners of .us domain names an extra level of privacy by keeping their true identities and contact information out of the public Whois database. The new ruling, which the NTIA says is simply a clarification of existing policy, was detailed in a letter to .us registry operator NeuStar, and was made without warning and without consultation with the affected companies. Registrars are expected to comply with the change and update their customers' entries in the Whois database by Jan. 26, 2006, or face losing their right to sell .us domains. Go Daddy is opposing the rule change, saying it violates registrants' privacy rights. While the government claims registrants' contact information must be made public for potential law enforcement purposes, Go Daddy notes that the existing proxy system allowed officials to access the true identities of registrants via an escrow account. Go Daddy, the largest registrar of .us domains, says 23,000 of its 30,000 .us registrations use proxies. Privacy advocates are also opposing the NTIA's ruling on the grounds that it violates the right to anonymous free speech guaranteed by the First Amendment; Electronic Privacy Information Center executive director Marc Rotenberg says anonymous Internet expression is akin to anonymous speech, a protected right. The ruling only affects .us domains and could be challenged in court. NeuStar's Jeffrey Neuman says the NTIA never approved proxy domains and is simply enforcing existing rules.
    Click Here to View Full Article

  • "The Semantic Aspects of E-Learning: Using the Knowledge Life Cycle to Manage Semantics for Grid and Service Oriented Systems"
    University of Southampton (ECS) (03/01/05); Tao, Feng; Davis, Hugh; Millard, David

    Applying semantics to learning content and services will allow the grid infrastructure to support large-scale collaboration of e-learning activities. The computers and people participating in e-learning activities should comprehend and communicate among themselves via a common learning domain model, in keeping with the pervasive involvement of machines and software applications needed to facilitate and effect collaboration. The authors suggest that a shared ontology is needed in this context, serving as a common language to semantically enrich and more effectively connect grid resources and better realize the vision of a Semantic Web. They outline a knowledge life cycle driven by semantics that accommodates the key stages in e-learning semantics management, as dictated by the pedagogical perception of semantics derived from learning domain specialists. The scheme incorporates a distributed, service-oriented architecture in order to support grid infrastructure, reusable components, and simple integration later on. The researchers list ways in which semantic enrichment can enhance learning among students: It can link communities, personalize content and sequencing, facilitate adaptive evaluation, and significantly improve search engines and analytic tools. Semantic enrichment can also benefit many aspects of learning management, including the production of teaching materials, student and timetable management, record keeping, and quality assurance.
    Click Here to View Full Article

  • "Q&A: Author of 'Dude, Did I Steal Your Job?' Sounds Off"
    Computerworld (02/28/05) P. 4; Thibodeau, Patrick

    Programmer N. Sivakumar says in an interview that the point of his book, "Dude, Did I Steal Your Job?," is to explain the economic benefits of H-1B visa holders hired to work in the United States, as well as address the perception that such professionals are taking jobs away from U.S. workers. Sivakumar says these workers are encountering a lot of abuse, while their side of the story is being largely ignored. He says when he was an H-1B worker, he was paid the salary he was promised, but notes that 10 percent to 15 percent of H-1B holders end up in "body shops" that pay far below the prevailing wage. Sivakumar acknowledges that H-1Bs are being used to drive down the salaries of U.S. IT workers, but says this trend is being spurred by the increase in the supply of workers, not by the body shops. He explains that H-1B workers "bring...the same skills set U.S. workers have, where the companies don't have people to fill [the job]," noting that without H-1Bs America would face a much higher level of competition from other countries, and perhaps lack some of its most successful startups. Still, Sivakumar observes that it is wrong from both an ethical and legal point of view for employers to hire H-1B workers as replacements for American workers. He also says offshoring has a greater impact on H-1B workers than U.S. workers, especially when the companies that outsource behave irresponsibly by firing workers overnight without adequate training and preparation. Sivakumar predicts that Indians will no longer look to the United States for jobs within five years, as employment opportunities in their own country will be on a par with U.S. opportunities.
    Click Here to View Full Article

  • "Whatever Happened To...?"
    InfoWorld (02/28/05) Vol. 27, No. 9, P. 32; Venezia, Paul; McAllister, Neil; Knorr, Eric

    Examples of highly hyped technologies that have not panned out or whose rollout is slower than anticipated include mobile broadband, whose spread has been impeded by huge 3G deployment costs; however, forthcoming deployments of High-Speed Downlink Packet Access, EV-DO, and Universal Mobile Telecommunications System technology indicate that the emergence of mobile broadband is finally imminent. Practical voice-driven user interfaces may be within reach, technologically speaking, but their universal appeal remains unlikely, given their noise potential in offices and other environments where distractions can lower efficiency. Claims that the mainframe's days were numbered turned out to be premature, as replicating certain mainframe applications as Web applications is no small feat, while data-migration costs are massive; licensing, retraining, and downtime issues have also helped keep big iron viable. Microsoft's Passport, which offered a single log-in and password scheme for online identity confirmation, withered on the vine with the disclosure of major security flaws, while proposals to improve Internet security, communications, and applications have been impeded by the enormous challenges of upgrading the Net infrastructure. The vision of the paperless office is unlikely to be realized, as hot innovations such as the multifunction printer have made generating paper even easier. Artificial intelligence's only real penetration into the commercial sector thus far is in business rules management systems, whose intelligence-gathering process is too slow for project managers' taste. Business-to-business (b-to-b) e-commerce may get a boost with the growth of the modular applications and service-oriented architecture market, whereas b-to-b's earlier aspirations were unrealistic, and undercut by a lack of incentive for suppliers to participate as well as a scarcity of integration technologies five years ago.
    Click Here to View Full Article

  • "It's Raining Code! (Hallelujah?)"
    CIO (03/01/05) Vol. 18, No. 10, P. 52; Lindquist, Christopher

    CIOs are growing more interested in open-source development, and have adopted cooperative strategies to minimize the associated risks and expenses. One such initiative is the Avalanche Corporate Technology Cooperative, a collaborative development effort to identify mutually beneficial open-source development opportunities and distribute resources across its membership. In addition, Avalanche members enjoy exclusive access to the cooperative's code under the group's licensing agreement, thus removing concerns that contributing members would not be able to extract any competitive value. Meanwhile, member schools of the Sakai Educational Partners Program are attempting to supply common platforms for the development of an open-source course management system; Sakai Community Liaison James Farmer says the goal of the project is not to break the domination of commercial software vendors such as Blackboard, but to help schools save money. Another significant development is the release of open-source code by commercial vendors such as IBM and Computer Associates. However, experts such as Dresdner Kleinwort Wasserstein's Steve Howe recommend that customers carefully examine the rationale behind such offerings, as some vendors may simply be using the open source model to junk obsolete products. Some analysts say this trend reflects the erosion of certain software stack components' commercial value, and their release as open source allows their communities to better support them while also allowing vendors to channel more research and development assets into high value-add products.
    Click Here to View Full Article

  • "Caution: COTS Ahead"
    Aviation Week & Space Technology (02/28/05) Vol. 162, No. 9, P. 52; Scott, William B.; Gollings, David H.

    The aerospace and defense fields are under more pressure to adopt commercial off-the-shelf (COTS) and open source software for cost and flexibility reasons, but still find it difficult to formulate unbiased policy and keep up with standards efforts for COTS adoption. COTS software offers tremendous resources for aerospace programs, but is also such a large industry that aviation, space, and defense officials have little influence in directing its development; however, the specialized software used by the Pentagon throughout the Cold War is becoming more of a burden as officials struggle to find programmers skilled in obscure languages such as JOVIAL and Ada. Over the decades, many programmers opted instead for commercial work as the telecommunication, PC, and video game industries grew. The Federal Aviation Administration and military services are formulating policies and standards for COTS adoption, but the rapid pace of change is outstripping those efforts. Boeing and Airbus have proposed using Ethernet in new aircraft, and commercial software is already used for flight deck displays and entertainment systems in commercial aircraft; but vendor idiosyncrasies and hardware limitations make it difficult to certify heterogeneous systems as entirely stable, which is necessary since pilots cannot reboot critical systems in flight. U.S. Air Force Maj. Gen. Wilbur Pearson says there is currently a struggle in military circles between acquisition and contractor officials and in-house technical personnel, who have vested interests in legacy and new custom-coded systems. The space surveillance field is especially reluctant to adopt COTS technology, says Situational Awareness Solutions CEO David Desrocher. Pearson says officials should limit themselves to setting guidelines and leave implementation up to vendors.

  • "Is Realtime Real? Part 1"
    Millimeter (02/05) Vol. 33, No. 2, P. 46; Katz, S.D.

    Video game realism has taken a big step forward in the last year largely thanks to new graphics boards that facilitate more refined real-time rendering. The advances in gaming hardware and software were made possible by the advent of programmable hardware. Microsoft's High-Level Shading Language (HLSL) and the open-standard OpenGL Shading Language are the game industry's leading shading languages, while Nvidia's Cg language is compatible with the others and can also be used to execute compositing and physical models. Graphics cards achieve photorealism in games by processing pixel and vertex data; the pixel calculations control surface properties such as color, reflectance, and ambience, while the vertex calculations control the deformation of models. The latest graphics cards increase the amount of pixel and vertex data that can be processed while also giving both types of real-time shaders new options. A great deal of the photorealism in next-generation games is attributable to the customization capabilities of the new shaders. The lack of programmer/artists in the visual effects industry prompted the creation of RTzen's RT/shader, which allows artists to program elements such as brightness, transparency, and reflection; the tool can also create high-level as well as low-level shaders. Major visual effects houses such as Weta Digital and Industrial Light & Magic are not taking advantage of the latest real-time rendering technologies because of aliasing, high-level surfaces, bit depth, and other challenges that the game industry has yet to address.
    Click Here to View Full Article

  • "Web Metadata Standards: Observations and Prescriptions"
    Software (02/05) Vol. 22, No. 1, P. 78; Bodoff, David; Hung, Patrick C.K.; Ben-Menachem, Mordechai

    David Bodoff of the Hong Kong University of Science and Technology, Mordechai Ben-Menachem of Ben-Gurion University, and Patrick C.K. Hung of the University of Ontario Institute of Technology examine the development efforts of Web metadata standards and prescribe solutions to various challenges. Metadata standards are divided into two categories: Process standards that define the behavior of executable processes (Web services, for instance) and effect application-to-application interaction, and product standards that describe a physical or information product and enable the exchange of the description. The authors' prescriptions for metadata standards focus on three areas: software engineering, software reuse and library science, and artificial intelligence. From the software engineering viewpoint, Bodoff, Ben-Menachem, and Hung conclude that metadata standards projects should not just concentrate on new functionality, but also take into account their effects on quality assurance, testing costs, and other long-standing problems; and that there should be detailed exploration of the trade-offs implicit in using metadata standards to fulfill multiple roles. AI and knowledge representation lessons dictate that practical abstractions and narrow domains with limited inference usage are necessary. The prescription for the first requirement is to define concepts at the M1 layer, while the prescription for the second is to keep ontology domains narrow to avoid conflicts, while restricting the use of inferences like subsumption. The researchers conclude that simplifying the search for metadata elements may require the provision of meta-metadata; that metadata success depends on educating users and on practical search and navigation tools built via classification and thesaurus hierarchies; and that standards should not introduce additional indexes unless they deliver a marginal search advantage to users.
    Click Here to View Full Article