ACM TechNews is sponsored by Thunderstone. Learn more about Texis, the text-oriented database providing high-performance search engine features combined with SQL operations and a development toolkit, which powers many diverse applications, including Webinator and the Thunderstone Search Appliance.
ACM TechNews is intended as an objective news digest for busy IT Professionals. Views expressed are not necessarily those of either Thunderstone or ACM. To send comments, please write to [email protected].
Volume 7, Issue 799: Friday, June 3, 2005

  • "Women in Computing"
    Red Herring (06/06/05)

    More female role models and mentors are needed if women are to increase their numbers and gain a foothold in computer science, traditionally a male-dominated field, according to experts. The numbers paint a picture of an increasingly difficult environment for women as they climb the corporate ladder: Though roughly 45 percent of the U.S. professional and business services workforce is female, only 9.3 percent of board members at U.S. technology companies are women; at lower levels, women comprise 10.4 percent of computer hardware engineers, 7.1 percent of electrical and electronics engineers, and 30 percent of computer and information systems managers. Google consumer Web products director and Stanford computer science graduate Marissa Mayer says she was shocked during her post-graduation job search by the absence of women in engineering groups--oftentimes, she would have been the only woman on the team. Association for Women in Science President-elect Donna Dean says more female role models are needed in the tech sector to encourage younger women to enter the field. Anita Borg Institute President Telle Whitney goes further, saying women need to set new precedents by designing successful new technology. Whitney, who has more than two decades' experience in semiconductors and telecommunications, says women in technology currently face male-oriented workplace dynamics that favor competition over cooperation. Numenta CEO Donna Dubinsky, who led Palm Computing and Handspring, says being female never significantly hindered her career because the business environment was so intense that gender simply was not an issue. New technology applications such as blogging and Web-based business could end up making room for and empowering more women.
    Click Here to View Full Article

    For information on ACM's Committee on Women and Computing, visit http://www.acm.org/women.

  • "Forward-Looking Report Released: 'Cyberinfrastructure and the Social Sciences'"
    AScribe Newswire (05/31/05)

    The National Science Foundation's "Cyberinfrastructure and the Social Sciences" workshop has released a final report that "leverages the immense expertise of NSF communities to develop useful and usable cyberinfrastructure to support breakthrough science and engineering research and education for the 21st century," according to NSF director Arden Bement. Report co-author and U.C. Berkeley professor Henry Brady said the study organizes a roadmap for how cyberinfrastructure and the social sciences can complement each other for mutual advantage. Among the critical issues the expansion of cyberinfrastructure can help address are the development of a secure information infrastructure that is more resistant to intrusion and cyberattack, the testing of new policy proposals via Web-based models to catch problems before they can cause major policy failures, and the distribution of shared public data and computing resources to handle "peak" or "urgent" demand. CISE/SCI division director Sangtae Kim stressed that the workshop also investigated cyberinfrastructure's economic ramifications and the potential for fast technology transfer, and he called the expertise of the workshop's participants particularly applicable. "We have not yet tapped the enormous potential for collaboration between these research communities to frame, build, understand, and use cyberinfrastructure effectively, and the March workshop has provided a plan and a way forward to promote that collaboration," said Wanda Ward of the Social, Behavioral, and Economic Sciences Directorate. She said the workshop played a key role in bringing the social/behavioral sciences and computer science research and education sectors together to address challenges related to cyberinfrastructure.
    Click Here to View Full Article

  • "Fancy Math Takes on Je Ne Sais Quoi"
    Christian Science Monitor (06/02/05) P. 13; Lamb, Gregory M.

    The National Institute of Standards and Technology (NIST) has evaluated machine translation programs from some 20 research groups and will publish the results later this month, but those involved say Google's new translation entry performed very well. Observers predict Google might release its new prototype translation with a Google Web browser in coming months, allowing people to surf foreign-language domains easily. The programs were given 100 news items to translate into English from Chinese and Arabic, and the results were automatically evaluated by NIST's Bleu program. Google's new prototype performed well because of the massive amount of statistical data it has to draw upon; the company's algorithms have been improved by examining the equivalent of 1 million translated books. Most machine translation programs have difficulty translating proper nouns such as names and words that have not been used in context before, but Google's approach puts much more emphasis on math than traditional translation programs, which rely on tweaking by linguistic experts. Google machine translation lead researcher Franz Och points out that no one on his team can read Chinese characters, yet the prototype produces clearer translations than other programs. Cheaper and faster computing resources and the burgeoning number of Web documents have given machine translation more weight, but Systran CEO Dimitris Sabatakakis says every program uses statistical methodology; nearly every language translation program currently offered online, including Google's, is based on technology developed by Systran, which has been working in the field for 30 years. Carnegie Mellon University Center for Machine Translation professor Robert Frederking says statistical methods and rules crafted through expertise will both be important parts of future solutions. (A minimal sketch of Bleu-style n-gram scoring appears below.)
    Click Here to View Full Article
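
    A minimal sketch, in Python, of the modified n-gram precision at the
    heart of Bleu-style scoring; it is illustrative only, and NIST's
    actual Bleu tool adds brevity penalties, multiple reference
    translations, and averaging across n-gram orders:

      from collections import Counter

      def ngrams(tokens, n):
          # All contiguous n-grams of a token list.
          return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

      def modified_precision(candidate, reference, n):
          # Fraction of candidate n-grams also found in the reference,
          # clipping each n-gram's count to its count in the reference.
          cand_counts = Counter(ngrams(candidate, n))
          ref_counts = Counter(ngrams(reference, n))
          overlap = sum(min(count, ref_counts[gram])
                        for gram, count in cand_counts.items())
          total = sum(cand_counts.values())
          return overlap / total if total else 0.0

      candidate = "the cat sat on the mat".split()
      reference = "there is a cat on the mat".split()
      print(modified_precision(candidate, reference, 1))  # unigram overlap
      print(modified_precision(candidate, reference, 2))  # bigram overlap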

  • "New DSL Standard Promises 10 Times the Speed"
    TechNewsWorld (06/01/05); Mello Jr., John P.

    The Very-High-Bit-Rate Digital Subscriber Line 2 (VDSL2) standard approved last week by the International Telecommunication Union (ITU) promises to deliver a tenfold increase in speed over the fastest DSL currently available, although its availability to consumers could be years away. ITU says VDSL2 will make DSL providers even more competitive by enabling them to supply high-definition TV, videoconferencing, video-on-demand programming, voice over IP, high-speed Internet access, and similar services over standard copper phone lines. "This new standard is set to become an extremely important feature of the telecommunications landscape," says Yoichi Maeda, who chairs the ITU group that developed the standard. Analyst Bruce Leichtman doubts that ordinary consumers will be able to enjoy the 100 Mbps peak speeds VDSL2 promises, as the cost would be exorbitant. In addition, such speeds can only be reached over short lengths of copper. VDSL2 is expected to bridge the gap between the end of a carrier's fiber and a business or residence. Some carriers are taking a "wait and see" approach to VDSL2: Verizon's Mark Marchand says his company has made no solid commitment to the technology, while BellSouth's Brent Fowler says the deployment of VDSL2 is likely several years off.
    Click Here to View Full Article

  • "Supernova Collapse Simulated on a GPU"
    Electronic Engineering Times--Asia (06/01/05); McCormick, Patrick

    Rapid advances in graphics processing have made GPUs a legitimate hardware-accelerated alternative to CPU systems, and Los Alamos National Laboratory Advanced Computing Lab researchers are working on ways to exploit GPU capabilities for general-purpose computing, writes lab researcher Patrick McCormick. The researchers recently tested an Nvidia Quadro 3400-based system using a core-collapse supernova simulation from the Terascale Supernova Initiative. Generally, GPU-based systems programmed using a development environment called Scout operated about four times faster than systems using optimized 3GHz Intel Xeon processors. The Scout project at the Los Alamos Advanced Computing Lab provides application developers with a simplified development environment that smooths over difficulties typically encountered when programming GPUs for general-purpose computing. Although Scout resolves some of these problems, GPU hardware characteristics, such as limited memory sizes and a lack of the floating-point precision needed for some calculations, still make general-purpose computing difficult in other ways. The researchers are also investigating the memory and scalability capabilities of cluster systems that link hundreds of GPUs in parallel. Research into GPU-based general computing provides insight into the future of computer architecture, especially as the trend toward multicore and multithreaded processors makes parallelism more important. It is likely that future CPUs will include GPU-like cores, or that GPUs will become even more adapted for general-purpose computing. (A simplified illustration of GPU-friendly data-parallel computation appears below.)
    Click Here to View Full Article
    (Access to this site is free; however, first-time visitors must register.)
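
    As a rough illustration of why grid-based physics codes map well to
    GPUs, the NumPy sketch below updates every cell of a grid from its
    neighbors, the kind of data-parallel stencil arithmetic in which each
    output cell could be one GPU thread's work; it is a stand-in example,
    not the Terascale Supernova Initiative code or Scout itself:

      import numpy as np

      def diffusion_step(grid, alpha=0.1):
          # One explicit step of 2D heat diffusion over the whole grid;
          # every cell is updated independently from its four neighbors.
          up    = np.roll(grid,  1, axis=0)
          down  = np.roll(grid, -1, axis=0)
          left  = np.roll(grid,  1, axis=1)
          right = np.roll(grid, -1, axis=1)
          return grid + alpha * (up + down + left + right - 4.0 * grid)

      grid = np.zeros((256, 256))
      grid[128, 128] = 1000.0        # a hot spot in the center
      for _ in range(100):           # heat spreads outward step by step
          grid = diffusion_step(grid)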

  • "Sounds of Silencers Are Loud and Clear: PCs Are Too Noisy"
    Wall Street Journal (06/02/05) P. A1; Forelle, Charles

    Hobbyists are coming up with innovative solutions to dampen the noise generated by PCs, which can be distracting and irritating for some people. Michigan architect Russ Kinder reduced the noise of his PC by immersing it in a bath of nonconductive mineral oil, while St. Louis auto mechanic Carl Bohne cobbled together a system of ducts and packed it into his computer to divert heat from the machine's microcircuitry so that the built-in fans had less work to do. Other options open to these "silencers" include the suspension of disk drives on an elastic hammock to reduce vibrations; Sorbothane sheets that can be used as insulation; and specialty products such as copper heat sinks. Niche retailers offer items such as the Totally No Noise (TNN) computer case, which disperses heat without fans. Mainstream computer manufacturers are also starting to see the value of quieter products, as designers note that noise is becoming more of an issue as PCs penetrate the living room to play digital music, video, and games. Enthusiasts share their noise-dampening tips and document their innovations on sites such as SilentPCReview.com. The site's founder, Mike Chin, records and categorizes objectionable PC noises. Among the most annoying sounds he has documented are the whines or hums produced by spinning parts or vibrating metal, as well as clicking from poorly maintained fans.

  • "Pentagon Envisions Electronic Office Assistant for Busy Human Bosses"
    Knight-Ridder Wire Services (06/01/05); Boyd, Robert S.

    The Defense Advanced Research Projects Agency (DARPA) is funding the development of an electronic office assistant that can sort email, schedule meetings, gather data for reports, make plane reservations, and perform other mundane tasks to reduce the workload for managers. Desirable characteristics of the artificial office assistant DARPA envisions include the ability to learn through observation of a human collaborator or direct instruction; awareness of events as they unfold; real-time decision-making and action; and experiential retention and recall on an as-needed basis. Such an innovation has long been a tough challenge for artificial intelligence experts, but researchers are confident that increasing computer speed and power are bringing this breakthrough closer to reality. DARPA Information Processing Technology Office director Ronald Brachman says the goal of the Pentagon's personal assistant that learns (PAL) project is to raise the efficiency and effectiveness of military decision-making across the board, and to lower human casualties. More than 20 academic institutions and research labs are participating in the PAL project, and DARPA has thus far awarded a $22 million grant to SRI International and a $7 million grant to Carnegie Mellon University. SRI's contribution is the cognitive agent that learns and organizes (CALO); CALO can suggest folders in which a manager might prefer to file his or her email, and can set up meetings among four groups of four people each. SRI AI expert Mark Drummond says CALO can learn while it is being used, and he foresees the system monitoring meetings and recalling attendees for future reference. Carnegie Mellon's RADAR assistant is being designed to understand bosses' preferences and activities and how these change as time passes. (A toy sketch of the folder-suggestion idea appears below.)
    Click Here to View Full Article
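
    A toy sketch of the folder-suggestion idea attributed to CALO: score
    each folder by how well an incoming message's words match mail
    already filed there. The folders and scoring here are hypothetical
    stand-ins; SRI's actual system is far more sophisticated:

      from collections import Counter

      filed_mail = {  # folder -> words from messages already filed there
          "budget": "q3 forecast spreadsheet budget numbers".split(),
          "travel": "flight reservation hotel itinerary".split(),
      }

      def suggest_folder(message):
          # Rank folders by word overlap with each folder's vocabulary.
          words = set(message.lower().split())
          scores = {folder: sum(Counter(vocab)[w] for w in words)
                    for folder, vocab in filed_mail.items()}
          return max(scores, key=scores.get)

      print(suggest_folder("Please confirm the hotel reservation"))
      # -> 'travel'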

  • "Forging an Anti-Terrorism Search Tool"
    CNet (06/02/05); Olsen, Stefanie

    Researchers at the University at Buffalo (UB) have developed a prototype search engine that mines a collection of documents for associated ideas or links that would otherwise go unnoticed or that would take an extremely long time to uncover via conventional investigative methods. The technology, known as a concept chain graph, finds the optimal path connecting two different concepts using different mathematical algorithms, and then ranks the connections from strongest to weakest. The search engine first analyzes a limited set of documents, indexing each document and identifying key concepts along with important ideas relevant to the intelligence community; the system then maps the links to establish a chain of evidence between a pair of ideas. Many search engines use hypertext links to forge connections between documents or query terms, but the UB system relies on textual analysis. The project was partly funded by the National Science Foundation for anti-terrorism purposes, and was first used to mine data within the 9/11 Commission report and public Web pages associated with the terrorist bombings. The search tool has been developed over the past two years by UB computer science professor Rohini Srihari and her team in the UB School of Engineering and Applied Sciences' Center of Excellence in Document Analysis and Recognition. Srihari says a deliverable system should be ready for the FAA and the intelligence community before 2006. (A minimal sketch of the concept-chain idea appears below.)
    Click Here to View Full Article
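
    A minimal sketch of the concept-chain idea: treat concepts as nodes,
    link two concepts when they co-occur in a document, and search for
    the shortest evidence path between a pair of concepts. The documents
    and concepts are hypothetical, and UB's system ranks paths by
    strength rather than just finding the shortest one:

      from collections import defaultdict, deque

      documents = [  # hypothetical documents reduced to extracted concepts
          {"person_a", "flight_school"},
          {"flight_school", "wire_transfer"},
          {"wire_transfer", "person_b"},
      ]

      graph = defaultdict(set)  # concept -> concepts it co-occurs with
      for concepts in documents:
          for a in concepts:
              graph[a] |= concepts - {a}

      def concept_chain(start, goal):
          # Breadth-first search for the shortest chain linking two concepts.
          queue, seen = deque([[start]]), {start}
          while queue:
              path = queue.popleft()
              if path[-1] == goal:
                  return path
              for nxt in graph[path[-1]] - seen:
                  seen.add(nxt)
                  queue.append(path + [nxt])
          return None

      print(concept_chain("person_a", "person_b"))
      # -> ['person_a', 'flight_school', 'wire_transfer', 'person_b']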

  • "Self-Wiring Supercomputer Is Cool and Compact"
    New Scientist (05/31/05); Knight, Will

    Researchers at Edinburgh University's Edinburgh Parallel Computing Center are building a 1 teraflop field programmable gate array (FPGA) supercomputer designed to be up to 100 times more energy-efficient and dramatically more space-efficient than a conventional supercomputer of equal computing power. The machine will incorporate 64 FPGA chips, each of which can be reconfigured by software to perform more specialized processing chores. A configured FPGA chip is smaller and faster than a conventional microprocessor and consumes less power; moreover, the chips' energy efficiency obviates the need for specialized cooling systems. Programming FPGA chips is a formidable challenge, since a programmer must know how to modify the underlying hardware to optimize performance. But the FPGA High Performance Computing Alliance is targeting the development of software tools that allow programmers to more easily produce code for FPGA chips. Upon the FPGA supercomputer's completion, the designers will attempt to migrate existing supercomputer programs onto the hardware using the software tools. "If we can get these [programs] to work, we'll know that we have a general purpose solution," says Edinburgh University's Mark Parsons.
    Click Here to View Full Article

  • "In-Flight Voice and Data Communications Takes Off"
    IST Results (06/03/05)

    Researchers in the IST-funded WirelessCabin project developed and successfully tested a wireless in-flight communications network architecture that does not disrupt mission-critical aircraft safety systems or terrestrial networks. Three central elements constitute the WirelessCabin infrastructure: the cabin segment (consisting of local access and service integration), the transport segment (made up of one or more satellite or earth-based systems), and the ground segment (the service provider, the public network, and the passengers' home network for IP and third-generation services). Design requirements of the architecture include the system's independence from the satellite transport network; features that can be customized for different aircraft; support for several satellite segments according to flight route; support for multiple satellite segments by communication service providers; variability of the satellite segment in line with the aircraft's communications capabilities; ground segment-supported roaming; and multiple business models and charging options. Project coordinator Axel Jahn of TriaGnoSys says the WirelessCabin system is particularly advantageous in that it permits different priorities to be assigned to different user groups. WirelessCabin took an open standards approach to networking and protocols, employing standard IP protocols for interface communication so that convergence across disparate wireless access technologies was assured. The project has made a major contribution to the promotion of in-flight mobile phone communication and Internet access, in addition to the development of wireless communication technologies that can be used within an aircraft cabin. "People expect to see these ideas become working services within one or two years," says Jahn. (A simple sketch of per-group prioritization appears below.)
    Click Here to View Full Article
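
    A simple sketch of the per-user-group prioritization the article
    describes, with hypothetical group names and priorities; packets from
    higher-priority groups are sent over the shared satellite uplink
    first, FIFO within a group:

      import heapq

      PRIORITY = {"crew": 0, "first_class": 1, "economy": 2}  # lower wins

      class UplinkScheduler:
          def __init__(self):
              self._queue = []
              self._seq = 0  # tie-breaker keeps FIFO order within a group

          def submit(self, group, packet):
              heapq.heappush(self._queue, (PRIORITY[group], self._seq, packet))
              self._seq += 1

          def next_packet(self):
              return heapq.heappop(self._queue)[2] if self._queue else None

      sched = UplinkScheduler()
      sched.submit("economy", "passenger email")
      sched.submit("crew", "cabin telemetry")
      print(sched.next_packet())  # -> 'cabin telemetry'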

  • "Has Ransomware Learned From Cryptovirology?"
    NewsFactor Network (06/02/05); Young, Adam L.

    The Trojan recently reported in the media to hold victims' data hostage is probably not a true cryptovirus, writes infosec researcher Adam Young, who pioneered cryptovirology research along with his Columbia University professor Moti Yung. But the news shows criminal hackers are likely to begin wielding cryptographic tools, especially public-key cryptography, more frequently in their activities. According to the Associated Press and F-Secure, the so-called "Ransomware" attack was actually easily foiled--F-Secure said its anti-virus product was able to detect the Trojan and decrypt the hostage files. However, cryptoviruses such as those demonstrated in Young's research promise to be much more powerful because they leverage public-key cryptography instead of symmetric encryption alone; with a true cryptovirus, victims would have to cooperate with the hacker, who alone can decrypt the symmetric key using his private key. Young wrote his thesis on cryptovirus attacks in 1995 and published a paper together with Yung at the 1996 IEEE Symposium on Security & Privacy, and over the next decade they gathered more research and evidence of cryptovirus attacks and documented attempts to hold data hostage. In February 2004, the researchers published their compiled work in the book "Malicious Cryptography: Exposing Cryptovirology." Because of his experience in the field, Young warns that it is only a matter of time before an attacker develops and releases a true cryptovirus or cryptoworm that could affect thousands of users. He urges the IT industry to take previously collected research seriously and begin building in defenses against such attacks. (A neutral demonstration of the underlying hybrid-encryption pattern appears below.)
    Click Here to View Full Article
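
    A neutral demonstration, using the Python "cryptography" package, of
    the hybrid pattern that makes a true cryptovirus dangerous: the data
    is encrypted with a fresh symmetric key, and only that key is
    encrypted under an RSA public key, so recovery requires the matching
    private key. Here both keys stay on one machine; this is the textbook
    pattern, not attack code:

      from cryptography.fernet import Fernet
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import rsa, padding

      rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
      oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                          algorithm=hashes.SHA256(), label=None)

      # Encrypt: a symmetric key for the bulk data, RSA for the key itself.
      sym_key = Fernet.generate_key()
      ciphertext = Fernet(sym_key).encrypt(b"the data at stake")
      wrapped_key = rsa_key.public_key().encrypt(sym_key, oaep)

      # Decrypt: impossible without the RSA private key.
      recovered = rsa_key.decrypt(wrapped_key, oaep)
      print(Fernet(recovered).decrypt(ciphertext))  # -> b'the data at stake'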

  • "What's the Next Big Thing?"
    Electronic News (05/27/05); Sperling, Ed

    A roundtable discussion of future consumer electronics covers such issues as phone-multimedia convergence, Wi-Fi, and digital rights management. Wolfson Microelectronics CEO David Milne expects the next big thing to be the integration of phones and multimedia, and says the winning product in this trend will be the phone that absorbs the personal digital assistant, rather than the other way around. Portal Player's Michael Maia does not foresee the all-in-one smart phone making a big impact, and argues that the major drive will be for technologies that address specific needs. International Data's Allen Leibovitch concurs, predicting that "There will be convergence, but each device will be targeted toward a specific technology and do one thing really well and a bunch of things okay." Lexar Media technology director Jarreth Solomon says the mass adoption of new consumer products will depend on how easy the user interface is to use. Milne expects Wi-Fi to be important as a key technology for delivering digital content to portable devices through the Internet rather than through the phone network, though Solomon sees the need for a standard that lets consumer electronics download media through a supportive network. He adds that a universal plug-and-play model is on the horizon, but standards are necessary to make it workable. Maia predicts that digital rights management issues will cultivate a tension between rent and buy models for digital content distribution on consumer electronic devices; Leibovitch believes the rental model is more sensible as far as video content is concerned, given that such content is largely disposable.
    Click Here to View Full Article

  • "Baby, You Can Drive My Song"
    USC Viterbi School of Engineering (05/30/05)

    The Expression Synthesis Project (ESP), headed by USC Viterbi School of Engineering professor Elaine Chew, is designed to impart the experience of performing music to non-musicians, using an interface modeled on the controls of an automobile. A musical score in the Musical Instrument Digital Interface (MIDI) format is mapped by the ESP system onto a "road" corresponding to the score's structure, which enables important cues missing from the MIDI file to be captured. The turns in the road suggest to the user, or "driver," when to slow down or speed up, although the actions the user takes are ultimately at his or her discretion. The music's tempo and volume are controlled by the foot pedals, while buttons on the steering wheel control the length of notes. A key enabling technology in the system's design is the Software Framework Architecture for Immersipresence and Modular Flow Scheduling Middleware created by Viterbi School research professor Alexandre Francois. Chew, an accomplished pianist and recipient of a National Science Foundation Early Career Award, and her team are working on tools to automate the creation of musical roads by applying artificial intelligence methods to the analysis of the score. (A toy mapping from pedal position to tempo and volume appears below.)
    Click Here to View Full Article
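
    A toy mapping, with invented ranges, of ESP's driving metaphor: pedal
    positions between 0.0 and 1.0 scale tempo and volume, and a sharp
    curve in the "road" suggests, but does not force, a slower tempo:

      def pedal_to_tempo(pedal, base_bpm=90, max_bpm=180):
          # Accelerator position scales tempo between base and max BPM.
          return base_bpm + pedal * (max_bpm - base_bpm)

      def pedal_to_velocity(pedal, floor=40, ceiling=127):
          # Volume pedal maps onto the MIDI velocity range (0-127).
          return int(floor + pedal * (ceiling - floor))

      def curve_hint(curvature, tempo):
          # Suggest a slower tempo going into a sharp turn; the driver
          # remains free to ignore the hint, as in ESP itself.
          return tempo * (1.0 - 0.5 * min(curvature, 1.0))

      tempo = pedal_to_tempo(0.8)       # pressing hard: 162 BPM
      print(curve_hint(0.6, tempo))     # sharp curve suggests ~113 BPM
      print(pedal_to_velocity(0.5))     # mid pedal -> velocity 83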

  • "'Silent Horizon' War Games Wrap Up for the CIA"
    Associated Press (05/26/05); Bridis, Ted

    The CIA's Information Operations Center is conducting a three-day exercise dubbed "Silent Horizon" that simulates a prolonged cyberterrorist attack that could potentially cause as much damage and disruption as the Sept. 11, 2001, attacks, according to exercise participants who asked to remain anonymous. Although the government seems more concerned about biological attacks and physical threats from terrorists, FBI director Robert Mueller admits terrorists are actively recruiting computer scientists, though he says they currently lack the resources for such a large-scale electronic attack on the United States. A previous cyberterrorism exercise, known as Livewire, determined that government agencies may remain unaware of early-stage cyberterrorist attacks without the support of private technology companies. Dennis McGrath, who helped coordinate similar exercises for Dartmouth College's Institute for Security Technology Studies, says, "You hear less and less about the digital Pearl Harbor...It's just not at the top of the list." About 75 people took part in Silent Horizon at the secretive Information Operations Center, which studies cyber threats to U.S. computer networks.
    Click Here to View Full Article

  • "'Skin' Could Refine Robots' Sense of Touch"
    EE Times (05/30/05) No. 1373, P. 38; Johnson, R. Colin

    University of Illinois at Urbana-Champaign electrical engineers claim to have taken a significant step toward enhancing robots' tactile sense with the development of a prototype "skin" composed of a flexible polymer with multiple sensors that concurrently measure surface roughness, hardness, thermal conductivity, temperature, and contact force. The few existing robots with a sense of touch usually possess just one strain gauge, which means they cannot assess how hard an object is or how much pressure they are applying to it; researchers want to make robots capable of sensing the material an object is made of so that they can adjust their grip appropriately. Professor Chang Liu says the sensor arrays, which were fabricated via photolithography, can tactilely distinguish between objects made of metal, wood, plastic, and other materials. The researchers developed software algorithms to ascertain an object's characteristics from sensor input and then apply the proper degree of force. Liu says thermal conductivity is evaluated by placing a miniature heater near a temperature sensor to measure an object's heat-transfer properties, thus yielding clues as to whether the object is wooden or metallic. Hardness is read via a dual-membrane sensor that gauges differential displacement. "Our approach is to integrate the readings from many different sensors in the skin of the robot's hand so that they apply enough force to keep it from slipping, but without so much force that it breaks," says Liu. The engineers intend to build a robot skin out of a polymer fabric with many sensor arrays distributed within it. (A simplified sketch of this sensor-fusion step appears below.)
    Click Here to View Full Article
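
    A simplified sketch of the sensor-fusion step described above:
    combine thermal-conductivity and hardness readings to guess a
    material, then pick a grip force for it. The thresholds, material
    classes, and forces are invented for illustration:

      def classify_material(thermal_cond, hardness):
          # Crude decision rules over two normalized sensor readings (0-1).
          if thermal_cond > 0.7:
              return "metal"      # metals carry heat away quickly
          if hardness < 0.3:
              return "plastic"    # soft and thermally insulating
          return "wood"           # hard-ish but a poor heat conductor

      GRIP_FORCE_N = {"metal": 8.0, "wood": 5.0, "plastic": 2.5}

      def grip_force(thermal_cond, hardness, slipping=False):
          # Base force for the material; tighten slightly if slip is felt.
          force = GRIP_FORCE_N[classify_material(thermal_cond, hardness)]
          return force * 1.2 if slipping else force

      print(grip_force(0.9, 0.8))                 # metal -> 8.0 N
      print(grip_force(0.2, 0.1, slipping=True))  # plastic, slipping -> 3.0 N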

  • "Wagering on WiMax"
    eWeek (05/30/05) Vol. 22, No. 22, P. 22; Nobel, Carmen

    WiMax has been hyped as a technology that will offer a last-mile replacement for a land-line Internet connection as well as beefed-up Wi-Fi. WiMax's roadmap starts out with wireless, fixed last-mile connectivity, and eventually adds mobile broadband connectivity. Among the technology's promises are a range of several miles between client and base station, a median data throughput of 40 Mbps, and operation within licensed spectrum bands. Yet even WiMax supporters are now admitting the technology is not yet ready to accommodate the United States' fixed-wireless requirements; meanwhile, a mobile WiMax standard is nonexistent, and is likely to go up against competing 3G cellular technologies when it finally emerges. A fixed WiMax standard was approved by the IEEE last June and the WiMax Forum plans to start certification tests for fixed-wireless hardware next month, but so far attempts by carriers and equipment providers to implement fixed-wireless connectivity have come to naught. The emergence of a standard, coupled with aggressive publicity by Intel, could be the jolt WiMax needs; however, certified WiMax products slated to debut in the fourth quarter will operate in the 3.5 GHz band, which is chiefly used outside the United States. Siemens Communications CEO Andy Mattes reports that WiMax demands are heaviest among wire-line providers that wish to deliver broadband access to rural regions in underdeveloped countries. An IEEE-ratified standard for mobile WiMax is expected before 2006, but cost issues are making potential WiMax equipment providers careful, and skeptical that concepts such as personal broadband will be achievable in the short term.
    Click Here to View Full Article

  • "Evolving the Java Platform"
    Software Development Times (05/15/05) No. 126, P. 33; Hamilton, Graham

    Sun fellow Graham Hamilton writes that a key theme of the next Java 2 Standard Edition (J2SE) and Java 2 Enterprise Edition (J2EE) iterations is "ease of development," the need to maintain a balance between power, richness, and simplicity to ensure that the new Java specs are easy to use. J2SE 5.0, code-named Tiger, features a mechanism within the Java language that lets developers specify desired behavior by tagging source code with annotations. Currently under development are J2SE 6.0 (Mustang) and J2SE 7.0 (Dolphin): Mustang will include a full-scale scripting engine, among other features, while the Dolphin release is expected to include direct XML support and a new Java Virtual Machine instruction aimed at Groovy, Python, and similar "dynamic languages." J2EE 5.0, meanwhile, will feature significant changes to the transactional data access layer in Enterprise JavaBeans (EJB) 3.0. These changes involve substantial simplification of the persistence mapping between relational database tables and in-memory Java objects, as well as of the rules for defining an object as a transactional EJB. The Java platform will support Web services and a service-oriented architecture for distributed systems through the JAX-RPC standard, which is being significantly streamlined in J2EE 5.0 using Java language annotations to specify the definition and use of Web services. Such Web services are envisioned to support interoperability between Java and .NET.
    Click Here to View Full Article

  • "Integrating Geography and Real-Time Sensor Data"
    GeoWorld (05/05) Vol. 18, No. 5, P. 22; Lake, Ron

    Galdos Systems President Ron Lake writes that geography markup language (GML) has applications beyond vector geography and images, and is being implemented for real-time sensor data; he illustrates the viability of integrating geography and sensor data on a "plug and play" basis through the use of GML and Web Feature Servers (WFS). Lake notes that observation features in GML "model the act of observing" by providing several elements, including the location, time, subject, and result of the observation, as well as the instruments employed to carry it out. Observations can be expected to show up in WFS transactions and requests, which means that a WFS network can easily support a sensor web of interconnected WFSes. GML observations supply an architectural model, while the data produced by the observations is carried by the "gml:resultOf" property. Lake demonstrates how this concept is employed within a real-world sensor application such as real-time traffic management, using a project supported by Canada's Ministry of Transport as an example. The system has three WFSes: the first WFS logs observation data and writes received observations "pushed" by a sensor gateway, while the second WFS stores a real-time traffic model with GML dynamic features; the first WFS can trigger a policy that "fires" on specific observations and produces an update transaction against the second WFS. This traffic model carries dynamic data about the density and average speed of vehicles in each traffic link in the highway system, while the more static data about road geography is contained in the third WFS. The system also has a feature portrayal service that dynamically requests information from the WFSes and furnishes a color-coded traffic congestion map. (A rough sketch of such an observation document appears below.)
    Click Here to View Full Article
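
    A rough sketch, in Python, of a GML-style observation like those the
    article describes flowing through WFS transactions. Apart from
    gml:resultOf, which the article cites, the element names approximate
    GML observation structure and are not drawn from a validated schema:

      import xml.etree.ElementTree as ET

      GML = "http://www.opengis.net/gml"
      ET.register_namespace("gml", GML)

      obs = ET.Element(f"{{{GML}}}Observation")
      ET.SubElement(obs, f"{{{GML}}}validTime").text = "2005-06-03T08:15:00"
      ET.SubElement(obs, f"{{{GML}}}using").text = "loop-detector-42"
      ET.SubElement(obs, f"{{{GML}}}target").text = "highway-link-17"
      result = ET.SubElement(obs, f"{{{GML}}}resultOf")
      ET.SubElement(result, "vehicleDensity").text = "34"   # vehicles/km
      ET.SubElement(result, "averageSpeed").text = "72"     # km/h

      print(ET.tostring(obs, encoding="unicode"))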

  • "Web Future Is Not Semantic, or Overly Orderly"
    CIO Insight (05/05) Vol. 1, No. 53, P. 25; Nee, Eric

    Eric Nee disputes World Wide Web Consortium (W3C) director Tim Berners-Lee's vision of a Semantic Web in which computers can comprehend the meaning of information through the encoding of metadata within every piece of online content. He also dismisses as impossible the concept of an intelligent Semantic Web agent that can carry out complex tasks with the capability of a human being. Berners-Lee's strategy for imbuing Web content with meaning is to develop new standards for posting such content, but the W3C's Resource Description Framework (RDF), RDF Schema, and Web Ontology Language (OWL) standards have not been universally welcomed. OWL and RDF have a steep learning curve, take a long time to use, and are unworkable, according to experts such as XML developer and Sun Microsystems technology director Tim Bray. In contrast to Berners-Lee's approach, Google creators Sergey Brin and Larry Page developed software that infers the meaning of content from different indicators, thus avoiding the need to change processes for posting content. "I'd rather make progress by having computers understand what humans write, than by forcing humans to write in ways that computers can understand," Brin told InfoWorld's 2002 CTO Forum. Challenges the Google approach fails to address can be tackled by other technologies--RSS and XML, for instance--that are simpler to use than Semantic Web standards. Nee predicts that the Internet will evolve not into Berners-Lee's highly ordered Semantic Web, but into "a patchwork quilt of homegrown solutions and standards."
    Click Here to View Full Article


 

 