Association for Computing Machinery
Timely Topics for IT Professionals

About ACM TechNews

ACM TechNews is published every week on Monday, Wednesday, and Friday.


ACM TechNews is intended as an objective news digest for busy IT professionals. Views expressed are not necessarily those of either AutoChoice Advisor or ACM. To send comments, please write to technews@hq.acm.org.
Volume 6, Issue 704: Friday, October 8, 2004

  • "Panel: Kerry Would Take New Approach to Tech Issues"
    IDG News Service (10/05/04); Gross, Grant

    Policy analysts discussed the different approaches to technology espoused by President Bush and Democratic challenger Sen. John Kerry (D-Mass.), and what those philosophies would mean for broadband adoption, cybersecurity, and IT jobs. Kerry would get government more involved, offering tax breaks to companies in order to encourage broadband rollout, for example, while Bush would likely continue a market-driven approach that focuses on deregulation, according to the mostly partisan panelists. Kerry would fund government-sponsored broadband rollout by auctioning off unused broadcast spectrum--a move that has been opposed by television stations--but would not likely have the money invested in time, said Progress and Freedom Foundation research president Thomas Lenard, a Bush supporter. Progressive Policy Institute vice president Robert Atkinson accused President Bush of being AWOL on many technology issues, noting that Congress has been more proactive in dealing with them. Former Department of Commerce assistant secretary for technology policy Bruce Mehlman said the Bush administration had a strong record of taking action on technology while Kerry had very little to do with technology issues during his 20-year Senate tenure. Under Bush, the federal government secured radio spectrum for 3G technology, approved ultrawideband rules, and supported a spectrum trust fund that would reimburse federal agencies for spectrum given over for commercial use. Kerry has also taken a tough stance on free trade that would hurt the technology industry, calling U.S. CEOs who outsource work overseas "Benedict Arnolds."
    Click Here to View Full Article

  • "Online Extra: Industry and Academia Weigh In (Extended)"
    Business Week (10/11/04); Mandel, Michael; Hamm, Steve

    The goal of the Council on Competitiveness' National Innovation Initiative is to outline an action plan to maintain the United States' vanguard position as an innovator in the global economy. Georgia Institute of Technology President G. Wayne Clough and IBM CEO Samuel Palmisano, who co-chair the initiative, warn that the United States could lose its competitive edge without such a strategy. Clough notes that the country's push for research and development has slackened since the Cold War ended, and this complacency could prove detrimental as global competition heats up with the addition of some 3 billion people to the worldwide economy. Palmisano foresees the economy being driven by cross-disciplinary innovations in fields such as bioinformatics, broadband infrastructure, and hydrogen fuel cells, resulting in about 100 million new jobs. Clough observes that the United States is losing ground to overseas competitors because of a decline in U.S. engineering graduates as well as fall-offs in foreign-born students enrolled in U.S. schools due to visa difficulties. Palmisano says the country's closed-mindedness is unhealthy, and the National Innovation Initiative seeks to promote the value of global interaction. Clough thinks the Energy and Defense departments need to invest more in basic research, while the National Science Foundation needs additional funding; he also believes a new National Defense Education Act would encourage American students to go to graduate school by offering fellowships and scholarships. Palmisano explains that strengthening the U.S. workforce requires everyone--the government, the private sector, and academia--to contribute, and part of this strategy involves getting younger people interested in technology earlier and attracting more diverse groups to technical disciplines.
    Click Here to View Full Article

  • "Experts Envision Taillights That Talk"
    CNet (10/05/04); Kanellos, Michael

    Trapped people calling for help by bringing their cell phones near a ceiling light to relay their location to emergency workers, and cars exchanging information about road conditions via their headlights and taillights: These are just some of the applications envisioned in the Visible Light Communications Consortium's proposal to use light-emitting diodes (LEDs) for high-speed data transmission. The concept is being publicly demonstrated for the first time at this week's CEATEC conference in Tokyo: One prototype system features headphones that play different musical tracks when lights beaming on the wearer change color. Keio University professor Masao Nakagawa, who first conceived of LED data transmission seven years ago, says the consortium is attempting to exploit the spread of LEDs and their capabilities. He predicts that LEDs will supplant most light bulbs and then fluorescent bulbs in the next 10 years or so. LEDs discharge light at a particular bandwidth, and engineers tap into the light with modulators that split the light into data that a computer can translate as 1s or 0s. The system is composed of LED lights, receivers, and small silicon chips that support two-way transmission between the LED and a bank of servers; an antecedent, the Photophone, was invented by Alexander Graham Bell over 100 years ago. Keio University information and computer science professor Shinichiro Haruyama lists several advantages light has over other forms of wireless communication, including better detection and prevention of eavesdropping and support for more data streams. LED systems could also be more protective of privacy, as their reliance on line of sight means that an action as simple as shutting a door or pocketing a cell phone can sever communications.
    Click Here to View Full Article
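
The modulation described above--switching an LED faster than the eye can follow and thresholding the received light back into 1s and 0s--can be sketched as simple on-off keying. The sample rate and intensity levels below are illustrative assumptions, not details of the consortium's system:

```python
# Sketch of on-off keying (OOK), the simplest way an LED can carry bits:
# the transmitter holds the light on for a 1 and off for a 0, and the
# receiver averages and thresholds the measured intensity back into bits.

def modulate(bits, samples_per_bit=4, on_level=1.0, off_level=0.05):
    """Turn a bit string into a list of light-intensity samples."""
    signal = []
    for bit in bits:
        level = on_level if bit == "1" else off_level
        signal.extend([level] * samples_per_bit)
    return signal

def demodulate(signal, samples_per_bit=4, threshold=0.5):
    """Average each bit period and threshold it back into a bit."""
    bits = []
    for i in range(0, len(signal), samples_per_bit):
        period = signal[i:i + samples_per_bit]
        mean = sum(period) / len(period)
        bits.append("1" if mean > threshold else "0")
    return "".join(bits)

message = "1011001"
assert demodulate(modulate(message)) == message
```

Line-of-sight severing, as Haruyama notes, falls out naturally: block the light path and the received samples all fall below the threshold.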

  • "Peer-to-Peer Comes Clean"
    Technology Review (10/06/04); Garfinkel, Simson

    Simson Garfinkel envisions many positive applications for peer-to-peer (P2P) technology, despite the bad rap it has received for underpinning notorious file-sharing networks such as Morpheus and Kazaa. Emerging P2P systems of interest include Penn State University's LionShare, a series of networks where academics can catalog and export their personal files, and thus exchange scholarly information amongst themselves. Another notable P2P system is a spam-filtering tool called Vipul's Razor that deploys a small software agent on every computer in the network; the agent senses the arrival of email, and determines it to be spam if the message shows up in multiple locations at roughly the same time. Garfinkel also takes note of BitTorrent, a P2P system designed to permit small software publishers to more widely distribute their programs by replicating popular downloads across scores of individually owned PCs rather than hosting the downloads on costly server farms. The author writes that academics have been researching P2P, with particular emphasis on the creation of databases shared between multiple computers throughout the Internet, also known as distributed hash tables (DHTs); the optimal configuration can automatically locate Net-linked computers that are DHT components, store data redundantly on numerous machines, shield information with encryption and digital signatures, and sustain participants' motivation and honesty by supporting distributed reputation, trust, and payment systems. Examples of real-world P2P applications academics are pursuing include papers presented at the IEEE's Fourth International Conference on Peer-to-Peer Computing, which covered projects that aim to increase the scale of unstructured P2P networks and build P2P networks in response to shifts in the foundational Internet.
    Click Here to View Full Article
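
The distributed hash tables Garfinkel describes place keys and nodes on the same identifier space, so any participant can independently compute which machines should hold a given item, and storing each item on several successive nodes provides the redundancy he mentions. A minimal consistent-hashing sketch (node names and the replication factor are illustrative):

```python
import hashlib
from bisect import bisect_right

def ring_id(name):
    """Map a string onto a shared identifier space via SHA-1."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

class TinyDHT:
    """A toy distributed hash table: keys and nodes share one hash ring,
    and each key is stored on the `replicas` nodes that follow it."""

    def __init__(self, nodes, replicas=2):
        self.ring = sorted((ring_id(n), n) for n in nodes)
        self.replicas = min(replicas, len(nodes))

    def nodes_for(self, key):
        """Return the nodes responsible for `key` (successor plus replicas)."""
        ids = [rid for rid, _ in self.ring]
        start = bisect_right(ids, ring_id(key)) % len(self.ring)
        return [self.ring[(start + i) % len(self.ring)][1]
                for i in range(self.replicas)]

dht = TinyDHT(["node-a", "node-b", "node-c", "node-d"])
# Every participant computes the same owners for the same key,
# with no central directory.
assert dht.nodes_for("course-notes.pdf") == dht.nodes_for("course-notes.pdf")
```

Real DHT research adds what this sketch omits: routing to the successor in O(log n) hops, encryption and signatures on stored data, and the reputation and payment mechanisms the article describes.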

  • "Connecting Paper and Online Worlds by Cellphone Camera"
    New York Times (10/07/04) P. E5; Heingartner, Douglas

    Researchers at an Intel-funded lab at England's Cambridge University and elsewhere aim to turn camera-equipped cell phones into multifunctional handhelds that gather information in public settings, relieving users of the burden of working with impersonal, sometimes faulty public kiosks. The Cambridge scientists have developed a cell phone whose camera scans symbols called SpotCodes, which launch given services when clicked; SpotCodes can be placed on nearly any surface. Software turns the camera lens into a scanner that reads SpotCodes or other recognizable tags, and the information encoded within the tags appears on the phone's display. A prototype service from the SpotCode team allows users to display flight information on their phones by pointing and clicking on an overhead plasma screen. The practicality of such services has only recently emerged, concurrent with the rapid proliferation of camera-equipped cell phones. Asia has become a major market for such applications in the last two years: In Japan, for instance, users of camera-equipped cell phones can locate nearby ATMs, assess the freshness of food in local marketplaces, or get additional information about museum exhibits by scanning markers with the devices. The fact that the scanning software is installed on most new phones by major Asian mobile operators is key to the technology's success. Companies such as NeoMedia Technologies and Scanbuy are launching similar applications in North America, primarily for retail.
    Click Here to View Full Article
    (Access to this site is free; however, first-time visitors must register.)

  • "Page Layout Drives Web Search"
    Technology Research News (10/13/04); Patch, Kimberly

    Deng Cai of the University of Illinois at Urbana-Champaign notes that search engines could become more accurate if they performed block-level rather than page-level analysis of the Web. This technique would eliminate errors resulting from following two assumptions that often prove false: That links on Web pages communicate human endorsement, and that connected pages probably contain related subject matter. Cai and fellow researchers with Microsoft Research Asia and China's Tsinghua University developed a prototype system that uses a Vision-Based Page Segmentation algorithm to demarcate the various components of a Web page according to how a person sees the page. The algorithm sections the page using horizontal and vertical lines, and assesses blocks of content according to their position on the page. By extracting page-to-block and block-to-page relationships (the former determined by page layout analysis and the latter determined by the likelihood of a block connecting to a given page), the prototype can rank Web pages and build page and block graphs; this information is then fed to link-analysis algorithms that evaluate each page's importance according to the type of blocks that link into it. Cai explains that these algorithms can model the Web's intrinsic semantic architecture from this data; this process bears a certain resemblance to the World Wide Web Consortium's Semantic Web project, although the widespread adoption of tags and other software is not required to facilitate Web page parsing. Cai says a practical search system would not be taxed by a block-level PageRank function since it can be calculated offline. The prototype's developers detailed their work in late July at the Association for Computing Machinery Special Interest Group Information Retrieval (SIGIR) 2004 conference.
    Click Here to View Full Article
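
The two relationships Cai's group extracts can be thought of as weights on an otherwise ordinary link graph: a link counts for more when it sits in an important block of the source page. The sketch below runs standard power-iteration PageRank over such a weighted graph; the pages, weights, and damping factor are illustrative, not the prototype's actual values:

```python
def pagerank(out_links, damping=0.85, iters=50):
    """Power iteration over a weighted link graph.

    out_links maps page -> {target: weight}, where the weight stands in
    for the importance of the block each link appears in (a hypothetical
    proxy for the page-to-block/block-to-page analysis)."""
    pages = list(out_links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for src, targets in out_links.items():
            total = sum(targets.values())
            for dst, w in targets.items():
                # Each page's rank flows out along its links in
                # proportion to the block weight of each link.
                new[dst] += damping * rank[src] * w / total
        rank = new
    return rank

# A link sitting in a prominent content block (weight 0.9) counts for
# more than one buried in a navigation bar (weight 0.1).
graph = {
    "A": {"B": 0.9, "C": 0.1},
    "B": {"C": 1.0},
    "C": {"A": 1.0},
}
ranks = pagerank(graph)
assert abs(sum(ranks.values()) - 1.0) < 1e-6
```

Because the iteration depends only on the link graph, it can run entirely offline, which is why Cai says a block-level PageRank would not tax a production search system.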

  • "Adaptive Refinement Promises Faster Rendering on Graphics Chips"
    PhysOrg.com (10/04/04)

    University of California, San Diego computer scientists have found a way to inexpensively speed up the rendering of realistic, computer-generated scenes by several orders of magnitude by applying adaptive refinement to graphics processing units (GPUs) for the first time. Jacobs School of Engineering Ph.D. candidate Craig Donner and computer science and engineering professor Henrik Wann Jensen developed a method to render scenes faster by tiling the screen and winnowing out tiles that are no longer active--a technique that Jensen says exploits "computational similarity, coherence and locality." Determination of inactive tiles is accomplished via "occlusion queries" enabled by a feature embedded in most recent graphics processors. The usual approach to GPU computation is to rasterize a piece of geometry onto a screen so that it encompasses the entire display area, and then concentrate on each individual pixel. Donner and Jensen tested the adaptive refinement technique on photon mapping (modeling a scene's indirect illumination through multiple passes) and the Mandelbrot Set graphics algorithm, and uncovered significant performance improvements: The Mandelbrot Set test sped up rendering more than threefold, while the photon mapping test yielded a nearly 300-fold increase, leading Donner to conclude that "the more complex your calculation, the more you gain with this method." The researchers learned that the optimal tile width is between eight and 32 pixels on current equipment, while subdividing yields the best results when between 40 percent and 60 percent of the fragments are active. The method was detailed at the SIGGRAPH conference in August.
    Click Here to View Full Article
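
The tiling idea translates directly to the Mandelbrot test case: advance all pixels in a tile a few iterations per pass, and once every pixel in a tile has escaped, retire the tile so later passes skip it. The sketch below is a CPU stand-in for the GPU occlusion queries the researchers use; image size, tile size, and iteration counts are illustrative:

```python
def mandelbrot_tiled(width=64, height=64, tile=8, step=16, max_iter=128):
    """Escape-time Mandelbrot with adaptive tile refinement: each pass
    advances every *active* tile by `step` iterations, and a tile is
    retired once all of its pixels have escaped."""
    def c_at(x, y):
        return complex(-2.0 + 3.0 * x / width, -1.5 + 3.0 * y / height)

    z = [[0j] * width for _ in range(height)]
    iters = [[0] * width for _ in range(height)]
    escaped = [[False] * width for _ in range(height)]
    active = {(tx, ty) for tx in range(width // tile)
                       for ty in range(height // tile)}

    for _ in range(max_iter // step):
        for tx, ty in list(active):
            done = True
            for y in range(ty * tile, (ty + 1) * tile):
                for x in range(tx * tile, (tx + 1) * tile):
                    if escaped[y][x]:
                        continue
                    for _ in range(step):
                        z[y][x] = z[y][x] ** 2 + c_at(x, y)
                        iters[y][x] += 1
                        if abs(z[y][x]) > 2.0:
                            escaped[y][x] = True
                            break
                    else:
                        done = False  # at least one pixel is still iterating
            if done:
                active.discard((tx, ty))  # the "occlusion query": tile finished
    return iters, active

iters, still_active = mandelbrot_tiled()
# Tiles far outside the set escape quickly and are retired early;
# tiles covering the set's interior stay active to the last pass.
```

The payoff matches Donner's observation: the costlier each per-pixel step is, the more is saved by never touching finished tiles again.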

  • "Data Visualization: Virtual Geographic Environments Combining AES and GIS"
    Directions (10/05/04); Lin, Hui; Qing, Zhu

    Virtual geographic environments (VGEs) effectively combine the data of geographic information systems (GIS) and architecture/engineering/construction (AES) fields. By presenting data in a 3D format with audio, haptic, and other multisensory augmentation, VGEs are able to furnish the user with a fuller picture of the complete data set. Since the days of cartographic maps, visualization has been an effective way to present data so that people can easily understand it and grasp underlying meanings; with computer technology, interactive features have enhanced the ability of visual maps to convey information. VGEs remove some of the final information encoding/decoding processes so that users can explore and interact with a virtual representation of the natural world--however, VGEs differ from the unrealistic expectations usually associated with virtual reality because they focus on representing the network, geographic data, multidimensional presentation, sensory/perceptual, and social spaces in a continuous manner. This combination of abstract spaces supports exploration, prediction, and planning for spatial decision-making, and can help uncover unexpected knowledge. And while VGEs provide the important visual representation of these combined spaces, the addition of other data representations, such as audio, allows such systems to be more easily used on mobile devices. Human-computer communication should take advantage of the complete range of sensory channels in order to utilize the full information "bandwidth" between the user and the system. This goal is accomplished for spatial geographic information when GIS and AES are merged in VGEs.
    Click Here to View Full Article

  • "RFID: Getting From Mandates to a Wireless Internet of Artifacts"
    Computerworld (10/04/04); Gadh, Rajit

    Supplier mandates from Wal-Mart, the Department of Defense, and others have made radio frequency identification (RFID) technology a priority project for many companies, but the tight timetable for deployment has led many vendor firms to adopt a "slap-and-ship" model that does not bring any internal efficiencies, writes Rajit Gadh, director of the University of California, Los Angeles, (UCLA) Wireless Internet for Mobile Enterprise Consortium (WINMEC). In order to reap the full benefits of RFID, manufacturers and other partners in the supply chain need to carefully plan for a more widespread deployment that will eventually enable a wireless "Internet of Artifacts," wherein items intelligently communicate with each other in order to automatically effect business benefits. To begin with, RFID hardware needs to be made more robust so it can work in sub-optimal circumstances, and the production environment must be studied in order to plan for RFID, possibly with the use of simulation software. A well-planned, scalable, robust, and efficient wireless architecture provides the basis for the Internet of Artifacts, where objects announce their own arrival and can be tracked; ad hoc networks can be easily set up and dismantled. Reaching this goal on the one hand requires significant research into device-level communication, data exchange at the middle layer, and business logic at the upper layer. On the other hand, RFID technology has already received a huge boost from large supplier mandates. At the upcoming RFID Forum this month, WINMEC will present plans for industry-partnered pilot programs to be launched this year. Over the next few years, WINMEC will work to bring together user companies and technology vendors to collaborate on interfaces, middleware, and protocols to be used for the Internet of Artifacts.
    Click Here to View Full Article

  • "Access to All Europe's Websites"
    IST Results (10/07/04)

    A trio of European projects have made important contributions to the adoption of the international Web Accessibility Initiative (WAI) guidelines by public Web sites. Under the auspices of the World Wide Web Consortium (W3C), the WAI effort calls for sites to be universally accessible from any device by anyone: Achieving this goal requires sites to cleanly implement HTML for structural control and stylesheets for the presentation within HTML. Improving Web site accessibility for the disabled was the main agenda of WAI-DE, the first EU-funded project; its successor, the Design for All or WAI-DA project, concentrated on boosting Web accessibility in EU-member states in addition to upholding and supplementing the W3C's WAI effort; the third initiative, the IST program-coordinated WAI-TIES project, is following a similar agenda while also stressing Europe-specific training, education, and deployment support activities. The WAI has devised three complementary accessibility guidelines--for Web content, user agents, and authoring tools--that were introduced to many European site developers through WAI-DA, notes W3C European Web-accessibility specialist Shadi Abou-Zahra. Anyone can increase site accessibility and comply with the WAI recommendations using commercial authoring software, specifically tools that accommodate W3C/WAI's Authoring Tool Guidelines, which offer more accessibility for handicapped Web authors. Retrofitting Web sites is a goal of WAI-DA, which possesses a growing database of tools for assessing current sites and also promotes best practices. WAI-TIES, meanwhile, is coordinating its work with global standardization organizations so that Web accessibility requirements in Europe are harmonized.
    Click Here to View Full Article

  • "Where's the Simplicity in Web Services?"
    CNet (10/05/04); LaMonica, Martin

    A small group of programmers says Web services specifications are growing out of control and overcomplicating development, but other experts note that the added overhead is worthwhile, considering the interoperability challenges many companies face. Most recently, Sun Microsystems Web technologies director and XML co-inventor Tim Bray said Web services standards had become "bloated, opaque, and insanely complex." There are more than 30 specifications enabling security, reliability, and other features built atop basic Web services protocols such as SOAP (Simple Object Access Protocol) and WSDL (Web Services Description Language); these extensions make Web services more flexible and able to tackle complex enterprise computing challenges. Although Bray and a number of other programmers have complained that these new specifications lay too much of a burden on programmers, the specifications are proliferating, including the WS-Transfer, WS-Enumeration, and WS-MetadataExchange protocols issued several weeks ago. Independent software consultant Mike Gunderloy says the added technologies detract from the simplicity Web services were supposed to provide, and that Representational State Transfer (REST) is more efficient in terms of performance and development; REST uses existing Internet protocols, especially HTTP, to send XML documents and enable applications. But analyst Ron Schmelzer says this approach, while more efficient in some cases, undermines the basic interoperability promise of Web services. Of Web services, he says, "It's not meant to be optimal--it's what companies can agree to do together, given that they have very different products." Other Web services defenders note that development tools mask much of the specification complexity from developers anyway.
    Click Here to View Full Article
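
The contrast Gunderloy draws is concrete at the message level: REST expresses an operation as a plain HTTP request against a URL, while SOAP wraps the same request in an XML envelope that tooling must build and parse. The service, operation, and parameter names below are hypothetical:

```python
import xml.etree.ElementTree as ET

# A hypothetical stock-quote lookup, expressed both ways.

# REST style: the operation is just an HTTP GET on a resource URL;
# existing Web infrastructure (caches, proxies, logs) understands it.
rest_request = "GET /quotes/IBM HTTP/1.1"

# SOAP style: the same operation wrapped in an envelope that must be
# POSTed to a single endpoint and parsed out of the XML body.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
call = ET.SubElement(body, "GetQuote")        # hypothetical operation
ET.SubElement(call, "symbol").text = "IBM"
soap_request = ET.tostring(envelope, encoding="unicode")

# Both carry the same single piece of information -- the ticker symbol --
# but the SOAP form needs an XML parser and namespace handling on both ends.
parsed = ET.fromstring(soap_request)
assert parsed.find(f"{{{SOAP_NS}}}Body/GetQuote/symbol").text == "IBM"
```

This is also where Schmelzer's counterpoint lives: the envelope's overhead is the price of a format that WS-Security, WS-ReliableMessaging, and the other extensions can attach headers to uniformly.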

  • "Device Translates Spoken Japanese and English"
    New Scientist (10/07/04); Knight, Will

    A handheld device that facilitates Japanese-to-English and English-to-Japanese translation of spoken language is expected to make its Japanese debut in the coming months. The NEC gadget uses a speech recognition engine to identify spoken English or Japanese and convert it into text, software to translate the language into its Japanese or English equivalent, and a voice synthesizer to generate the translation vocally--all in approximately one second. Carnegie Mellon University machine translation expert Alan Rudnicky believes the NEC system will only be applicable under certain circumstances, and should not be viewed as a replacement for normal verbal communication. "It's better to think of these as communication devices that enable two people to communicate across a language barrier," he argues. The product will initially be marketed for Japanese tourists and business travelers, although NEC researcher Akitoshi Okumura thinks the system can be adjusted to translate other languages. Training the system to recognize native speakers is essential to its performance, and Okumura says this process requires the sampling of roughly 100 unique voices. The researcher thinks a workable translator for mobile phones is a possibility, although such a breakthrough requires functionality upgrades that include the ability to distinguish a voice from background noise and recognize different accents.
    Click Here to View Full Article

  • "Click to Touch"
    e4Engineering (10/04/04)

    Nicola Davison is commercializing Click 2 Touch, a software program developed at Nottingham Trent University that allows people to virtually examine the quality of various fabrics. She believes such a tool will help dramatically reduce the volume of returned items purchased online, and encourage more shoppers to buy clothing over the Internet. Click 2 Touch, which Davison is marketing under a company of the same name, supplies realistic "sensations" for the feelings of softness, fullness, drape, smoothness, hairiness, prickliness, elasticity, rigidity, thickness, and warmth as users drag a mouse over virtual 3D animations of different fabrics. For instance, users can raise the edge of a garment and drop it back down using the thickness sensation, or stretch the garment via the elasticity application. Users can also spin the images and zoom in on details with the mouse. "The program is also designed to make shopping online fun and rewarding rather than just clicking through pages," notes Davison, who intends to apply the software to home furnishings in the future. The Department of Trade and Industry recently awarded Davison's company a grant to further develop Click 2 Touch. Almost 40 percent of goods bought online currently end up being returned.
    Click Here to View Full Article

  • "The Need to Keep Congress Fully Informed"
    CircleID (10/04/04); Levinson, Bruce

    ICANN is not living up to the requirements of its MOU with the U.S. Department of Commerce, according to one Internet governance expert. ICANN's MOU agreement with the Commerce Department requires that the Internet body reach certain goals by specific dates; among the requirements set forth by the MOU is that ICANN define "a predictable strategy for selecting new TLDs using straightforward, transparent, and objective procedures that preserve the stability of the Internet." The MOU states that the development of this strategy must be complete by Sept. 30, 2004, and that the strategy be implemented by Dec. 31, 2004. As called for by the MOU, ICANN did indeed submit a "strategy document" by the Sept. 30 deadline, but Internet governance expert Christopher Ambler has severely criticized the document as a rushed, insufficient attempt to meet a deadline. Ambler's background includes stints with several groups involved in Internet governance, including ICANN, ICANN's DNSO, the Internet Engineering Task Force, the International Forum on the White Paper, and the Open Root Server Confederation. Ambler says the poor quality of ICANN's document suggests that the group lacks a strategy for selecting new TLDs and that it will be nearly impossible for ICANN to implement any such strategy by Dec. 31. "The Department of Commerce should reject this document as completely insufficient," says Ambler, adding that the situation is akin to "having a term paper due, and submitting just the bibliography." In a curious move, an NTIA official testified before the U.S. Senate that ICANN is expected to "complete and implement a predictable strategy for selecting new TLDs by Dec. 31, 2004," while omitting the fact that the strategy itself was to have been completed by Sept. 30.
    Click Here to View Full Article

  • "Security Flaws in Popular Chess Web Site Found by University of Colorado Team"
    University of Colorado at Boulder News Center (10/04/04)

    University of Colorado at Boulder students hacked the 30,000-plus-member Internet Chess Club as part of research funded by the National Science Foundation. With guidance from University of Colorado at Boulder computer security researcher John Black, two students reverse-engineered the service and built a timekeeping mechanism that would allow them to adjust their time allotments against opponents, and a sniffer program that could steal passwords and credit card numbers passing between users and the server. Black, an assistant professor of computer science in CU-Boulder's College of Engineering and Applied Science, says the team will not release the code they used to hack the service, and offered the Internet Chess Club simple suggestions on how to close its security holes. Teaching security through hacking is a controversial practice, but good defenders need to understand how their opponents operate, says Black. The Internet Chess Club started as a small operation, but as the business scaled in size, security enhancements were neglected. The most important lesson drawn from the exercise is that security is best left to experts, says Black, who teaches a course on computer and network security and received a CAREER award from the National Science Foundation in 2002. Black's paper, "How to Cheat at Chess: A Security Analysis of the Internet Chess Club," is available at the International Association for Cryptologic Research's Cryptology ePrint Archive, at:
    Click Here to View Full Article

  • "Center for the Digital Future Identifies the 10 Major Trends Emerging in the Internet's First Decade of Public Use"
    USC Annenberg Center for the Digital Future (09/23/04)

    The USC Annenberg School Center for the Digital Future recognizes 10 key trends demonstrating the Internet's impact on America during its first 10 years of public use. The report for Year Four of the Digital Future Project indicates that both Internet access and the number of hours spent online have hit record highs while continuing to increase; online information's initially high credibility level has been dipping steadily since year three of the study; users who believe that only about half the information on the Internet is reliable and accurate comprise more than 40 percent of the user population for the first time; credibility of information for users is highest on regularly visited Web sites and pages created by established media and the government, and lowest on pages posted by individuals; and Internet users are watching television less and less. Another identified trend is a narrowing of America's digital divide in terms of Internet access, although new divides in broadband access and home Internet access are emerging. The evolution of online retail has only just begun, as evidenced by more frequent online purchases by Internet users and a reduction in the percentage of users concerned about information privacy. Although email remains the single most popular activity for Internet users, there are indications that more experienced users answer email less often. Other noted trends include: A lack of definition over the issue of the Internet's advantages and disadvantages for children; the Internet serving as the leading source of information for very experienced users; growing concern, among both users and non-users, about personal privacy and security in every facet of online use; and the dissolution of the perception that Internet use is a socially isolating activity, as demonstrated by continued evidence that Internet users are more socially active and less alienated than non-users. 
Finally, always-on broadband Internet access is expected to change Internet use to the degree that a lack of broadband will be perceived as tantamount to a lack of Internet access, period.
    Click Here to View Full Article

  • "Mission: Critical"
    Information Security (09/04) Vol. 7, No. 9, P. 26; Barlas, Stephen; Earls, Alan; Fitzgerald, Michael

    An Information Security survey of professionals in the financial, energy, transportation, telecom, and government sectors highlights the vulnerability of the U.S. critical infrastructure to online attack: Fifty-one percent of financial services professionals say their industry is not prepared for cyberattacks, a sentiment echoed by 57 percent of energy industry respondents, 65 percent of transportation industry respondents, 60 percent of telecom workers, and 62 percent of federal IT/security personnel. Still, most respondents agree that their sector is better prepared for cyberattacks than it was before Sept. 11, 2001. The cyberterrorist threat has spurred workforce, infrastructure, and data redistribution, as well as the erection of flexible backup centers and lines of communication, among financial institutions; sector-wide collaboration to understand and protect against individual and collective threats is being facilitated by data exchange channels such as the Financial Services Information Sharing and Analysis Center. The energy sector's cyber-vulnerability is growing as the supervisory control and data acquisition (SCADA) systems that direct the majority of energy automation link to the Internet, and the industry's response has been to build security standards and information sharing, while the departments of Homeland Security (DHS) and Energy are studying and lowering risks through a National SCADA Testbed. Each sub-sector of the transportation industry is exploring and implementing cybersecurity strategies, with air transportation being scrutinized the most because of privacy issues related to the personal data airlines are collecting. Telecom experts are more fearful of the damage potential of a multi-pronged assault than of a single attack, but few think such a siege would cripple the United States.
Especially frustrating are the poor marks the DHS has been receiving from security experts, although the government security improvement budget will increase while administrative bodies such as the DHS' National Cyber Security Division will continue to disseminate security information to both federal and private entities, stage incident response exercises, and build more secure government networks.
    Click Here to View Full Article

  • "The Time Is Now"
    GeoWorld (09/04) Vol. 17, No. 9, P. 26; Kucera, Gail; Kucera, Henry

    More timely responses to events as varied as market fluctuations, disease outbreaks, environmental changes, natural catastrophes, and security threats can be effected through the proper incorporation of spatial and temporal change information into models. A study sponsored by the National Technology Agency seeks to address the problems of practical, commercial time-varying geospatial information tracking by identifying a set of core requirements and development opportunities using a value-chain scheme that classifies five activities as communities of interest: Instrument and component builders, who develop data collection technology by creating sensors, instruments, platforms, and interfaces; data collectors, who deploy sensors and instruments to collect data; data refiners, who use quality-assurance processes to transform raw data into certified data products; information integrators, who combine data elements from diverse sources to build specialized or tailored information sets for specific audiences or goals; and decision supporters, who analyze the information in keeping with specific objectives and present the results to inform decision makers. Users' concerns, mapped onto the value chain, skew to the right side of the chain, where better decisions are facilitated through the generation of information commodities via value-added processes. The functional capabilities required to satisfy those concerns skew to the left, highlighting the difficulty of generating a dependable stream of time-varying geospatial information. Development opportunities exist where good technology options are unavailable or where emerging technologies show promise for solving outstanding problems. Technologies identified as development opportunities for tracking geospatial change over time include sensor webs; semantic exploitation; conflation or fusion of time-varying data; spatio-temporal pattern analysis; dynamic decision aids; and geographic profiling.
Standards, however, still fail to accommodate a number of schema requirements for time-varying geospatial change-tracking applications.
    Click Here to View Full Article

  • "Internet Measurement"
    Internet Computing (10/04) Vol. 8, No. 5, P. 30; Brownlee, Nevil; claffy, kc

    Key to sustaining the health of the Internet is network measurement, the process of understanding its growth mechanisms and the factors that can inhibit this growth on both a global and local level, write Nevil Brownlee of the University of Auckland and kc claffy of the Cooperative Association for Internet Data Analysis. The authors cite two critical developments that brought the value of network measurement out of the shadows: The cessation of funding for the U.S. Internet backbone by the National Science Foundation, and the subsequent explosion of the Internet community spurred by the development and release of the fundamental protocols and software underpinning the Web, along with a more user-friendly graphical interface. A dearth of resources long made network measurement a difficult proposition, but the bursting of the dot-com bubble and a slowdown in Internet growth fueled interest among some providers in understanding network behavior in order to leverage physical resources to their fullest. The growing threat of spam, malware, and other kinds of infrastructural pollution has further stimulated interest in network measurement. Brownlee and claffy take note of major challenges to network measurement and Internet modeling, as outlined by S. Floyd and V. Paxson: The Net is in a constant state of flux, its global scale makes measurement difficult, and fine-grained measurement is inherently supported by very few Internet protocols and applications. In addition, no entities or organizations have assumed or been handed the responsibility for developing measurement technologies and examining the data they would capture. The Internet's structure and performance have been widely misread for many years because information about network topology and function has been in relatively short supply. The authors maintain that "a pervasive community commitment" to network measurement is vital to the continued viability of the Internet.
    Click Here to View Full Article