ACM TechNews sponsored by Thunderstone. Learn more about Texis, the text-oriented database providing high-performance search engine features combined with SQL operations and a development toolkit, which powers many diverse applications, including Webinator and the Thunderstone Search Appliance.
ACM TechNews is intended as an objective news digest for busy IT Professionals. Views expressed are not necessarily those of either Thunderstone or ACM. To send comments, please write to [email protected].
Volume 7, Issue 767: Friday, March 18, 2005

  • "As File Sharing Nears High Court, Net Specialists Worry"
    New York Times (03/17/05) P. C5; Markoff, John

    Technologists attending this week's Emerging Technologies Conference in San Diego warned that the Supreme Court's decision in the case of MGM v. Grokster could have serious ramifications for innovative Internet-based services if the court rules in favor of the complainant and makes technology creators legally responsible for abuses committed by users. The music and movie industries are specifically targeting the Grokster and Streamcast online file-sharing services in their lawsuit, but tech advocates said new services where the sharing of copyrighted music and films is irrelevant could also be endangered by a court decision that establishes liability. A deciding factor in the case could be the entertainment industry's contention that consumer electronics designers can now develop technology capable of distinguishing between lawful and unlawful file copying. Technologists are concerned that the court's acceptance of this argument could allow Hollywood to control digital technology's technical specifications to such a degree that future innovation would be stifled. Services exhibited at the conference that could be affected by such a decision include new A9 search engine features from Amazon.com that allow Web users to easily share customized searches; Flickr, which permits the sharing and cataloging of digital photos by bloggers and Web surfers; iFabricate's service for sharing instructions for do-it-yourself construction projects among home inventors; and the volunteer-directed Wikipedia online encyclopedia.
    Click Here to View Full Article
    (Articles published within 7 days can be accessed free of charge on this site. After 7 days, a pay-per-article option is available. First-time visitors will need to register.)

  • "Web Tools Blaze Trail to the Past"
    CNet (03/17/05); Festa, Paul

    Google's use of old-school Dynamic HTML technologies when building its Gmail and Google Maps applications has spurred discussion among Web developers about whether new Web application technologies are needed. Google Maps and Gmail functions and design put them in a different product category from competing services, says Laszlo Systems CTO David Temkin, whose company's Web application system provides the framework for Earthlink's new Web mail. The developer community has been in a furor in recent weeks, debating the value of older, established Web technologies such as JavaScript and Cascading Style Sheets. Adaptive Path co-founder Jesse James Garrett coined a new acronym for the renewed use of older Web application technologies in his blog; the new term, Asynchronous JavaScript + XML, or AJAX, promotes the advanced use of JavaScript. These older technologies have a huge developer base, and support for them is already embedded in Web browsers, but building applications such as Gmail and Google Maps with them demands masterful skills. Google hired former Microsoft DHTML inventor Adam Bosworth to work on the projects, says Macromedia's David Mendels. JavaScript inventor Brendan Eich represents a breakaway group from the World Wide Web Consortium that says the proposed XForms standard is unnecessary; the Web Hypertext Application Technology Working Group is composed mostly of browser vendors that plan to write a specification for using existing Web languages to build applications. Google says it simply chose the best option for its new applications and does not take an ideological position on the issue. Meanwhile, other innovative Web sites such as the Flickr photo-sharing service use both JavaScript and Flash on the same Web page.
    Click Here to View Full Article

  • "Irish Open-Source Groups Protest Software Patents"
    eWeek (03/17/05); Broersma, Matthew

    Open-source advocates throughout the European Union (EU) are concerned that a directive officially endorsed by the EU Council will legitimize "pure" software patents and jeopardize European contributions to open-source projects. Irish open-source groups fired off a briefing document to Ireland's Members of the European Parliament (MEPs) in answer to members' own inquiries about software patents, reports open-source activist Barry O'Donovan. O'Donovan set up a form on KDE.ie as a way for constituents to get in touch with Irish MEPs to air their concerns about software patents, and he estimates that some 400 emails were sent; some MEPs responded to the missives by requesting more information on the EU patent directive and its potential ramifications for Irish research and industry. MEPs have three months to either modify or discard the draft directive, which is currently going through a second reading in Parliament. Ireland's former Minister of Finance, Charlie McCreevy, is the European Commissioner overseeing the directive. The briefing document, which was sent out by KDE Ireland, the Irish Linux Users' Group, and the Irish Free Software Organization, lists 10 reasons why software should not be patentable, including the fact that its abstract nature would make it impossible to reliably avoid patent infringement. Also featured in the document are excerpts from the U.S. Federal Trade Commission's October 2003 "Report on Innovation," which warned that software patents are "impairing follow-on incentives, increasing entry barriers, creating uncertainty that harms incentives to invest in innovation, and producing patent thickets."
    Click Here to View Full Article

  • "IT Innovation From Both Sides of the Globe"
    Computer Weekly (03/15/05); Mohamed, Arif

    The U.K. and New Zealand ministries of trade helped young technology firms from their respective countries exhibit innovative technologies at the CeBIT show. The [email protected] program featured several software firms, some device manufacturers, and a Cambridge University display spinoff, while New Zealand's Trade and Enterprise department helped 3D software firms, GPS companies, and interface firms make it to the CeBIT show. Naviguide from the University of Hertfordshire offered a new JavaScript-based Web search enhancer that repackages existing content to provide more context to searches, while Speed Trap offered software that records all user activity on an e-commerce Web site, not just predefined actions, so that Web administrators can identify clunky design and other obstacles. DeadMan's Handle software deletes specified information on a laptop computer in case it is stolen, while Cambridge Flat Projection Displays touted optical fiber-based displays that register movement; the display technology could be used in virtual reality headsets or as low-cost projectors. Among the New Zealand firms was NextWindow, which showcased gesture recognition technology that can be used to control various computer and Web browser functions. Right Hemisphere brought a 3D authoring tool that was used in the design of Airbus' new A380 jumbo airliner and has been included in Adobe reader software for the viewing of 3D content. Terralink International also offered 3D software that integrates image, map, and spatial database tools to help create geographic information systems. The University of Canterbury's Human Interface Technology Laboratory also demonstrated several products that use 3D panoramic displays and virtual reality to enhance human-computer interfaces.
    Click Here to View Full Article

  • "The Giant Who Walks Amongst Us"
    Technology Review (03/17/05); Knudsen, Jenn Director

    Researchers are experimenting with and refining augmented reality (AR) systems to enhance the experience of reading a book through animated virtual characters overlaid on the actual pages. The technology's limitations include bulky head-mounted displays that the user must wear, and unattractive markers embedded into pages that interact with the headgear so that the virtual elements line up correctly. "The technology should be entirely transparent," argues Mitsubishi Electric Research Laboratories (MERL) scientist Ramesh Raskar. Such transparency can be achieved through spatial AR, in which virtual objects are projected directly onto physical surfaces in the user's environment rather than overlaid through headgear. Interactive storybooks enabled for spatial AR would allow the reader to diverge from the story's plot through physical action, although Raskar notes that spatial AR in books has limited applications. Steven Feiner with Columbia University's Computer Graphics and User Interface Lab has produced AR-enhanced documentary films in partnership with Columbia's Graduate School of Journalism, while Raskar and MERL colleagues are graphically animating objects such as a virtual Taj Mahal with projectors and touch sensors. Another MERL technology, demonstrated at SIGGRAPH 2004, called RFIG Lamps, uses radio frequency identity and geometry (RFIG) transponders to create self-describing objects that are activated as users approach. These various technologies are very expensive, which impedes their application in book reading and other forms of widespread public use. However, future AR systems may take storytelling beyond physical books, an example being entire rooms, or "interactive narrative playspaces," where users can fully immerse themselves in self-directed adventures.
    Click Here to View Full Article

  • "Net Surfing for Those Unable to See"
    Baltimore Sun (03/16/05) P. 1C; Tucker, Abigail

    A collaborative venture between Towson University professor Jonathan Lazar and the National Federation of the Blind (NFB) is examining the many problems visually impaired people encounter when navigating the Internet. Lazar, who serves as director of the university's Computer Information Systems Undergraduate Program, notes that spam, security checks, pop-up ads, and other things that can slow down an unimpaired user's Web searches are even worse impediments for the blind. "What is annoying to a visual user becomes impossible for a blind user," he says. Screen readers or Braille keyboards that blind people use to navigate the Internet are limited in that they cannot scan or render graphical elements into a readable format. Lazar and Betsy Zaborowski with the NFB's research and technology training institute agree that the Internet is fundamentally designed for visual users. Lazar insists that the Net can be redesigned for the blind easily and cheaply, particularly if such accommodations are made in the earliest phase of Web site design; for instance, designers could add expository captions below pictures, or bypass redundant links with the addition of shortcuts. Lazar says the most important step in spurring reforms is raising awareness of the problem, which he intends to do when he releases the results of his study to Web masters and software designers in the summer. Zaborowski warns that without adequate Web accessibility, blind users will be unable to acquire Internet skills that could expand their job prospects.
    Click Here to View Full Article
    (Access to this site is free; however, first-time visitors must register.)

  • "Computer Study Powers Down"
    Daily Camera (03/17/05); Toland, Sarah

    The economic slump, offshore outsourcing, and cutbacks in corporate IT spending have combined to create the impression that a computer science background no longer guarantees job stability, and fewer university students are choosing to major in computer science as a result; the Computing Research Association says the rate of new computer science majors at U.S. universities has dropped 28 percent since 2000. University of Colorado computer science department chair Elizabeth Bradley contends that the computing job situation is not as dire as students think, citing Bureau of Labor Statistics projections that IT industry employment will increase at 3.1 percent annually between 2002 and 2012, while 140,000 new jobs in computer-related vocations will be created each year. However, declines in computer science enrollments fueled by uncertainty will lead to a significant shortage of qualified U.S. graduates concurrent with the IT industry rebound, according to Colorado State University academic advisor James Peterson. He notes that CSU has experienced a 50 percent cumulative drop in the number of new computer science majors over the last five years. There is a strong demand for domestic IT talent at Colorado companies such as Webroot Software, and Webroot CEO David Moll says many of the latest tech jobs cannot be exported overseas. "If we're growing technology talent and jobs here at a slower rate than the rest of the world, it doesn't bode well for our standing in the world economy," he warns. Bradley says her department is refining CU's computer science program to help cultivate more marketable skills such as teamwork and customer communication in students, since companies are not offshoring interdisciplinary work. "We're trying to change the way we teach computer science so that our graduates aren't just solitary programmers," she explains.
    Click Here to View Full Article

  • "Zooming in on Legibility"
    The Feature (03/15/05); Frauenfelder, Mark

    Reading Web pages on the small display screens of mobile devices is problematic, as the pages are presented as either illegible thumbnails or as difficult-to-navigate single columns. Patrick Baudisch, a human-computer interaction researcher with Microsoft Research's Visualization and Interaction Research Group, is working to overcome this limitation in several ways. "Summary thumbnails," which Baudisch developed with help from Heidi Lam at the University of British Columbia, is a mobile browser that converts Web pages into thumbnails while displaying readable text fragments to maintain legibility. A study conducted by Baudisch and Lam found that summary thumbnail users located the text they were after 41 percent faster and with 71 percent fewer errors than when they employed conventional thumbnail rendering browsers, and also zoomed in 59 percent less. Baudisch's other concept is "collapse-to-zoom," whereby users can draw lines across areas of a thumbnail display that they wish to enlarge or collapse with a stylus. Baudisch devised the technique in conjunction with Microsoft Research Asia's Chong Wang and Xing Xie. Several researchers are investigating a third method, speed-dependent automatic zooming (SDAZ), that lets mobile device users rapidly scroll through a large amount of content, which is automatically enlarged when scrolling slows down or halts. The concept is especially promising as a technique for helping people find the information they desire faster, while also avoiding motion sickness.
    Click Here to View Full Article
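    The SDAZ idea above lends itself to a one-function illustration: couple the zoom factor to scroll speed so that fast scrolling pulls the view out and a halt snaps it back to full size. The following is a minimal hypothetical sketch; the inverse mapping, the constant k, and the clamping bounds are illustrative assumptions, not taken from the research described in the article.

    ```python
    # Hypothetical sketch of speed-dependent automatic zooming (SDAZ):
    # the faster the user scrolls, the further the view zooms out, so
    # content sweeping past stays trackable rather than blurring.
    # The 1/(1 + k*speed) mapping and all constants are illustrative
    # assumptions, not values from the research described above.

    def sdaz_zoom(scroll_speed: float, k: float = 0.01,
                  min_zoom: float = 0.1, max_zoom: float = 1.0) -> float:
        """Return a zoom factor in [min_zoom, max_zoom] for a scroll
        speed in pixels per second (1.0 = fully zoomed in)."""
        zoom = 1.0 / (1.0 + k * scroll_speed)
        return max(min_zoom, min(max_zoom, zoom))

    # Scrolling halts: the document returns to full size.
    print(sdaz_zoom(0))      # 1.0
    # Fast scrolling: the view zooms out (clamped at min_zoom).
    print(sdaz_zoom(900))    # ~0.1
    ```

    A real implementation would also smooth the transition between zoom levels over time, which is part of what makes SDAZ comfortable to use.
    
    
    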

  • "Researchers: Metcalfe's Law Overshoots the Mark"
    CNet (03/14/05); Shankland, Stephen

    Metcalfe's Law, perceived as a key driver of the dot-com explosion with its assumption that a network's value increases with the square of the number of devices in the network, has been called into question by University of Minnesota researchers Andrew Odlyzko and Benjamin Tilly, who posit in a preliminary paper that the law is overly optimistic. "The fundamental fallacy underlying Metcalfe's (Law) is in the assumption that all connections or all groups are equally valuable," assert the researchers, who work in the university's Digital Technology Center. They cite one example in which a network's value increased only 5 percent, instead of 100 percent as predicted by Metcalfe's Law. Odlyzko and Tilly reason in their paper that the validity of Metcalfe's Law would have been proven by a significant acceleration of network mergers, given the enormous economic incentives the law promised; but in practice, such interconnection is painfully slow. The researchers suggest a "network effect law" stating that the value of a network with n members is n times the logarithm of n rather than n squared. This new rule of thumb demonstrates a rationale behind dominant networks' refusal to interconnect with smaller networks, as evidenced by non-interoperable email, telephone, and text messaging standards. Odlyzko and Tilly explain that a two-network merger yields significantly less benefit for the larger network than for the smaller one. "This produces an incentive for larger networks to refuse to interconnect without payment, a very common phenomenon in the real economy," they reason.
    Click Here to View Full Article
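    The difference between the two valuation rules is easy to check numerically. The sketch below compares the relative gain each party gets from a merger under n-squared versus n log n; the network sizes are arbitrary examples, not figures from the paper.

    ```python
    import math

    def metcalfe(n: int) -> float:
        """Metcalfe's Law: network value grows as n squared."""
        return float(n * n)

    def odlyzko_tilly(n: int) -> float:
        """Odlyzko and Tilly's proposal: value grows as n * log(n)."""
        return n * math.log(n)

    big, small = 1_000_000, 100_000   # arbitrary illustrative sizes

    for name, law in [("n^2", metcalfe), ("n log n", odlyzko_tilly)]:
        merged = law(big + small)
        gain_big = merged / law(big) - 1      # relative gain for the big network
        gain_small = merged / law(small) - 1  # relative gain for the small one
        # Under n log n the big network's proportional gain is much smaller
        # than under n^2, which is the paper's explanation for why dominant
        # networks resist interconnecting with smaller rivals.
        print(f"{name}: big network gains {gain_big:.0%}, "
              f"small network gains {gain_small:.0%}")
    ```

    Running this shows the big network's relative gain shrinking when the n log n rule replaces n squared, while the small network's gain remains dramatically larger in both cases.
    
    
    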

  • "Robots Serve Humans on Land, in Sea and Air"
    MIT News (03/02/05); Clark, Lauren J.

    MIT Computer Science and Artificial Intelligence Laboratory director Rodney Brooks says robotic navigation technology has advanced dramatically in the last two decades, a statement verified by the development of autonomous vehicles for domestic and military use by MIT and MIT spinoffs such as BlueFin Robotics and iRobot. Advances in airborne robot technology such as intelligent aircraft, communication among multiple air vehicles, and automated takeoff and landing are some of the challenges that aeronautics and astronautics professor Eric Feron is trying to tackle at MIT's Laboratory for Information and Decision Systems. Among his group's breakthroughs is the "robochopper," a remote-controlled model helicopter that can execute autonomous aerobatic maneuvers, and an intelligent aircraft guidance system that lets a pilot in one aircraft control the flight of an unmanned plane by voice command. Feron is also working on a "collaborative vision scheme" for autonomous landing, in which a helicopter's camera focuses on a landing area with a specially designed target that can help the vehicle acquire position parameters in real time. MIT's Autonomous Underwater Vehicle (AUV) Laboratory developed the Odyssey class of AUVs. Next-generation AUVs that lab director Chryssostomos Chryssostomidis expects to see include vehicles that can be deployed from aircraft, that can mimic marine animals, and that can hover to check ship hulls for mines. Technical challenges that Chryssostomidis and colleagues are trying to meet include boosting AUVs' power efficiency, modem-based underwater acoustic communication, and software for controlling both communication and navigation.
    Click Here to View Full Article

  • "Big Screens to Come in Small Packages"
    New York Times (03/17/05) P. E8; Eisenberg, Anne

    Philips Polymer Vision plans to have its rollable electronic display technology in consumer devices within two years and has already produced a five-inch working prototype that is flexible enough to curl around a pencil; Polymer Vision is set to spin off from its parent, Philips Electronics, this year. The thin, flexible plastic screen will bring a number of paper-driven applications to people's mobile devices, which will be able to display vastly larger documents than with traditional LCD displays. Rollable displays will be pulled out from a device and used for black-and-white images, such as longer text messages than allowed on normal cell phones. The next generation of text messaging will involve people dictating messages that are then translated into text and sent; recipients will be able to more easily store text messages than voice, says Polymer Vision CEO Karl McGoldrick. Other likely applications include legal document displays and systems to organize personal information. The current prototype is 100 microns thick, about the same as a sheet of paper, and making the screen thinner increases flexibility. In tests, the display was rolled 20,000 times without harming image quality. The plastic display is actually two layers of plastic sandwiching electronics made from organic semiconductors, basically a plastic form of LCD; during manufacturing, the sheet is attached to a hard substrate so that conventional manufacturing equipment can be used. Electronic ink is another key technology, and involves microcapsules of black and white paint that respond to electrical charges. The display performs better than standard mobile device screens in bright daylight because it reflects about 40 percent of the light.
    Click Here to View Full Article
    (Articles published within 7 days can be accessed free of charge on this site. After 7 days, a pay-per-article option is available. First-time visitors will need to register.)

  • "World's Largest Computing Grid Surpasses 100 Sites"
    CERN (03/15/05); Grey, Francois

    The Large Hadron Collider Computing Grid (LCG) project announced on March 15 that over 100 sites distributed throughout 31 countries now comprise its computing grid, making the LCG the largest international scientific grid on Earth. The participating sites--chiefly research laboratories and universities--are dedicating over 10,000 CPUs and almost 10 million GB of storage capacity. The four key experiments that will employ the particle accelerator--ALICE, LHCb, CMS, and ATLAS--are already testing the LCG project grid to model the computing conditions expected once the collider is fully up and running in two years' time. Speaking at a Global Grid Forum (GGF) meeting this week, GGF Chairman Mark Linesch hailed the LCG as an important milestone for both grids and science. "Without doubt the LCG project is pushing the envelope for what an international science grid can do," he declared. LCG project leader Les Robertson estimates that the grid's current processing capacity fulfills a mere 5 percent of the collider's long-term requirements, which means that the LCG must continue its rapid growth over the next several years by bringing in additional sites and making more resources available at existing sites. Exponential boosts in disk storage capacity and processor speed will also help the collider reach its lofty computing objectives.
    Click Here to View Full Article

  • "Seismic Shift"
    InformationWeek (03/14/05) No. 1030, P. 42; Ricadela, Aaron

    The future for U.S. supercomputing research centers seems uncertain with the National Science Foundation's (NSF) dissolution of the Partnerships for Advanced Computational Infrastructure (PACI) in favor of a "shared cyberinfrastructure." The new plan is designed to shift investment away from basic supercomputer research and toward setting up a hub of computers, high-speed networks, middleware, and distributed databases accessible to all NSF departments. NSF assistant director Peter Freeman credits this development to the fact that modern computers can often deliver sufficient power for a sizable portion of science and engineering research, as well as IT research's shift from supercomputers to capital investments in visualization software, large databases, and similar areas. Illinois' National Center for Supercomputing Applications (NCSA) will direct its energies toward the development of software and research into novel computer architectures under the new NSF agenda, while the San Diego Supercomputer Center will explore large-scale data management methodologies. Recently appointed NCSA director Thom Dunning says the new direction does not signal the end of supercomputing centers, insisting that "they'll be part of the cyberinfrastructure, instead of the cyberinfrastructure." Critics, however, contend that the cyberinfrastructure plan lacks adequate funding, overemphasizes grid computing and other experimental methods, and effectively eradicates PACI, which allowed centers to directly apportion research grants to interdisciplinary science and engineering teams. University of Tennessee computer science professor Jack Dongarra argues that "PACI was in place to foster that interdisciplinary work directly with the centers," and its removal weakens that added value's connection to the center's programs.
    Click Here to View Full Article

  • "Humanoids on the March"
    Economist (03/10/05) Vol. 374, No. 8417, P. 3

    Japan's industrial behemoths are racing to see which of them will produce the most sophisticated humanoid robots in a competition spurred by corporate rivalry, rapid technological advancements, a hunger for publicity, and the potential for tapping a vast new market. Breakthrough humanoid machines from Japan include Honda's Asimo, a walking robot whose speed, agility, and friendliness is being continually tweaked; Sony's QRIO, which walks, navigates by itself, recovers from falls, understands a limited series of spoken commands, and can link wirelessly to the Internet as well as broadcast the visual input received by its cameras; and Toyota's Partner robots, one of which is equipped with artificial lungs, lips, and fingers so it can play a trumpet. Non-humanoid robots currently outnumber humanoid machines in both the industrial and domestic markets, and roboticists such as Carnegie Mellon University's Takeo Kanade say this makes sense from a practical point of view. "The human body itself is not necessarily the best design for a robot, contrary to most people's convictions that evolution has made us the perfect machine," notes Kanade. Beyond publicity, creating humanoid robots allows manufacturers to demonstrate their technological expertise, and reap rewards from supplementary advances that happen en route. One of the long-term prospects for humanoid robots is their role in assisted living situations, especially as the baby-boomer generation enters retirement age. Honda's Jeffrey Smith says, "We human beings have engineered our environment to accommodate our physiology. So a very efficient shape for operating in that world is a humanoid one." Smith says such robots will be in homes once the price drops to that of a car, while Sony's Hideki Komiyama expects them eventually to be as common as cell phones.
    Click Here to View Full Article
    (Access to this article is available to paid subscribers only.)

  • "This Net Is Child's Play for Elite High Schoolers"
    Network World (03/14/05) Vol. 22, No. 10, P. 1; Marsan, Carolyn Duffy

    Virginia's Thomas Jefferson High School for Science and Technology, considered to be the leading technical high school in the United States, is especially lauded for its computer science program. The program is unique because it is so immersive: It involves the design and maintenance of the school's intranet and Web site by students, who also become skilled in Linux programming by managing and upgrading production network servers instead of taking formal classes. Administrators select a small number of computer science students annually to function as system administrators for the school's Computer Systems Lab, whose operations were recently augmented with a real-world support framework. In addition to learning server and router installation, students receive instruction on how to perform appropriate testing to guarantee the network's smooth operation during installation. A major project for students is the redesign of the school's intranet, an effort that involves retention of features in a modularized architecture. Project leader Dan Tran, 17, says he is participating in the effort for the fun of it as well as for the opportunity to polish his Web development skills. Artificial intelligence and supercomputer applications are other areas of concentration in the school's computer science program. Single-mindedness is a common quality in the school's computer science students, many of whom go on to become computer science or engineering majors at MIT or other vaunted institutions. MIT admissions director Marilee Jones says, "In my opinion, it's the best public high school in the nation. All their programs are strong...but they have such excellent, excellent teachers there in computer science."
    Click Here to View Full Article

  • "Virtual Therapy: Just What Some Doctors Order"
    Computerworld (03/14/05) P. 32; Rosencrance, Linda

    A small number of American clinics are using virtual reality to help patients deal with phobias and injuries, and researchers say the technology shows promise as a tool for treating addiction and post-traumatic stress disorder, as well as helping distract patients during uncomfortable physical procedures and therapies. Burn victims undergoing painful rehabilitative treatments can immerse themselves in SnowWorld, an icy landscape in which they fly and hit targets with snowballs, which keeps their mind off the pain, according to Hunter Hoffman, SnowWorld developer and director of the University of Washington Human Interface Technology Laboratory's Virtual Analgesia Research Center. Virtual environments for treating anxiety disorders are a specialty of Virtually Better, a company co-founded by Barbara Rothbaum, director of the Emory University School of Medicine's Trauma and Anxiety Recovery Program. She says the company's applications typically involve the user wearing headgear equipped with dual displays, position trackers, sensors, and earphones; and sometimes they use a handheld device to manipulate the environment. Before therapeutic virtual reality applications can go mainstream, they must become more physically and psychologically comfortable, more technically efficient, and more cost effective, says Greenleaf Medical Group President Walter Greenleaf. Virtually Better CEO Ken Graap expects the field of view and resolution of head-mounted displays to be improved within the next five years. He also anticipates the emergence of wireless systems that facilitate at-home virtual reality treatment, while research scientist Skip Rizzo sees more human-like and interactive avatars that can understand and process speech on the horizon.
    Click Here to View Full Article

  • "A Fundamental Turn Toward Concurrency in Software"
    Dr. Dobb's Journal (03/05) Vol. 30, No. 3, P. 16; Sutter, Herb

    Author and ISO C++ Standards committee chairman Herb Sutter writes that software applications will need to become concurrent, because processors are no longer delivering the steady serial-speed gains that existing applications have relied on for automatic performance improvements. However, not all key operations of an application are suited to parallelization, even as applications are expected to become increasingly reliant on CPU performance. This trend toward concurrency can be addressed either through application redesign or more efficient code, which will increase the importance of efficiency and performance optimization as well as force programming languages and systems to become concurrency-enabled. Sutter identifies three approaches to achieving performance gains in new processors for the next few years: hyperthreading, multicore, and cache. Hyperthreading involves multiple threads running in parallel in one CPU, while the multicore approach puts multiple actual CPUs on one processor. And cache, which CPU designers have used to boost application performance for the past three decades, will allow some existing applications to remain viable for a time without any dramatic redesign. Concurrency--multithreading in particular--is already being employed in mainstream software in order to logically separate naturally independent control flows and enhance performance. The tradeoff is that the concurrent programming model is much tougher than the model for sequential control flow, although it can be learned.
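    Sutter's point about logically separating naturally independent control flows can be seen in miniature below: the same workload written as one sequential control flow and as work items handed to a thread pool. This is a minimal sketch with an invented workload, not code from the article.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def word_count(chunk: str) -> int:
        """An independent unit of work: count the words in one text chunk."""
        return len(chunk.split())

    chunks = [
        "the free lunch is over",
        "concurrency is the next major revolution",
        "not every operation parallelizes well",
        "lock-based programming is hard to reason about",
    ]

    # Sequential model: a single control flow walks the chunks in order.
    sequential_total = sum(word_count(c) for c in chunks)

    # Concurrent model: the same naturally independent work items are
    # farmed out to a thread pool. The totals agree because the chunks
    # share no mutable state -- establishing exactly that independence
    # is what makes the concurrent model harder to program against.
    with ThreadPoolExecutor(max_workers=4) as pool:
        concurrent_total = sum(pool.map(word_count, chunks))

    assert sequential_total == concurrent_total
    print(sequential_total)
    ```

    The redesign cost Sutter describes shows up as soon as the work items stop being independent: shared counters, caches, or ordering constraints would all require explicit synchronization here.
    
    
    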

  • "The Moon, Mars and Beyond..."
    Military & Aerospace Electronics (02/05) Vol. 16, No. 2, P. 22; McHale, John

    President Bush's "Vision for Space Exploration" has the lofty goal of manned missions to the Moon, Mars, and elsewhere, starting with sending robots to the Moon as early as 2008. Key to realizing this vision are joint ventures between NASA and industry for developing next-generation spacecraft and their electronic control systems, an example of which is the Demonstration of Autonomous Rendezvous Technology (DART). The DART project is an approximately $95 million effort to demonstrate technologies that allow spacecraft to locate and rendezvous with other craft in space without human intervention. The DART test will involve sending the commercially developed Pegasus launch vehicle into orbit, where it will dock with an experimental communications satellite; the craft will navigate with a global positioning system (GPS) until it is within about 330 feet of the target, and then execute rendezvous maneuvers using data from additional sensors. DART program manager Jim Snoddy explains that DART uses its Advanced Video Guidance Sensor (AVGS) to pinpoint a spacecraft's precise location, and sends this data to the Automated Rendezvous and Proximity Operation (ARPO) software, which processes the information and commands the ship to turn, throttle, brake, and make decisions to execute docking. DART's core element is the "mission manager" software that supplants human or ground control. The software supports three levels of autonomy: a linear Scripted Mission Manager level that follows preprogrammed orders; an Automated level that permits a certain degree of replanning and contingency preprogrammed by the software designer; and a nonlinear Autonomous Systems level. The Defense Advanced Research Projects Agency's Orbital Express program--an effort to confirm the technical viability of robotic, autonomous on-orbit refueling and satellite reconfiguration--will benefit from the technology coming out of the DART program.
    Click Here to View Full Article
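    The three autonomy levels attributed to DART's mission manager can be pictured as a dispatch policy: all levels follow the script when things are nominal, and they diverge only when an anomaly appears. The sketch below is hypothetical; the class, function, command names, and contingency table are invented for illustration and are not from the DART program.

    ```python
    from enum import Enum

    class AutonomyLevel(Enum):
        """The three levels the article attributes to DART's mission manager."""
        SCRIPTED = 1    # linear: follow preprogrammed orders only
        AUTOMATED = 2   # limited replanning from designer-supplied contingencies
        AUTONOMOUS = 3  # nonlinear: handle events the script never anticipated

    def next_action(level: AutonomyLevel, script: list[str],
                    step: int, anomaly: str | None = None) -> str:
        """Pick the next command; names and logic are illustrative only."""
        if anomaly is None:
            return script[step]          # nominal case: every level follows the script
        if level is AutonomyLevel.SCRIPTED:
            return "abort"               # no provision for surprises
        if level is AutonomyLevel.AUTOMATED:
            # look up a canned contingency preprogrammed by the designer
            contingencies = {"drifting": "station-keep", "low-fuel": "retreat"}
            return contingencies.get(anomaly, "abort")
        return "replan"                  # autonomous: synthesize a new plan on board

    plan = ["approach", "station-keep", "final-approach", "dock"]
    print(next_action(AutonomyLevel.SCRIPTED, plan, 0))                       # approach
    print(next_action(AutonomyLevel.AUTOMATED, plan, 2, anomaly="drifting"))  # station-keep
    ```

    The interesting boundary is the Automated level: it can only recover from anomalies its designer enumerated in advance, which is precisely what separates it from the Autonomous Systems level.
    
    
    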
