Association for Computing Machinery
Timely Topics for IT Professionals

About ACM TechNews

ACM TechNews is published three times a week, on Monday, Wednesday, and Friday.


ACM TechNews is intended as an objective news digest for busy IT professionals. Views expressed are not necessarily those of either AutoChoice Advisor or ACM. To send comments, please write to technews@hq.acm.org.
Volume 6, Issue 715: Friday, November 5, 2004

  • "Despite Apparent E-Vote Success, Questions Remain"
    Computerworld (11/03/04); Verton, Dan; Thibodeau, Patrick

    Although there were no reports of serious or widespread problems with e-voting machines in the Nov. 2 election, their lack of accountability and auditing makes their accuracy questionable; members of the National Committee of Voting Integrity (NCVI) caution that voters' doubts about the final outcome will linger in the absence of an independent evaluation of the machines. Critics maintain that the e-voting process is deeply flawed, citing incidents reported by grassroots watchdog organizations during Tuesday night's tabulation that signify a nationwide shortage of technical and process standards. Voting technology expert and Iowa State University professor Doug Jones admits that the Nov. 2 election "went remarkably smoothly," but he warns that voter/machine interaction is tough to pin down. "All we can do is things like compare the number of ballots with the number of votes recorded and wonder, 'Why did people come to the polling place to cast a blank ballot?'" he explains. NCVI officials say uncovering the types and severity of voting problems that cropped up on Nov. 2 could take weeks, given the huge number of news accounts and voter incident reports they must pore over; the organization will also continue to lobby Capitol Hill to fund e-voting standards development at the National Institute of Standards and Technology. NCVI coordinator Lillie Coney reports that her organization's biggest worry is touch-screen systems that do not furnish a separate, hard-copy vote record that can be reviewed and audited independently. Other experts such as Johns Hopkins University professor Avi Rubin harbor doubts about voting results' integrity, citing accounts of poor e-voting security at polling places on the night before Election Day. Rubin concludes that the apparent success of e-voting systems on Nov. 2 is no guarantee that they will run smoothly next time.
    Click Here to View Full Article

  • "Office Space Gets New Meaning at NEC in Japan"
    CNet (11/02/04); Kanellos, Michael

    Japanese computing behemoth NEC has set up a demo in which 500 NEC employees operate in an office area equipped with advanced communications technologies as a cost-cutting, space-saving measure that also serves as a showcase to potential customers. The new technologies incorporated into the office, such as broadband, voice over Internet Protocol (VoIP), and collaborative software, have allowed designers to reduce the number of desks, chairs, phones, printers, and copiers, and thus trim purchasing, usage, and maintenance costs. Meetings and videoconferences take place in chairless areas with no fixed walls, staff examine and manipulate documents on plasma screens using the collaborative software, and VoIP calls on laptops have replaced traditional telephony. The center has yielded a 20% reduction in paper costs, compared to other comparably sized NEC groups; a 70% reduction in conference room time and a 15% reduction in travel expenses; and office space savings of 30%. NEC intends to distribute VoIP phones among 30,000 employees and aims to get 80% of them to use "soft" phones in laptops next year. Partly driving the concept of the streamlined office in Japan is the ubiquitous presence of inexpensive broadband connections, and last year's government-launched IP telephony licensing initiative. An upcoming NEC product that could aid future office implementations is a combination NEC switch and NTT DoCoMo handset that eases the transition from 3G handset channels to an integrated Wi-Fi link, which will reduce phone bills. Also under development at NEC are security products such as NeoFace, a facial recognition and confirmation tool that automatically locks down computers when users are away from their desks.
    Click Here to View Full Article

  • "Post Election, Tech Issues Await Congress"
    IT Management (11/03/04); Mark, Roy

    When the U.S. Congress resumes, it will face a number of major technology policy issues whose emphasis has been mostly muted prior to the now-completed general elections. The most immediate issue is the regulatory status of Voice over IP (VoIP), which the FCC has been reviewing for a year in preparation for a final ruling on Nov. 9. The ruling will determine whether VoIP is an interstate service that is excused from state and local rules and taxes. A preliminary decision in August declared Internet telephony not exempt from traditional wiretap laws, while Jeff Pulver's Free World Dialup has been excluded from state regulations because customers' free calls are fully Internet-routed and thus never touch the public switched telephone network. The House and the Senate are divided over the issue of whether national anti-spyware legislation is necessary, with the House supporting the measure as evidenced by its ratification of two bills in October. One bill criminalizes intentional, unauthorized computer access and allocates $10 million to fund the development of spyware and phishing countermeasures by the Justice Department, while the other bans deceptive or unfair practices associated with spyware. The fate of the expired Internet access tax moratorium is also waiting to be decided: Both the House and the Senate support extending the moratorium, but the House voted for a permanent extension and the repeal of existing Internet access taxes in nine states grandfathered in the original legislation, while the Senate called for a four-year extension and the retention of the grandfather provisions. Voiding the provisions would cause the states to lose between $80 million and $120 million in annual revenues, according to the Congressional Budget Office.
    Click Here to View Full Article

  • "Hardest Tech Support Job on Earth"
    Wired News (11/05/04); Delio, Michelle

    Far-flung American soldiers around the world can remotely access live technical support 24/7, 365 days a year, through the U.S. Army Engineer Research and Development Center's (ERDC) TeleEngineering Operations Center in Vicksburg, Miss. The advice the ERDC's techies provide is often solicited under battlefield conditions, and is vital to strategic operations: Troops may ask such questions as whether the structural integrity of roads, dams, airfields, and bridges is sound. Soldiers contact and share information with ERDC personnel using a portable TeleEngineering kit with commercial hardware that includes a ruggedized Panasonic laptop, a GPS unit, 3D accelerometers, a video camera, and a communications system with land-line, satellite, and radio connectivity. The system also features custom software that offers field engineers tips on what kind of data they should request, and helps tech support respond fully to soldiers' queries by anticipating potential difficulties in understanding or executing a suggested course of action. The ERDC, which supports all divisions of the U.S. military on an as-needed basis, shields the information that passes between the soldiers and the experts with military-grade encryption. ERDC Vicksburg hosts a high-performance Department of Defense computing facility comprising five supercomputers capable of 6.5 trillion calculations per second. In all, the ERDC consists of over 2,000 engineers, scientists, and support staff spread throughout seven labs in Illinois, Mississippi, Virginia, and New Hampshire. One facility, the Virginia-based Topographic Engineering Center, recently re-tasked architectural rendering software to build hard-copy 3D topographic maps, which staffer Julie Kolakowski says can be used to enhance mission planning.
    Click Here to View Full Article

  • "Why Carmakers Want to Drive Out Gadgets"
    Financial Times (11/03/04) P. 11; Mackintosh, James

    Automakers and consumer electronics companies are working to replace current in-car electronics systems with those people normally carry around with them, such as mobile phones, cameras, PDAs, and portable music players using the standard MP3 format. Besides providing users a way to leverage the enhanced audio capabilities of their car or enable hands-free operation, these consumer electronic interfaces would free car companies from having to worry about obsolescence. BMW's Palo Alto research center recently developed an iPod interface that allows owners to connect their digital music library to their car stereo system in at least one BMW model, but other manufacturers have not yet embraced built-in standard electronic devices, such as an in-dash MP3 player. BMW interface developer Greg Simon says Bluetooth wireless technology could help ease the introduction of consumer electronics in cars. Automakers earn high profit margins from factory-installed electronics systems and might be wary of ceding those features to the consumer electronics industry. Also, electronics systems in cars need to be made more secure than in consumer electronics, especially if system failures or hacker attacks could lead to critical component malfunctions. Should cars allow connections with standards-based consumer electronics technology, there would also have to be new cradles and other holders to keep gadgets from crowding drivers' spaces or sliding around. But allowing owners to plug in or otherwise connect their portable gadgets could open up opportunities for automakers, not the least of which would be lower costs. However, another issue is that information technology constantly evolves, so finding standards that can last the average seven-year life of a car will be hard. Ford Motor Co. chief technical officer Richard Parry-Jones says car makers will have to decouple "the electronic environment in our cars from the car in order to keep the electronic environment up to date."

  • "For Your Viewing Pleasure, a Projector in Your Pocket"
    New York Times (11/04/04) P. E5; Eisenberg, Anne

    Corporate and academic research laboratories have developed prototype "miniprojectors" designed to beam high-quality images onto flat surfaces. Such devices could be commercialized as add-ons or embedded technology. V. Michael Bove Jr. of MIT's Media Lab reports that cheap miniprojectors will need to overcome power consumption issues: Batteries in handhelds currently provide only a few watts of electricity, and therefore Bove believes that the projectors will initially be marketed as laptop accessories. Once miniaturization is accomplished, Bove expects to see embedded miniprojectors that can beam control panels, maps, or other images onto car dashboards. A group led by Ramesh Raskar of Mitsubishi Electric Research Laboratories has demonstrated a small projector that could be attached to organizers, cell phones, or digital cameras, as well as ways people will interact with projector images in the future: One technique stabilizes the image so that it will not shift with subsequent hand movement once it is fixed on a flat surface, while another enables users to click and drag items within the projected image. A miniprojector prototype from light-emitting diode (LED) manufacturer Lumileds Lighting was built to demonstrate LED capabilities, says business development director Steve Paolini, who notes that the device was designed for personal use. Meanwhile, U.K.-based Light Blue Optics is developing a handheld projector that generates 2D holographic images using lasers, and company director Adrian Cable reports that his group has also improved image quality and processing speed. "We want a device that you can download films to, press a button and see a huge screen projection," he explains.
    Click Here to View Full Article
    (Articles published within 7 days can be accessed free of charge on this site. After 7 days, a pay-per-article option is available. First-time visitors will need to register.)

  • "Africa Calls for More Cyber-Rights"
    allAfrica.com (11/02/04); Mutume, Gumisai

    African nations should speak up to claim more control over their share of the global Internet resource, since that medium is now controlled mostly by overseas corporate interests and ICANN, which operates under the authority of the U.S. government. The second World Summit on the Information Society (WSIS) in Tunisia this month will provide a forum for disenfranchised countries to make their voices heard, and a U.N. working group is expected to present its findings at the WSIS meeting after studying the issue of global Internet governance. Among the complaints of poorer nations are the non-negotiable fees they must pay to ICANN-appointed registrars, ICANN's unilateral dispute resolution system, and other terms that are forced upon them. South Africa has been most vociferous among the countries protesting the current situation, and succeeded recently in taking control of its country Internet domain, .za, from an ICANN-approved group; the .za country domain is now administered by a panel appointed by the communications ministry and is more representative of the South African people, according to the government. African countries are also joining together to form the African Network Information Center that will represent African interests in the global Internet governance community. Another issue for African countries is the vast "digital divide" that has left most Africans without a voice on the Internet: Not only does sub-Saharan Africa lack Internet-enabled computers, but countries therein lack localized content; this is in part due to the fact that English has become the de facto standard language online, facilitating global e-commerce. If the Internet could be structured around societal interests rather than commercial ones, perhaps there would be greater chance for other languages to take hold.
African nations also need to consider important questions about how to keep the Internet an independent medium and prevent authoritarian governments from imposing controls on users outside their borders.
    Click Here to View Full Article

  • "Presumed Guilty: Paying for Piracy in Advance"
    PCWorld.com (11/03/04); Yegyazarian, Anush

    Anush Yegyazarian writes that content owners in the U.S. and elsewhere have instituted a levy or fee on the sale of recording products and/or blank media in order to compensate for money presumably lost to piracy. She warns that this approach can negatively affect consumers, as it operates on the principle that every person who buys the media or recording equipment is guilty of piracy, even if they do not commit piracy. Yegyazarian adds that the royalty fee scheme constitutes a further erosion of consumers' fair-use rights. Another potential effect is an increase in product prices in countries that do not subscribe to a royalty fee model, spurred by vendors being pressured to slash their own profits to pay royalty fees while maintaining product affordability. Yegyazarian also points out that these fees come in addition to other anti-copying strategies: "I still have to deal with copy-protected DVDs and CDs, restrictive download services, and the threat of lawsuits from the Recording Industry Association of America," she writes. One alternative scheme suggests that users of peer-to-peer (P2P) software and services pay a fee to reimburse copyright holders for losses stemming from P2P file sharing; in exchange, P2P services would gain the right to distribute copyrighted content over their networks and be legally absolved of having done so in the past without proper authorization. Yegyazarian counters that this would only be a transitory measure, as users will inevitably find another way to exchange pirated content while dodging prosecution; legitimate music services that have labored to find customers would consequently pay a price.
The author concedes that the permanence of digital piracy means that consumers will have to tolerate a certain degree of digital rights management, and this could entail a rise in DVD and CD prices as content owners embed what they consider to be acceptable costs into their products to give consumers the flexibility to use digital entertainment as they see fit.
    Click Here to View Full Article

  • "Maximizing the Internet's Hidden Resources"
    IST Results (11/04/04)

    A diverse array of new services can be extracted from hidden online resources, including underused PCs and user expertise, with a combination of peer-to-peer (P2P) computing and mobile, high-bandwidth access. IST's Market Managed Peer-to-Peer Services (MMAPS) project offers a Java-based toolkit outlining unique non-payment-based accounting schemes designed to encourage contributions from community members. One scheme rates individual members by their contributions, and the rating determines how peers provide services to those individuals. "We came to the conclusion that non-price based constraints are the most appropriate form of incentive scheme for most P2P systems," notes project technical director Ben Strulo with BT Group's Network Research Center. "For example, locally applied rules that related the number of service requests made by a peer to the number of service provisions it makes can lead to highly efficient and sustainable systems." Strulo says the MMAPS strategy could also be embraced and championed by the Global Grid Forum and similar research communities, as well as by former Sun Microsystems research project JXTA, which set up open, generic protocols enabling communication and collaboration between all network-supported devices. MMAPS partners also devised P2P applications such as RadCITA, which allowed members of far-flung communities to get fast diagnoses from a virtual team of health professionals; P2PWNC, a service whereby local WLAN administrators provided connectivity to roaming nodes in return for future connectivity when they themselves roam; and a Restaurant Recommendation Service that facilitated the exchange of restaurant reviews between end users.
    Click Here to View Full Article
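The incentive rule Strulo describes, tying the requests a peer may make to the provisions it has made, can be sketched in a few lines. This is an illustrative assumption of how such a locally applied rule might look, not MMAPS code; the class name and ratio threshold are invented for the example.

```python
# Hypothetical sketch of a non-price P2P incentive rule: a peer's service
# requests are honored only while its ratio of provisions to requests
# stays above a threshold. New peers are granted an initial request.
class PeerLedger:
    def __init__(self, min_ratio=0.5):
        self.min_ratio = min_ratio
        self.provided = {}   # peer_id -> services provided to others
        self.requested = {}  # peer_id -> services requested so far

    def record_provision(self, peer_id):
        """Credit a peer for serving another member of the community."""
        self.provided[peer_id] = self.provided.get(peer_id, 0) + 1

    def allow_request(self, peer_id):
        """Locally applied rule: grant the request only if the peer's
        contribution ratio meets the threshold."""
        req = self.requested.get(peer_id, 0)
        prov = self.provided.get(peer_id, 0)
        if req == 0 or prov / req >= self.min_ratio:
            self.requested[peer_id] = req + 1
            return True
        return False
```

A peer that only consumes is quickly throttled, while one that keeps providing services retains access, which is the "efficient and sustainable" property the quote claims.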

  • "Cricket Chirping Provides Voice for Interior GPS"
    MIT News (10/27/04); Fizz, Robyn

    Researchers at MIT's Computer Science and Artificial Intelligence Laboratory have developed Cricket, a wireless indoor location system, as part of the lab's Project Oxygen initiative, whose goal is to provide ubiquitous computation and communication. Cricket is designed for use in places beyond the reach of the Global Positioning System, and Cricket developer and MIT professor Hari Balakrishnan thinks the technology could be widely employed in games and entertainment, environmental control and monitoring via sensor networks, and robot and human navigation and discovery. A Cricket scheme involves the placement of wall- and ceiling-mounted beacons within a building; the beacons emit a radio-frequency (RF) "chirp" and an ultrasonic pulse simultaneously, while receivers attached to handheld devices pick up both signals, and gauge distances between the devices and the beacons by running software based on the difference between RF and ultrasound propagation speeds. Cricket uses open source software that eases the coding of location-aware applications by other people. The second major version of the Cricket software, which was issued last July, is the first to be commercialized. Research groups, hospitals, and corporations are among the technology's early adopters. The Sloan Foundation, the National Science Foundation, and the MIT Project Oxygen Partnership serve as Cricket's underwriters.
    Click Here to View Full Article
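The ranging step the article describes, timing the gap between the RF chirp and the slower ultrasonic pulse, can be sketched as follows. The constants and function are illustrative assumptions, not taken from the Cricket software; the key idea is that RF arrives essentially instantly, so the arrival-time gap is dominated by the ultrasound's travel time.

```python
# Sketch of Cricket-style ranging from simultaneous RF and ultrasound.
SPEED_OF_SOUND_M_S = 343.0   # in air at roughly room temperature
SPEED_OF_LIGHT_M_S = 3.0e8   # RF propagation speed

def distance_to_beacon(rf_arrival_s, ultrasound_arrival_s):
    """Estimate listener-to-beacon distance from the arrival-time gap.

    Both signals leave the beacon at the same instant, so the gap dt
    satisfies dt = d/v_sound - d/v_light; solving for d gives the
    distance, with the RF term contributing only a tiny correction.
    """
    dt = ultrasound_arrival_s - rf_arrival_s
    return dt / (1.0 / SPEED_OF_SOUND_M_S - 1.0 / SPEED_OF_LIGHT_M_S)
```

For example, a gap of 10 ms corresponds to a beacon roughly 3.4 meters away.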

  • "Progress in an Ancient Tongue"
    Wired News (11/05/04); Heavens, Andrew

    Ethiopian researchers say their native Amharic alphabet should come to text messaging on mobile phones, which would help the country bridge the digital divide and become more competitive in a global economy. The country's infrastructure minister said last month that Ethiopia was one of the least connected nations in the world, with many rural areas lacking telecommunications services altogether. The national Ethiopian Telecommunications provider opened up mobile phone service last December, and now researchers at Addis Ababa University have created a roadmap for bringing the ancient Ethiopian alphabet to text-messaging applications; the study details how users could write 210 characters using the nine keys available on phones--the researchers were not able to include all 345 letters in the Amharic alphabet in their study because of the memory limitations of many mobile phones. The researchers also included a statistical study of the written language and devised ways to shorten composition through predictive text inputting, where software could determine users' intentions and automatically enter words. The Ethiopian alphabet dates back to the fourth century and is shared by a number of languages in the region, the most prominent being the state language of Amharic. Addis Ababa University computer science researcher Solomon Atnafu says Ethiopian-script text-messaging could have a profound impact on the country's economy, allowing farmers to access grain prices in the capital or to receive weather alerts, for example. Mobile phones are much cheaper than computers, he notes. The researchers hope to engage Nokia in talks, and have already garnered interest from local Nokia distributors.
    Click Here to View Full Article
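The predictive-input idea, where a digit sequence typed on nine keys is matched against a dictionary to guess the intended word, can be sketched as below. The sketch uses the familiar Latin keypad and an invented word list purely for illustration; the actual Amharic key-to-character mapping from the Addis Ababa study is not reproduced here.

```python
# Illustrative T9-style predictive lookup over a small dictionary.
KEYMAP = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

def word_to_keys(word):
    """Translate a word into the digit sequence a user would press."""
    digit_of = {c: d for d, chars in KEYMAP.items() for c in chars}
    return "".join(digit_of[c] for c in word.lower())

def build_index(words):
    """Pre-compute digit-sequence -> candidate-words lookup table."""
    index = {}
    for w in words:
        index.setdefault(word_to_keys(w), []).append(w)
    return index

def predict(index, keys):
    """Return every dictionary word matching the pressed digits."""
    return index.get(keys, [])
```

The statistical study of written Amharic the researchers mention would be used to rank the candidates when several words share one digit sequence.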

  • "Pompeii Gets Digital Make-over"
    BBC News (10/31/04)

    The foundation for an entirely new approach to cultural tourism could be laid down by the European Union's Lifeplus project, which has developed a prototype augmented-reality system that could allow tourists to enjoy computer-enhanced views of archeological sites. The technology combines animated digital elements with the actual view seen by tourists as they roam throughout a historical location through the use of a camera-equipped head-mounted display and a backpack computer. The system software interprets the tourist's view and accurately matches up the virtual and the real-world components. Miralab professor Nadia Magnenat-Thalman notes that augmented reality has advanced to the point where its applications extend beyond computer games. "We are, for the first time, able to run this combination of software processes to create walking, talking people with believable clothing, skin and hair in real time," she reports. Lifeplus operates under the auspices of the EU's Information Society Technologies program, whose goal is to promote user-friendly technology and elevate Europe's cultural legacy. The Lifeplus project is fueled by a new desire to revisit the past, according to Andrew Stoddart of 2d3, the company that developed the Lifeplus system's software. "The popularity of television documentaries and dramatizations using computer-generated imagery to recreate scenes from ancient history demonstrates the widespread appeal of bringing ancient cultures to life," he remarks.
    Click Here to View Full Article

  • "Stumping for Specs"
    EE Times (11/01/04) No. 1345, P. 1; Merritt, Rick

    Voter disenfranchisement during the 2000 presidential election has spurred demands for greater election scrutiny, and a group of 11 leading computer scientists is competing with the Institute of Electrical and Electronics Engineers (IEEE) to establish national e-voting system standards that support reliability, security, and verifiability before the Election Assistance Commission (EAC) completes its own specifications. Together, the scientists have founded the Voting System Preference Rating (VSPR) initiative to describe a subtler technology criterion that covers electronic security issues outside the Federal Election Commission's voting system standard. VSPR leader and cryptographer David Chaum says, "With VSPR you could get something like a Consumer Reports review of a voting system." IEEE's P1583 team, meanwhile, has been working for over three years to define voting system standards, and is preparing to issue a second draft. The current P1583 draft lists the provision of paper ballots to verify votes as optional, but Stanford computer scientist and VSPR co-founder David Dill argues that a printed audit trail should be required. P1583 member and Harvard Fellow Rebecca Mercuri laments, "At the end of the day if someone demands a recount, they can't get one. They can only push a button to generate a new report." EAC technical committee member and consultant H. Stephen Berger asserts that a profound lack of funding on national, state, and local levels lies at the heart of the U.S. election system's problems--a view echoed by Mercuri, who recalls that the National Science Foundation denied her a grant for e-voting security research. Meanwhile, standards research is taking place concurrently with voting system acquisitions mandated by the Help America Vote Act, which requires that states buy new voting equipment by 2006, about the same time the standards will be finished.
    Click Here to View Full Article

  • "IT Security Is the Industry's Burden"
    Government Computer News (10/25/04) Vol. 23, No. 31, P. 20; Jackson, William

    The National Strategy to Secure Cyberspace is extremely important, but its implementation has been weak, says Cyber Security Alliance of Washington leader Paul Kurtz, whose last post was special assistant to the president and senior director for critical infrastructure protection. Kurtz says he left the government in part because he felt he could do more for cybersecurity working in the private sector, but adds that working on the president's Critical Infrastructure Protection Board was one of the most positive things he did for cybersecurity. The Cyber Security Alliance is a public policy advocacy group formed by IT security industry leaders who are identifying priorities that need more government research funding. Kurtz believes cybersecurity should be approached from a business-risk viewpoint, given that most of the owners and operators of critical infrastructure are members of the private sector. Much has been done in government IT security, but there is still a way to go, and the laws already in place need more resources to support them. Federal coordination, a common set of information security principles, and contingency planning also need attention, Kurtz adds. Kurtz says, "There is a fundamental misunderstanding of the importance of cybersecurity in government and in the private sector," and adds that the implementation of the National Strategy to Secure Cyberspace was hurt by the lag in staffing the Information Analysis and Infrastructure Protection Directorate. However, he says the government cannot and should not shoulder the entire burden of protecting cyberspace. He notes the private sector owns and operates the Internet infrastructure, and therefore enterprises and users must take the necessary steps to guard against cyberterrorism, which he says has so far remained just a threat.
    Click Here to View Full Article

  • "Better Software Through Source Code Analysis"
    InfoWorld (11/01/04) Vol. 26, No. 44, P. 47; Udell, Jon

    Source code analysis can bolster software's reliability and security. Such analysis is becoming increasingly critical due to security concerns, and achievable thanks to boosts in available computing power through Moore's Law. Source-code analyzers can help enforce best programming practices expressed as code patterns, as well as standard naming conventions and application programming interface (API) usage patterns; furthermore, analyzers help eliminate resource leakage and vulnerable use of APIs. Analysis with an interprocedural or global scope is the most powerful type of analysis: The process involves comparing patterns detected in one function or method with patterns detected elsewhere within the program, and the analyzer does this by tracking data flow across the entire program, building a program model, and simulating execution paths. The context in which the program operates should not be discounted in interprocedural analysis. The core ingredient of all source code analyzers is the set of rules that define patterns of error; analyzers furnish a generic series of rules and usually allow customers to extend that set with rules cataloging their specific system and programming practice knowledge. It is theoretically possible to represent such rules in a standard format to facilitate direct analyzer comparison and consolidation of knowledge about common patterns, but the practical application of such a principle is an iffy proposition. Microsoft's Chris Lucas believes that source code analysis is becoming more effective thanks to the refinement of rules rather than techniques, while Coverity analyst Benjamin Chelf explains that recent analyzers offer more balanced analysis.
    Click Here to View Full Article
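A rule that "defines a pattern of error" can be surprisingly small. The sketch below, an invented example rather than any commercial tool's rule set, flags one resource-leak pattern the article alludes to: a file opened outside a `with` block, where nothing guarantees it is closed.

```python
# Minimal rule-based source analyzer: walk the AST and flag open() calls
# that are not used as context managers (a possible resource leak).
import ast

def find_bare_opens(source):
    """Return line numbers of open() calls outside a `with` statement."""
    tree = ast.parse(source)
    # First pass: collect the lines where open() appears inside `with`.
    with_lines = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.With):
            for item in node.items:
                expr = item.context_expr
                if (isinstance(expr, ast.Call)
                        and isinstance(expr.func, ast.Name)
                        and expr.func.id == "open"):
                    with_lines.add(expr.lineno)
    # Second pass: any other open() call is reported as suspect.
    hits = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "open"
                and node.lineno not in with_lines):
            hits.append(node.lineno)
    return hits
```

Real analyzers generalize exactly this structure, a library of such predicates run over a program model, with the interprocedural machinery the article describes layered on top.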

  • "Data Centers Get a Make-over"
    Computerworld (11/01/04) Vol. 32, No. 44, P. 23; Anthes, Gary H.

    The transition to distributed and virtual processing, ultradense server racks, a need for instant fail-over, and new requirements for IP telephony and voice over IP are driving changes in the design, hardware, and function of data centers. Affordable heat management is perhaps the most formidable challenge in data center operations: California Data Center Design Group President Ron Hughes reports that a typical data center consuming 40 watts of power per square foot costs $400 per square foot to construct, when including the costs of air conditioning, uninterruptible power supply units, power generators, and related hardware; he thinks construction costs could skyrocket to $5,000 per square foot by 2009. Business Technology Partners President Joshua Aaron believes data center design will be increasingly determined by communications needs, especially with the spread of voice over IP. He anticipates that future design will incorporate power-failure relays to support 911 service, along with backup power hardware for voice gateways, media gateways, and IP phones. Another factor in data centers' evolution is increasingly flexible, dynamic, and user-transparent workload processing distribution, due to the availability of inexpensive "dark" fiber and new virtualization software. Aaron notes that this strategy speeds up and eases disaster recovery and helps avert single points of failure. This development is spurring the creation of co-production data centers, which Fannie Mae facilities manager Terry Rodgers says must deliver "continuous availability" with instant fail-over. A Tier IV data center with two independent electrical systems is needed to support continuous availability.
    Click Here to View Full Article

  • "Creator: Ivan Sutherland"
    Desktop Engineering (10/04) Vol. 10, No. 2, P. 28; Dalton-Taggart, Rachael

    Ivan Sutherland is best known for the 1963 MIT Ph.D. thesis that led to Sketchpad, the first graphical user interface, which paved the way for much of the computer-aided design (CAD) industry. Cyon Research's Dr. Joel Orr recalls that Sutherland's paper and dissertation served as a blueprint for the interaction between people and computer graphics. "So clear and comprehensive was his exposition, that it wound up being a master plan for the computer graphics industry--as the hardware advanced to the point of being able to inexpensively express all that Sutherland outlined," he explains. Sketchpad introduced many tools and methods of enduring value, such as a recursively traversed hierarchical framework for modeling graphical objects, recursive techniques for geometric transformations, and a display file for screen refresh. During his tenure at the University of Utah, Sutherland and David Evans founded the school's Computer Science program, and helped cultivate an exceptional group of inventors and future industry leaders, including laptop creator Alan Kay, Adobe Systems founder John Warnock, and Silicon Graphics and Netscape founder Jim Clark. As a Caltech professor in the late 1970s, Sutherland made sizable contributions to the introduction of integrated circuit design as an academic discipline, which led to chip design advances and the foundation of Silicon Valley. In 1988, Sutherland's groundbreaking work with computer graphics earned him an ACM Turing Award. "Sutherland hit on techniques so elemental, that even with the incredible advances we attribute to 40 years of Moore's Law, they are still relevant," asserts AEC Automation News editor in chief Randall Newton.
    Click Here to View Full Article

  • "The End of Innovation?"
    Electronic Business (10/04) Vol. 30, No. 10, P. 42; Roberts, Bill

    Electronics executives are growing concerned that shrinking federal budgets for basic research could have a catastrophic effect on America's industry and economy. The Alliance for Science & Technology Research in America estimates that federal investment in math, engineering, and the physical sciences declined from 0.25 percent of gross domestic product (GDP) in 1970 to 0.16 percent in 2003; GDP has increased 100 percent since 1980 to total $12 trillion, but federal investment in basic research has fallen by about one-third. The American Association for the Advancement of Science finds that the 2005 R&D budget continues the trend of the last few years to increase investment in such areas as weapons development, homeland security, and the National Institutes of Health, while cutting back investment in other areas. Unless this trend is reversed, the association projects that within five years the Pentagon science and technology R&D budget will have plummeted by more than 17 percent from 2004 levels, while funding for the National Science Foundation will have fallen 4.7 percent. Complementary metal-oxide semiconductor (CMOS) technology is expected to hit a wall within 16 years, but investment in R&D projects to develop a replacement for CMOS is underfunded by $1.5 billion annually, an outrageous state of affairs in light of the key role CMOS plays in the U.S. economy and defense. Furthermore, there has been a 52 percent decrease in the number of industrial patents filed in the U.S. by Americans; U.S. engineering undergraduates are outnumbered by their Chinese, Japanese, and European counterparts; and Americans' share of Physical Review articles has dipped by about one-third since 1983. Many people blame the R&D situation on short-term thinking among politicians and executives, encouraged by a false sense of security stemming from the economic boom of the 1990s. Robert Atkinson of the Progressive Policy Institute laments that "most economists and policy makers don't really think technological innovation is that important to economic growth."
    Click Here to View Full Article

  • "Will Gesture-Recognition Technology Point the Way?"
    Computer (10/04) Vol. 37, No. 10, P. 20; Geer, David

    Gesture recognition offers potential benefits to fields that include surgery, automotive technology, prosthetics, gaming, security, and surveillance, but the technology needs significant refinement in order to be commercially viable. "Gesture recognition must prove it can yield results that existing peripherals can't already achieve, or users won't see the point in spending the time and money on the technology," notes Gartner Fellow Jackie Fenn. Gesture recognition systems collect gesture data through image- or device-based hardware approaches: The former technique captures images of a user's motions while gesturing, and the latter uses position trackers whose movements are translated into commands. Some systems have started to integrate the two techniques to collect more gesture data and facilitate more precise recognition. Meanwhile, the practicality of gesture recognition for widespread use is increasing thanks to declining costs for hardware and processing, according to Sony's Richard Marks. Cybernet has rolled out or is planning to roll out an array of gesture recognition products, such as UseYourHead 2, a game controller that translates head movements into commands by studying changes in facial color and hue saturation; and NaviGaze, a similar system that lets disabled people navigate online via head movements and eyeblinks. Among the obstacles gesture recognition must overcome in order to be widely accepted are the intrusiveness of input devices such as motion-tracking gloves, the lack of a universal gesture language, the distortion of input by busy or confusing backgrounds or variable lighting, and slow, resource-intensive image processing. Fenn expects gesture recognition technology to be limited to niche categories for the next few years because of the considerable investment required to create mainstream applications.
    Click Here to View Full Article