Association for Computing Machinery
Timely Topics for IT Professionals

About ACM TechNews

ACM TechNews is published every week on Monday, Wednesday, and Friday.


ACM TechNews is intended as an objective news digest for busy IT professionals. Views expressed are not necessarily those of either AutoChoice Advisor or ACM. To send comments, please write to technews@hq.acm.org.
Volume 6, Issue 614: Friday, March 5, 2004

  • "Dueling Viruses Are Latest Computer Pest"
    Washington Post (03/04/04) P. E1; Musgrove, Mike

    Virus authors seem to be using their latest malware not just to hijack victims' computers, but to snipe at each other for some as yet unclear reason, according to security experts. Five new versions of the MyDoom, Bagle, and Netsky bugs were discovered on the Web in the space of three hours on the morning of March 3, and embedded within the coding of the viruses were threats and insults such as "MyDoom.f is a thief of our idea!" and "Bagle--you are a looser!!!" IDefense director of malicious code Ken Dunham suggested that Bagle and MyDoom's authors appear to be struggling for remote control of the systems their bugs have commandeered, while Netsky's creator is attempting to neutralize the others. Meanwhile, consumers and businesses worldwide are facing a barrage of virus-laden emails, some of which appear to come from supposedly trusted sources. McAfee Security said the latest Bagle variant has clogged computer networks at a number of Fortune 500 companies, while MessageLabs reported that as many as one out of 19 emails was contaminated by a Netsky variant on Wednesday morning. Since their debut in January, 11 versions of Bagle and seven variants of MyDoom have appeared, while six versions of Netsky have been released since February. Some computer security specialists believe the original authors of the Netsky and Bagle viruses may be responsible for the insulting or threatening statements in the code of the latest variants, because those bugs' source code has not been published on hacker sites. Network Associates virus research manager Craig Schmugar said the latest bugs were probably not written as direct responses to each other because they take too long to write.
    Click Here to View Full Article
    (Access to this site is free; however, first-time visitors must register.)

  • "'To Speak to an Operator, Start Swearing Now'"
    Financial Times (03/05/04) P. 8; Rigby, Rhymer

    University of Southern California researchers are working on software that switches from automated telephony to a live operator by sensing that a caller is angry, which could boost customer loyalty if the technology can be practically applied. USC professor Shrikanth Narayanan explains that a great deal of emotional information is communicated within the speech signal of spoken language: "The energy of the signal is one cue, the speech rate is another cue and there's also lexical information such as swear words," he notes. "Then there are patterns of interaction that deviate from the norm, and so on." The software ascertains the likelihood that a caller is getting angry by integrating the data from these various sources. The system was trained to identify anger cues by examining 1,400 recorded phone conversations from an airline call center, and Narayanan says the software has an 80 percent to 85 percent accuracy rate in tagging angry callers. The USC professor thinks as long as two years will pass before the software is ready for commercialization. Manchester University's Martin Barry cautions that the software could be less effective when callers speak with certain regional accents, while the system could also be abused by callers who intentionally swear to avoid automated telephony altogether. Narayanan believes the technology could be employed in virtual reality training systems to measure the emotional states of users, and has engaged in such a project with the U.S. military. Another potential application is educational toys that combine voice and visual information to deduce a child's emotional state and form an appropriate response.
    Click Here to View Full Article
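    The cue-combination idea Narayanan describes can be sketched as a weighted score over acoustic and lexical features, squashed into a probability. Everything below is illustrative: the weights, bias, threshold, and tiny swear-word lexicon are invented for the example, not taken from the USC system.

```python
import math

# Hypothetical feature weights; a real system would learn these from
# labeled call-center recordings, as the USC team did.
WEIGHTS = {"energy": 1.2, "speech_rate": 0.8, "swear_words": 2.5}
BIAS = -3.0

SWEAR_WORDS = {"damn", "hell"}  # illustrative lexicon only

def anger_score(rms_energy, words_per_second, transcript):
    """Combine acoustic and lexical cues into an anger probability."""
    swear_count = sum(1 for w in transcript.lower().split() if w in SWEAR_WORDS)
    z = (WEIGHTS["energy"] * rms_energy
         + WEIGHTS["speech_rate"] * words_per_second
         + WEIGHTS["swear_words"] * swear_count
         + BIAS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing to [0, 1]

def route_call(rms_energy, words_per_second, transcript, threshold=0.5):
    """Escalate to a live operator when the combined score crosses a threshold."""
    if anger_score(rms_energy, words_per_second, transcript) > threshold:
        return "live operator"
    return "automated menu"
```

    A calm, slow caller stays in the automated menu; loud, fast speech peppered with flagged words tips the score over the threshold and routes to a human.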

  • "Lurking 'Spyware' May Be a Security Weak Spot"
    New Scientist (03/04/04); Knight, Will

    Researchers led by Steven Gribble at the University of Washington in Seattle conducted a study of campus network traffic and discovered that 5.1 percent of all Internet-connected systems had one of four known spyware programs--Cydoor, eZula, Gator, and SaveNow--running on them, while 69 percent of all departments and offices had at least one machine running the programs. The results of the study suggest that one in 20 Net-linked computers may be hosting spyware, which usually piggybacks on a system with other "free programs" and is used to record keystrokes or Web browsing activity or generate annoying pop-ups, among other things. However, Gribble and his colleagues uncovered an even more nefarious use for spyware through their study: Two of the programs they inspected, eZula and Gator, could be employed to run unauthorized code on a computer. The researchers learned that hackers could penetrate and hijack computers running these programs by using specially tailored network packets designed to convince the spyware that it was being sent a legitimate software update. "The danger is that the lack of visibility of this kind of software will mean alerts about vulnerabilities either don't get generated, or people won't pay attention to them," notes Gribble. He also points out that the study's findings are probably a conservative estimate of how much spyware is really out there, given that the university's computer users are more technically savvy than average users, and are less likely to unknowingly install spyware. The proposed SPYBLOCK Act may help regulate spyware, but Gribble thinks this is only a partial solution. His team believes user awareness of the problem must increase, while tools that can remotely scan for spyware-infested computers must also be used.
    Click Here to View Full Article

  • "Putting Wireless to the Test(bed)"
    TheFeature (03/04/04); Pescovitz, David

    U.S. university researchers have established a distributed wireless testbed for the integration and testing of new wireless communications technologies. Dubbed WHYNET and funded with a $5.5 million National Science Foundation grant, the meta-testbed consists of the linked wireless networks at five University of California campuses and the University of Delaware. UCLA researchers are testing wireless sensor networks on WHYNET that one day could track seismic stability in skyscrapers or the environment in a forest. The main focus of WHYNET is the interaction of physical layer elements such as the radio, antennas, and protocols, and how those interactions play out in wireless network performance, says UCLA computer science professor and WHYNET principal investigator Rajive Bagrodia. Often, prototype technologies are demonstrated in the laboratory setting but then encounter unexpected obstacles in variable real-world situations; WHYNET aims to prevent that from happening by allowing researchers a broader and more realistic research environment that is still controllable. Bagrodia says WHYNET is also about building up a basic body of knowledge concerning wireless communications, such as why more transmissions or inclement weather causes interference; several WHYNET projects are addressing these concerns, including research into smart base-station antennas that interact with multiple receiver antenna elements to lock in on the best signal, or using cellular handsets as peer nodes to improve coverage. Perhaps the most daunting challenge for WHYNET researchers is integrating existing communications protocols that have been built with different business requirements and performance targets in mind, says WHYNET investigator Ramesh Rao.
    Click Here to View Full Article

  • "Warning: Blogs Can Be Infectious"
    Wired News (03/05/04); Asaravala, Amit

    Researchers at Hewlett-Packard Labs used Intelliseek's BlogPulse Web crawler to mine numerous Weblogs, after which they mapped out the connections and topics shared among a large number of sites. Analysis showed that topics would often appear on a small number of relatively obscure blogs a few days before showing up on more popular sites. "There is a lot of speculation that really important people are highly connected, but really, we wonder if the highly connected people just listen to the important people," explains HP Labs researcher Lada Adamic. The team learned that when an idea "infected" at least 10 blogs, 70 percent of those blogs failed to supply links back to another blog that previously mentioned the idea, so the researchers devised methods to deduce the point of origin of information by noting textual, link, and infection rate similarities. "What we're finding is that the important people on the Web are not necessarily the people with the most explicit links [back to their sites], but the people who cause epidemics in blog networks," says HP researcher Eytan Adar. The scientists have encapsulated their techniques into the iRank search algorithm, which ranks sites according to how well they inject ideas into the mainstream. Future plans include making iRank resistant to Google-bomb-type attacks, while some of the team's research is accessible online via the Blog Epidemic Analyzer program. The HP Labs research could help sociologists chart the course of knowledge epidemics, which marketers could also exploit to sell their products directly to the most influential members of a group.
    Click Here to View Full Article
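    The inference step--treating earlier mentions of a topic as likely sources of later ones--can be sketched with a toy timeline. This is a simplification of the HP approach, which also weighs textual and link similarity; the blog names and the timing-only rule below are hypothetical.

```python
from collections import defaultdict

# Toy timeline: (blog, day the topic was first mentioned). Blogs that
# mention a topic earlier are treated as possible sources of later mentions.
mentions = [
    ("obscure-blog", 1),
    ("mid-blog", 3),
    ("popular-blog", 5),
    ("another-popular", 5),
]

def implicit_infection_edges(mentions):
    """Infer implicit links: each blog may have been 'infected' by any
    strictly earlier mention of the same topic."""
    edges = defaultdict(set)
    ordered = sorted(mentions, key=lambda m: m[1])
    for i, (later, t_later) in enumerate(ordered):
        for earlier, t_earlier in ordered[:i]:
            if t_earlier < t_later:
                edges[earlier].add(later)
    return edges

def epidemic_rank(mentions):
    """Rank blogs by how many later mentions they plausibly seeded,
    rather than by explicit inbound links."""
    edges = implicit_infection_edges(mentions)
    return sorted(((blog, len(down)) for blog, down in edges.items()),
                  key=lambda kv: -kv[1])
```

    On this toy data the obscure early blog outranks the popular latecomers, mirroring the article's point that the idea-originators are not necessarily the most-linked sites.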

  • "A Car That Drives Itself? He's Working on It"
    SiliconValley.com (03/05/04); Langberg, Mike

    The Defense Advanced Research Projects Agency's (DARPA) Grand Challenge is a March 13 race from Barstow, Calif., to the California-Nevada border: The racers are automated vehicles that will be required to cross some 200 miles off-road without human assistance, their only guidance being speed limits and a series of coordinates disclosed just two hours before the race. DARPA organized the contest in an effort to encourage researchers and engineers throughout the country to develop autonomous vehicles that could have military applications, after the agency's own initiatives to create such robots yielded little progress. The teams building the vehicles are required to incorporate radar, sonar, laser beams, and video cameras so that the surrounding landscape can be accurately perceived, as well as work out a way for the vehicles to negotiate obstacles. Adding to the difficulty are the effects of speed on the vehicles' onboard computers. Velodyne Acoustics founder David Hall's Team Digital Auto Drive has been modifying a Toyota Tundra pickup to compete in the Grand Challenge by outfitting its cab roof with high-resolution video cameras protected by glass against dust, water, dirt, and mud, while a pair of 1 GHz digital signal processing chips will examine 60 frames a second. Other processors will assess 100 potential routes per second and select the optimal path. The first automated vehicle to reach the finish line within 10 hours will win $1 million, but Hall is doubtful that any vehicle, including his own, will successfully complete the course. He expects the Tundra to stop short at some point, either because of a computing error or a mechanical malfunction, but he hopes DARPA will be suitably impressed with his efforts to fund further development of his technology.
    Click Here to View Full Article

  • "Tools Let Network Operators See Their Way to Security"
    Champaign News-Gazette (IL) (03/04/04); Kline, Greg

    National Center for Supercomputing Applications (NCSA) researchers have developed a new network monitoring tool that puts raw network data in visual form so human operators can process it more quickly and effectively. The idea for the new software tools came after two NCSA researchers discussed how the network was like a cornfield--an apt analogy given the NCSA's East Central Illinois location--that people could more effectively watch from a bird's-eye view. The NVisionIP and VisFlowConnect tools allow network operators to view network topology, including internal and external connections, with color-coded web maps, graphs, and charts. They can zero in on individual machines for more detailed inspection. "Raw data is hard to go through," says NCSA chief security engineer Jim Barlow, one of the researchers who first conceived of the tools. "A picture's worth a thousand words." Humans are able to understand visual images and identify emerging threats faster, while automated network alarms sometimes fail to catch new threat patterns that human operators will be able to quickly detect with a visual system. The University of California, Berkeley, MIT, and Stanford University have already signed up for the new software, which is being made available this month. The project was funded by the Office of Naval Research to help strengthen network defense in military operations.
    Click Here to View Full Article

  • "Walking 'Signature'"
    Washington Times (03/04/04); Geracimos, Ann

    Researchers are working on gait signature technology whose applications, if perfected, could include security surveillance, medical diagnosis of movement disorders, and computer animation. Human motion signatures would be determined by the same techniques facial-recognition software employs to identify people's faces under variant lighting, angles, or expressions. "I wanted to extract an individual's personal style and the explicit manner in which they move in a way that translates across different motions and is consistent," notes New York University research scientist Alex Vasilescu, whose work on facial recognition earned her accolades from MIT's Technology Review last year. The first step is to capture a person's gait and analyze their movement through playback to ascertain the signature and all its nuances, while the next step is to determine how close a signature motion is to normal, which would allow the progress of a person undergoing physical therapy to be mapped, for instance. Factors that can complicate the accurate matching of gait patterns to individual persons include shadows, variable walking surfaces, and any loads a person may be carrying. The Technical Support Working Group's David Herrington lauds Vasilescu's research but says human gait studies may not be practical for perhaps a decade. The Defense Advanced Research Projects Agency's Human Identification at a Distance initiative studied several university efforts to develop computer algorithms to identify people based on their walk, but the project, which shared the same office as the controversial Total Information Awareness project, was terminated last fall. The University of Maryland's Larry Davis reports that the technology was not mature enough to be added to any surveillance methodology, in any case.
    Click Here to View Full Article

  • "Hands Off! That Fact Is Mine"
    Wired News (03/03/04); Zetter, Kim

    The Database and Collections of Information Misappropriation Act, which is expected to be reviewed by the House Commerce Committee on March 4, is drawing controversy because its provisions essentially permit certain companies to own and license facts, making anyone who copies and redistributes those facts without authorization vulnerable to criminal prosecution, critics contend. The bill's major supporters are LexisNexis database owner Reed Elsevier, leading legal database publisher Westlaw, and the Software and Information Industry Association; opponents include the American Association of Libraries, Yahoo!, Google, Verizon, Charles Schwab, and Bloomberg. Public Knowledge's Art Brodsky says the bill would allow anyone to monopolize facts entered into a database or a collection of materials, in direct violation of the Copyright Act, which stipulates that ownership does not extend to information and ideas. Commercial database companies counter that if they cannot build databases because of theft, then the public will not be able to access the information. Opponents argue that the legislation would make facts available only to those who can afford them, adding that databases are already protected by copyright statutes and usage agreements. Keith Kupferschmid of the Software and Information Industry Association claims the bill would be inapplicable in cases such as researchers using facts taken from databases to compose academic papers. "The bill only applies where someone takes a substantial portion of the database and uses it in a way that causes commercial harm to the provider of the data," he explains. However, Joe Rubin of the U.S. Chamber of Commerce says the bill places no restrictions on the amount of information a person has to take from the database to break the law.
    Click Here to View Full Article

  • "Paul Debevec on Illuminating Effects"
    Digital Post Production (02/15/04); Kaufman, Debra

    Paul Debevec of the Institute for Creative Technologies (ICT) has spent much of his career developing techniques to integrate computer graphics with computer vision to achieve more realistic visual effects, and he notes that attaining such realism greatly depends on giving computer-generated characters the appearance of being naturally illuminated by their surrounding environments. He has devised a solution that utilizes high dynamic-range photography methods to capture a panoramic image of real-world lighting, and says that Image-Based Lighting and High Dynamic Range Imaging can now be employed to light CG objects with images of captured illumination, as well as accurately simulate shadows. Debevec notes that digitizing reflectance properties--how light is reflected off costumes, props, and faces, for example--is a formidable challenge, but he thinks ICT has made progress in this area by filming objects illuminated by a moving overhead neon tube and analyzing their reflectance characteristics. He says they have successfully created digital models of a number of objects that boast the same reflectance properties as their real-world counterparts, and expects that more sophisticated and three-dimensional objects will be rendered in this way so that filmmakers could one day use a library of CG objects to employ in any virtual setting. ICT is also exploiting the same illumination capture methods used for artificial objects to tackle the problem of making actors filmed on a green screen seamlessly integrate with background plates or CG sets. Debevec says a prototype lighting stage has been constructed out of a sphere of inward pointing color LEDs that can recreate location or virtual illumination and direct it at actors. "My hope is that this sort of system will give cinematographers...the choice of having lighting that is anywhere between scientifically realistic and artistically interpretive," he states.
    Click Here to View Full Article
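    The core of image-based lighting--weighting captured radiance by the cosine of its angle of incidence on a surface--can be sketched in a few lines. The three-sample "environment map" below is a stand-in for a real captured HDR panorama; the directions and radiance values are invented for illustration.

```python
# Tiny sketch of diffuse image-based lighting: a handful of
# (direction, radiance) samples stands in for a captured HDR panorama.
env_samples = [
    ((0.0, 1.0, 0.0), (2.0, 2.0, 2.2)),   # bright, slightly blue sky overhead
    ((1.0, 0.0, 0.0), (0.4, 0.3, 0.2)),   # dim warm light from the side
    ((0.0, -1.0, 0.0), (0.1, 0.1, 0.1)),  # dark ground bounce
]

def diffuse_irradiance(normal, samples):
    """Sum incoming radiance weighted by the cosine of the incidence
    angle (Lambertian surface); light from below the horizon is clamped."""
    r = g = b = 0.0
    for direction, radiance in samples:
        cos_theta = max(0.0, sum(n * d for n, d in zip(normal, direction)))
        r += radiance[0] * cos_theta
        g += radiance[1] * cos_theta
        b += radiance[2] * cos_theta
    # Normalize by sample count as a crude Monte Carlo estimate.
    n = len(samples)
    return (r / n, g / n, b / n)
```

    An upward-facing surface receives the full sky sample, none of the side light, and none of the (clamped) ground bounce, so the CG object inherits the captured environment's color and intensity automatically.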

  • "NASA Considers Open-Source License to Publish Software"
    Government Computer News (03/02/04); Jackson, Joab

    NASA is creating a license so that it can release its own applications as open-source software, and says that a final version should be ready by early summer. NASA has submitted a draft to the Open Source Initiative, which accredits open-source software standards. NASA lawyer Bryan Geurts says the agreement includes indemnification from third-party usage liabilities and a voluntary request for reporting adoption. NASA employee Patrick Moran argues that the agency could benefit from more use of open-source software, including reviews of source code by more programmers--which could turn up more bugs and potential problems. Some of NASA's software is already available for educational use, though the underlying source code remains unavailable. Releasing the software under an open-source license would allow schools and research laboratories to more easily access NASA data for study.
    Click Here to View Full Article

  • "The Future of Computing Part 5: Evolution and the Bump"
    OSNews.com (02/26/04); Blachford, Nicholas

    Computing continues to evolve smoothly as software tools become more lithe and powerful and hardware scales in speed, but eventually limits to hardware development will force a jolting change in computing. Software evolution is shedding more complexity as programmers take to scripting languages such as Python and Python-related projects; eventually, application server and middleware vendors will begin to ship applications themselves that unsophisticated programmers can customize with the help of logic tools. Business development programming will no longer be challenging, and many software experts will turn to open-source solutions simply for the challenge. As companies become involved in open-source development, solutions will address outstanding integration issues. In the hardware field, FPGAs will allow technology workers to build their own adaptable CPUs with the help of specification libraries and automatically constructed compilers for programming and testing of the design. Modern computing has largely ridden on the back of Moore's Law, but soon manufacturing costs and physical limits will stop that advance; at that point, perhaps in 20 years, software programmers will be forced to change their habits and learn to work under processing restrictions. About this time, technologists will start using FPGAs to create CPUs that quickly adapt to the task at hand with the help of software. Software-friendly languages such as Stream-C will allow more people to write for their FPGA-based platforms, and FPGA-based CPUs will tap hardware libraries for application optimization, allowing an entire computer to morph into an MPEG encoder and back, for example. These flexible platforms will become even more powerful when coupled with artificial intelligence, and will make fast general-purpose CPUs obsolete.
    Click Here to View Full Article

  • "IU's CAVE Offers Virtual Reality"
    Indiana Daily Student (03/02/04); Zennie, Michael

    Computer science majors are not the only students at Indiana University taking advantage of the high-tech Cave Automated Virtual Environment inside Lindley Hall. Studio art majors Jackie Nykiel and Lisa Reinwald used the CAVE to design a replica of an ancient Buddhist temple surrounded by a lake. With serene Oriental music playing in the background, the virtual reality environment allowed Nykiel to leap high into the air, above the massive temple, as if she were in "Crouching Tiger, Hidden Dragon." In addition to serving research purposes, the CAVE offers applications that allow it to be used by business, medicine, chemistry, interior design, theater, and drama majors, says Dimitrij Hmeljak, an analyst and programmer with the Advanced Visualization Laboratory. The CAVE consists of three projector screens and a reflective floor, onto which images are projected and combined to create the fully-immersive environment. Users control movement in the CAVE with a "wand" that acts as a three-dimensional mouse, and the movement of the wand and the user's 3-D goggles help the tracking system to adjust to the perspective of images. Users gain the illusion of "walking around" in the environment as they turn their head or body and images rotate on the screen. As a result of the CAVE, art displays designed on PCs can be viewed in fully-immersive 3-D environments, rather than on PC screens.
    Click Here to View Full Article

  • "The Nuclear Weapon of Digital Rights Law"
    PC Magazine (02/27/04); Rupley, Sebastian

    The European Union Directive for the Enforcement of Intellectual Property Rights has raised the ire of a number of civil liberties groups, which describe the draft legislation as going well beyond the controversial Digital Millennium Copyright Act in the United States. The Electronic Frontier Foundation and IP Justice are among the groups that are opposing the directive. According to an advisory from IP Justice, an international civil liberties organization, the European Union directive would cover any intellectual property infringement, including actions that are minor, unintentional, and non-commercial, such as P2P file-sharing. "If you make a copy of a CD and give it to your mother, there are provisions within this directive for recording industry officials to raid your house, and there are similar provisions for doing things like freezing your bank account before there is any kind of hearing," says Robin D. Gross, executive director of IP Justice. Critics consider the directive to be the "nuclear weapon of IP law enforcement." The European Parliament's plenary session is scheduled to take up the draft legislation March 8 through March 11. The directive has been fast-tracked for a first reading by its rapporteur, French MEP Janelly Fourtou, who is the wife of the CEO of Vivendi-Universal.
    Click Here to View Full Article

  • "What Tomorrow May Bring"
    Federal Computer Week (02/23/04) Vol. 18, No. 4, P. S13; Joch, Alan

    The U.S. government is investing in pattern recognition software as a tool for anticipating and stopping terrorist incidents by organizing vast amounts of raw, unstructured data--email, voice transcripts, surveillance photos, etc.--into a minable model. Examples include an expansion of the FBI's Secure Collaborative Operational Prototype Environment, which FBI representative Paul Bresson calls an attempt to extract patterns and relationships from data that could help prevent terrorist attacks using commercially available tools. The Air Force's Project Eyes utilizes the Web-enabled Temporal Analysis System from the Air Force Research Laboratory, Northrop Grumman Information Technology, and Intelligent Software Solutions to analyze data collected by cameras on drone aircraft. Building successful pattern-matching algorithms is a multidisciplinary effort combining statistical analysis, semantics research, and other areas. Some analysts point out that pattern recognition technology has a long way to go before it is mature: Scott Weidman of the National Academy of Sciences notes that the usefulness of certain algorithms is lost when the amount of information they must deal with is raised from megabyte to petabyte levels. Meanwhile, Kofax Image Products CTO Sameer Samat remarks that pattern recognition software's ease-of-use must be improved. Another short-term goal is to make the technology more capable of processing intelligence stored in multiple languages. Samat says the general perception of pattern recognition has improved thanks to the promotion of the technology as a tool that "doesn't replace the need for humans; it makes the intelligence community more effective by helping it to refocus on higher-priority activities."
    Click Here to View Full Article

  • "Software and the Future of Programming Languages"
    Science (02/27/04) Vol. 303, No. 5662, P. 1331; Aho, Alfred V.

    Few people truly understand the scale and reach of software or the programming languages that support it; but such understanding is critical to addressing security, maintenance, and functionality issues that will grow in complexity as the embedded software base expands. It is estimated that the global software investment runs in the trillions of dollars, while the size of the embedded software base indicates that there could be between 5 million and 50 billion defective lines of code currently lying dormant. For this reason, testing for software reliability is the goal of many software research projects. Programming languages have evolved over time: The first machine languages were complex and took a long time to write, while assembly languages invented in the 1950s were more human-friendly; additional ease, speed, and conciseness were incorporated into the higher-level Fortran, Cobol, and Lisp languages in the latter half of the 1950s; the 1960s were notable for the development of very influential languages whose presence is still felt in the most popular modern-day software, such as object-oriented programming languages; the 1970s and 1980s saw the emergence of C and C++, while highly popular contemporary languages include scripting languages such as JavaScript, PHP, and Perl. The latest batch of major programming languages is influenced by C and C++, but adds features based on the concept that the information infrastructure must be robust and diffuse, and relies on design principles such as simplicity, hardiness, portability, Internet compatibility, and concurrency. Programming languages in use today number in the thousands or possibly tens of thousands, and there is at least one primary preferred language for every field. There are also compelling cases for programming languages under development that incorporate speech, pictures, gestures, and templates.
Increasing software reliability remains a difficult challenge, given that there is insufficient understanding and modeling of the human element in software development; under investigation are methods to build more reliable software systems while taking the human factor into account, examples of which include static type checking, model checking, and self-correcting systems.

  • "Sensors on the March"
    Computerworld (03/01/04) Vol. 32, No. 9, P. 28; King, Julia

    Sensor technology has made enormous strides recently thanks to power supply and programmability breakthroughs, and further milestones on the horizon will help make sensors and sensor networks even more efficient, flexible, and interoperable. So that sensors can conserve battery power, a Navy-funded project at MicroStrain is focusing on piezoelectric materials that produce electricity in response to physical stressors, such as vibrations. Another power-saving approach under investigation is the refinement of software that allows sensors to transmit summary information instead of a continuous flow of raw data. Meanwhile, Feng Zhao at the Palo Alto Research Center is working on an "information-driven sensor-querying" algorithm that programs sensors to collate and send data according to its applicability. Zhao explains, "What we're building is distributed attention for sensor networks. It's the ability to shift and focus attention when new stimuli of interest emerge." The TinyOS open-source operating system developed by Intel with the help of researchers at the University of California, Berkeley, allows sensors and sensor networks to furnish summaries of data or tabulated information in real time, which is then stored in the TinyDB database. The Open GIS Consortium is developing standards that will enable sensor networks to exchange data and be directed and queried by remote control. "A vision for the future is more autonomous sensor webs that can act on their own and communicate," notes Michael Botts of the University of Alabama in Huntsville, chief developer of the Sensor Model Language.
    Click Here to View Full Article
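    The summary-instead-of-raw-data idea is simple to sketch: a node batches readings and radios only a compact digest per window, since transmission dominates a sensor's power budget. The window size and summary fields below are illustrative, not drawn from TinyOS or any particular platform.

```python
# A node buffers raw readings and transmits only a compact summary per
# window, trading reporting detail for radio time (the dominant power cost).

def summarize_window(samples):
    """Collapse a window of raw sensor readings into a summary record."""
    n = len(samples)
    return {
        "count": n,
        "min": min(samples),
        "max": max(samples),
        "mean": sum(samples) / n,
    }

def transmit_summaries(raw_stream, window=10):
    """Yield one summary per `window` raw samples instead of every sample."""
    buf = []
    for reading in raw_stream:
        buf.append(reading)
        if len(buf) == window:
            yield summarize_window(buf)
            buf = []
    if buf:  # flush a partial final window
        yield summarize_window(buf)
```

    For 20 raw readings and a window of 10, the radio carries two small records instead of 20 transmissions; a query layer like TinyDB can then aggregate these digests further across the network.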

  • "Rekindle the Fire"
    Scientific Computing & Instrumentation (02/04) Vol. 21, No. 3, P. 14; Studham, Scott

    Scott Studham of Pacific Northwest National Laboratory writes that cost-effective commodity supercomputing clusters are effective tools for carrying out "first-principles simulations" of such things as weather systems and basic science, which are based on fundamental physical understanding or mathematical models. However, a different type of high-performance computer is required to solve "dynamic network problems" that are subject to constant change, and whose solutions are determined by finding relationships within the data itself. Examples of such chaotic models include human social networks, biochemical pathways inside a cell, and the U.S. power grid. Studham thinks the computer industry is mature enough to develop heuristic supercomputers and computer science that can generate solutions by devising networks of inter-data relationships, although this will be a formidable challenge. Meeting this challenge "will require new architectures and computer science and mathematics that will allow a vast amount of data to exist in one location and to have dynamic processing elements extract knowledge from the data, or coexist with the data," the author points out. Among the emerging technologies Studham believes could help enable this new computing model are Field Programmable Gate Arrays that support self-reconfiguring systems; arithmetic units being embedded directly in memory; and computer architectures and chips that can handle petabytes of memory on a global scale. Studham concludes, "The time has come for the high-performance computing industry to rekindle the no-holds-barred 'we will solve the hardest problems' attitude that seems to have died with Seymour Cray and to take on the challenge to build new computer architectures and mathematical models to understand dynamic network problems."
    Click Here to View Full Article

  • "A Good First Impression"
    Information Highways (02/04) Vol. 11, No. 2, P. 20; Bowness, Sue

    B.J. Fogg, director of the Stanford University Persuasive Technology lab, has determined through research that the first impression of a Web site's visual design is the single most important factor people use to establish a site's credibility or lack thereof. He says this is a disappointing finding, as he hoped his study would verify the conclusion of an earlier report by Consumer WebWatch, which indicated that such things as privacy policies or correction of inaccurate data were key determinants of site credibility. "The credibility of a Web site determines at the end of the day whether people use the Web site or not," notes Fogg, who points out that the same users who are so arbitrary in assigning credibility to a site will reject that site if it is unreliable. The overall conclusion of the study is that a company must possess a well-designed site to create a positive first impression, while another credibility-boosting element is the company's inclusion of certification by trusted parties, corporate brands, and other credentials on the site. For many enterprises, building Web site credibility is a natural outgrowth of the branding practices they followed prior to the advent of the Internet. Stanford has outlined a list of recommendations for establishing Web credibility. Strategic decisions organizations can make include ensuring that the accuracy of information on the site is easy to confirm; providing a physical address to show that the site is for a legitimate company; posting the credentials of corporate staff and affiliations with respected organizations; supplying clear and precise information; regularly upgrading or reviewing site content; avoiding or minimizing advertising and typos; and making the site useful and easy to use.
    Click Here to View Full Article