
ACM TechNews sponsored by AutoChoice Advisor: Looking for a NEW vehicle? Discover which ones are right for you from over 250 different makes and models. Your unbiased list of vehicles is based on your preferences and years of consumer input.
ACM TechNews is intended as an objective news digest for busy IT Professionals. Views expressed are not necessarily those of either AutoChoice Advisor or ACM. To send comments, please write to [email protected].
Volume 6, Issue 624:  Monday, March 29, 2004

  • "Gartner: 1/4 of U.S. IT Jobs Offshored by 2010"
    IT Management (03/26/04); Gaudin, Sharon

    One quarter of traditional U.S. IT jobs will have migrated overseas by the end of the decade, predicts research firm Gartner, but this migration will not necessarily entail layoffs of American workers in every case: Some of those positions will be created by American firms setting up shop in emerging countries, says Gartner analyst Diane Morello. Gartner VP Ian Marriott told attendees at a recent European IT conference that "The potential cost advantages [of offshoring] are so persuasive that companies that don't consider it seriously risk doing their shareholders a disservice." He added that businesses that fail to leverage this trend will endanger their competitive advantage and their ability to expand through innovation. Meanwhile, Forrester Research projects that 3.3 million jobs amounting to $136 billion in wages will be exported over the next 15 years, and Deloitte Research forecasts that 275,000 telecom industry positions will have moved overseas within four years. Offshoring's chief attraction to businesses is the dramatic wage difference between U.S. IT workers and their counterparts in countries such as India, China, and the Philippines: Kevin Schwartz of E5 Systems points out that an entry-level programmer in the United States earns an average yearly salary of $50,000 to $60,000, while a programmer in China earns only about a tenth as much. Challenger, Gray & Christmas CEO John Challenger calls Gartner's findings alarmist, arguing that IT jobs will stay local because so many such positions are being generated in small- and medium-sized businesses. He notes that the globalization of the IT workforce may put downward pressure on salaries, claiming that such a scenario is "a more realistic concern than to think that such a huge chunk of IT jobs will disappear in short order."
    Click Here to View Full Article

  • "National Security Spec Advances"
    eWeek (03/29/04); Fisher, Dennis

    This week will mark the unveiling of the Open Specification for Information Sharing (OSIS) by Regional Alliances for Infrastructure and Network Security (RAINS), a consortium of technology companies, critical infrastructure providers, universities, and public agencies. OSIS is designed for the secure, real-time exchange of sensitive data across heterogeneous networks in times of emergency via Web services; it was created primarily for first responders, government agencies, and other entities for which rapid data-sharing is vital. RAINS officials say the OSIS architecture can be implemented on existing systems, but no single vendor's products will serve as the basis for the framework. "Single-vendor systems give you interoperability but not choice or security," explains RAINS Chairman Charles Jennings. "This is about bringing innovation to the process. It's not just an abstraction." Services RAINS intends to deploy include targeted alert notifications, command-and-control functionality, secure email, and a unified operational perspective. To achieve these goals with Web services, OSIS is designed to conform to most existing standards, including the Security Assertion Markup Language, Web Services Security, Web Services Security Policy, and Web Services Trust. RAINS has been discussing the possibility of bringing the Homeland Security Department into the program, and Jennings expects the department will be involved by year's end. The OSIS architecture has already been deployed in certain areas, and an international implementation is also a possibility.
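
    A minimal sketch (Python, standard library only) of the kind of standards-based message a specification like OSIS builds on: a SOAP envelope carrying a WS-Security UsernameToken header. The SOAP and OASIS WS-Security namespaces are the published ones; the "Alert" payload and the username are hypothetical stand-ins, since the article does not describe OSIS's actual message formats.

        import xml.etree.ElementTree as ET

        SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
        WSSE = ("http://docs.oasis-open.org/wss/2004/01/"
                "oasis-200401-wss-wssecurity-secext-1.0.xsd")

        # Envelope with a WS-Security header, the standard cited in the article.
        env = ET.Element(f"{{{SOAP}}}Envelope")
        header = ET.SubElement(env, f"{{{SOAP}}}Header")
        security = ET.SubElement(header, f"{{{WSSE}}}Security")
        token = ET.SubElement(security, f"{{{WSSE}}}UsernameToken")
        ET.SubElement(token, f"{{{WSSE}}}Username").text = "responder01"  # hypothetical

        body = ET.SubElement(env, f"{{{SOAP}}}Body")
        alert = ET.SubElement(body, "Alert")  # hypothetical OSIS-style payload
        alert.text = "Targeted alert notification"

        print(ET.tostring(env, encoding="unicode"))
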
    Click Here to View Full Article

  • "Congress Moves to Criminalize P2P"
    Wired News (03/26/04); Jardin, Xeni

    Legislation circulating in Congress would significantly stiffen the penalties for illegal file-sharing as well as make it easier for the Justice Department to sue copyright pirates. A draft bill under consideration by the House subcommittee on intellectual property would mean jail time for file-sharers who make available more than 2,500 pieces of content, content that has not yet been commercially released, or even a single file determined by a judge to be worth more than $10,000. Sources said Rep. Lamar Smith (R-Texas) is willing to propose the bill if co-sponsors can be found. House Judiciary Committee representative Jeff Lungren said the draft bill was not authored by committee staff, but was one of the suggested ways to give Justice Department prosecutors more leverage. Separately, Sens. Orrin Hatch (R-Utah) and Patrick Leahy (D-Vt.) introduced the "Protecting Intellectual Rights Against Theft and Expropriation Act of 2004," also known as the Pirate Act; the proposed law lowers the bar for federal prosecutors to litigate against peer-to-peer network users. Hatch said peer-to-peer networks use illegal music, movies, and pornography to lure young people onto their networks, then hold those users for ransom against the entertainment industries. "Unfortunately, piracy and pornography could then become the cornerstones of a 'business model,'" declared Hatch. Both the Recording Industry Association of America and the Motion Picture Association of America are large campaign contributors to Hatch and Leahy. The industry group P2P United protested the draft subcommittee bill and said such legislative measures would not compensate content creators and would unfairly punish a small percentage of file-sharers.
    Click Here to View Full Article

  • "Can India Plug Its Brain Drain?"
    Technology Review (03/24/04); Hariharan, Venkatesh

    The Indian Institutes of Technology (IIT) are setting up startup incubators to encourage the country's best and brightest engineers to stay and make their fortune at home. The list of IIT graduates who fill major positions in the technology industry is impressive, given that only about 3,000 new students are inducted into one of the seven IIT campuses each year. IIT Bombay set up the first incubator three years ago and has since spawned 13 companies, four of which have spun out from IIT; IIT Bombay professor D.B. Phatak says the idea of the incubators is to create more high-value technology business in India--technical innovations as opposed to software services or business process outsourcing. Although the ultimate success of the endeavor is still to be proven, early instances are promising: Vishal Gupta turned down full scholarships at Cornell and the University of Southern California to turn his rules-based event engines into a business software product; his company, Herald Logic, sells software that clients use to monitor key business parameters, such as fund performance at a financial investment firm. Gupta says the IIT Bombay incubator helped with funding and with business advice provided by other incubated companies. He says venture capital is still a challenge, however, since lenders seem more interested in protecting their investment than in making the venture successful. Kanwal Rekhi, former Novell CTO and IIT Bombay benefactor, says the success of large Indian technology firms such as Wipro and Infosys has changed the national mindset so that entrepreneurship is now considered a viable career option; the IIT incubation effort and Indian entrepreneurship in general are getting help from Indians such as Rekhi who have made it big overseas, a phenomenon University of California, Berkeley professor AnnaLee Saxenian calls "brain circulation." Infosys CEO Nandan Nilekani and Cirrus Logic co-founder Suhas Patil are also among the mentors available to the incubator.
    Click Here to View Full Article

  • "Extra Headaches of Securing XML"
    CNet (03/29/04); LaMonica, Martin

    Web services pose a serious security threat to enterprise networks, given the lack of understanding of XML among security professionals and the low priority security has received in Web services deployments to date. Current Web services implementations are not much of a threat, since companies have used the technology to connect trusted entities such as internal departments and business partners, but as Web services grow in use and applications become more powerful, businesses will have to update their security apparatus. Hackers infiltrating corporate networks via Web services interfaces could cause much more damage than simply knocking out a Web site, since valuable business data is exposed in business-to-business applications, according to Gartner analyst Benoit Lheureux. XML messages travel inside ordinary IP packets, and most current network inspection technology does not filter for fraudulent XML content. Forrester Research analyst Randy Heffner foresees XML denial-of-service attacks in which systems are deluged with too many messages, or XML documents are manipulated to slow down the system; evidence that such attacks are possible is already appearing, such as a recent SecurityFocus bulletin describing an XML External Entity attack, whereby an incorrectly configured XML parser can be exploited to gain network access or take down a network. Limited deployment makes Web services networks a less attractive target for now, but the high level of skill required also serves as a deterrent, according to Sarvega CEO Chris Darby. Sarvega is one of a handful of new companies whose devices improve XML security or speed XML processing. The Sarvega XML Guardian Security Gateway helps encrypt XML files, enforce authorized-access policies, and create network activity logs.
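
    The External Entity attack mentioned above is concrete enough to demonstrate. In a minimal Python sketch (standard library only; the payload is a stock textbook example, not the one from the SecurityFocus bulletin), a document's DTD defines an entity that points at a local file; a permissively configured parser would read that file when the entity is expanded, while disabling external entity resolution closes the hole.

        import io
        import xml.sax
        from xml.sax.handler import (ContentHandler, feature_external_ges,
                                     feature_external_pes)

        # Classic XXE payload: the DTD points an entity at a local file.
        malicious = b"""<?xml version="1.0"?>
        <!DOCTYPE order [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
        <order>&xxe;</order>"""

        parser = xml.sax.make_parser()
        parser.setContentHandler(ContentHandler())
        # Harden the parser: refuse external general and parameter entities.
        parser.setFeature(feature_external_ges, False)
        parser.setFeature(feature_external_pes, False)

        parser.parse(io.BytesIO(malicious))
        print("parsed; the external entity was skipped and the file never read")
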
    Click Here to View Full Article

  • "Human Studies Show Feasibility of Brain-Machine Interfaces"
    ScienceDaily (03/24/04)

    With funding from the Defense Advanced Research Projects Agency and the National Institutes of Health, Duke University Medical Center researchers have embarked upon initial human feasibility studies of controlling external devices by thought, via electrodes implanted in the brain. The researchers used arrays of 32 microelectrodes to record brain signals from 11 volunteer patients undergoing surgery to relieve the symptoms of Parkinson's disease and tremor disorders while the patients played a hand-controlled video game; the recorded brain signals could be used to predict hand movement, a prerequisite for reliable neural control of external devices. The research was co-directed by neurobiologist Miguel Nicolelis, who conducted earlier experiments in which monkeys were trained to operate a robot arm using brain signals. Nicolelis says the electrodes were implanted much deeper within the brains of the human subjects than they were in the primate brains. Neurosurgeon Dennis Turner notes that this approach is advantageous for several reasons: Brain signals are theoretically easier to record from subcortical rather than cortical regions of the brain, and there is a greater density of cells to record from in a smaller area. The Duke researchers are now focusing on the development of a prototype "neuroprosthetic" device with a wireless interface. Turner says there are several potential applications for this technology, including a robotic limb for quadriplegic patients, a thought-controlled electric wheelchair, and a neurally controlled keyboard with text or speech output; such technology would be useful to paralysis victims as well as people with limited speech capability. The researchers have requested federal authorization to perform experimental long-term brain implants in quadriplegic patients, but they point out that many years of development and clinical trials are necessary before neuroprosthetics become available.
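
    The article does not detail the Duke team's decoding method, but the underlying idea of predicting movement from multichannel recordings can be illustrated with a simple linear model. A toy sketch with numpy, on synthetic data standing in for 32-electrode firing rates and a one-dimensional hand-velocity signal:

        import numpy as np

        rng = np.random.default_rng(0)
        n_samples, n_electrodes = 1000, 32

        # Synthetic firing rates and a hand-velocity signal they encode.
        true_w = rng.normal(size=n_electrodes)
        X = rng.poisson(5.0, size=(n_samples, n_electrodes)).astype(float)
        y = X @ true_w + rng.normal(scale=2.0, size=n_samples)

        # Fit a linear decoder on the first 800 samples, test on the rest.
        w, *_ = np.linalg.lstsq(X[:800], y[:800], rcond=None)
        prediction = X[800:] @ w
        r = np.corrcoef(prediction, y[800:])[0, 1]
        print(f"held-out correlation, predicted vs. actual velocity: {r:.2f}")
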
    Click Here to View Full Article

  • "IT Security and Software Development"
    TechNewsWorld (03/26/04); Halperin, David

    As hardware and software proliferate, there is a pressing need to address interoperability and security issues, such as whether the technologies will interoperate reliably under all potential test scenarios. Unfortunately, the number of software combinations that need to work together in a secure manner--and in an environment that faces a rising tide of malware--is nearly limitless. Aberdeen Group VP Jim Hurley explains that it is simply too exhaustive a job for a software supplier to model all possible hacking outcomes. British IT consultant David Quinn thinks part of the interoperability problem stems from the large teams tasked with building major applications and operating systems, contending, "You try to set standards and 'middle bits' that everything talks to [in order to] try and cut down the diversity. But you're never going to completely cut it down." Quinn adds that flawed system design concepts are also a major part of the problem, but eliminating them is unlikely because of business imperatives. Mi2g Intelligence Unit executive chairman D.K. Matai says poor configuration management is responsible for 90 percent of successful hacker attacks, and his suggestion is that, whatever the established security holes are, "the appropriate patches ought to be applied, and the default configurations and services which are running on a particular system ought to be shut off if they are not needed." Matai predicts that more ruthless security measures will be implemented in the future, including stricter authentication, such as frequently changed random passwords and biometric/smartcard combinations; the transfer of complex data from a user's computer to an upstream "vault" maintained by a bank-like entity supplying data-custody services; and governments either limiting the capabilities of commercially sold computers or requiring users to demonstrate greater circumspection should their computers be hacked.
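
    Matai's advice about shutting off unneeded default services lends itself to a simple check. A minimal Python sketch that scans a host's low TCP ports and flags listeners outside an expected set; the expected-port list is a hypothetical example, and a real audit would also cover UDP, all 65,535 ports, and the patch level of each service:

        import socket

        EXPECTED = {22, 80}  # hypothetical: only SSH and HTTP should be running

        def listening_ports(host, ports, timeout=0.3):
            found = []
            for port in ports:
                with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                    s.settimeout(timeout)
                    if s.connect_ex((host, port)) == 0:  # 0 means it connected
                        found.append(port)
            return found

        open_now = listening_ports("127.0.0.1", range(1, 1025))
        print("listening:", open_now)
        print("not in expected set, review or shut off:",
              [p for p in open_now if p not in EXPECTED])
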
    Click Here to View Full Article

  • "Rising to the Software Challenge"
    VNUNet (03/26/04); Hoare, Sir Tony

    A major challenge for scientific research in computing is establishing a connection between the theory of programming and the products of software engineering practice, writes Microsoft Research's Sir Tony Hoare; he says this task could be accomplished by organizing a grand challenge project that motivates programming scientists to reach the goal through collaboration and competition. There are many instances throughout scientific history in which the drive toward such a challenge has led to significant breakthroughs; an example in the field of computer science is the development of a chess program that was able to play and eventually trounce an international champion. Hoare comments that similar challenges in computing research remain, one being the construction, assessment, and deployment of a verifying compiler for a high-level programming language--one that confirms that every program it compiles will perform correctly. Such a compiler would allow many existing classes of programming error to be prevented, and it would be built out of technologies already embedded in software engineering tools. Hoare writes that the elements for developing the compiler exist. "We just need some way of weaving them together into a single program development and analysis tool, for use first by researchers, and later by professional programmers," he maintains. Most grand challenges are characterized by broad international participation, rapid and open publication, and the need to solve a fundamental puzzle that resides at the heart of a scientific discipline.
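
    The reasoning a verifying compiler would mechanize goes back to Hoare's own logic, where an assignment statement is checked by substituting the assigned expression into the postcondition. A toy Python sketch, not a real verifier: it computes this weakest precondition textually and then checks a Hoare triple by brute force over a small integer domain, where a production tool would use theorem proving rather than enumeration.

        import re
        from itertools import product

        def wp_assign(var, expr, post):
            """wp(var := expr, post) is post with var replaced by (expr)."""
            return re.sub(rf"\b{var}\b", f"({expr})", post)

        def triple_holds(pre, var, expr, post, names, lo=-5, hi=5):
            """Check {pre} var := expr {post} over a small integer domain."""
            goal = wp_assign(var, expr, post)
            for values in product(range(lo, hi + 1), repeat=len(names)):
                env = dict(zip(names, values))
                if eval(pre, {}, env) and not eval(goal, {}, env):
                    return False, env  # counterexample found
            return True, None

        # {x >= 0} x := x + 1 {x > 0} holds; {x >= 0} x := x - 1 {x > 0} fails.
        print(triple_holds("x >= 0", "x", "x + 1", "x > 0", ["x"]))
        print(triple_holds("x >= 0", "x", "x - 1", "x > 0", ["x"]))
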
    Click Here to View Full Article

  • "Rules for Effective Source Code Control"
    EarthWeb (03/25/04); Gunderloy, Mike

    Larkware lead developer and author Mike Gunderloy believes every software developer should use some form of source code control, which promotes more harmonious collaboration between team members and provides a secure backup of the code as well. When choosing a source code control system, there are a number of factors to assess, including price (some systems cost nothing, while others can cost hundreds of dollars per user); concurrent development style; the system's repository (some systems store data in a database, others in the file system, with varying levels of security and safety); Internet friendliness; IDE integration; and cross-platform support. Gunderloy lists items that developers might wish to store in their source code control system, such as source code, database build scripts, Windows Installer databases, graphical elements such as icons and bitmaps, license files, Readme files, informal development notes, test scripts, build scripts, help files, and documentation. The author notes that storing everything a developer needs in a single repository makes it easier to handle support requests for older versions of products, and adds that developers should remember that copying or removing files from source code control makes it more difficult to re-create previous project versions. Gunderloy recommends that developers check changes into the repository frequently and in small increments, which among other things maintains the usefulness of comments in the source code control system. He says labels or tags are useful since people are much better at recalling names than numerical codes; labels should be applied to a source code control repository at specific milestones. Gunderloy also posits that developers should create branches whenever different developers in the same project are working on code that diverges; branching can also be used to investigate alternative coding approaches and their consequences.
    Click Here to View Full Article

  • "Beyond VoIP"
    CIO (03/15/04) Vol. 17, No. 11, P. 102; Villano, Matt

    The Internet2 Technology Evaluation Center at Texas A&M University is working to build next-generation voice-over-IP (VoIP) technologies. Walt Magnussen, director of communications for the university, says the center's Internet2 Voice Over IP Working Group is concerned with issues such as reliability and interoperability, as well as capabilities that should be standard product features. Magnussen, who co-chairs the working group, says new applications of VoIP are also a focus of his colleagues. For example, Magnussen believes that enterprise customers will use the Session Initiation Protocol (SIP), the open signaling protocol developed by the Internet Engineering Task Force, to deliver videoconferencing, remote blackboarding, and other services over VoIP in the future. SIP sets up sessions faster than the current H.323 protocol. Meanwhile, the working group has pursued disaster-recovery capability after learning that technologists at Columbia University used the Internet2 network and VoIP to make calls on Sept. 11, 2001. "This began some work that looked at the possibility of routing calls around congested telephone networks in disaster situations over reliable packet-based networks," says Magnussen.
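
    Part of SIP's appeal for such services is that it is a simple, human-readable text protocol. A Python sketch of a minimal INVITE request; the addresses and tags here are invented for illustration, and a production message would also carry Via branch parameters, authentication, and an SDP body describing the media session:

        def sip_invite(caller, callee, host):
            # Minimal illustrative INVITE, not a complete RFC 3261 message.
            return "\r\n".join([
                f"INVITE sip:{callee}@{host} SIP/2.0",
                f"Via: SIP/2.0/UDP {host}:5060",
                "Max-Forwards: 70",
                f"From: <sip:{caller}@{host}>;tag=1928301774",
                f"To: <sip:{callee}@{host}>",
                f"Call-ID: 42@{host}",
                "CSeq: 1 INVITE",
                f"Contact: <sip:{caller}@{host}>",
                "Content-Length: 0",
                "",
                "",
            ])

        print(sip_invite("alice", "bob", "example.edu"))
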
    Click Here to View Full Article

  • "The Dating Game Goes Wireless"
    New Scientist (03/20/04) Vol. 181, No. 2439, P. 26; Biever, Celeste

    Researchers at MIT's Media Lab have developed software that would allow cell phones to exchange photos and personal information about singles in bars, cafes, shopping malls, and workplaces who are likely to make a good match. Developed by Nathan Eagle, Pedro Yip, Steve Kannan, and Doochan Han, the Serendipity software is designed to transform online dating services into a more spontaneous, real-time experience. Serendipity uses the phones' short-range Bluetooth radios to detect other participating handsets, checking for Bluetooth signals twice a minute, and an always-on GPRS or 3G connection to reach a central database over the Internet. When the software picks up a signal, it uses that Internet connection to tell the database which phone it has found, and if the singles' profiles match, it delivers an alert to both individuals. Users are able to set parameters such as degrees of separation, for instance sending out their profile only to other singles who are friends of friends. The developers believe Serendipity would do wonders for social sites. "Anything that can be done in real time where you interact with a person more quickly will help a lot," acknowledges Craig Newmark, founder of craigslist.org, a popular networking and dating Web site. Although some experts say a rise in such social networking schemes could ultimately prove a distraction, others say mixing technology and relationships is inevitable. Paul Martino, founder of tribe.net, says, "When proximity computing meets social networking, very interesting things will happen."
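
    The server-side matching step can be sketched in a few lines of Python. The profile store keyed by Bluetooth address, the shared-interest score, and the friend-of-a-friend gate below are all hypothetical stand-ins; the article does not describe Serendipity's actual data model or scoring.

        # Hypothetical profile store keyed by Bluetooth device address.
        PROFILES = {
            "00:0a:95:9d:68:16": {"name": "Ana", "friends": {"chris"},
                                  "interests": {"jazz", "hiking", "ai"}},
            "00:0a:95:9d:68:17": {"name": "Ben", "friends": {"chris", "dana"},
                                  "interests": {"jazz", "ai", "film"}},
        }

        def on_sighting(my_addr, seen_addr, threshold=2):
            """Called when one phone's Bluetooth scan spots another."""
            me, other = PROFILES[my_addr], PROFILES[seen_addr]
            shared = me["interests"] & other["interests"]
            mutual = me["friends"] & other["friends"]  # friend-of-a-friend gate
            if mutual and len(shared) >= threshold:
                return f"Alert: {other['name']} nearby; shared: {sorted(shared)}"
            return None

        print(on_sighting("00:0a:95:9d:68:16", "00:0a:95:9d:68:17"))
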

  • "Government IT Must Consider Privacy, Ethics"
    IDG News Service (03/24/04); Gross, Grant

    The nonprofit Association for Federal Information Resources Management (AFFIRM), which works to improve U.S. government information management, sponsored a panel at the recent FOSE government computing trade show to inspire discussion of privacy and other technology-related ethical issues. The panel of technology experts concluded that government technologists must consider ethical issues when choosing new applications such as biometrics, and AFFIRM is planning a Web site addressing technology and ethics, with a white paper to follow. AFFIRM President Scott Hastings says that some technologies are developing faster than the legislation meant to govern them. As CIO of the Visitor and Immigrant Status Indicator Technology immigration security program, Hastings says he spends more time considering whether his program should roll out new technologies than thinking about how to implement them. Hastings and SANS Institute director Alan Paller debated whether IT vendors should be expected to raise ethical issues when pitching their products to government buyers, something that is not part of most sales training. Paller says that government agencies should address ethical questions during procurement, using specific guidelines, but Hastings wants vendor-backed industry groups to address the issues. Such consideration is necessary if the public is to have confidence in new technologies; privacy is a key concern, for example, when citizens fill out government forms on the Internet.
    Click Here to View Full Article

  • "Future Tech: Trends for the Coming Year"
    Washington Technology (03/22/04) Vol. 18, No. 24, P. 37; Grimes, Brad

    Augmented reality systems are being used by the armed forces to train people to deal with dangerous situations. Unlike virtual reality, in which the user interacts with an artificial environment, augmented reality puts users in a real environment and superimposes digital images over their point of view via special masks or goggles. "This is a cost-effective way to create deadly training scenarios, but without the deadly consequences," explains Harmless Hazards CEO John Ebersole, whose company sold the Navy a system for training firefighters and will implement another tool for the Army Transportation School in Virginia. Meanwhile, software-defined radio (SDR) plays a vital role in the Defense Department's transition to network-centric operations, and law enforcement and first responders are also weighing the benefits of the technology. SDR solutions are designed to circumvent the incompatibility of assorted wireless devices. "Software-defined radio allows agencies to shrink the number of boxes and waveforms to get to the place where anybody can talk to anybody, any application can talk to any application, and any service can talk to any service," notes John Chapin of Vanu. His company sells SDR technology that uses software as the medium for signal processing, enabling commercial wireless devices to function as programmable, compatible radios. SDR technology is a core ingredient of the Defense Department's $5.7 billion Joint Tactical Radio System.
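
    "Software as the medium for signal processing" means the radio hardware only digitizes the airwaves, and everything from demodulation onward is ordinary code that can be reprogrammed for a new waveform. A toy numpy sketch that AM-modulates a tone and then demodulates it entirely in software (rectify, then low-pass filter); real SDR waveforms are far more involved than this:

        import numpy as np

        fs, fc = 48_000, 5_000                # sample rate and carrier (Hz)
        t = np.arange(0, 0.05, 1 / fs)
        message = 0.5 * np.sin(2 * np.pi * 200 * t)       # 200 Hz tone
        rf = (1 + message) * np.cos(2 * np.pi * fc * t)   # AM-modulated carrier

        # Demodulate in software: rectify, then low-pass via moving average.
        window = fs // 2000                               # ~0.5 ms window
        kernel = np.ones(window) / window
        envelope = np.convolve(np.abs(rf), kernel, mode="same")
        audio = envelope - envelope.mean()                # strip the DC offset

        r = np.corrcoef(audio, message)[0, 1]
        print(f"correlation of recovered audio with original tone: {r:.2f}")
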
    Click Here to View Full Article

  • "SANs Come Up to Speed"
    Computerworld (03/22/04) Vol. 32, No. 12, P. 30; Mearian, Lucas

    The 10Gbit Fibre Channel products due out by the end of this year are likely to be undercut by a 4Gbit standard that is less expensive and compatible with existing technology. The Fibre Channel Industry Association (FCIA) approved the 4Gbit standard last June, before approving the 10Gbit standard in November. United Air Lines senior storage systems architect Gary Pilafas says storage connection speed is not the main issue, since application servers are so much slower. Lawrence Berkeley National Laboratory says new 64-bit servers with faster internal buses will eat up the additional bandwidth within two years. Eventually, Fibre Channel will have to reach 10Gbit speeds in order to compete with 10Gbit Ethernet, but in the meantime, organizations are likely to adopt the 4Gbit Fibre Channel standard because it works with existing 1Gbit and 2Gbit equipment and costs just $1,000 per port, compared with about $5,000 expected for 10Gbit Fibre Channel ports. IDC analyst Rick Villars sees 4Gbit taking 10 percent of the market within one year of its third-quarter rollout; within two years, Villars says, 4Gbit will capture 90 percent of the market because of its price and performance benefits. Hitachi Data Systems chief technology officer Hubert Yoshida says 10Gbit Fibre Channel equipment will have a market in high-end SANs, explaining that 10Gbit products can function as interswitch links consolidating many switches in a SAN fabric, and 10Gbit could also enable virtual ports supporting up to 128 users. Even 4Gbit equipment will spur the need for 10Gbit ports, since large enterprises will want the faster equipment for interswitch links and for connecting internal disk drives to RAID controllers, says Arun Taneja of The Taneja Group. SAN equipment vendors and the FCIA are also beginning work on an 8Gbit Fibre Channel standard, due out in 2007, that would be backward compatible and probably still less expensive than 10Gbit.
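
    The price/performance argument reduces to simple arithmetic. Using the per-port prices quoted above, 4Gbit ports deliver twice the bandwidth per dollar:

        # Per-port prices quoted in the article: (Gbit/s, dollars per port).
        ports = {"4Gbit": (4, 1_000), "10Gbit": (10, 5_000)}
        for name, (gbps, price) in ports.items():
            print(f"{name}: ${price / gbps:,.0f} per Gbit/s of port bandwidth")
        # Prints $250 per Gbit/s for 4Gbit vs. $500 per Gbit/s for 10Gbit.
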
    Click Here to View Full Article

  • "Positive Prognosis"
    EDN Magazine (03/18/04) Vol. 49, No. 6, P. 34; Cravotta, Robert

    The increasing digitization of medical data and the incorporation of electronics into medical equipment are spurring a new focus on early diagnosis and prevention of ailments. The growing availability of diagnostic imagery carries with it the danger of medical-data overload, which could be mitigated through data- and image-processing technologies. One storage solution is data compression, although the medical community is still debating which compression techniques are best to use. Physicians can remotely access patient data and collaborate with other specialists thanks to network-aware, connected medical equipment, although how to view that information remains a challenge; medical practitioners and medical-device makers must work together to map out a general approach for the collection, storage, and transfer of data, as well as for the application of computer analysis and the display of patient information. To meet the challenge of legacy medical-system migration, device makers should consider gradually migrating legacy-software designs to a flexible, connection-conscious model while supplying tools and methods that let customers sustain backward compatibility with their medical devices and patient data. Another growing trend is medical equipment's support for noninvasive and minimally invasive diagnostic procedures, which the semiconductor and engineering-design industries are taking advantage of to land customers; shrinking noninvasive devices to the smallest possible size matters more than reducing their cost. Interest is growing in wearable sensors because of the importance of prolonged patient monitoring, particularly for patients with ailments that may require immediate attention. Such equipment offers more accurate analysis of a person's health than periodic hospital or office visits, and the technology may become an important preventive measure as life expectancies climb and more chronic diseases emerge.
    Click Here to View Full Article

  • "Bigger & Better"
    InformationWeek (03/22/04) No. 981, P. 34; Whiting, Rick; Ricadela, Aaron; McGee, Marianne Kolbasuk

    AT&T, Land Registry, Boeing, Experian Information Solutions, and the U.S. Defense Department are pushing the envelope in database technologies. AT&T has compiled two years of telephone records into a 26.3 TB data warehouse used by 3,000 staff for marketing analysis, pricing calculations, and the detection of fraudulent calling-card use, among other things; thanks to the warehouse, AT&T analysts can retrieve all calls made to a country from a particular area code in a specific month within 60 seconds, a task that previously took four to six weeks. The usage of call records is strictly controlled by the company, and data is partitioned so that workers can access only information relevant to their jobs. All of England and Wales' land-ownership records, including those dating back to the 18th century, are aggregated within the Land Registry's 20 TB transactional database, which has expanded significantly with the addition of maps and images of historical documents. The database is accessed by some 9,000 employees, and the feasibility of opening it up to the public is being explored; one of Land Registry's biggest challenges is managing the system, which means constantly reconfiguring the data for maximum efficiency even as it is being accessed and updated. Boeing's 4 TB database contains the specifications of every aircraft the company manufactures, processes almost 300 transactions each second, and is accessed by approximately 24,000 Boeing employees. Experian owns one of the busiest data warehouses in the world: Set up for an unnamed financial-services company, the database is used for direct marketing to consumers and handles 887 simultaneous queries at its peak; Experian IT director Hurkan Balkir says tens of terabytes are stored in the database. The Defense Department's data repository/data warehouse is expected to have an initial capacity of 5 petabytes when complete, eventually swelling to 30 to 50 petabytes. It will store medical records for 9 million military personnel; the data-repository component will be used by physicians as they treat patients, while the warehouse will be used to compile data for analysis by government researchers.
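
    The AT&T query described, all calls to a given country from a given area code in a given month, is easy to picture as SQL over a call-detail table. A toy sketch using Python's built-in sqlite3 with an invented schema and data; the real warehouse runs on far larger, partitioned infrastructure:

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE calls
                      (area_code TEXT, dest_country TEXT,
                       month TEXT, minutes REAL)""")
        db.executemany("INSERT INTO calls VALUES (?, ?, ?, ?)", [
            ("212", "IN", "2004-02", 12.5),
            ("212", "IN", "2004-03", 4.0),
            ("415", "IN", "2004-02", 7.2),
        ])

        count, total = db.execute(
            "SELECT COUNT(*), SUM(minutes) FROM calls"
            " WHERE dest_country = ? AND area_code = ? AND month = ?",
            ("IN", "212", "2004-02")).fetchone()
        print(f"{count} calls, {total} minutes")
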
    Click Here to View Full Article

  • "Spam-Busters"
    Network World (03/22/04) Vol. 21, No. 12, P. 69; Ulfelder, Steve

    Unspam CEO Matthew Prince, one of the top spam fighters in the United States, argues that for spam to be curtailed several things must happen, starting with the development of technology that can establish and confirm a sender's identity, which would make anti-spam laws more enforceable and effective. These laws, Prince contends, must "decrease the cost of tracking down spammers, decrease the cost of bringing a trial, increase the likelihood of success at trial or increase the social benefit from winning a trial." Shlomo Hershkop, a Ph.D. candidate at Columbia University, is amazed that spam has become such a large problem, given that technology intelligent enough to combat it effectively already exists. He also thinks that spam will linger far past Microsoft Chairman Bill Gates' projected mid-2005 deadline. Freelance anti-spam software developer Matt Knox argues that spam is a technical problem that must be remedied with a technical solution, and echoes Gates' optimism that spam's demise is imminent, partly because of improving, easier-to-use spam filters. At the same time, he acknowledges that anti-spam legislation is important, although he is uncomfortable with leveraging the Digital Millennium Copyright Act against spammers. Software developer Terry Sullivan says authentication technologies are being overemphasized as a spam solution: "Everyday users do not make their ham/spam judgment based on the source of the message. They make it based on the content of the message." Sullivan likens the war against spam to the Pacific Theater in World War II, where progress was made in fits and starts; he also notes that strategies already exist that could effectively derail spam at the cost of email's convenience, and that formulating a less drastic solution will be a tough challenge.
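
    Sullivan's point that users judge spam by content is exactly what the statistical filters of this era exploit. A toy naive Bayes sketch in Python: score a message by how much more likely its words are under a spam corpus than a ham corpus, with add-one smoothing. The two training corpora here are tiny stand-ins for illustration only:

        import math
        from collections import Counter

        spam_docs = ["buy cheap pills now", "cheap pills cheap wow"]
        ham_docs = ["meeting notes attached", "lunch tomorrow with the team"]

        def counts(docs):
            c = Counter(word for doc in docs for word in doc.split())
            return c, sum(c.values())

        s_counts, s_total = counts(spam_docs)
        h_counts, h_total = counts(ham_docs)
        vocab = len(set(s_counts) | set(h_counts))

        def spam_score(message):
            """Positive means the words look more spam-like than ham-like."""
            score = 0.0
            for word in message.split():
                p_spam = (s_counts[word] + 1) / (s_total + vocab)
                p_ham = (h_counts[word] + 1) / (h_total + vocab)
                score += math.log(p_spam / p_ham)
            return score

        print(spam_score("cheap pills"), spam_score("meeting tomorrow"))
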
    Click Here to View Full Article

  • "What Are You Afraid Of?"
    Washingtonian (03/04) Vol. 39, No. 6, P. 39; Edmonds, Patricia

    Virtual reality (VR) systems are being used to treat eating disorders, post-traumatic stress, phobias, addiction, and other conditions that are often psychological in nature. VR therapy is also being explored as a way to reduce physical discomfort by evoking peaceful or soothing moods in patients. Typical VR patients are immersed in a simulated environment, with a helmet providing a 3D image and headphones broadcasting sound; patients can navigate or interact with these worlds using accessories such as a joystick or glove. Georgia Tech computer scientist Larry Hodges and Emory University's Barbara Rothbaum jointly developed a VR program designed to re-create the feeling of ascending or being positioned at a great height as a treatment tool for acrophobics, and parlayed their expertise into Virtually Better, a company that sells therapy software to counselors. VR therapy and conventional in-vivo exposure therapy share the same success rate in treating patients with flying phobias, according to Rothbaum and Samantha Smith of Walter Reed Army Medical Center. Dave Thomas of the National Institutes of Health's National Institute on Drug Abuse (NIDA) reports that VR is effective against phobias because such fears are triggered by "cues in the environment--and with VR you can control the cues in the environment in extremely slight ways." Although some VR therapies strive for realism, others support more whimsical environments designed to detach patients from physical stressors: One example is SnowWorld, a program in which patients play in a stylized icy construct so they are less aware of painful medical procedures. VR programs are also being investigated as tools for treating addictions such as smoking, by exposing patients to environments filled with objects that evoke their cravings and gradually desensitizing them to those objects. NIDA's Ro Nemeth-Coslett notes that VR has some disadvantages, including the nausea, dizziness, and eyestrain it can sometimes cause; furthermore, the technology is expensive and not widely available.


