ACM TechNews sponsored by AutoChoice Advisor: Looking for a NEW vehicle? Discover which ones are right for you from over 250 different makes and models. Your unbiased list of vehicles is based on your preferences and years of consumer input.
ACM TechNews is intended as an objective news digest for busy IT Professionals. Views expressed are not necessarily those of either AutoChoice Advisor or ACM. To send comments, please write to [email protected].
Volume 6, Issue 606: Friday, February 13, 2004

  • "Is Cyberspace Getting Safer?"
    Medill News Service (02/11/04); Newell, Adrienne

    The Homeland Security Department's National Cyber Security Division (NCSD) is evaluating the progress of cybersecurity over the past year and outlining future security projects. Among the 2003 milestones the NCSD notes is the government's creation of a critical infrastructure information network, an Internet-independent federal communications resource that can be used in the event the Internet and other computer-based communications media are knocked out; NCSD director Amit Yoran reports that his agency has "significantly" widened the scope of the network. Another NCSD watershed is the establishment of the Cyber Interagency Incident Management Group, which brings together different experts to develop preventative cyberattack strategies as well as bolster the government against future cyberspace-based assaults. The NCSD unveiled a National Cyber Alert System in January designed to keep computer users apprised of viruses, worms, and other cyberthreats via email; Yoran notes that millions of computer users have accessed the system's Web site, and says his agency plans to expand the site to increase public awareness of security issues. The NCSD partnered with the private sector in a December 2003 summit to determine areas where cybersecurity needed to be heavily emphasized, such as spreading awareness and providing early warnings about intrusions, but Yoran calls current public-private partnerships to meet these goals "unacceptable," and is calling for additional participation. He adds that his division is forging new public-private collaborations to push for unified security objectives, and is advising software developers to make their programs more secure and less bug-ridden. Yoran says that developers are "encouraged [to] adopt...automated technologies that guide and force [them] to produce code with fewer vulnerabilities and fewer bugs."
    Click Here to View Full Article

  • "Software Bug Blamed for Blackout Alarm Failure"
    Associated Press (02/12/04); Jesdanun, Anick

    A Feb. 12 statement from industry officials attributes alarm failures that may have exacerbated last summer's Northeast power outage to a software glitch in the FirstEnergy infrastructure. A joint U.S.-Canadian task force probing the blackout reported in November 2003 that FirstEnergy staffers did not take action that could have contained utility malfunctions because their data-monitoring and alarm computers were inoperative. FirstEnergy's Ralph DiNicola says a software bug was determined to be the cause of the trouble by late October, and insists that the utility has since deployed fixes developed by General Electric, which worked in conjunction with Kema and FirstEnergy to trace the programming error. DiNicola says the malfunctions occurred when multiple systems attempting to access the same data simultaneously were all put on hold, whereas the software should have given priority to one of them. This led to the retention of data that should have been deleted, which resulted in performance slowdowns, and the backup systems were similarly afflicted. Kema's Joseph Bucciero says the software glitch arose because many abnormal incidents were taking place at the same time. Finding the error involved sifting through a massive amount of code, a process that took weeks, according to DiNicola. (A simplified illustration of this kind of contention follows this item.)
    Click Here to View Full Article
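
    The failure mode described above, in which systems contend for shared data with no priority rule and stale events pile up, is easy to illustrate. The toy Python simulation below is a hedged sketch of that general class of bug, not FirstEnergy, GE, or Kema code; the tick counts and drain rates are invented for the example.

      # Toy simulation: an alarm queue backs up when the event processor keeps
      # losing the race for shared data because no priority rule exists.
      from collections import deque

      def simulate(ticks, stall_processor):
          backlog = deque()
          processed = 0
          for t in range(ticks):
              # Several feeder systems post alarm events every tick.
              backlog.extend(f"event-{t}-{src}" for src in ("scada", "ems", "backup"))
              if stall_processor and t % 2 == 0:
                  # Processor blocked on contended data; nothing is drained this tick.
                  continue
              # A healthy processor drains a few events per tick.
              for _ in range(min(4, len(backlog))):
                  backlog.popleft()
                  processed += 1
          return processed, len(backlog)

      print("with a priority rule   :", simulate(1000, stall_processor=False))
      print("without a priority rule:", simulate(1000, stall_processor=True))

    In the stalled run the backlog grows without bound, mirroring the report's description of retained data degrading performance in both the primary and backup systems.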

  • "W3C Wraps Up Semantic Web Standards"
    InternetNews.com (02/10/04); Boulton, Clint

    The World Wide Web Consortium (W3C) declared the Resource Description Framework (RDF) and the OWL Web Ontology Language (OWL) standards for the Semantic Web on Feb. 10. The Semantic Web was envisioned by W3C director Tim Berners-Lee as a tool that uses metadata to embed more meaning in data so that users can carry out more precise searches. That metadata is supplied by the XML-based RDF, which integrates applications from library catalogs and global directories and amasses content and collections of photos, music, and events. The development of domain-specific vocabularies is the chief purpose of OWL, which is designed for applications that have to process the content of data rather than simply present that data to people. Web portals can benefit from OWL, because the language can be used to develop categorization rules to augment searches. In multimedia collections, OWL can be employed to facilitate content-based media searches; it can also prove useful to Web services by providing rights management and access control along with Web service discovery and composition. "This is quite an interesting day for the Semantic Web, which is about providing incrementally powerful capabilities for describing, managing and sharing data on the Web," noted MIT researcher and W3C Semantic Web Activity Lead Eric Miller. He observed that people's attitudes toward metadata and digital asset management have changed in recent years--they are no longer considered afterthoughts. Miller said, "I think almost more social than technical changes have happened in the last few years." (A toy example of RDF's triple model follows this item.)
    Click Here to View Full Article
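
    As a rough illustration of what the item above means by metadata, RDF expresses facts as subject-predicate-object triples. The Python sketch below is illustrative only: the identifiers (photo:42, dc:creator, and so on) are invented for the example, and a real deployment would use an RDF store and OWL vocabularies rather than a list of tuples.

      # A handful of RDF-style triples about a photo collection, plus a tiny
      # pattern-matching query over them.
      triples = [
          ("photo:42", "dc:creator", "Alice"),
          ("photo:42", "dc:subject", "Mars rover"),
          ("photo:42", "dc:date",    "2004-02-10"),
          ("photo:43", "dc:creator", "Bob"),
          ("photo:43", "dc:subject", "launch pad"),
      ]

      def query(subject=None, predicate=None, obj=None):
          """Return every triple matching the fields that are not None."""
          return [t for t in triples
                  if (subject is None or t[0] == subject)
                  and (predicate is None or t[1] == predicate)
                  and (obj is None or t[2] == obj)]

      # A "more precise search": everything created by Alice, independent of keywords.
      print(query(predicate="dc:creator", obj="Alice"))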

  • "Intel Reports a Research Leap to a Faster Chip"
    New York Times (02/12/04) P. C1; Markoff, John

    Intel has developed a prototype of a high-speed transistor-like device that is able to exploit laser communications, signaling a transformation in the delivery of digital information and entertainment. The silicon-based chip is cheaper and easier to manufacture, and achieves much higher data rates than existing silicon optical modulators. Nature will publish a paper on Thursday describing Intel's effort to build laser communication into conventional computer chips, and Intel will demonstrate the device, which is a component of a communications system, at its annual developer conference in San Francisco next week. The silicon chip is able to encode data onto a light beam that can send more than 2 billion bits a second of digital information, and the optical modulator is able to switch the tiny laser beam of data on and off like an electric current. The breakthrough helps meet the challenge of the "last mile" in building optical fiber communications systems. The advance is expected to transform the performance of digital machines, which will not have to be situated in a single place, and commercial products could be available by the next decade. Alan Huang, a former Bell Labs physicist and founder of Terabit, says the breakthrough could make it possible to have computers everywhere. He says, "Before there were two worlds, computing and communications. Now they will be the same." Intel says the new technology will enable the company to produce optical fiber communications components using existing, and less expensive, silicon-based chip-making processes. Analyst Richard Doherty says the breakthrough means fabrication size for optical communications has been reduced by a factor of 10,000, while Intel President Paul Otellini calls it "yet another step in the path to convergence." Others caution that Intel must still prove it can create commercial products from the technology.
    Click Here to View Full Article
    (Access to this site is free; however, first-time visitors must register.)

  • "Airline Passenger Screening System Faces Delays"
    Computerworld (02/12/04); Verton, Dan

    A Feb. 12 report from the General Accounting Office (GAO) indicates that the testing and deployment of the Transportation Security Administration's (TSA) Computer-Assisted Passenger PreScreening System (CAPPS II) is being held up, which could seriously impact the program's effectiveness. The GAO warns that not only is the TSA lagging in testing the system, but it has not fully described the system's intended functions, nor has it finished work on at least seven technical challenges that could impede CAPPS II's final implementation. Homeland Security Department undersecretary for management Janet Hale attributes the slow progress in CAPPS II's system development to the TSA's lack of authorization to test the system using real passenger names and information; testing delays stem from airlines' reluctance to share customer data with the government because of privacy issues. Hale says CAPPS II is capable of receiving, cleansing, and formatting data from the Airline Data Interface, as well as executing risk assessments and scoring individual passengers according to those assessments. Meanwhile, consultant and former head of security at Tel Aviv's Ben Gurion Airport Rafi Ron says U.S. airport security could be less effective if authorities concentrate too much on information technology and its applications as a terrorist countermeasure. He advocates a solution in which law enforcement personnel are trained in behavior pattern recognition, which has a very good track record in Israel. Virgin Atlantic North America director of security Victor Anderes adds that IT systems such as biometric technologies could be used to ensure that sensitive areas are inaccessible to unauthorized people, but stresses that interagency communications and information sharing at American airports need to improve significantly.
    Click Here to View Full Article

  • "No More Scrambled Internet Video or Garbled Audio"
    Newswise (02/12/04)

    Using a three-year, $350,000 Information Technology Research grant from the National Science Foundation, Marwan Krunz of the University of Arizona's Electrical and Computer Engineering department is developing Internet routing software that could allow ISPs to ensure quality of service by supporting real-time services such as videoconferencing on a large scale and at a reasonable cost. Data sent over the Internet is currently split into packets that are transmitted from the source computer to one or more destinations via a system of intermediate routers, and the failure or overloading of these routers can slow transmission, resulting in scrambled video or garbled audio. Past attempts to embed guaranteed service in routing schemes have not worked very well because they rely on routers that must each maintain a massive amount of data, and are thus prone to failure from data overload. The system being developed by Krunz and his team of graduate students involves routers or gateways that do not have to maintain a lot of data, using simple instructions that reside within the packets. The instructions cover how long a packet can take to transmit from source to destination, and how much time is left before it must be delivered; packets with the shortest delay times are prioritized. Among the system's features are probe packets that determine the quickest transmission route by moving ahead of a connection, while repair algorithms find alternative paths and reroute the data if a server goes down or a traffic bottleneck crops up. Machine-level software carries out these operations; the software must be deployed on thousands of routers, but its backward compatibility permits data to pass through routers that have older software. "We're actually redefining the software down to the kernel in the operating system of the router itself," says Krunz. (A simplified sketch of this deadline-based prioritization follows this item.)
    Click Here to View Full Article
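
    The prioritization described above is essentially earliest-deadline-first scheduling applied to packets. The Python sketch below is a guess at the flavor of the idea rather than the University of Arizona software: each packet carries its own deadline, and the router always forwards the packet whose remaining time budget is smallest.

      # Earliest-deadline-first forwarding sketch; packet names are invented.
      import heapq, time

      class Router:
          def __init__(self):
              self._queue = []   # entries are (time_remaining, sequence, packet)
              self._seq = 0      # tie-breaker so packets never compare directly

          def enqueue(self, packet, deadline):
              remaining = deadline - time.monotonic()
              heapq.heappush(self._queue, (remaining, self._seq, packet))
              self._seq += 1

          def forward_next(self):
              if not self._queue:
                  return None
              _, _, packet = heapq.heappop(self._queue)
              return packet

      router = Router()
      now = time.monotonic()
      router.enqueue("bulk file chunk", deadline=now + 2.0)
      router.enqueue("video frame", deadline=now + 0.05)   # tight budget
      router.enqueue("audio sample", deadline=now + 0.02)  # tightest budget, goes first
      print(router.forward_next(), router.forward_next(), router.forward_next(), sep=" | ")

    In the actual system this ordering would be enforced in the router's kernel-level forwarding code, as Krunz describes, rather than in a user-space priority queue.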

  • "Scientists: The Latest Mac Converts"
    E-Commerce Times (02/12/04); Weisman, Robyn

    The Apple Macintosh has become a favorite of the scientific community, and is proving essential to projects ranging from the current NASA Mars mission to bioinformatics to the life sciences. Matt Golombek of NASA's Jet Propulsion Laboratory reports that 90 percent of his colleagues employ Macs, adding that he used a Mac G5 with dual 2-GHz processors and 8 GB of RAM to render maps, mosaics, and other kinds of imagery into the Canvas graphics application in order to select appropriate landing sites for the Mars rovers; deciding factors in opting for the Mac included its computational power and ease of use, as well as its exceptional image handling capability. Lynn Nystrom of Virginia Tech explains that her university decided to build a supercomputing cluster out of Power Mac G5s, and later upgrade to Xserve G5s, because of their price-performance ratio. Wolfram Research co-founder Theodore Gray is particularly taken with Apple's Unix-based Mac OS X, which allows researchers to consolidate projects, source code, subroutines, and other resources; he also notes that Apple's decision to design both the Mac hardware and operating system offers a more comprehensible and refined configuration than usual PC schemes. He argues that the scientific community's recent attraction to the Mac stems from Apple currently being ahead of other vendors in its price-performance ratio, which is a cyclical occurrence, adding that personal preference now plays a major role in researchers' choice of hardware and software configuration since PCs' price advantage has shrunk. International Data analyst Michael Swenson says the Mac's appeal to bioinformatics and chemistry researchers chiefly stems from its ability to easily port open-source applications from Unix and Linux. Stan Gloss, managing director of the BioTeam, says Apple's G4 chip is well-suited for certain bioinformatics applications, while the new G5 architecture has even more advantages for life sciences.
    Click Here to View Full Article

  • "Smart Software Gives Surveillance Eyes a 'Brain'"
    University of Rochester News (02/12/04)

    Computer science laboratory researchers at the University of Rochester have been able to outfit surveillance cameras with a rudimentary software "brain" developed by associate professor of computer science Randal Nelson. The software enables the cameras to look for specific objects, such as a firearm in an airport, or a missing piece of lab equipment, which removes some of the concerns that it is a "Big Brother" tool for invasive surveillance. The software looks for changes within a black-and-white video image, and highlights those changes as it tries to ascertain whether they are caused by the introduction of a new object or the removal of an object that was previously in the scene. The software then takes stock of all the object's colors so that it can zoom in on the object at the operator's request. So that the software can identify specific objects on sight, it is trained to recognize them by studying numerous photos taken from different angles; in this way, a new object will be matched to the software's object image database. "If we can get intelligent machines to stand in for people in observation tasks, we can achieve knowledge about our environment that would otherwise be unaffordable," Nelson explains. The technology has already been licensed to PL E-Communications, which intends to apply it as a camera control technology for security purposes. "We're hoping to make this technology do things that were long thought impossible--making things more secure without the need to have a human operator on hand every second," states PL E-Communications CEO Paul Simpson. The technology could also be employed by unmanned reconnaissance aircraft to monitor terrain for signs of movement for prolonged periods. (A generic sketch of this kind of change detection follows this item.)
    Click Here to View Full Article
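
    The behavior described above corresponds to a standard change-detection pipeline. The NumPy sketch below is a generic, hedged illustration of frame differencing against a stored background; the threshold, frame size, and added-versus-removed heuristic are invented for the example, and this is not Nelson's licensed software.

      import numpy as np

      def changed_region(prev_frame, curr_frame, threshold=30):
          """Boolean mask of pixels whose intensity changed by more than threshold."""
          diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
          return diff > threshold

      def object_added(background, curr_frame, mask, threshold=30):
          """Heuristic: if the changed pixels now differ from the empty background,
          something new appeared; if they match it again, something was removed."""
          region_now = curr_frame[mask].astype(np.int16)
          region_bg = background[mask].astype(np.int16)
          return float(np.mean(np.abs(region_now - region_bg))) > threshold

      # Toy 8x8 grayscale frames: an empty scene, then the same scene with a bright object.
      background = np.zeros((8, 8), dtype=np.uint8)
      with_object = background.copy()
      with_object[2:5, 2:5] = 200

      mask = changed_region(background, with_object)
      print("changed pixels:", int(mask.sum()), "| object added:", object_added(background, with_object, mask))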

  • "Makers Scramble to Put Some Bend in 'Electric Paper'"
    Washington Post (02/12/04) P. E1; Walker, Leslie

    Royal Philips Electronics, Gyricon, and the U.S. Army are just a few of the competitors in a race to build flexible electronic-paper displays, which could finally turn a corner with recent breakthroughs in organic electronics and polymer transistors. Many commercial e-paper products, either already on the market or about to be introduced, employ inflexible glass substrates, although Gyricon's Nicholas K. Sheridon believes that cheap organic circuitry will arrive within five years at the most. Another indication that e-paper is edging closer to reality was the huge turnout at this week's digital paper confab in Phoenix, Ariz., which drew representatives from Dow Corning, DuPont, Eastman Kodak, Philips, and Gyricon. Many e-paper projects can be traced to groundwork laid by Sheridon, who conceived of an electrically controlled device that uses reflected rather than emitted light, consisting of electrically charged plastic balls suspended between two silicon layers. Sheridon's project, which took shape at Xerox's Palo Alto Research Center, was eventually spun off into Gyricon, which is currently concentrating on smart signs that a computer can update wirelessly through radio signals. Sheridon's ultimate vision of a commercially mainstream e-paper product is a display that can be rolled up into a tube that receives content via signals beamed from satellites. A project based at MIT led to the creation of E Ink, which provides Philips' e-paper products with core technology. Though e-paper has made significant progress since Sheridon's pioneering work in the 1970s, there are still major challenges ahead: Kimberly Allen of iSuppli/Stanford Resources reports that e-paper will remain monochrome until the electrically charged balls can be made in color.
    Click Here to View Full Article

  • "Web Users Re-Visit in Steps"
    Technology Research News (02/18/04); Patch, Kimberly

    In a project funded by IBM and the National Science Foundation, scientists at Virginia Polytechnic Institute and State University are studying how people re-find information on the Web in the hopes of developing tools for retrieving Web pages faster on a variety of devices, including desktops and mobile phones. Analysis has revealed that people usually find information they have seen before in two steps, explains Virginia Tech researcher Robert Capra: In the first step, users attempt to return to a specific page they recall or think would be useful, while in the second step they ask particular questions about the data they are trying to relocate. These findings came out of experiments in which six subjects were instructed to find specific information on the Web, taking notes as they did so, and then, about a week later, told to re-find that information by directing, over the phone, six additional users who had access to their notes. Results have demonstrated the two-stage retrieval process in action, as well as the importance of domain information, context, and annotations to aiding revisits. Capra says that in over 75 percent of the re-finding tasks, users hit Web pages they recalled from along their original search routes. "The results are useful in that the study provides another empirical data point to support the assumption that search tasks are done progressively and incrementally, and often involve first locating a source and then navigating that source," concedes Marti Hearst, an associate professor of information management and systems at the University of California at Berkeley. However, she sees flaws in the study: For one, users were told in advance that they would be asked to re-find information, and the analysis assumes that people direct others to find something in the same fashion that they would on their own. "It could well be the case that people...break it down into pieces to better help the other person keep track of the different aspects of a task," Hearst suggests.
    Click Here to View Full Article

  • "Could National Security Concerns Slow VoIP?"
    IT Management (02/06/04); Mark, Roy

    The FCC is expected to soon release proposed regulations to ascertain whether telecom rules should apply to Voice over Internet Protocol (VoIP), a critical issue for both the FBI and VoIP providers. The FBI is concerned that, without regulation, VoIP will give criminals a tool to circumvent legal electronic surveillance. But providers do not want to be subject to the same taxes as telecom carriers, and wish to avoid costly systems retrofits needed to comply with edicts such as the Communications Assistance for Law Enforcement Act (CALEA). If the FCC decides VoIP suppliers do not constitute telecom carriers, then CALEA will not apply to them, leaving the FBI to pay millions to implement surveillance technologies such as Carnivore to wiretap the Internet. VoIP providers must walk a fine line to curb expensive regulatory interference with the emerging VoIP industry while simultaneously upholding the spirit of CALEA, which the Department of Justice recently described as "vital to national security, law enforcement and public safety." Free World Dialup founder Jeff Pulver declared that VoIP providers are working to develop wiretap standards and practices, while Vonage CEO Jeffrey Citron said providers face "no immediate technical obstacle" to cooperate with law enforcement on wiretap requests. However, the FBI thinks voluntary efforts alone are insufficient. Citron expressed confidence that most voice-enabled Internet communications will be beyond CALEA's reach.
    Click Here to View Full Article

  • "Acxiom Is Watching You"
    Salon.com (02/10/04); Manjoo, Farhad

    Privacy advocates have reacted with suspicion and anger to reports that the government's CAPPS II airline passenger pre-screening system may have been spawned by retired Army General Wesley Clark's lobbying efforts on behalf of Acxiom, a data management company that conceived of a system for performing instant background checks on air travelers via data-mining. The data Acxiom manages is collected by large businesses such as banks and insurance companies and relates to millions of Americans, and privacy proponents are incensed at what Acxiom's alleged connection to CAPPS II implies, particularly since some view the passenger pre-screening system as a privacy-infringing surveillance tool. Acxiom chief privacy officer Jennifer Barrett maintains that most of the data her firm aggregates is not sold on an individualized basis; however, some is--to companies and federal agencies that want to check people's backgrounds for indications of financial fraud or other crimes. The Transportation Security Administration's Heather Rosenker explains that CAPPS II would require passengers to register their names, addresses, phone numbers, and birth dates when they make reservations, while the computer would determine whether passengers are "rooted in the community" by sending the ID data to commercial databases such as Acxiom's during check-in. Concurrently, the computer would scan federal law enforcement databases to see if passengers are wanted for violent crimes or are on terrorist watch lists. Another company, EagleCheck, has devised a passenger screening system that is reportedly less invasive than CAPPS II, and has tried to get the government interested. The system relies on source databases (state DMVs, for example) rather than commercial database aggregators, obviating the need to build a central database of passenger information; data would be read off travelers' driver's licenses at the airport, which means that airlines would not retain personal information after one has flown.
    Click Here to View Full Article
    (Access to full article available for paying subscribers only.)

  • "Benign Viruses Shine on the Silicon Assembly Line"
    New York Times (02/12/04) P. E8; Eisenberg, Anne

    MIT associate professor of materials science Angela M. Belcher is using a benign virus as a scaffold on which uniform inorganic crystals are grown and organized into semiconducting nanowires. Belcher says the virus, whose DNA has been reprogrammed to attract specific materials, forms wires of perfectly aligned crystals at room temperature. Once the wires are fully grown, the virus is burned away. William S. Rees of the Georgia Institute of Technology's Molecular Design Institute hails this development as a significant breakthrough for the field of nanoelectronics, which relies on the cheap mass production of basic components. Rees says, "The entire field of nanoelectronics depends upon the ability to mass produce cost-effective components, and she has opened the door to this." IBM Research's director of physical sciences Thomas N. Theis believes Belcher's work "could revolutionize many manufacturing processes by making them less expensive." Belcher has used her technique to generate about 30 inorganic materials with semiconducting or magnetic traits, and she notes that these materials are not self-replicating, thus avoiding the stigma often associated with nanotechnology. Belcher aims to use her method to create self-assembling materials that are also capable of self-repair. She will try to commercialize the technology through Semzyme, a company she founded with Evelyn L. Hu, an electrical engineering professor at the University of California at Santa Barbara.
    Click Here to View Full Article
    (Access to this site is free; however, first-time visitors must register.)

  • "UC San Diego Scientists Unveil Pilot Project for Automated Monitoring of Animal Behavior"
    ScienceDaily (02/11/04)

    The goal of UC San Diego's Smart Vivarium Project, which brings together computer vision, artificial intelligence, cameras, sensors, and information technology, is to augment the quality of animal research as well as facilitate better animal health care. Pilot project leader Serge Belongie of UCSD's Jacobs School of Engineering comments that monitoring large numbers of laboratory animals is currently a manual chore, which limits observations and how deeply those observations can be probed. "Continuous monitoring and mining of animal physiological and behavioral data will allow medical researchers to detect subtle patterns expressible only over lengthy longitudinal studies," explains Belongie. "By providing a never-before-available, vivarium-wide collection of continuous animal behavior measurements, this technology could yield major breakthroughs in drug design and medical research, not to mention veterinary science, experimental psychology and animal care." Director of UCSD's Animal Care Program Phil Richter believes the Smart Vivarium will accelerate medical treatment of lab animals by detecting and diagnosing illness sooner, and will also reduce the number of animals needed for scientific research. Working with Belongie on the pilot is UCSD computer science and engineering professor Rajesh Gupta, who is designing a distributed, embedded platform that integrates all functions in a silicon-based device that can be affixed to existing lab cages. The technology employed in the Vivarium could also be used to enhance the treatment of sick animals in veterinary offices, zoos, and agriculture, as well as automate the monitoring of "sentinel" animals used to detect the presence of dangerous chemical or biological agents, an important element in modern-day emergency response systems. The pilot project is wholly underwritten by the California Institute of Telecommunications and Information Technology, a joint UCSD and UC Irvine effort. (A small illustration of this kind of longitudinal monitoring follows this item.)
    Click Here to View Full Article
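
    The "subtle patterns expressible only over lengthy longitudinal studies" that Belongie mentions can be made concrete with a small example. The Python sketch below is illustrative only (the activity numbers and threshold are invented, and this is not the UCSD system), but it shows the kind of check continuous monitoring enables: flagging days on which an animal's activity drops well below its own recent baseline.

      # Flag days whose activity falls far below the mean of the preceding window.
      from statistics import mean, stdev

      def flag_anomalies(daily_activity, window=7, z_threshold=2.0):
          flagged = []
          for day in range(window, len(daily_activity)):
              history = daily_activity[day - window:day]
              mu, sigma = mean(history), stdev(history)
              if sigma > 0 and (mu - daily_activity[day]) / sigma > z_threshold:
                  flagged.append(day)
          return flagged

      # Simulated daily distance-travelled counts: steady, then a gradual decline.
      activity = [120, 118, 125, 122, 119, 121, 123, 117, 116, 80, 78, 75]
      print("days flagged as possible illness onset:", flag_anomalies(activity))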

  • "The Attractions of Technology"
    IEEE Spectrum (02/04/04); Rosenblatt, Alfred

    Engineers and technology professionals like what they are doing, according to a survey of IEEE members conducted by IEEE Spectrum and IEEE-USA; more than 75 percent of respondents said the desire to "invent, build, or design things," as well as to "solve real-world problems," were their main reasons for pursuing a career in technology. Another 19 percent said they wanted to work in technology because it would give them the chance to "have a positive influence on the environment." Some 830 IEEE members responded to the poll, which included student members of the organization. Most engineering and technology professionals knew what they wanted to be before they attended college, and 13 percent said they knew they wanted to be technologists before the age of 10. More than half of the technologists said mathematics was the most important subject in school, while 5 percent added that interdisciplinary courses such as nanotechnology are valuable. Computer and information technology skills are in the greatest demand in the field, followed by computational, modeling and simulation, and communications skills. About half of the respondents felt comfortable about their field amidst job migration and outsourcing trends, with 48 percent expecting to hold an engineering or technology job in 10 years. While 11 percent still expect to be engineers but in a different field, 41 percent said they would be doing something else or would be retired. Although only 6 percent of the professional respondents were women, 17 percent of college respondents were women, while nearly 60 percent of all respondents said the technology industry in the last 10 years has become more open to minorities and women.
    Click Here to View Full Article

  • "Grids in the Enterprise"
    eWeek (02/09/04) Vol. 21, No. 6, P. 49; Coffee, Peter

    The enterprise case for grid computing is stronger than ever with the convergence of commodity components, open-source system software, and growing bandwidth, combined with approximately 40 years of development culminating in the January announcement of the WS-Resource framework. The real challenge may now lie in overcoming enterprise professionals' reluctance to embrace grid computing out of skepticism that the technology can truly make everyday tasks more efficient and cost-effective. Grids promise the easy delivery of on-demand computing and the elimination of enterprises' reliance on business units' capital spending to determine how computing power is rationed. Care must be taken to distinguish grids from supercomputers, clusters, server farms, and peer-to-peer schemes through heterogeneity, reliability, and flexibility: Grids must have less centralized control than clusters or server farms, and employ general-purpose protocols; they should possess dynamic configurability and afford quality-of-service maintenance; and both bandwidth and processor architectures must be diverse. Enterprise users should not be asking whether a grid is best suited for individual tasks, asserts Sun Microsystems VP Shahin Khan--"The question is how you look at the packaging of the compute capability to fit the latency and bandwidth requirements of the application," he says. Grid computing is no longer being driven by the need to solve complex problems regardless of costs, which has kept grids confined to mostly academic projects, but is now motivated by the commoditization of the hardware and software that knits the grids together. A grid constructed from x86 blade servers running Linux-based operating systems offers a persuasive price/performance proposition. Adding weight to grids' relevance to the enterprise is a growing need for high-performance computing to make sense of increasing amounts of business data.
    Click Here to View Full Article

  • "Coming Soon to Your IM Client: Spim"
    Network World (02/09/04) Vol. 21, No. 6, P. 30; Garretson, Cara

    Instant-messaging spam (spim) may not be as widespread as email spam, but experts believe spim could become just as problematic as junk email as IM proliferates throughout the corporate sector: Analyst Sara Radicati estimates that IM is used as a corporate service by 26 percent of companies, while 44 percent say their workers employ IM. The most popular IM services are offered for free, which means that spammers only need a list of screen names to deluge these systems with spam. In addition to consuming network resources and hurting productivity, spim could exacerbate workplace tensions by posting pornographic or other objectionable content on employees' screens. The most apparent spim countermeasure is to block incoming messages from unknown senders, but users who depend on IM for communications could miss important messages. Some of the top IM service providers downplay the spim threat--Yahoo! Messenger's Lisa Pollock Mann reports that less than 2 percent of the traffic Yahoo! Messenger processes is spim, while security measures such as IM network monitoring and Yahoo! IDs to authenticate senders fortify the service against spamming. The past year has seen the emergence of new anti-spim software and services: IMlogic, Zone Labs, and Sybari offer spim-filtering software, while end-to-end encryption and message archiving for regulatory documentation are some of the extra features included in such products. CipherTrust, Brightmail, and other anti-spam filter providers are also looking into ways of tackling the spim problem in the hopes that the additional layer of security will make IM more palatable to companies as a communications tool. (A toy illustration of sender allowlisting follows this item.)
    Click Here to View Full Article
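
    The "block unknown senders" countermeasure mentioned above amounts to an allowlist keyed on screen names. The toy Python filter below uses hypothetical names and is not any vendor's product; it shows both the mechanism and the drawback the article notes, namely that legitimate messages from strangers are held back along with the spim.

      # Deliver IMs only from screen names on the user's contact list.
      ALLOWED_SENDERS = {"coworker_jane", "boss_bob", "helpdesk"}

      def filter_message(sender, text):
          if sender.lower() in ALLOWED_SENDERS:
              return ("deliver", text)
          return ("quarantine", text)  # unknown sender: hold or drop the message

      print(filter_message("coworker_jane", "meeting moved to 3pm"))
      print(filter_message("hot_stock_tipz", "BUY NOW!!!"))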

  • "Virtual Nanotech"
    Science News (02/07/04) Vol. 165, No. 6, P. 87; Goho, Alexandra

    Nanotechnology researchers are taking advantage of increasing computer power to simulate nanoscale materials, structures, and devices to test their properties, optimal configurations, and practical uses before they are fabricated, thus saving a lot of physical trial and error. For instance, Deepak Srivastava at NASA's Ames Research Center modeled carbon nanotubes in the late 1990s, determining through computer simulation that they could be connected into a nanoscale switch; his theories were borne out by an actual nanotube transistor created by researchers several years later. Virtual nanotech is proving especially relevant to the development of new energy sources: Stanford University researchers led by Kyeongjae Cho are employing computer modeling to analyze the atomic structure and properties of platinum in the hopes that a similar but cheaper and more abundant material can serve as a catalyst in a hydrogen fuel cell. The researchers have also measured carbon nanotubes' ability to store hydrogen, calculating the optimum nanotube size for such a function. Atomic-scale computer models are also being used to design and refine sophisticated nanomachines; "We can create these modifications much faster on the computer to figure out which ones to test in the lab," notes Mario Blanco at the California Institute of Technology's Materials and Process Simulation Center. Furthermore, the function of experimental nanostructures can be studied via simulation--for example, Blanco and colleagues were able to confirm that Fraser Stoddart's rotaxane-based electronic switches indeed behaved as they were supposed to through computer modeling. Modeling the behavior of vast numbers of atoms and often electrons requires a huge computational effort that eats up precious time, so researchers approximate atomic behavior using multiscale modeling. Such approximation is also being used to more easily simulate how models behave over time as well as at different size scales.
    Click Here to View Full Article

  • "AI in Computer Games"
    Queue (02/04) Vol. 1, No. 10, P. 58; Nareyek, Alexander

    Artificial intelligence (AI) technology used in computer games is distinct from AI used in academic projects, since the academic AI approach consumes too many resources for game development, refinement, and testing; in addition, most AI research carries few practical benefits for gaming, but the chasm between academic and game AI research should narrow as computer gaming's economic value increases. AI is often one of the last ingredients to be added to games because it is highly dependent on the concrete details of the game environment, and it has a low priority from a marketing perspective. Polls show that developers can burn more and more CPU cycles on AI computation--possibly because the speed of graphics cards is outpacing that of the CPU--which clears the ground for more advanced game AI. AI is primarily employed in games to guide the actions of nonplayer characters (NPCs) in order to achieve the most believable and entertaining outcomes, rather than beat the player. A tough job for game AI is making the NPC navigate throughout the game environment in an intelligent manner: Tools designed to provide such capabilities include the A* algorithm, which computes long-distance routes between connected waypoints, and steering methods used to execute movements between these waypoints and govern collision avoidance, formation maneuvers with team/group units, and other functions. Finite state machines and decision trees are conceptual tools NPCs use to make strategic decisions--such as attacking a player or fleeing when they suffer damage--that can be realized by simple if-then statements. Generalizing the numerous approaches to game AI is difficult, since AI components are usually customized to particular games and scenarios, while interfaces used to and from AI components also vary between games, which complicates AI integration; standardizing interfaces to promote AI component reuse in games is the goal of the Artificial Intelligence Interface Standards Workshop. As AI technology grows more sophisticated, a need for third-party middleware will arise, while AI functionality must be made accessible to game artists/designers.
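
    The finite state machines and if-then decisions described above can be very small in practice. The Python sketch below is a generic illustration, not code from any shipping game or from the AI Interface Standards effort: an NPC patrols, attacks when the player is visible, and flees when badly hurt, with a little hysteresis so it keeps fleeing until it has partly recovered.

      # Minimal NPC state machine driven by simple if-then transition rules.
      def next_state(state, npc_health, player_visible):
          if npc_health < 25:
              return "flee"                          # always run when badly hurt
          if state == "flee" and npc_health < 60:
              return "flee"                          # keep fleeing until partly recovered
          if player_visible:
              return "attack"
          return "patrol"

      ACTIONS = {
          "patrol": "walk between waypoints (long routes would come from A*)",
          "attack": "steer toward the player and fire",
          "flee":   "steer toward the nearest cover",
      }

      state = "patrol"
      for health, visible in [(100, False), (100, True), (20, True), (40, True), (80, True)]:
          state = next_state(state, health, visible)
          print(f"health={health:3d} visible={str(visible):5} -> {state}: {ACTIONS[state]}")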


