
ACM TechNews sponsored by AutoChoice Advisor: Looking for a NEW vehicle? Discover which ones are right for you from over 250 different makes and models. Your unbiased list of vehicles is based on your preferences and years of consumer input.
ACM TechNews is intended as an objective news digest for busy IT Professionals. Views expressed are not necessarily those of either AutoChoice Advisor or ACM. To send comments, please write to [email protected].
Volume 7, Issue 751:  Monday, February 7, 2005

  • "Opening Doors for Women in Computing"
    CNet (02/07/05); Frauenheim, Ed; Gilbert, Alorie

    Women's shrinking presence in IT has become a major area of focus since Harvard University President Lawrence Summers suggested last month that innate gender differences could partially explain why women are less successful at science and math. Some scholars argue that biology is less influential than stereotypical views of computing jobs as nerdy and male-oriented, while the long hours they often entail can be discouraging for women who want to raise families. Sun Microsystems Distinguished Engineer Radia Perlman notes that women are more susceptible to self-doubt and insecurity than men, and she thinks tech companies should take this into account by making the business culture less cutthroat. Meanwhile, Hewlett-Packard software engineer April Slayden fits the profile of women who are attracted to technology as a vehicle for making a social difference. The National Science Foundation estimates that women accounted for just 28 percent of computer science bachelor's degrees in 2001, down from 37 percent in 1985; meanwhile, the percentage of female IT professionals fell from 33 percent in 1990 to 26 percent in 2002. However, some initiatives to boost those numbers appear to be having a positive effect. For example, female enrollments in Carnegie Mellon University's computer science school have increased significantly since the institution changed its eligibility requirements to place less emphasis on prior programming experience. And UCLA has received grants from Hewlett-Packard to overhaul an introductory course in electrical engineering so that students can use wireless instant messaging to send questions to the instructor during class--a strategy that is less intimidating for shy students.
    Click Here to View Full Article

    To learn more about ACM's Committee on Women in Computing, visit http://www.acm.org/women.

  • "Biologists Turn Text Miners to Dig for Results"
    ITBusiness.ca (02/04/05); Lysecki, Sarah

    Bioinformatics specialists gathered at a workshop held by the Ontario Center for Genomic Computing (OCGC) to discuss how they can improve data-mining of biological research literature. Biology literature is unique in that it is not written to be easily understood, nor is it written by professional writers or people who speak English as their first language, said Queen's University School of Computing assistant professor Hagit Shatkay; biologists also have a specific goal--finding information that helps link genes and diseases. Boolean search is the classic retrieval model, combining search terms with operators such as AND, OR, and NOT. Similarity search is another approach, in which search terms are linked with related keywords to help produce more relevant results. Data-mining is increasingly important not only because of the nature of biologists' work, which more frequently involves bioinformatics, but also because more than 40,000 scientific research papers are published each month. It is impossible for scientists to keep up with all the ongoing research in their field, said National Research Council Institute for Information Technology interactive information group leader Joel Martin. A coalition of Canadian government biological research agencies and academia is developing the LitMiner set of tools, which for now serves mostly as an integrated front end for several existing data-mining tools; LitMiner is currently being tested by a small group of researchers, but could be expanded through partnerships with more organizations. The system relies on open-source software, which makes it more attractive to the scientific community, says Martin. One concern is speed and scalability; LitMiner currently processes complex queries in milliseconds, whereas PubMed would take hours for the same task.
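    The two retrieval models can be shown with a toy example. The Python sketch below runs a Boolean AND query against a small inverted index and then ranks the same tiny corpus by cosine similarity to the query; it is a generic illustration of the two approaches described above, not LitMiner's or PubMed's implementation, and the three one-line "documents" are invented for the example.

      # Toy comparison of the two retrieval models described above: a Boolean
      # query over an inverted index versus cosine-similarity ranking.
      # Generic sketch only; the documents and terms are invented.
      from collections import Counter
      import math

      docs = {
          "d1": "gene expression linked to disease pathways",
          "d2": "protein folding and disease mechanisms",
          "d3": "gene regulation in yeast",
      }

      # Inverted index: term -> set of document ids containing it.
      index = {}
      for doc_id, text in docs.items():
          for term in text.split():
              index.setdefault(term, set()).add(doc_id)

      # Boolean search: "gene AND disease" is a set intersection.
      print(index.get("gene", set()) & index.get("disease", set()))   # {'d1'}

      # Similarity search: rank every document by cosine similarity to the query.
      def cosine(a, b):
          dot = sum(a[t] * b[t] for t in a)
          norms = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
          return dot / norms if norms else 0.0

      query = Counter("gene disease".split())
      ranking = sorted(docs, key=lambda d: cosine(query, Counter(docs[d].split())), reverse=True)
      print(ranking)   # 'd1' ranks first; the others follow with lower scores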
    Click Here to View Full Article

  • "UIUC's 640-Node Xserve Cluster Wins on Price, Speed"
    MacCentral (02/04/05); Cohen, Peter

    The University of Illinois at Urbana-Champaign (UIUC) elected to build its new Turing Xserve Cluster supercomputer out of Apple hardware and software because of the advantages they offered in terms of cost, performance, efficiency, support, and compatibility, according to Computational Science and Engineering (CSE) Program director Michael Heath. The approximately $3 million cluster supplants a Linux-based system, and boosts computing power by a factor of 10, says Heath. He also notes that the cluster's power and cooling needs offer a twofold increase in efficiency over competing systems. The 640-node Turing Cluster employs 2 GHz dual-processor Xserve G5s and runs Mac OS X Server v10.3, which is based on the Unix operating system. Heath says this is advantageous because the learning curve for Mac OS X is significantly shorter than it is for other systems, adding that Apple's support of industry standards and its use of open-source elements were also key to the university's decision. The supercomputer boasts 7 TB of storage thanks to an Apple Xserve RAID system, and Heath reports that a "back of the envelope computation" squeezes out a theoretical performance of around 10 teraflops. Heath says the CSE Program will use the Turing Cluster to support UIUC's computational research initiatives.
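    Heath's "back of the envelope" figure is easy to reconstruct under one assumption: that each 2 GHz PowerPC G5 processor can retire four floating-point operations per clock cycle (two fused multiply-add units). The article does not spell out the arithmetic, so the short calculation below is an editorial reconstruction rather than Heath's own.

      # Reconstruction of the ~10-teraflop estimate, assuming 4 floating-point
      # operations per cycle per 2 GHz G5 processor (an assumption, not a
      # figure given in the article).
      nodes = 640
      cpus_per_node = 2
      clock_hz = 2.0e9
      flops_per_cycle = 4

      peak = nodes * cpus_per_node * clock_hz * flops_per_cycle
      print(f"Theoretical peak: {peak / 1e12:.2f} teraflops")   # about 10.24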
    Click Here to View Full Article

  • "Like Linux, Databases Going Open Source"
    Investor's Business Daily (02/07/05) P. A5; Brown, Ken Spencer

    Annual sales for open-source database software will skyrocket from $120 million now to $1 billion by 2008, predicts Forrester Research analyst Noel Yuhanna, forcing commercial vendors to lower their prices over the next few years. He estimates that 35 percent of all open-source database usage will involve critical business functions by 2006. Open-source database applications such as MySQL and PostgreSQL are not only cheap, but rapidly evolvable, and Yuhanna says Linux's success is making such software increasingly palatable to businesses. PostgreSQL developer Pervasive Software will release PostgreSQL for free, and make money by selling support services. Pervasive CEO David Sikora reports that PostgreSQL has several advantages over MySQL: For one thing, its lineage is older than MySQL's, which adds up to more stability and advancement; in addition, PostgreSQL falls under the Berkeley Software Distribution (BSD) open-source license, which is considered more flexible than the General Public License that Linux uses. On the other hand, MySQL boasts more users than PostgreSQL, and it has attained status as one of the core open-source programs in the LAMP stack. Other open-source databases, such as Firebird from Borland Software, SAP's MaxDB, Ingres, and Berkeley DB, will compete with MySQL and PostgreSQL. IBM, meanwhile, released its Cloudscape database program to the open-source Apache Software Foundation last year, and experts think the software, renamed Derby, could also be a major rival to PostgreSQL and MySQL.

  • "What's Bugging the High-Tech Car?"
    New York Times (02/06/05) P. SP14; Moran, Tim

    High-end cars have generated a deluge of customer complaints that can be tracked in Internet forums, and experts say the culprit is complex technology that can be difficult to use and leads to software glitches. Often these electronic problems are hard to identify, even with sophisticated diagnostic technology or onboard diagnostic systems. IBM Embedded Systems Lifecycle Management director Meg Self says 32 percent of warranty costs are now attributed to service visits during which no problem can be found, while industry experts say new cars are loaded with so many electronics and software from various vendors that there will soon be a market for automobile systems integrators, similar to what PC brands such as Dell are in the computer industry. Self says IBM is aiming for that market, and that by 2010, cars will have basically the same mechanical systems and be differentiated based on their software, making software integration all the more important. Many of today's software glitches are found in high-end German cars, which have consequently taken hits in quality rankings; as a result, Mercedes-Benz says it is eliminating complicated and unnecessary features across all its models. Not all electronics-related difficulties are the result of poor-quality software: Many times, owners do not know how to properly use or maintain their systems. Owners of Mercedes-Benz cars, for example, should spend some time reading the owner's manual as a way to avoid problems, says enthusiast and online forum participant John Robison, who included only the bare minimum of features in his new 2002 Mercedes-Benz C240 in order to reduce complexity. Robison says he asks the dealership to update software components on his car whenever he visits so that he has the most up-to-date versions.
    Click Here to View Full Article
    (Articles published within 7 days can be accessed free of charge on this site. After 7 days, a pay-per-article option is available. First-time visitors will need to register.)

  • "Grid Expectations for Networked Computing: From Global Earth Monitoring to Black Hole Detection"
    European Space Agency (02/04/05)

    Current grid computing systems and the probable future development of grid technology and applications were spotlighted at last week's "Grid and e-Collaboration for the Space Community" workshop at the European Space Agency's (ESA) European Space Research Institute (ESRIN) in Frascati, Italy. Among ESRIN's grid projects is an Earth Science Grid-On-Demand Service that allows users to access Earth observation data and rapidly execute data reprocessing for such applications as monitoring icebergs in polar regions; the recently-announced Metropolitan e-Government Application Laboratory project to establish a high-speed data link between Rome and Frascati and connect local research institutions into a metropolitan area network that facilitates e-government services; and the pilot THEmatic Vertical Organizations and Implementation Collaborative Environments study, whose goal is to construct an e-collaboration infrastructure for user groups focusing on Earth observation research, which could also be used to produce prototypes of scientific or value-added products. Other European grid applications of note include the European Commission-funded Enabling Grids for E-SciencE project, whose goal is to erect a transcontinental science grid, and the Italian Agency for New Technologies, Energy, and Environment's 1,400-node, 2,800-processor Grid-it network. Astronomy-related grid projects include a planned Virtual Observatory that stitches together multiple-wavelength inputs into a "virtual sky."
    Datamat's Federico Rossi predicted that grid technology will migrate from science to industry in a few years, with likely users including the automotive, aerospace, finance, entertainment, and health care sectors. Meanwhile, the ESA has set up a Concurrent Design Facility at the European Space Research and Technology Center that supports multidisciplinary collaboration on the design of future space missions.
    Click Here to View Full Article

  • "Miniaturization Is Key to Computers' Growth, But Parts Sure Are Tiny"
    Wall Street Journal (02/07/05) P. B1; Gomes, Lee

    The miniaturization of computer technology is driving the growth of computers' processing power. For example, disk drive storage has experienced an approximately 10,000 percent improvement over the past two decades. The first disk drive, which was 24 inches in diameter, could only store 2,000 bits of data per square inch, whereas Seagate Technology now sells thumbnail-sized drives that can store more than 100 billion bits per square inch. The continuous miniaturization of computer chips follows Moore's Law, the axiom that the density of chip transistors doubles roughly every 18 to 24 months. Moore's Other Law, however, dictates that the manufacturing costs of computer chips are increasing at roughly the same rate, fueling doubts that the disk-drive industry can financially sustain the continued expansion of drive capacity in order to keep pace with engineers' expectations. Furthermore, the heat output of chips rises with additional transistors, but IBM, Intel, and other chipmakers plan to alleviate this problem with dual-core CPUs. Meanwhile, Hewlett-Packard last week announced "crossbar latch" technology that can perform a computer processor's three fundamental operations--AND, OR, and NOT--using two strands of platinum and titanium wire with the thickness of a few dozen atoms. However, H-P researcher Stan Williams notes that commercially practical crossbar latch processors won't be ready for at least five to 10 years and will require approximately 100 billion junctions.
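    The reason AND, OR, and NOT count as a processor's "fundamental" operations is that they are functionally complete: any other logic function can be built from them. The short Python sketch below composes XOR and a one-bit half adder from just those three primitives; it illustrates the logic only and is not a model of HP's crossbar latch hardware.

      # AND, OR, and NOT are functionally complete: here XOR and a half adder
      # are built from nothing else. Plain software illustration, not a model
      # of the crossbar latch itself.
      def AND(a, b): return a & b
      def OR(a, b):  return a | b
      def NOT(a):    return 1 - a

      def XOR(a, b):
          # a XOR b = (a OR b) AND NOT (a AND b)
          return AND(OR(a, b), NOT(AND(a, b)))

      def half_adder(a, b):
          # sum bit and carry bit of adding two one-bit numbers
          return XOR(a, b), AND(a, b)

      for a in (0, 1):
          for b in (0, 1):
              print(a, b, "-> sum, carry =", half_adder(a, b))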

  • "Augmented-Reality Machine Works in Real Time"
    New Scientist (02/03/05); Knight, Will

    Oxford University researchers have developed an augmented-reality system that allows live video footage to be digitally enhanced with computer-generated scenery in real time. The technology enables a computer to construct an accurate environmental simulation in 3D while monitoring the camera's movement. Whereas previous systems have required the addition of markers to a scene to calibrate a computer, the Oxford system can automatically choose its own markers when an object of known size is placed in its line of sight. "The system is very selective about when it looks for landmarks in its environment," notes Oxford researcher Andrew Davison. The computer measures the movement of these markers to determine their distance as well as the speed of the camera's movements. Davison and fellow researcher Ian Reid say the technology could enable more effective robot navigation, a thought echoed by University of Surrey computer vision expert John Illingworth. Other possible uses for the system include virtual house decoration or engineering planning. The U.K. government's Engineering and Physical Sciences Research Council gave the project a $480,000 grant last month.
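    The core idea, that a landmark's apparent shift in the image reveals how far away it is, can be shown with a toy pinhole-camera calculation. The snippet below assumes a known sideways camera motion and focal length; it captures only the basic parallax relation, not the Oxford group's full real-time tracking algorithm, and the numbers are invented.

      # Toy parallax calculation: nearby landmarks shift more in the image than
      # distant ones for the same camera motion, so the shift (disparity)
      # reveals depth. Pinhole-camera assumption; figures are made up.
      def depth_from_parallax(baseline_m, focal_px, disparity_px):
          # depth = camera translation * focal length / image shift
          return baseline_m * focal_px / disparity_px

      # Camera slides 0.10 m sideways; a landmark shifts 25 pixels in an image
      # taken with a 500-pixel focal length:
      print(depth_from_parallax(0.10, 500, 25.0), "metres")   # 2.0 metres away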
    Click Here to View Full Article

  • "Toward a Truly Clever Artificial Intelligence"
    University of Reading (02/03/05)

    Dr. James Anderson of the University of Reading's Computer Science Department reports the development of an approach to writing computer programs that could one day be applied to the design and construction of robots whose minds function like those of humans. His "perspective simplex" (Perspex) method involves writing programs as a geometrical configuration instead of a list of instructions, which allows the program to operate in the manner of a neural network so that it continues functioning and developing even when it sustains damage. Anderson explains that a Perspex program serves as a connection between the geometrical structure of the physical world and computational structure, thus offering one answer to the long-term puzzle of how minds relate to physical bodies. In essence, it establishes a model that a robot can employ to define its own body and mind. Anderson says damaged Perspexes exhibit human-like periodic recovery and relapse, leading him to conclude that "a computer program that continues developing despite damaged, erroneous, and lost data means that, in the future, we could have computers that are able to develop their own minds despite, or because of, the rigors of living in the world."
    He explains that a Perspex facilitates the acquisition of global reasoning with a single initial instruction, enabling a program to follow a model similar to human strategic thinking in which a problem is considered in its entirety before numerous details are examined.
    Click Here to View Full Article

  • "Government Must Own the Problem of Supercomputing"
    HPC Wire (02/01/05)

    The High-End Crusader accepts the main argument of the National Research Council's (NRC) "Future of Supercomputing" report that the government must be primarily responsible for the supercomputing problem, but disagrees that low-bandwidth systems will always rule supercomputing applications. He contends that supercomputing's troubles stem from many parallel computing applications' unfulfilled need for high-bandwidth systems, along with market forces that virtually ensure that truly innovative supercomputing applications will be stymied. The author suggests that one of the engines for a sustainable supercomputing future is a diverse supercomputing market in which each user community supports a balanced blend of low-bandwidth and high-bandwidth applications. Aggressive government action and the increasing difficulty for conventional systems to fulfill scientific and industrial customers' supercomputing requirements make this objective realizable. The High-End Crusader agrees with the NRC's argument that federal supercomputing agencies should support the development and maintenance of a supercomputing roadmap, but he faults the committee for sketching a speculative roadmap that specifies possible solutions to supercomputing problems rather than the problems themselves. He suggests that a 2005 supercomputing roadmap might mention such roadblocks as increasing processor performance and parallelism, boosting local and global bandwidth, designing non-interfering parallelism and locality mechanisms, programming high-performance machines, and finding an integrated solution to the latency disease as well as the programming-difficulty disease. The High-End Crusader pinpoints a "virtuous cycle" in which government leadership guarantees the availability of high-bandwidth systems by making certain that adequate R&D is carried out on their necessary component technologies. "This makes it possible in principle to increase the supply of high-bandwidth systems," he concludes.
    Click Here to View Full Article

  • "Searching for Context"
    IST Results (02/03/05)

    Researchers in the IST project VICODI have developed a system that is able to define the context of an Internet search according to topic, location, and period. The system combines a context-aware search engine based on multilingual input, a context-sensitive knowledge-structuring method, and a graphical contextualization interface that makes use of Scalable Vector Graphics (SVG). Project coordinator Edvins Snore says an inquiry about Sir Isaac Newton would return an article about the scientist cached on the Web site, but would also display a map of Europe in his day, with the continent shown in white, the United Kingdom in red, and varying shades of pink for other countries where similar work was being pursued. "In addition, underlined words that are hypertext links take you not to a standard text about, say, London, but to information about 18th century London, and the scientific context of London during that period," says Snore, of RIDemo in Riga, Latvia. Snore is optimistic about VICODI as a tool organizations can use to enhance a large, highly structured knowledge portal or database. His company wants to add more data to the system by signing up universities, national libraries, and other content providers. "Eventually, we would be able to supply a complete system, or simply disseminate individual modules that focus on particular subjects," says Snore.
    Click Here to View Full Article

  • "Sizing Up Complex Webs: Close or Far, Many Networks Look the Same"
    Science News (01/29/05) Vol. 167, No. 5, P. 68; Klarreich, Erica

    Researchers have uncovered another similarity in the makeup of complex networks. The Jan. 27 edition of Nature reports that Hernan Makse of the City College of New York and his co-workers have discovered that when complex networks are viewed at progressively coarser, "blurred" resolutions, their connection patterns resemble those of the original network, a self-similar, fractal pattern like that of snowflakes and trees. The networks studied were the World Wide Web, a network of actors who have worked together, networks of proteins with links between those that can connect with one another, and networks of other cellular molecules that have links between molecules that participate in the same biochemical reactions. The researchers used computer analysis to "zoom out" and observe the networks from far away, blurring their view to determine how clusters of nodes were connected. Mathematicians have considered the Web to be infinite-dimensional, and have believed such a network could not fit into finite-dimensional space. "They've found something new here, but we don't know yet whether it is a Rosetta stone that will let us translate the mysteries of networks into something we understand," according to Steven Strogatz, a mathematician at Cornell University. University of Notre Dame physicist Albert-Laszlo Barabasi calls the research a "fundamental advance" that answers a question that "has been bugging us for a while."
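    The "zooming out" procedure is essentially box covering: tile the network with boxes of a given radius, count how many boxes are needed, and repeat at larger radii; a power-law relationship between box count and box size is the signature of a self-similar, fractal network. The Python sketch below (using the networkx library) shows a simplified greedy, radius-based covering on a toy graph; it illustrates the counting procedure only, is not the algorithm used in the Nature paper, and a toy random graph will not necessarily show the fractal scaling the researchers found in real networks.

      # Simplified box covering: each box is the set of still-uncovered nodes
      # within distance r_B of a seed node; count how many boxes tile the graph.
      # A fractal network shows a power law N_B(r_B) ~ r_B^(-d_B).
      import networkx as nx

      def box_count(G, r_B):
          uncovered = set(G.nodes())
          boxes = 0
          while uncovered:
              seed = next(iter(uncovered))
              reach = nx.single_source_shortest_path_length(G, seed, cutoff=r_B)
              uncovered -= uncovered.intersection(reach)
              boxes += 1
          return boxes

      G = nx.barabasi_albert_graph(500, 2, seed=1)   # toy network, for illustration only
      for r_B in (1, 2, 3, 4):
          print(r_B, box_count(G, r_B))              # box count falls as boxes grow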
    Click Here to View Full Article

  • "PHP Consortium Tackles Third-Party Application Security"
    eWeek (02/01/05); Naraine, Ryan

    The newly formed PHP Security Consortium will promote proper documentation, tools, and standards for PHP developers in order to ensure application security. Developers formed the group to support the credibility of the PHP scripting language in the wake of the Santy worm attack, which took advantage of a phpBB message board flaw; PHP was blamed for the problem when in fact the issue was with the third-party PHP-coded application, according to consortium founding member Chris Shiflett. Shiflett says developers use PHP technology due to its low entry barrier but are failing to properly focus on security issues. The consortium's first task is to develop a PHP Security Guide educating developers on the most common security concerns with PHP, and the consortium also plans to conduct audits of third-party applications in order to check for vulnerabilities. The open-source scripting language is managed by the PHP Group and has become increasingly popular; it now ships with several Web servers.
    Click Here to View Full Article

  • "Open Integration Tools Upgrade Due in March"
    Integration Developers News (01/29/05); McCarthy, Vance

    Version 4.0 of the OpenEAI project's open-source alternative to proprietary ERP/EAI middleware is due in March. OpenEAI, which offers a Java-based framework modeled on Apache, is a project of developers at the University of Illinois and is based on the comprehensive enterprise application integration methodology created in 2001 by the university's Enterprise Architecture Group of Administrative IT Services. The 4.0 enhancements will cover such areas as support for XML Schema and key JDKs, as well as improvements to the suite's testing, visibility, and management. For any enterprise message object, OpenEAI defines an XML-based, open-source messaging protocol for both the request/reply and publish/subscribe messaging models. OpenEAI's analysis also helps to specify which XML messages are required for each enterprise message object. The OpenEAI approach aims to define, expose, and leverage Java, C, and stored procedure APIs for other developers' use--an idea that differs from the practices of many ERP vendors, which often instead offer "message gateways" to get customers to purchase and upgrade. A broad range of integration-enabling technologies is supported by the OpenEAI framework, including templates, business workflow rules, and components for Java, XML, and ERP APIs. Enhancements in the 4.0 version also include support for multiple query objects on a given parent object, support for the use of Java Management Extensions for managing gateways and scheduled apps, and database connection pool enhancements to leverage the JDBC DataSource.
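    The two messaging models named above can be illustrated with a small, generic example. The Python toy broker below shows the difference in interaction pattern: a published message fans out to every subscriber, while a request goes to a single responder that returns a reply. It is not OpenEAI's XML protocol or Java API, just a sketch of the patterns, and the topic names are invented.

      # Generic in-process illustration of publish/subscribe versus
      # request/reply; not OpenEAI's actual protocol or API.
      from collections import defaultdict

      class ToyBroker:
          def __init__(self):
              self.subscribers = defaultdict(list)   # topic -> list of callbacks
              self.responders = {}                   # topic -> single handler

          def subscribe(self, topic, callback):
              self.subscribers[topic].append(callback)

          def publish(self, topic, message):
              # publish/subscribe: every subscriber to the topic is notified
              for callback in self.subscribers[topic]:
                  callback(message)

          def provide(self, topic, handler):
              self.responders[topic] = handler

          def request(self, topic, message):
              # request/reply: exactly one responder returns an answer
              return self.responders[topic](message)

      broker = ToyBroker()
      broker.subscribe("person.updated", lambda m: print("HR system saw:", m))
      broker.subscribe("person.updated", lambda m: print("Directory saw:", m))
      broker.publish("person.updated", {"id": 42, "name": "A. Student"})

      broker.provide("person.query", lambda m: {"id": m["id"], "status": "enrolled"})
      print(broker.request("person.query", {"id": 42}))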
    Click Here to View Full Article

  • "American Society for Information Science and Technology Annual Meeting"
    Information Today (01/05) Vol. 22, No. 1, P. 39; Peek, Robin

    The plenary session featuring Sir Tim Berners-Lee was the top event of the 2004 annual conference of the American Society for Information Science and Technology (ASIST) in mid-November. Focusing on how we can maximize technology, Berners-Lee said the semantic Web could be used to enhance society and history, and that a future "Web of machine-processable data" is possible. Berners-Lee, director of the World Wide Web Consortium, added that "society must have a free and open corpus" that results in creative and scientific commons; key technologies in this vision are Uniform Resource Identifiers and the Resource Description Framework. Meanwhile, at the venue on digital libraries, Jeffrey Pomerantz of the School of Information Studies at the University of North Carolina-Chapel Hill and Charles McClure of the School of Information Studies at Florida State University said assessing statewide collaborative chat-based reference services is a challenge, given that the low response rate of exit surveys suggests users are not likely to spend more than a minute filling out a survey after their chat session. Judit Bar-Ilan of the Department of Information Science at Bar-Ilan University in Israel saw blogs as having a professional use in transferring information. System design, especially the visualization of information, was a recurring theme of a number of sessions; systems unveiled included Cluster BullsEye, which displays the number and identity of engines that return a document, and RankSpiral, which arranges all documents in a sequential spiral based on total ranking score. As for information organization, a study from researchers at the University of Michigan School of Information found that only about 42 percent of respondents had adopted Encoded Archival Description, and that acceptance is tied to the use of standardized descriptive language. Other researchers found a connection between Web citations and ISI citations, as well as between Web citation counts and journal impact factors.

  • "Which Wi-Fi?"
    Network Magazine (01/05) Vol. 20, No. 1, P. 22; Dornan, Andy

    Five Wi-Fi hardware technologies have the potential to radically change networking, but each needs a lot more development in order to fulfill its promise. Most Wi-Fi vendors foresee wired networks being replaced by wireless networks as Wi-Fi's speed and reliability rise to Ethernet levels, but there are two distinct all-wireless network models currently under consideration: One requires a high density of access points (APs) connected by copper, while the other envisions completely wire-free APs that use Wi-Fi to uplink to the core network and cover the last few feet, although bandwidth constraints may limit Wi-Fi's suitability in certain environments. Moreover, all-wireless networks will need devices that support a welter of standards, such as 3G and maybe WiMAX. Next-generation self-configuring APs could eliminate manual site surveys and adapt to the constantly fluctuating radio environment, provided that vendors and standards bodies address interoperability issues. The 802.11n standard, which is expected to be ready by late 2006, will enable Wi-Fi networks to operate at peak speeds of 100 Mbps or higher over a range of at least 100 feet, although a pair of competing proposals--one that adds radio spectrum and one that adds antennas--could impede progress: The additional spectrum scheme is inefficient, while the additional antenna scheme carries extra costs in terms of money and space. Still, there is enough overlap between the proposals to reach a compromise. Location tracking is already an achievable goal, though competing proprietary presence data technologies are a hindrance. Seamless roaming is Wi-Fi's biggest challenge because so many rival vendors and standards bodies must cooperate to establish compatibility between different APs, and for the foreseeable future overlay technologies will be a necessary component of roaming.
    Click Here to View Full Article

  • "The Firefox Explosion"
    Wired (02/05) Vol. 13, No. 2, P. 92; McHugh, Josh

    The Firefox open-source browser has attracted millions of users and advocates for its security, simplicity, and speed--and its success reflects the frustration many people feel toward the shortcomings of Microsoft's proprietary Internet Explorer (IE). Firefox is a breakthrough technology in that it has wide consumer appeal, whereas most open-source projects have been chiefly restricted to tech enthusiasts. Software developers have welcomed Firefox with open arms, creating over 175 extensions thanks to the browser's design for easy add-ons. Another critical driver of the Firefox phenomenon is a huge mobilization initiative orchestrated by Spread Firefox, a community Web site where marketing and recruitment efforts are coordinated, and which engineered a major fundraising campaign. Firefox's security is a key strength, as any vulnerabilities are quickly investigated and patched by the collective effort of the coder community; Microsoft's patching process is painfully slow in comparison. Much of the credit for Firefox's success has gone to Blake Ross, a young programmer who teamed up with former Netscape user interface programmer Dave Hyatt to construct a standalone, pared-down browser free of the "feature creep" that plagued Netscape. Firefox has taken a noticeable bite out of Microsoft's portion of the browser market at a time when Microsoft has basically curtailed further IE development in favor of its next-generation Longhorn operating system, which promises to embed browsing in the desktop. Though Microsoft has not officially changed its strategy, browser updates and security patches are anticipated.
    Click Here to View Full Article

  • "Unnatural Selection"
    Technology Review (02/05) Vol. 108, No. 2, P. 54; Williams, Sam

    Designers are hoping to address engineering problems more efficiently through the use of genetic algorithms (GAs) that breed better designs by following a biological evolutionary model. David Goldberg of the University of Illinois at Urbana-Champaign's Genetic Algorithms Laboratory believes that the processing power of modern computers, combined with designers' increasing adeptness with GAs, makes it possible to meet both small-scale and large-scale challenges. "By automating some of the heavy lifting of thought, we free ourselves to operate at a higher, more creative level," he says, though the tradeoff is that engineers must delegate still more functions to machines. Goldberg, one of the first researchers to test GAs, learned through trial and error that such algorithms were frequently more complicated than the challenges they attempted to tackle, which caused them to get stuck investigating evolutionary cul-de-sacs and increasingly outrageous solutions; he therefore decided to focus on problems with a spectrum of feasible answers, depending on how they were approached. Stanford University biomedical informatics professor John Koza is going a step further in his explorations of genetic programming, which employs programs that exchange bits of code among each other to improve themselves over time. Such programs have yielded new electronic circuit designs and unique protein sorting methodologies. The breakthrough raises a number of questions, such as whether an invention designed without human assistance is truly an invention, and whether not understanding how an invention works is relevant as long as it works. Koza does not expect humans to be phased out by machine intelligence in the future, but instead sees a symbiotic, complementary relationship between the two.
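    The "breeding" loop at the heart of a genetic algorithm (evaluate fitness, select parents, cross them over, mutate, repeat) fits in a few lines of code. The Python sketch below maximizes a toy fitness function, the number of 1 bits in a 20-bit string; it illustrates the general technique only and is not Goldberg's or Koza's software.

      # Minimal genetic algorithm on a toy problem: maximize the number of
      # 1 bits in a 20-bit string. Tournament selection, one-point crossover,
      # and bit-flip mutation. Illustrative sketch only.
      import random

      random.seed(0)
      BITS, POP, GENERATIONS, MUTATION = 20, 30, 40, 0.02

      def fitness(genome):
          return sum(genome)

      def tournament(pop):
          # pick the fitter of two randomly chosen individuals
          return max(random.sample(pop, 2), key=fitness)

      population = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
      for _ in range(GENERATIONS):
          next_gen = []
          while len(next_gen) < POP:
              mom, dad = tournament(population), tournament(population)
              cut = random.randrange(1, BITS)               # one-point crossover
              child = mom[:cut] + dad[cut:]
              child = [bit ^ 1 if random.random() < MUTATION else bit for bit in child]
              next_gen.append(child)
          population = next_gen

      print("best fitness:", fitness(max(population, key=fitness)), "out of", BITS)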
    Click Here to View Full Article

  • "Experiment and Theory Have a New Partner: Simulation"
    Science & Technology Review (02/05) P. 4; Heller, Arnie; Parker, Ann

    Lawrence Livermore National Laboratory has a longstanding reputation for eagerly exploring the latest supercomputing technologies to support the continued advance of all scientific disciplines and attract some of the world's most talented researchers. Mark Seager of the Platforms group in Livermore's Integrated Computing and Communications Department says the linkage between experiment and simulation and theory and simulation is closer than ever. Simulation is a vital component in the design of modern experiments, while the size and resolution of simulations is now sufficient to directly compare their results with experimental outcomes. The National Nuclear Security Administration's (NNSA) Advanced Simulation and Computing (ASC) Program for modeling nuclear detonations, which has led the move toward more realistic simulations, is based at Livermore in the form of ASC White, a scalable parallel supercomputer that performs 12.3 trillion operations per second. In July, White will be succeeded by ASC Purple, which will boast a peak speed of 100 trillion operations per second (100 teraops). The institutionally funded Multiprogrammatic and Institutional Computing (M&IC) Initiative, which currently uses Linux cluster technology, overlaps with the NNSA-funded ASC Program to the mutual advantage of both efforts, and associate director for Defense and Nuclear Technologies Bruce Goodwin expects Linux-cluster technology to help ease researchers' burden for both intensive and routine applications. M&IC and ASC machine simulation has also led to significant breakthroughs in fields that include geology, biology, biotechnology, physics, quantum mechanics, and climate change. Once IBM's 360-teraops BlueGene/L supercomputer is up and running at Livermore, the facility is expected to steer the move toward petascale computing.
    Click Here to View Full Article


 