
ACM TechNews sponsored by Parasoft    Automated Software Analysis - by Parasoft:
Error prevention for unit testing, coding standards analysis,
functional/performance testing, monitoring and reporting.
http://www.parasoft.com/products


ACM TechNews is intended as an objective news digest for busy IT Professionals. Views expressed are not necessarily those of either Parasoft or ACM. To send comments, please write to [email protected].
Volume 6, Issue 714:  Wednesday, November 3, 2004

  • "E-Vote Technology at Center Stage Amid Election Hype, Hysteria"
    Computerworld (11/02/04); Verton, Dan; Weiss, Todd; Roberts, Paul

    Various independent monitoring groups received unverified reports of electronic voting machine malfunctions on Nov. 2, but officials say the systems' actual performance may not be fully known for several days. The online Election Incident Reporting System had logged 635 alleged incidents of e-voting machine problems across the nation--the bulk in Pennsylvania and New York--as of 4:15 p.m. EST yesterday; 86 incidents of improperly working systems were reported in Philadelphia, although none of those reports has been independently substantiated. Pennsylvania Republican party officials threatened litigation in response to rumors that Philadelphia e-voting machines were displaying vote totals prior to the start of counting, but state deputy secretary for regulatory programs Kenneth Rapp said these rumors stemmed from observers misinterpreting a vote counter. Unidentified problems reportedly took down all but one of the Advanced Voting Solutions WinVote touch-screen systems in a Virginia precinct with heavy voter turnout. Other reports included the failure of Danaher Controls ELECTronic 1242 machines in Columbus, Ohio, to boot up properly due to overcharged batteries; some 200 complaints of e-voting problems in Louisiana, including Election Systems & Software iVotronic systems that confused nonprovisional with provisional ballots because officials had formatted them improperly; and incidents in Ohio, New York, California, and Florida where too few e-voting systems were deployed to handle a high voter turnout, resulting in unusually long wait times. Most Ohio, Pennsylvania, Michigan, and Wyoming election officials claimed not to have received any reports of voting system malfunctions. However, Princeton professor Edward Felten posted a report online describing lax e-voting machine security he witnessed at a Princeton, N.J., polling station.
    Click Here to View Full Article

    For information regarding ACM's e-voting activities, visit http://www.acm.org/usacm.

  • "Women Connect to Create Technology, Social Impact"
    Chicago Sun-Times (11/03/04); Guy, Sandra

    Eight hundred people participated in the Grace Hopper Celebration of Women in Computing, held in Chicago in early October and hosted by ACM and the Anita Borg Institute for Women and Technology. Among the participants was Abbott Laboratories engineer Leah Boone, whose team worked with the late Borg to make tech careers more attractive to women by cultivating a perception of technology as creative and collaborative rather than socially isolating. Her team speculated on ways to tailor technology to 9- to 13-year-old girls, and among its conclusions was that "girls like collaborative games, instead of games where they work against each other," notes Boone. This research helped lead to eArt, a Web-based program that enabled girls to create their own art and publish it online, and allowed other girls to build a collage by adding their art to the same page. The Chicago conference honored Karen Banks, coordinator of the Association for Progressive Communications Women's Networking Support Program, which involves over 160 women from 40 nations; one project carried out under the aegis of the program allowed women in Kenya to send short text messages on their mobile phones to mobilize support for the addition of a women's rights protocol to an African charter on human rights. Members of Banks' association make up a network of ISPs, Web content producers, human-rights organizations, and individuals who lobby for women's rights around the world. The association also educates women in basic computer skills. Anita Borg Institute CEO Telle Whitney declared at the conference, "We need people who have a holistic view of life to design the technology of the future."
    Click Here to View Full Article

  • "NASA Technology Featured at National Computer Conference"
    NASA News (11/01/04)

    SC2004, the conference on high-performance computing, networking, and storage to be held Nov. 6-12 under this year's theme of "Bridging Communities," will spotlight NASA's Columbia supercomputer. Walter Brooks of the Advanced Supercomputing Division at NASA's Ames Research Center says, "Major analysis in space and Earth science, as well as aeronautics and space operations, is underway, demonstrating we have both the capacity and the capability to accelerate all four NASA missions." The Columbia cluster comprises 20 Kalpana-based Altix supercomputers containing 512 processors each, for a total of 10,240 processors. SC2004 attendees will learn about research at five NASA field centers, including the design of the current record-holder for fastest air-breathing aircraft, whose precedent-setting flight was aided by high-performance computing. Also to be showcased at the conference is a computational framework for design and analysis of a liquid rocket engine's fuel supply system, with a focus on analysis results and performance data from simulation runs on the NASA supercomputer. Other research to be presented is expected to yield revolutionary nanophase thermal and structural composite materials that could play a key role in future manned and unmanned space exploration missions. Another project is using the Columbia cluster to run the Finite Volume General Circulation Model, a climate simulation whose next-generation iteration incorporates a broader range of remote sensing data from orbiting satellites and promises to significantly improve the accuracy of severe weather forecasts. SC2004 is sponsored by the ACM Special Interest Group on Computer Architecture (SIGARCH) and the IEEE Computer Society.
    Click Here to View Full Article

    For more on SC2004, visit http://www.sc-conference.org/sc2004/.

  • "RFID Rights"
    Technology Review (11/03/04); Garfinkel, Simson

    Despite the early concerns of radio frequency identification (RFID) technology developers, companies deploying the technology are paying little attention to consumer privacy issues. An implantable RFID chip recently made big headlines when the FDA approved its use in humans, but less fanfare has accompanied the rollout of RFID in millions of secure access cards and other devices people take for granted. About 40 million Americans currently carry some RFID application in their pockets each day, adding to their convenience and safety; Wal-Mart and other retailers are also going full-bore with their RFID deployments, but have not seriously considered the privacy concerns raised by MIT's Auto-ID Center, for example. The RFID consumer "Bill of Rights" authored at MIT has been significantly diluted by the EPCglobal organization, which published its own version on its Web site: Whereas the original recommendations barred secret RFID tags and readers, the EPCglobal version only requires that the tags themselves be announced on packaging. Consumers are not given the right to know how their data will be used--only by visiting a company Web site can they find out who the data is being shared with and how long it will be stored. Prototype RFID-chip "kill boxes" have likewise not been adopted by companies, because of the added cost. Wal-Mart promises that products with RFID chips will be labeled with an EPCglobal symbol, but consumer privacy advocate Katherine Albrecht says the company's early trials with Procter & Gamble show that neither business is taking privacy concerns seriously. Though legislation regulating consumer-space use of RFID has failed in California and Massachusetts, continued negligence on the part of retail companies could give consumer activists a new legislative mandate.
    Click Here to View Full Article

  • "The Big Election Beta Test"
    CNet (11/01/04); Lemos, Robert

    The Nov. 2 presidential election will be a major trial for computerized ballot systems that some 30 percent of registered U.S. voters--45 million Americans--are expected to use, by Election Data Services' estimates. "What happens in the election will dictate the future development of e-voting," says Electionline.org editor Dan Seligson. One of the most contentious debates surrounding the e-voting issue is whether paper ballots should be supplied as backups to ensure the accuracy of votes, especially if recounts are needed in the event of a close race. MIT computer science professor Ted Selker thinks the inclusion of paper ballots as a security measure could exacerbate rather than solve problems: He notes a recent incident in Nevada, a state that has mandated the installation of paper-trail audit features on e-voting systems, in which a polling-place official's attempt to clear a paper jam with a pair of scissors damaged several already-cast paper ballots. "It is unclear that any technology can prevent someone from screwing up," Selker concludes. Another contributing factor is the Help America Vote Act (HAVA) of 2002, which set standards states must follow if they are to receive federal funding to update their voting infrastructure; the standards stipulate that voting machines must provide a permanent paper record, be accessible to disabled voters, and alert voters to errors. Other HAVA requirements include the setup of provisional voting for citizens who do not show up on voter rolls, authentication of first-time voters' identities, and the creation of a statewide voter registration database. Legal experts believe that inadequate poll worker training, complicated new election measures, and erroneous voter registration lists are more likely to be cited in post-election lawsuits involving close races than e-voting bugs.
    Click Here to View Full Article

  • "Software-Patent Battle Set to Flare Up"
    ZDNet UK (10/29/04); Wearden, Graeme

    As the European Parliament prepares to resume software patent deliberations, advocates and opponents are arguing their positions. The European Information and Communications Technology Industry Association (EICTA) is lobbying for the institution of software patents, contending that thousands of jobs would be threatened and mass plagiarism would result without them. Critics such as the Foundation for a Free Information Infrastructure's Rufus Pollock counter that freedom from software patents would encourage innovation and bolster the prominence of Europe's tech economy. He maintains that "Innovation and ideas must be 'adequately rewarded' and this is precisely what software patents do not do." The European Council of Ministers has called for software patenting as part of its effort to make patent law uniform throughout Europe, but it has met opposition from the European Parliament. The parliament attempted to dilute the council's Directive on the Patentability of Computer-Implemented Inventions in September 2003 by adding constraints to software patents, but the council balked at those amendments in May 2004. The parliament's appointment of former French Prime Minister and avowed free-software proponent Michel Rocard to lead its response to the council when software patent deliberations resume in November has led to conjecture that the parliament will not soften its stance on the issue.
    Click Here to View Full Article

  • "Heavy Price for Free Speech"
    Wired News (11/02/04); Terdiman, Daniel

    Determining the degree to which players of online role-playing games should be permitted to exercise their rights to free speech and expression was a topic of discussion among a panel of legal and academic experts at the second State of Play conference. Frederick Schauer of Harvard's Kennedy School of Government, Second Life Herald author Peter Ludlow, and Yale Law School professor Jack Balkin contemplated whether massively multiplayer online game developers could be found legally liable for allowing antisocial behavior among players, and what operating policies the developers should establish to shield themselves from such litigation. Schauer disputed the common view among players that speech is an assumed right in virtual worlds, noting that it is predicated on a traditional American perception of liberty that does not align well with game space; he also noted that some game participants hail from countries where free-speech rights are not as rigidly protected as in the United States, especially when confronted with "issues of libel and slander, issues of invasion of privacy, hate speech, racial hatred, incitement to violence and incitement to various forms of crime." Balkin argued that game developers must consider four basic issues in relation to free speech: The possibility of government regulation of free speech; commercial game operators shielding themselves with the First Amendment in an effort to avoid regulation; concerns that in-world speech could fuel disputes and even legal action; and operators' incentive to censor speech and write rules sanctioning censorship into their terms of service. To guard against legal attacks, Balkin proposed interration, a setup in which game companies choose terms of service from among a series of models that are then presented at a game's launch so that players have a clear idea of speech allowances and restrictions.
    Click Here to View Full Article

  • "SGI, IBM Wrest Lead From NEC"
    Investor's Business Daily (11/02/04) P. A4; Brown, Kim Spencer

    The title of world's fastest supercomputer has reportedly been wrested from Japan's NEC Earth Simulator by NASA's Columbia cluster and IBM's Blue Gene/L system. IBM and Silicon Graphics, supplier of the 20 Altix systems that make up Columbia, both claimed that internal tests demonstrated faster speeds than the NEC supercomputer, and IBM supercomputer projects director Dave Turek said the benchmark represents "the reaffirmation of the U.S. computing industry's capabilities in [the supercomputing] space." Though these assertions have yet to be confirmed by the next Top500.org supercomputer list due out next week, early numbers rank SGI's system in first place with 42.7 teraflops, Blue Gene in second place with 36 teraflops, and the Earth Simulator in third with 35.9 teraflops. SGI and IBM also cited their systems' smaller size, ease of use, and lower electricity requirements as advantages over NEC's machine. Both Columbia and Blue Gene employ off-the-shelf technology--Intel's Itanium 2 chips and IBM's PowerPC chips, respectively--as well as the Linux operating system, which allows older system software to be ported to the new systems; the Earth Simulator, by contrast, uses costly custom-made vector processors. The IBM and SGI machines use a scalar architecture that performs simpler calculations than the NEC system, but advocates say scalar systems' capabilities are catching up to vector chips thanks to new programming methods. The SGI system was built in only 15 weeks for $50 million, while the Earth Simulator took four years and $350 million to construct. The Defense Department aims to introduce the world's first petaflop-class system by the end of the decade, and IBM, Cray, and Sun Microsystems are competing to win a government contract to build it.

  • "Makeover to Depict Washington as Young, Old and In Between"
    New York Times (11/02/04) P. D3; Leary, Warren E.

    Reconstructing George Washington's visual appearance accurately was the task set before the University of Pittsburgh's Dr. Jeffrey Schwartz by the curators of the first American president's Mount Vernon estate. Schwartz's team is working to meet this challenge by tapping a wealth of information--sculptures, clothing, paintings, written records, etc.--to extract a likely simulacrum of Washington at ages 19, 45, and 57 using state-of-the-art computer imaging methods. Schwartz says a digital image of Washington at 53, drawn from the work of an 18th-century sculptor renowned for his accuracy, will be established as a baseline, and "aged" via computer software to render him as he probably appeared at 57. The next step will involve the use of custom software developed by Arizona State University's Partnership for Research in Spatial Modeling (Prism) to "deconstruct" Washington's image to its 45- and 19-year-old incarnations. The 3D shape of sculpture and other objects associated with Washington's appearance has been captured with portable laser scanning gear used by the Prism group: Laser-light measurements of a bust at Mount Vernon produced a wire-frame rendering that can be matched to other objects to determine consistencies and dissimilarities, and this was followed up by scans of additional statuary, life masks, and a set of Washington's dentures. Paintings and other sources of 2D information are also being digitized for incorporation into the 3D models. Prism team leader Dr. Anshuman Razdan explains that once standard reference points--eye sockets, the bridge of the nose, cheeks, forehead, and so on--have been established, the group's next goal is "to write algorithms to let us manipulate the models, such as inserting dentures into the oral cavity, and see how they affect the shape of the face." The final images will be used to create three life-sized models of the founding father, to be enshrined in a museum exhibit.
    Click Here to View Full Article
    (Articles published within 7 days can be accessed free of charge on this site. After 7 days, a pay-per-article option is available. First-time visitors will need to register.)
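
    A sense of what "matching" scanned objects involves can be given with a small, hypothetical example. The sketch below is not the Prism team's software; it is a standard rigid landmark alignment (the Kabsch/Procrustes method) that rotates and translates one set of reference points--eye sockets, bridge of the nose, and so on--onto another so two scans can be compared in a common frame. The landmark coordinates are invented for illustration.

        # Hypothetical sketch: align two scanned busts by shared reference points
        # using the standard Kabsch/Procrustes method (not the Prism team's code).
        import numpy as np

        def align(landmarks_a, landmarks_b):
            """Return rotation R and translation t mapping landmarks_a onto landmarks_b."""
            A = np.asarray(landmarks_a, dtype=float)
            B = np.asarray(landmarks_b, dtype=float)
            ca, cb = A.mean(axis=0), B.mean(axis=0)
            U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
            d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return R, cb - R @ ca

        # Made-up landmark coordinates (millimetres) on one bust, and the same
        # landmarks on a second scan that sits in a rotated, shifted frame.
        bust1 = np.array([[0, 0, 0], [62, 4, -3], [31, 55, 10], [30, 20, 48]], float)
        theta = np.radians(20)
        true_R = np.array([[np.cos(theta), -np.sin(theta), 0],
                           [np.sin(theta),  np.cos(theta), 0],
                           [0,              0,             1]])
        bust2 = bust1 @ true_R.T + np.array([5.0, -2.0, 12.0])

        R, t = align(bust1, bust2)
        print(np.allclose(bust1 @ R.T + t, bust2))      # True: scans now share one frame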

  • "Single Field Shapes Quantum Bits"
    Technology Research News (11/10/04); Smalley, Eric

    Using funding from the Defense Advanced Research Projects Agency, the National Science Foundation, and the Army Research Office/Advanced Research and Development Activity, researchers at the University of Toronto and the University of Wisconsin at Madison have devised a scheme to control the individual electrons in a quantum computer using a single global magnetic field, rather than generating a separate tiny magnetic field for every electron. The approach lets individual electrons serve as the quantum bits (qubits) that store and process data, improving on existing global-field schemes that require each qubit to be composed of multiple electrons. The process taps the interactions of electron pairs to create both the one- and two-qubit gates upon which quantum computing logic is based: One-qubit gates flip the magnetic orientations, or spins, of electrons between 1 and 0, while two-qubit gates entangle electron spins. In the researchers' setup, minuscule electrodes placed near quantum dots pull neighboring electrons close enough together that they exchange energy and, given sufficient interaction, swap spin orientations. The scheme meets the formidable challenge of using this interaction to flip the spin of one electron without flipping the spin of the other through a sequence of 11 gradual steps involving four electron interactions and seven pulses of the global magnetic field. "The main challenge is [achieving a] high degree of control of the exchange interactions," notes Lian-Ao Wu of the University of Toronto. The scheme could be implemented with two square aluminum nanowires 100 nm in diameter separated by an insulating layer, and a parallel row of quantum dots in a zigzag configuration. Wu says nanotechnology developments will play a key role in the ability to construct a quantum computer.
    Click Here to View Full Article
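
    The two-qubit primitive described above--letting a pair of electron spins interact until they trade states--can be illustrated with a few lines of linear algebra. The sketch below is a generic illustration of the exchange-interaction idea, not the researchers' actual 11-step sequence; the coupling strength and timing are arbitrary. It shows that evolving two spins under a Heisenberg exchange coupling for the right duration swaps their states, and that a global field term necessarily addresses both spins at once, the constraint the extra exchange steps are designed to work around.

        # Illustrative only: timed Heisenberg exchange acts as a SWAP-type gate,
        # while a global field cannot single out one electron.
        import numpy as np
        from scipy.linalg import expm

        I2 = np.eye(2)
        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
        sz = np.array([[1, 0], [0, -1]], dtype=complex)

        # Heisenberg exchange between two electron spins; J sets the coupling strength.
        J = 1.0
        H_exchange = (J / 4) * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))

        # Evolving under the exchange for t = pi/J swaps the two spin states (up to phase).
        U = expm(-1j * H_exchange * (np.pi / J))
        ket_01 = np.kron([1, 0], [0, 1])                 # spin-up, spin-down
        ket_10 = np.kron([0, 1], [1, 0])                 # spin-down, spin-up
        print(np.allclose(np.abs(U @ ket_01), np.abs(ket_10)))   # True

        # A single *global* field couples identically to both spins -- it cannot
        # flip one electron without the other, hence the multi-step gate sequence.
        B = 0.3
        H_global = B * (np.kron(sz, I2) + np.kron(I2, sz))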

  • "NSF Awards $2.9 Million Grant"
    Cornell Daily Sun (11/01/04); Murabito, Jennifer

    A Cornell University-led consortium has been awarded a $2.9 million IT research grant from the National Science Foundation (NSF) to study new ways of improving privacy protection while collecting social science data for research purposes. Current techniques frequently fail to maintain privacy effectively, given how often even anonymous individuals in publicly available databases can be identified through combinations of occupation, income level, age, and geographic area. "We expect to develop a new, and very exciting, collection of public use data products known as 'synthetic data,'" explains Cornell professor and principal investigator John Abowd. "These products permit analysts to study a wide variety of business and household models using microdata but without compromising the confidentiality of the data that were originally provided to the Census Bureau." A considerable portion of the research will concentrate on verifying the legitimacy of these techniques, and the grant will also support studies being conducted at nine Census Research Data Centers. Grant funding will also be channeled into research on "coarsening," a method that blends small groups of companies or households into a single record. The multi-institutional NSF Social Data Infrastructure grant is in its final year, and recipients in addition to Cornell include Carnegie Mellon, the University of Maryland, the Census Bureau, Argonne National Laboratory, Duke University, UCLA, UC-Berkeley, and the University of Michigan. Over $25 million has been contributed to the Cornell-led effort since 1999 by funders that include NSF, the Census Bureau, the National Institutes of Health, and the Sloan Foundation.
    Click Here to View Full Article
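
    To make the idea of "coarsening" concrete, the toy sketch below blends any group of household records that is too small to be released safely into a single aggregate record. The column names, group-size threshold, and blending rule are assumptions chosen for illustration; they are not the Census Bureau's or the researchers' actual procedure.

        # Toy illustration of coarsening: small, potentially identifying groups are
        # collapsed into one blended record before release. Not the actual method.
        import pandas as pd

        households = pd.DataFrame({
            "county":     ["A", "A", "A", "B", "B", "C"],
            "occupation": ["farmer", "farmer", "farmer", "surgeon", "surgeon", "pilot"],
            "income":     [41000, 39000, 45000, 210000, 190000, 98000],
        })

        MIN_GROUP = 3   # groups smaller than this are treated as re-identifiable

        def coarsen(df, keys, min_group=MIN_GROUP):
            released = []
            for _, group in df.groupby(keys):
                if len(group) >= min_group:
                    released.append(group)                   # large enough to release as-is
                else:
                    blended = group.iloc[[0]].copy()         # one record per small group
                    blended["income"] = group["income"].mean()
                    released.append(blended)
            return pd.concat(released, ignore_index=True)

        print(coarsen(households, ["county", "occupation"]))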

  • "Make It Simple"
    Economist (10/28/04) Vol. 373, No. 8399, P. S3; Kluth, Andreas

    Many computer experts agree that information technology must be simplified, an argument whose urgency is being fueled by a number of trends. Analyst Pip Coburn partially attributes increasing demands for simpler IT to the bursting of the dot-com bubble and the growing need among customers for "cold" technologies to help them integrate and streamline the elaborate "hot" technologies they purchased during the late 1990s. Analyst Steven Milunovich sees the IT industry's evolution unfolding in three 15-year cycles: The first cycle (the mainframe era) was characterized by proprietary technology and small user numbers; the second cycle (the PC era) saw the predominance of de facto standards and a tenfold increase in users; and the nearly-complete third cycle is the Internet era, in which de jure standards are taking the reins and user ranks are supposed to swell tenfold again as every employee is expected to use technology, thus furthering demands for simpler systems. Coburn projects that in a decade, virtually everyone will either be digital natives who have been using instant messaging for so long they cannot conceive of living without it, or digital immigrants--thirty-somethings who adopted technology as young adults--while the shrinking population of tech-phobic "analogs" will need simpler products so they can adopt IT and avoid being socially ostracized. Analysts offer varying estimates of the costs of IT complexity in terms of spending, lost productivity, implementation delays, and failures. IDC's Tony Picardi reports that the percentage of the IT budget firms commit to fixing existing systems has exploded over the last 15 years from 25 percent to between 70 percent and 80 percent, while IT complexity is expected to cost companies $750 billion worldwide in 2004 alone. Meanwhile, the Standish Group finds that 66 percent of all IT projects are doomed to failure or take longer than expected to deploy because of IT complexity.
    Click Here to View Full Article

  • "When Hackers Attack"
    Chronicle of Higher Education (11/05/04) Vol. 51, No. 11, P. A29; Read, Brock

    College and university IT administrators are learning to deal with hacker intrusions aggressively, implementing tough password-change policies and stepping up efforts to educate users. When Purdue University IT officials discovered their network had been hacked by someone using about 100 stolen passwords, they sent email messages about the attack to 60,000 users at the school's main campus; within 24 hours, about 15,000 students, faculty, and employees had changed their passwords at a Web site created especially for that purpose. User education is the key to network security at colleges and universities because many people might not know how to choose a good password or might not otherwise pay sufficient attention, says Purdue computer science professor Eugene Spafford, who is also executive director of the Center for Education and Research in Information Assurance and Security. Universities often try to keep their networks as open and accessible as possible, but they can take tough measures such as forcing users to choose new passwords or deploying one-time-password token devices in computer labs. Beyond end-user concerns, Purdue officials are also conducting extensive forensic research to try to discover the motive, and possibly the identity, of whoever hacked their system; this investigation can yield important clues about how to bolster security, and also offers an opportunity for different network stakeholders to discuss the right balance between access and security. In California, state legislation has forced IT officials at California State University at Hayward and the University of California at Berkeley to take quick action. Overall, large universities are rapidly strengthening their defenses against hacker attacks, which makes smaller schools more vulnerable as hackers seek out easier targets.
    Click Here to View Full Article
    (Access to full article is available to paid subscribers only.)

  • "Sony Lab Tips 'Emergent Semantics' to Make Sense of Web"
    EE Times (11/01/04) No. 1345, P. 31; Johnson, R. Colin; Yoshida, Junko

    Sony Computer Science Labs has developed an "emergent semantics" technology that uses software agents to automatically categorize and create ontologies for sharing data on the Web. The technology is an alternative to the Semantic Web currently under development at the World Wide Web Consortium: Whereas the Semantic Web requires the explicit coding of metadata, emergent semantics mimics human language learning--software agents study one another's communications and self-organize lexicons and ontologies that add meaning and relationships to data. This kind of automatic categorization and inter-agent communication would ensure that all Web data is included in the next-generation Web. "We need to deal with legacy systems too," says Sony researcher Peter Hanappe. "It's very hard to agree on how to describe certain things as it is, and what needs to be described continues to evolve." The Semantic Web is a top-down technology that attempts to hardwire artificial intelligence by writing millions of if-then constructs, according to Sony researchers. With emergent semantics, users would be able to categorize and label their music collections any way they like, for example, and still establish semantic relationships with other users' music collections in a peer-to-peer setting. The Semantic Web, in contrast, would impose a fixed ontology and taxonomy on people's data collections. Emergent semantics patents were filed in Europe this month, and the theoretical models and algorithms are ready, say the researchers.
    Click Here to View Full Article
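
    The bottom-up lexicon formation described above can be sketched with a minimal "naming game," a standard toy model of agents converging on shared labels through pairwise exchanges. The update rule below is a common textbook variant chosen for illustration, not Sony CSL's published algorithm.

        # Minimal naming game: agents with no shared vocabulary converge on common
        # labels for a few objects purely through pairwise exchanges.
        import random

        OBJECTS = ["song", "album", "artist"]
        N_AGENTS = 20
        agents = [{obj: set() for obj in OBJECTS} for _ in range(N_AGENTS)]

        def invent_word():
            return "".join(random.choice("abcdefgh") for _ in range(5))

        for _ in range(5000):
            speaker, hearer = random.sample(agents, 2)
            obj = random.choice(OBJECTS)
            if not speaker[obj]:
                speaker[obj].add(invent_word())
            word = random.choice(sorted(speaker[obj]))
            if word in hearer[obj]:
                speaker[obj] = {word}        # success: both drop competing synonyms
                hearer[obj] = {word}
            else:
                hearer[obj].add(word)        # failure: hearer learns the new word

        # After enough exchanges the population tends to share one word per object.
        for obj in OBJECTS:
            print(obj, {tuple(sorted(a[obj])) for a in agents})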

  • "E-mail at a Crossroads"
    Network World (11/01/04) Vol. 21, No. 44, P. 48; Garretson, Cara

    Epidemic outbreaks of spam, phishing scams, and other abuses are threatening the usability of email and wearing down public confidence in the Internet; efforts to address the problem include initiatives by the Internet community to develop technological countermeasures, U.S. regulators' attempts to thwart email crime with a limited stable of legislative tools, and moves by international groups to shepherd the establishment of multinational anti-spam laws and regulatory agencies. "If you talk to people who use email, certainly within the consumer ranks, they're saying it's too much trouble now, there's too much junk, and it's just too dangerous," notes Sendmail Chairman Greg Olson. Still, few people think that email abuses will undo the medium's popularity, given how deeply embedded email has become in Americans' home and work life. The general consensus is that such abuses can only be suppressed, not eliminated, so tech firms, industry associations, legislators, and international bodies are focusing on reducing the severity of email's problems. Broad industry support is forming around sender authentication, a technological measure that lets email recipients confirm the source of a message. Meanwhile, the effectiveness of the only federal anti-spam law so far, CAN-SPAM, has been undercut by spammers' ability to hide their tracks and conceal their true identities. Anti-phishing proposals include a bill from Sen. Patrick Leahy (D-Vt.) to make phishing a federal offense carrying a maximum penalty of five years in jail. The ineffectiveness of federal laws against email abusers who operate overseas has spurred international action: Internet regulators at a July conference of the International Telecommunication Union called upon all governments to pass anti-spam legislation, while one month later the Organization for Economic Cooperation and Development set up a task force to monitor its member governments' anti-spam efforts and analyze related strategies so that best practices and awareness campaigns can be formulated.
    Click Here to View Full Article
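
    A toy example of what sender authentication buys a receiving mail server: the server looks up the policy the sending domain has published and checks whether the connecting IP address is authorized to send mail for that domain. The record, addresses, and simplified mechanism handling below are invented for illustration; real SPF-style evaluators support many more mechanisms, such as include and mx.

        # Toy SPF-style check: is this connecting IP authorized by the domain's policy?
        import ipaddress

        # In practice this string would come from a DNS TXT lookup for the sender's domain.
        published_record = "v=spf1 ip4:192.0.2.0/24 ip4:198.51.100.17 -all"

        def check_sender(record, connecting_ip):
            ip = ipaddress.ip_address(connecting_ip)
            for mechanism in record.split()[1:]:
                if mechanism.startswith("ip4:"):
                    if ip in ipaddress.ip_network(mechanism[4:], strict=False):
                        return "pass"                    # authorized by the domain owner
                elif mechanism == "-all":
                    return "fail"                        # everything else is unauthorized
                elif mechanism == "~all":
                    return "softfail"
            return "neutral"

        print(check_sender(published_record, "192.0.2.44"))    # pass
        print(check_sender(published_record, "203.0.113.9"))   # fail -- likely forged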

  • "Can Computers Untangle the Neural Net?"
    Scientist (10/25/04) Vol. 18, No. 20, P. 35; Heyman, Karen

    Researchers who model computer prototypes after the brain are not the same as computational neuroscientists, who use computers as a tool to develop predictive simulations of neural architecture and communication that could not be achieved through biological experimentation alone. Many labs are undertaking such research by combining data and theory into working models of the brains or neural components of a wide variety of organisms. There are two camps of computational neuroscientists: Those who use top-down, conceptual models that contain just enough detail to extract basic principles, and those who employ bottom-up, data-driven models in which a supercomputer is given mathematical descriptions of the individual elements, from which it derives complex simulations of individual neurons and neuronal networks. Adherents of the top-down approach argue that the complexity of a bottom-up model would thwart understanding of fundamental principles, while bottom-up advocate Jim Bower of the University of Texas San Antonio contends that "the purpose of many of the simpler models is not to discover underlying principles, but instead to convince others of the plausibility of a preconceived idea or concept." Boston University professor John White says both philosophies are attempts to construct mechanistic models that depend on "leaps of faith." Computational neuroscientists are trying to answer such riddles as how many times a neuron spikes each second, what patterns those spikes form, and how much information is carried by rate codes versus temporal codes. "Using information theory, we can measure how much each spike tells us about the outside world, and we can start to dissect the language of the brain," explains Princeton University's William Bialek.
    Click Here to View Full Article
    (Access to this site is free; however, first-time visitors must register.)
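
    Bialek's remark about measuring "how much each spike tells us about the outside world" refers to the mutual information between a stimulus and a neuron's response. The back-of-envelope sketch below simulates a neuron that fires faster when a binary stimulus is on and estimates how many bits its spike count carries per time window; the firing rates and window are invented numbers, not data from any experiment.

        # Toy estimate of the information a spike count carries about a binary stimulus.
        import numpy as np

        rng = np.random.default_rng(0)
        n_trials = 20000

        stimulus = rng.integers(0, 2, n_trials)            # stimulus off/on, equally likely
        rates = np.where(stimulus == 1, 8.0, 3.0)          # mean spikes per window (made up)
        spikes = rng.poisson(rates)

        def mutual_information(x, y):
            """Plug-in estimate of I(X;Y) in bits from paired discrete samples."""
            xs, ys = np.unique(x), np.unique(y)
            pxy = np.zeros((len(xs), len(ys)))
            for i, xv in enumerate(xs):
                for j, yv in enumerate(ys):
                    pxy[i, j] = np.mean((x == xv) & (y == yv))
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

        print(round(mutual_information(stimulus, spikes), 2), "bits per window")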

  • "Fast Forward"
    Intelligent Enterprise (10/30/04) Vol. 7, No. 16, P. 12; Grimes, Seth

    High-performance computing (HPC) continues to advance in technical capability, but it is also deepening its impact on the enterprise by offering new ways to support decision-making. The traditional scientific problems facing HPC--nuclear weapons design, weather system modeling, and the like--continue to eat up new capacity, but the industry is realizing that the true benefit of these machines depends on what type of problem they are given and how they are applied. Google's 100,000-node system is well suited to indexing and serving up Web content, for example, but would be far less effective if applied to protein-folding simulation. HPC is also playing an important role in ordinary business functions, which it can model, simulate, and optimize: Procter & Gamble, for example, models the manufacture of its Pringles potato chips and their packaging to eliminate defects and inefficiencies. Decision support is another area where HPC is starting to make its presence felt, allowing fast analysis of huge problems that break down into many simultaneously executable threads. SAS offers technology that helps HPC systems choose the right model, produce millions of possible forecasts, and mine the resulting data to detect patterns and other actionable information; IBM is also moving into this area, having established a new Center for Business Optimization that helps clients use HPC systems for business optimization. HPC is no longer solely about fast computing cycles, but about software and algorithmic innovations that add greater analytical capability to the enterprise.
    Click Here to View Full Article
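
    The "many simultaneously executable threads" pattern is easy to sketch: thousands of independent forecast scenarios are farmed out across processors, and the results are then mined for the figures that matter. The demand model, parameters, and scenario count below are invented for illustration; a production system would run far larger models on an HPC cluster rather than a single multicore machine.

        # Embarrassingly parallel decision support: run many independent scenarios,
        # then summarize the distribution of outcomes. Model and numbers are made up.
        import random
        from concurrent.futures import ProcessPoolExecutor

        def simulate_scenario(seed):
            """One candidate year-long demand forecast (a toy random-walk model)."""
            rng = random.Random(seed)
            demand = 100.0
            for _ in range(52):                            # 52 weekly steps
                demand *= 1.0 + rng.gauss(0.002, 0.03)
            return demand

        if __name__ == "__main__":
            with ProcessPoolExecutor() as pool:            # one worker per core
                forecasts = sorted(pool.map(simulate_scenario, range(10_000)))
            print("5th-95th percentile of year-end demand:",
                  round(forecasts[500], 1), "to", round(forecasts[9500], 1))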

  • "Enabling Technology Visionaries Share Their Secrets"
    Speech Technology (10/04) Vol. 9, No. 5, P. 22

    Five speech technology enablement visionaries--Brooktrout's Eric Giler, Aculab's Chris Gravett, Eicon's Kipton Heuertz, NMS' Brough Turner, and Intel's Tony Neal-Graves--discuss their companies' strategies, strengths, and prospects and how they relate to the adoption of speech enablement among enterprise customers. Both Giler and Neal-Graves think support for and maturation of standards such as VoiceXML and SALT is vital to spurring enterprise customers toward speech enablement, while Turner recommends more aggressive promotion of speech deployment's return-on-investment and proof points; Heuertz, meanwhile, expects greater voice revenues with the market rollout of more speech-enabled applications from vertically-oriented companies. In arguing why their respective companies offer better choices than the competition, Giler cites Brooktrout's commitment to its relationship with development customers, Gravett emphasizes the combination of Aculab's speech application development products with sturdy technical support, Heuertz touts his company's unparalleled installation and support tools, Turner flaunts NMS' speech technology experience along with price performance leadership and easy-to-use products, and Neal-Graves promotes Intel's price/performance, technology leadership, robust ecosystem, and strong presence. Each visionary details how his company is easing the implementation of speech for enterprise clients: Giler says Brooktrout is leveraging its products, its reseller channel, and its support of standard platforms such as Microsoft's Speech Server; Heuertz cites Eicon's Diva server product line and Software Development Kit, which respectively support easy installation and troubleshooting and efficient application building via application programming interfaces. Neal-Graves says Intel is effecting the migration from proprietary communications solutions to systems comprised of modular components and standard interfaces, while Turner notes that NMS offerings such as the Open Access speech application development environment boost the efficiency and speed of application development.
    Click Here to View Full Article


 