Volume 5, Issue 490: Friday, May 2, 2003
- "What's Spam By Any Other Name?"
IDG News Service (05/01/03); Gross, Grant
The first day of the FTC's three-day spam conference in Washington was marked by disagreement among email experts over what actually constitutes spam: Anti-spam advocates and certain companies defined all unsolicited bulk email as spam, while some email marketers only wanted messages with false subject lines or deceptive headers to be categorized as spam. "'Unsolicited and bulk' may not be the best definition for a law," acknowledged SpamCon Foundation President Laura Atkins, who noted that a prohibition on all unsolicited bulk email could be construed as a violation of the First Amendment. Sen. Charles Schumer (D-N.Y.), supporter of legislation that would define commercial email as advertising, asserted that commercial speech enjoys fewer First Amendment protections than political speech. AAW Marketing founder and panelist William Waggoner argued that spam filtering and other technological measures are damaging legitimate email marketers, and disagreed with other panelists that spam is an inexpensive marketing tool. "If you guys saw my Internet bill every month, it would floor most people in this room," he insisted. Direct Contact Marketing Group President Gilson Terriberry said that the proliferation of spam is prompting most users to delete unfamiliar email in bulk, and retooling the Internet may be the only solution; such a solution involves fundamentally revising the framework of email and email servers, with the result being a less open Internet. A Tuesday report from the FTC estimates that two-thirds of all spam contains false information, while Sens. Ron Wyden's (D-Ore.) and Conrad Burns' (R-Mont.) CAN-SPAM bill proposes banning deceptive commercial email. However, Washington state attorney general Christine Gregoire took issue with the CAN-SPAM bill, arguing that certain provisions are not as strong as state anti-spam laws.
- "Web-Based Attacks Could Create Chaos in the Physical World"
Internet security researchers presented a paper at a recent ACM Workshop on Privacy in an Electronic Society detailing how criminals or terrorists with a minimum of computer skills and resources could disrupt real-world operations by swamping corporations or individuals with thousands of unwanted catalogs via automated online order forms. A person can locate these forms using a search engine, employ a simple software program to automatically fill in the names, addresses, and cities of the victims, and then submit the forms. "It could be set up to send 30,000 different catalogs to one person or 30,000 copies of one catalog to 30,000 different recipients," explained Avi Rubin of Johns Hopkins University. "This could create a great expense for the sender, a huge burden for local postal facilities and chaos in the mail room of a business targeted to receive this flood of materials." Rubin added that the order forms are not limited to catalogs, but can also include requests for parcel pickups, repair service, or deliveries. In addition, attackers could thwart efforts to track them down by loading the program onto a floppy disk or a USB disk and exploiting Internet cafes. Rubin and his fellow researchers hesitated to disclose their findings until a popular search engine launched its new APIs, which raised the odds of such attacks happening. "To prevent these damaging activities, we need to look at the interface between cyberspace and the real world and to make sure there is a real person submitting a legitimate request, not a computer program launching a disruptive attack," Rubin declared. The team presented several preventative measures, including making online forms harder for search engines to find; eliminating common field names by changing the forms' HTML coding; and adding a Reverse Turing Test to each form that must be completed by a human user.
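The Reverse Turing Test defense mentioned above can be sketched as a challenge-response check on each form submission. The sketch below is illustrative only: a real deployment would use a distorted-image CAPTCHA rather than a text puzzle (which a script could also solve), and the word list and hashing scheme are assumptions, not from the paper.

```python
import hashlib
import random

# Illustrative word pool; a production system would use image challenges.
WORDS = ["harbor", "window", "basket", "copper"]

def issue_challenge():
    """Return a (question, expected-answer-hash) pair to embed in the form.

    Only the hash of the correct answer is stored server-side, so the
    form itself never reveals the expected response in plain text.
    """
    word = random.choice(WORDS)
    question = f"Type the word '{word}' backwards"
    expected = hashlib.sha256(word[::-1].encode()).hexdigest()
    return question, expected

def accept_submission(form_fields, answer, expected_hash):
    """Process the order only if the challenge was answered correctly."""
    if hashlib.sha256(answer.encode()).hexdigest() != expected_hash:
        return False  # likely an automated submission; drop it
    # ... queue the catalog request from form_fields here ...
    return True
```

Pairing this with randomized field names (the second countermeasure listed) also defeats scripts that fill forms by matching common field names like "name" and "address".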
- "Making Intelligence a Bit Less Artificial"
New York Times (05/01/03) P. E1; Guernsey, Lisa
Amazon, Netflix, and other online retail services rely on automated recommender systems to anticipate customer purchases based on past choices; however, a February report from Forrester Research found that just 7.4 percent of online consumers often bought products recommended by such systems, roughly 22 percent ascribed value to those recommendations, and about 42 percent were not interested in the recommended products. Improving these results requires enhancing recommendation engines with human intervention, according to TripleHop Technologies President Matt Turck. One of the key ingredients of today's recommendation technology is collaborative filtering, in which a buyer is matched to others who have bought or highly rated similar items. Commonplace problems with this methodology include cold starts, in which predicting purchases is difficult because the system lacks a large database of people with similar tastes, and the popularity effect, whereby the computer delivers recommendations that are pedestrian and prosaic. Some companies try to avoid such problems by adding a human element: Barnesandnoble.com, for instance, employs an editorial staff to tweak recommendations. "If it is not vetted and monitored by humans and not complemented by actual hand-selling, as we say in the book industry, it doesn't feel like there is anybody there," notes Barnesandnoble.com's Daniel Blackman. Some recommendation engines, such as Amazon's, can improve results with customer input via continuous editing of consumer profiles and special features--alerting the e-tailer not to make recommendations based on a purchase that is a gift for someone else, for example. Some companies also want recommender systems to prioritize surplus items in order to better manage inventory, a strategy that could engender consumer distrust, say software developers.
(Access to this site is free; however, first-time visitors must register.)
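The collaborative filtering described in the article above can be sketched in a few lines: score each unbought item by neighbors' ratings weighted by how similar those neighbors are to the shopper. The data, names, and cosine-similarity choice are illustrative assumptions, not details of any vendor's engine; note how the cold-start problem appears naturally when two users share no rated items.

```python
import math

# Toy rating matrix: user -> {item: rating}. Entirely invented data.
ratings = {
    "alice": {"book_a": 5, "book_b": 3, "book_c": 4},
    "bob":   {"book_a": 5, "book_b": 3, "book_d": 5},
    "carol": {"book_e": 2, "book_f": 4},
}

def cosine_similarity(u, v):
    """Cosine of the angle between two sparse rating vectors."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0  # the "cold start" problem: no overlap, no signal
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

def recommend(user):
    """Rank items the user hasn't bought by similarity-weighted neighbor ratings."""
    scores = {}
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        sim = cosine_similarity(ratings[user], their_ratings)
        for item, rating in their_ratings.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)
```

The popularity effect the article mentions is also visible in this scheme: items rated by many similar users accumulate the largest scores, so the top recommendations skew toward bestsellers unless a human editor or a diversity penalty intervenes.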
- "FCRC Draws Researchers from a Host of Disciplines"
Attendees of the Federated Computing Research Conference (FCRC) in June will find a wide assortment of affiliated research fields under one roof. The annual conference, to be held June 7-14 in San Diego, Calif., will assemble meetings, workshops, and tutorials around 16 different subdisciplines within computer science. The event provides researchers and engineers with the latest technical news from their own fields as well as the chance to exchange ideas and contacts with colleagues from other branches of R&D and CS. Mornings of FCRC week begin with joint plenary talks on topics and issues concerning the computing research community. Plenary speakers will include Michael Rabin, Michael Flynn, Barbara Liskov, Hector Garcia-Molina, and James Kurose. The attendees then branch off to their respective conferences and associated workshops, translating to some 32 technical events running concurrently every day. Among the other featured events at the 2003 FCRC will be the 2002 ACM Turing Award lecture from recipients Leonard M. Adleman, Ronald L. Rivest, and Adi Shamir, honored for their role in the creation of the most widely used public-key cryptography system, RSA, as well as a panel discussion on recruiting and retaining women and minorities in CS research.
For registration information, visit http://www.acm.org/sigs/conferences/fcrc
- "Off the Hype Meter"
CNet (04/29/03); Oltsik, Jon
The technology industry hype machine is running on overdrive even though the products currently available leave a lot to be desired, writes Jon Oltsik, who compares the promised benefits of "hot" technologies with their actual performance. He notes that only a few storage interoperability standards of any real value--Bluefin and Common Information Model--have emerged despite vast engineering resources, and they have not taken off in the marketplace. Oltsik thinks the concept of business impact management (BIM) should be laid to rest because it is a software solution for a problem that is primarily based on skills, policies, processes, and operations, while Tablet PCs lack a killer app and are undermined by poor handwriting recognition. The symbiotic relationship between Web services and Microsoft's .Net, as well as the huge array of .Net-related applications Microsoft is touting, has led Oltsik to think that Microsoft launched .Net to sow confusion in the marketplace. He admits that voice over IP (VoIP) can yield solid financial returns, but notes that proprietary technology, buggy systems, and the need to upgrade every few years limit its usefulness. Oltsik compares grid computing to hydrogen-powered autos in that it will eat up a lot of time and money before it is practical. He writes that wireless technologies such as Wi-Fi are grossly overhyped, considering their security and management shortcomings. Oltsik observes that the controversy that has erupted over Linux has eclipsed the fact that it is an operating system that may serve some enterprise applications better than others, while the value of intrusion prevention systems is being undercut by product immaturity, compatibility restrictions, and frequent false-positive incidents.
- "Robot Science Puts On a Friendly Face"
USA Today (05/01/03) P. 1D; Baig, Edward C.
Academic and corporate research labs worldwide are engineering robots designed to increase human comfort levels by taking on mundane tasks as well as carrying out more sophisticated operations. Carnegie Mellon University, through partnerships with other academic institutions as well as entities such as the Naval Research Laboratory, has made strides with innovations such as Personal Robotic Assistants for the Elderly (Pearl), a camera-equipped "nurse-bot" that can carry items for senior citizens; and Graduate Robot Attending ConferencE (Grace), a machine with animated facial features that performed many functions at a July artificial intelligence conference without human assistance, including registering for the event, navigating throughout the convention, and delivering a PowerPoint presentation. However, the advantages of robotic assistance are often depicted unrealistically thanks to the popularity of sci-fi, and truly successful robot assistants will need to combine mobility and perception. Jordan Pollack of Brandeis University adds that cheap human workers could offer a better return on investment than robots programmed to carry out the same chores. Hans Moravec of Carnegie Mellon has outlined an evolutionary path for robotic intelligence up to the mid-21st century: He believes first-generation "universal robots" will emerge by 2020, while second-generation machines capable of cognition will debut 10 years later. Third-generation robots, which should appear around 2040, will be capable of liking and disliking, possess conversational abilities, and model behaviors. The following decade could see robots that can reason and think abstractly, have human-like brainpower, and outperform people in various tasks, a development that could cause humans in certain professions to be phased out.
- "Scientists Examine IT's 'Human Factor' "
NewsFactor Network (04/30/03); Martin, Mike
Researchers working on the National Science Foundation's (NSF) Management of Knowledge-Intensive Dynamic Systems (MKIDS) program are leveraging information technology to determine how social networks among persons and groups shape the organizational architecture. This will help them simplify the processes affecting the organization's ability to respond to expected and unexpected changes. Kathleen Carley of Carnegie Mellon University is building computational models based on IT data--phone calls and email, for example--that can map out an organization's internal structure and identify potential "failure points." Meanwhile, Raymond Levitt and Stephen Barley of Stanford University are trying to construct weakness-free organizations by refining workplace social networks. "Our MKIDS-funded research will attempt to model how team participants from different national cultures--who have radically different core values, cultural norms of behavior and work practices--interact on global projects," Levitt explains. NSF program director Suzanne Iacono notes that such models will yield systems that track and react to changes occurring throughout the managerial levels and physical locations of an organization. Spurred by national security issues, the MKIDS project seeks to build systems that are far more advanced than data-mining tools, which Levitt says are usually "developed with inadequate user input--and with no ability for their designers to simulate the workloads that will be imposed on both users and their managers by the new IT-enabled work processes." Levitt declares, "The MKIDS research introduces social science into a mix of management science and computer science."
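The "failure point" idea attributed to Carley's models above can be illustrated with a toy heuristic: build a who-talks-to-whom graph from communication records and flag the person whose removal most fragments the organization. The data and the single-node-removal heuristic below are illustrative assumptions, not the actual MKIDS methodology.

```python
from collections import deque

# Invented communication log: each pair exchanged phone calls or email.
comms = [("ana", "ben"), ("ben", "cho"), ("cho", "dia"), ("ben", "eve")]

def build_graph(edges, skip=None):
    """Undirected adjacency map; edges touching `skip` are dropped,
    so people connected only through `skip` disappear from the graph."""
    graph = {}
    for a, b in edges:
        if skip in (a, b):
            continue
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    return graph

def reachable_pairs(graph):
    """Count ordered pairs of people who can still reach each other (BFS)."""
    total = 0
    for start in graph:
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for nxt in graph[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        total += len(seen) - 1
    return total

def failure_point(edges):
    """The person whose removal destroys the most organizational connectivity."""
    people = {p for edge in edges for p in edge}
    baseline = reachable_pairs(build_graph(edges))
    return max(people,
               key=lambda p: baseline - reachable_pairs(build_graph(edges, skip=p)))
```

In this tiny example the hub through whom most conversations flow is flagged; production models would of course weight ties by communication volume and track how the structure shifts over time.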
- "Artificial Intellect Really Thinking?"
Washington Times (05/01/03) P. C9; Reed, Fred
A computer can be labeled as artificially intelligent, but the intelligence--if indeed that is what it is--actually resides in the program, writes Fred Reed. However, he points out that such programs, once deconstructed, consist of incremental steps that by themselves do not indicate true intelligence. A program such as the one IBM's Deep Blue used to beat chess champion Garry Kasparov in 1997 works out all potential moves from a given board position simply and mechanically via a "move generator." Just as mechanical are the rules that the program uses to choose the optimal maneuver. However, Deep Blue could be rated as intelligent by mathematician Alan Turing's supposition that an intelligent computer can interact with a person so well that that person cannot distinguish it from a human being. Reed writes that most people can identify intelligence without clearly defining it, so the term itself is subject to interpretation. "For practical purposes, and certainly in the business world, the answer seems to be that if it seems to be intelligent, it doesn't matter whether it really is," he notes. The convergence of speech recognition, robotic vision, and other technologies is paving the way for practical machines that at least appear to be intelligent, such as robots designed to care for the elderly in Japan.
- "Seashell Offers Digital Memories"
BBC News (04/30/03)
BTexact has developed a methodology in which physical mementos can be scanned into a computer and connected to related digital content--emails, photos, video, Web sites, text messages, etc. This content can be accessed by placing the memento--a child's toy, a souvenir seashell, and so on--back on the scanner after it has been stored. Developers hope that such techniques will help people vanquish their reluctance to use personal computers. "Lots of people don't want PCs in their normal living areas but mementos are always on display," notes Andy Gower of BTexact. BTexact is currently discussing a commercial rollout of its innovation with several companies. Such novel digital storage systems could be commercially available within a year and a half. BTexact and other research groups are also developing artificial plants that can be used to access digital communications. Meanwhile, MIT's Tangible Media group is researching interfaces that integrate digital information, the physical environment, and people.
- "Scientists Study Quantum Computing Feasibility"
Daily Californian (04/30/03); Maekawa, Joji
Supercomputers with billions of times the speed and computing power of current models could become a reality thanks to a multidisciplinary quantum computing research effort at the University of California, Berkeley. "What would take millions of years to do on a classical computer could be done in minutes on a quantum one," says Mark Hillery of City University of New York's Hunter College. He notes that the key to quantum computers' vast processing ability and speed is the quantum bit, which can exist as 0 and 1 simultaneously. UC Berkeley project leader and chemistry professor K. Birgitta Whaley says the aim of the project is to find a way to exploit the varying states of each quantum particle. "The short-term goals are to figure out how to control these quantum states of matter, to be able to do what you want them to do," she explains. "The long-range goal is to have a scalable device where you can have hundreds or up to thousands [of qubits] at will." Maintaining the desired states of all the particles requires employing error avoidance methods to control the positions of each state. Hillery believes Whaley's work could lead to significant advances in the fields of computer science and cryptography; quantum computing could also contribute to scientific analyses that yield cures for diseases such as Parkinson's and Alzheimer's.
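The "0 and 1 simultaneously" property of the quantum bit mentioned above has a standard textbook formulation (the notation below is conventional quantum mechanics, not drawn from the article):

```latex
% A single qubit is a superposition of the two classical basis states:
\[ |\psi\rangle = \alpha|0\rangle + \beta|1\rangle,
   \qquad |\alpha|^2 + |\beta|^2 = 1 \]
% An n-qubit register carries an amplitude for every one of the 2^n
% classical bit strings at once, which is the source of the exponential
% parallelism the researchers describe:
\[ |\Psi\rangle = \sum_{x \in \{0,1\}^n} c_x\,|x\rangle,
   \qquad \sum_{x} |c_x|^2 = 1 \]
```

Scaling to "hundreds or up to thousands" of qubits is hard precisely because all 2^n amplitudes must be shielded from environmental noise, which is why the error avoidance methods mentioned above are central to the project.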
- "The Great IT Complexity Challenge"
NewsFactor Network (04/30/03); Brockmeier, Joe
Autonomic computing promises to clear up complexity in company IT operations, freeing people from mundane maintenance tasks and handing those functions over to computers themselves. Major IT vendors are latching on to autonomic computing not only as a way to reduce complexity and save money, but also to make the IT infrastructure more adaptive to business demands. IBM autonomic computing director Miles Barel says there are four required elements of autonomic computing: Self-configuration, self-healing, self-optimization, and self-protection. He also gives a maturation schedule for autonomic computing and says most enterprises have deployed managed services, but do not use prediction or adaptive systems. Gartner analyst Tom Bittman says that standards are a looming issue, and notes that customers are wary of buying all of their products from one vendor in order to get autonomic capability; he explains that autonomic computing eventually will mean less room for actual staff in the IT department since systems will largely run themselves, but at the same time the purpose is primarily increased flexibility and performance, not saved costs. Sun Microsystems' Yael Zheng says autonomic computing will move IT employees up the ladder in terms of what they contribute to the business. Instead of maintaining hardware, they can focus on more critical functions such as writing business applications. Autonomic computing involves virtualizing the IT infrastructure, which also boosts utilization rates since resources can be provisioned across the enterprise unhindered.
- "Harvard Event Showcases Russia as Outsourcing Site"
IDG News Service (05/01/03); Roberts, Paul
The second annual Russian IT Seasons event held this week at Harvard University is an opportunity for Russian firms to highlight the advantages of outsourcing software research and development to their country, including lower wages and a large pool of skilled programmers. Russoft President Valentin Makarov said that Russian tech talent is focused toward design and development work, which fosters the creation of unique and patentable products. He and other American and Russian representatives spoke highly of Russia's stability; BridgeQuest CIO Lee Erlikh said that Russian engineers are highly motivated, contrary to the stereotypical view of an unenthusiastic workforce. He noted that his firm helped Relativity Technologies in the United States retool its signature product using an R&D team that was more than 90 percent Russia-based. Meanwhile, Luxoft official Derrick Robinson estimated that U.S. companies could save up to 65 percent on development costs by outsourcing to Russian firms. Unlike other offshore development centers, Russian programmers are more devoted to accuracy, documentation, and contingency planning--factors that could boost upfront development costs, but also yield compensation "when things don't go right," according to Robinson. Outsourcing to Russia is not without its difficulties: Projects could be impeded by language and management barriers, while Robinson noted that companies should not try to set up an offshore development center on their own, but rather team up with a well-entrenched "trusted partner" such as his company. Erlikh advised companies to establish a formal management architecture and communications channel with Russian development teams.
- "A Misnomer Taken to the 'Extreme' "
EarthWeb (05/01/03); Stewart, Jim
Extreme programming is a methodology that ensures the best results when programming teams are faced with ambiguous and shifting project expectations. Despite its name, extreme programming actually incorporates all the most conservative practices possible into the development process. Unfortunately, many people are put off by a name that conjures images of unprecedented techniques with little regard for standard procedures. Extreme programming, more aptly called "extremely conservative programming," actually makes development as ordered as possible in an effort to deal with the unexpected. Planning is done only for the nearest-term work and for the smallest feature that can be delivered in the shortest amount of time; this avoids building something useless or misunderstood. Work should be described in simple terms to users and tested after every task. Customers should be kept in close communication, and programmers should never write code alone, but always jointly, so that code is standard, documented, and according to design. Extreme programming proponents say all aspects of the methodology should be adopted for the best results, but wholesale adoption often means dealing with people's perceptions and attitudes.
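The "tested after every task" discipline above is usually practiced test-first: the test for the smallest deliverable feature is written before the code that makes it pass. A minimal sketch, with an invoice-line feature invented purely for illustration:

```python
import unittest

def format_invoice_line(item, qty, unit_price):
    """The smallest useful feature: render one invoice line."""
    return f"{item} x{qty} @ {unit_price:.2f} = {qty * unit_price:.2f}"

class TestInvoiceLine(unittest.TestCase):
    """Written first; the function above exists only to make this pass."""

    def test_single_line(self):
        self.assertEqual(
            format_invoice_line("widget", 3, 2.50),
            "widget x3 @ 2.50 = 7.50",
        )
```

Run with `python -m unittest` after each task; keeping the planned feature this small is what lets the team deliver, test, and replan in short cycles.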
- "Data Security Measures Failing to Match Legal Expectations"
Computerworld (04/28/03); Vijayan, Jaikumar
Increased governmental regulations concerning companies' computer security and data privacy policies have heightened firms' legal exposure, according to experts. The Health Insurance Portability and Accountability Act, the Gramm-Leach-Bliley Act, the Sarbanes-Oxley Act, and several proposals at the state and federal level all mean businesses have greater legal liability. On the one hand, what companies promise and what they actually do need to be absolutely clear, but at the same time the regulations themselves are broad and unspecific in terms of technology. The best organizations can do now is to set up the basics: An access-control system, audit controls, encryption for sensitive data, and administrative aspects such as policy and training, according to Lew Wagner of the University of Texas. The American Bar Association's Jon Stanley believes the most likely development is that the courts will define the technical thresholds for the legislation. Until that point, companies need to keep tight controls over access and transaction logs, and set up ordered methods of response in case of security breaches, says Wagner.
- "Group Offers Help for Women in the Tech Sector"
TechRepublic (04/28/03); Bell, Tina Jenkins
The Association for Women in Computing (AWC) is a professional group created to give women a leg up in the technology sector. The organization currently consists of 2,000 members and about 20 active chapters throughout 13 U.S. states and the District of Columbia. The association is chiefly oriented toward U.S.-based female employees, but is also open to women in other countries as well as men. Technology consultant and AWC national president Suford Lewis praises the organization for fostering an informal atmosphere where women can be open and honest. AWC's primary goals are to help technology-savvy women communicate and network, provide them with a foundation for entry and advancement, and offer them opportunities for professional development. Communication and networking opportunities are presented at monthly chapter meetings and national board meetings, as well as national and local chapter Web sites that promote current events and projects. AWC offers scholarships and initiatives to fuel an interest in technology among girls, and hosts events at the national and chapter levels that honor women who make significant contributions and achievements in the tech field. Both employed and unemployed members can improve their professional development through AWC seminars designed to keep them abreast of corporate market and industry happenings, as well as offer them continuing education units for certifications or employment-based prerequisites.
(Access to this site is free; however, first-time visitors must register.)
For more information about ACM's Committee on Women in Computing, visit http://www.acm.org/women
- "Big Brother: Is He Watching You?"
Government Technology (04/03); McKay, Jim
Legislators and privacy supporters are critical of the government's efforts to clamp down on terrorism using the latest technologies to gather, analyze, and share surveillance data on Americans; they fear that such measures will create an Orwellian state that erodes personal privacy and persecutes innocent people. The chief architectural component of this system would be a Terrorist Threat Integration Center, where citizen profiles would be used to root out potential terrorists. Courting controversy are data-mining programs such as Total Information Awareness (TIA) and the Computer-Assisted Passenger Pre-Screening System II (CAPPS II), while anti-terrorism legislation proposed by the Justice Department has privacy proponents on edge because it removes oversight on presidential powers and would allow law enforcement agencies to share sensitive data on citizens without their permission. A major critic of data-mining and electronic surveillance projects is former Virginia Gov. James Gilmore, who argues that such measures break with U.S. tradition and would create an environment that "changes [Americans'] conduct and influences whether or not they are really a free people." Other critics contend that data-mining systems such as TIA would have error rates that generate many false positives, or would institutionalize racial profiling or other reprehensible cataloguing practices. Gilmore thinks the job of ensuring privacy protections should be left to strong regulation rather than to a Homeland Security Department privacy officer. The TIA has become a serious point of debate, but critics warn that focusing primarily on TIA could allow lesser-known measures like CAPPS II to slip under the radar. Former National Security Agency general counsel Stewart Baker believes the best solution is a government-driven data-mining system with built-in privacy safeguards and accountability.
- "Exporting IT Jobs"
Computerworld (04/28/03) Vol. 37, No. 17, P. 39; Hoffman, Thomas; Thibodeau, Patrick
The number of U.S. companies hiring cheap offshore labor for routine IT operations is increasing as a result of their need to reduce IT costs and balance permanent salaried employees against workers they can hire temporarily on an as-needed basis. CIOs are looking for IT professionals who are also adept project managers and business/IT liaisons, in keeping with the corporate goal to achieve "specialization and reliability," according to Mark Hauser, CEO of Cap Gemini Ernst & Young's Americas division. A November 2002 report by Forrester Research analyst John C. McCarthy predicts that 3.3 million white-collar jobs and $136 billion in wages will be outsourced to Russia, the Philippines, India, and other countries by 2015. Meanwhile, Maria Schafer of Meta Group anticipates that outsourcing increases and the desire for more flexible schedules and project variety will spur as much as half of the American IT workforce to switch to contract work by 2007. Also driving the erosion of the U.S. IT job market is a corporate shift away from building software in-house in favor of purchasing off-the-shelf products. "If you buy the argument that a lot of IT has become commoditized, [then] we are becoming inventors, creators, integrators and architects, and we are going to send the production offshore," argues Cutter Consortium consultant Steve Andriole. The current economic climate is fueling the offshore outsourcing of programming, but Gartner analyst Rita Terdiman reports that application development and infrastructure support operations are also being exported. The outsourcing boom is forcing IT employees to consider blue-collar strategies, such as union membership, to ensure their livelihoods.
- "Bright New World"
New Scientist (04/26/03) Vol. 178, No. 2392, P. 30; Schechter, Bruce
Optical applications are being rethought thanks to the advent of plasmonics, which has the potential to revolutionize nanotechnology. Experiments by Thomas Ebbesen and Peter Wolf of NEC Research Institute established that light directed on a metal foil perforated with nanoscale holes excites surface plasmons, causing them to build up an electrical field that penetrates the metal and excites plasmons on the opposite side, yielding far more light than is actually focused on the metal. Ebbesen, now at the Louis Pasteur University, last year demonstrated that a metal surface with a single hole surrounded on both sides by an inscribed bullseye pattern of concentric circles focuses the plasmons in a similar manner, channeling the light into a tight beam. These plasmonic apertures, if incorporated into existing lithography equipment, could help computer makers etch smaller circuits and thus build faster and cheaper computers. Meanwhile, a research team at Pasadena's California Institute of Technology has built a waveguide featuring an insulator lined with metallic nanospheres; shining a light on one nanosphere causes the surface plasmons to vibrate, triggering a transfer of energy to a neighboring nanosphere, and so on. The plasmonics application with the greatest potential would be the channeling of photons through nanoscale circuits, while Rice University's Naomi Halas and Jennifer West think medical diagnostic tests and drug delivery systems could be significantly enhanced with the technology. They have created gold-coated silica spheres, or "nanoshells," whose surface plasmons are excited variably by light according to the thickness of the coating. They also think their nanoshells could be used as an insulin delivery system implanted under the skin and activated by light. Another possible plasmonics application is the fabrication of "perfect lenses" that offer superior resolution.
- "Get Real"
Darwin (04/03); Boyd, Stowe
The chief value of instant messaging and other forms of real-time communication (RTC) services is presence, and this is fundamentally changing business communication, writes A Working Model managing director Stowe Boyd. To take advantage of this development, companies will need to make sizeable investments and forge strategic partnerships. In addition to first-order productivity benefits (accelerated information transfer and increased parallelism, for example), RTC services carry second-order benefits--namely, the ability to do new things--that offer even greater returns. Boyd cites Internet protocol designer David Reed, who theorizes that a network's value is determined by the number of social groups it supports; this value grows exponentially as the population increases, transforming the economic model. IM providers are refurbishing first-generation IM products into services so they can establish interoperability with other applications, and are disassociating presence from IM in order to embed presence in literally any device. Boyd predicts that mobile devices, PCs, and phones will feature native presence support via the operating system or presence-sensing client software, while pseudo-intelligent agents or bots will enjoy a short surge in use until RTC services and presence are firmly enmeshed within the application services layer of next-generation enterprise architectures. All enterprise applications will be optimized over the next several years to support presence and RTC. Further down in the enterprise architecture stack, the need for real-time processing will prompt a comprehensive reassessment of the architecture. Every level of the enterprise architecture will be reshaped by third-generation IM systems, Boyd writes.
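Reed's group-forming argument cited above is usually stated alongside the older network-value laws; the formulas below are the standard statements of these laws, not drawn from the article itself:

```latex
% Sarnoff's law: a broadcast network's value scales with its audience.
\[ V_{\text{broadcast}} \propto N \]
% Metcalfe's law: value from possible pairwise connections.
\[ V_{\text{pairwise}} \propto \frac{N(N-1)}{2} \]
% Reed's law: value from the number of possible subgroups of two or
% more members, which dominates both of the above as N grows.
\[ V_{\text{groups}} \propto 2^{N} - N - 1 \]
```

The exponential term is why Boyd argues that presence, which lets ad hoc groups form the moment their members are visible to one another, transforms the economic model rather than merely speeding up messaging.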
- "Personalizing Web Sites With Mixed-Initiative Interaction"
IT Professional (04/03) Vol. 5, No. 2, P. 9; Perugini, Saverio; Ramakrishnan, Naren
Saverio Perugini and Naren Ramakrishnan of Virginia Polytechnic Institute believe that a truly personalized Web site will personalize the user's interaction, and a mixed-initiative architecture where the user can control the interaction is the best option. They characterize browsing as an example of directed dialogue because the Web site takes the initiative by providing an array of hyperlinked choices that the user must respond to; the disadvantages of such a model, which usually consists of a Web site with multiple browsers, include supporting an exhaustive number of potential browsing scenarios and over-specification of the personalization goal. Perugini and Ramakrishnan believe plugging an out-of-turn interaction toolbar into the browser will support mixed-initiative interactions and enable the user to take charge within the Web site: This will eliminate the site's need to directly uphold all potential interfaces within the hyperlink framework, reduce the interface's clutter, and make the interaction more akin to a natural dialogue. Web site personalization is streamlined to a partial evaluation of a representation of interaction, and the authors write that the Extensible Stylesheet Language Transformations (XSLT) engine is a good choice for easy deployment. Programs are first modeled in XML, and then an XSLT style sheet is outlined for each user input and applied on the XML source; dead ends are pruned via additional post-processing transformations, while high-level XSLT functions can expedite link label ordering on recreated Web pages, among other processes. Perugini and Ramakrishnan claim that this approach can unify other types of Web site personalization and model dynamic content. Other areas they are investigating include inverse personalization, multimodal Web interface design, and mixed-initiative functionality based on VoiceXML.
Transformation-based personalization strategies will become more important as wireless devices and the display of only the most relevant data on handheld computers become prevalent, the authors contend.
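The out-of-turn pruning the authors describe amounts to partially evaluating the site's interaction structure with respect to the user's input. Their actual pipeline uses XML plus XSLT style sheets; the rough Python analogue below, with an invented car-shopping hierarchy, only illustrates the pruning idea.

```python
# A site's browsing hierarchy as a nested dict (invented example).
# Normally the user picks a body style first, then a condition, then
# a dealer; an out-of-turn input like "used" skips ahead.
site = {
    "sedan": {"new": {"dealer_a": {}}, "used": {"dealer_b": {}}},
    "truck": {"used": {"dealer_c": {}}},
}

def partially_evaluate(tree, term):
    """Keep only branches consistent with `term`, splicing out the
    level at which `term` was matched so its children move up."""
    pruned = {}
    for label, subtree in tree.items():
        if label == term:
            pruned.update(subtree)  # term satisfied; hoist its children
        elif (child := partially_evaluate(subtree, term)) or term in subtree:
            pruned[label] = child   # keep labels that still lead to term
    return pruned
```

Saying "used" out of turn collapses the condition level everywhere, so the user sees only sedan-to-dealer_b and truck-to-dealer_c paths; this is the clutter reduction and dead-end pruning described above, here performed on a dict instead of on XML via XSLT.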