Association for Computing Machinery
Timely Topics for IT Professionals

About ACM TechNews

ACM TechNews is published every week on Monday, Wednesday, and Friday.


ACM TechNews is intended as an objective news digest for busy IT professionals. Views expressed are not necessarily those of either AutoChoice Advisor or ACM. To send comments, please write to technews@hq.acm.org.
Volume 6, Issue 594:  Wednesday, January 14, 2004

  • "Patents Out of Control?"
    USA Today (01/13/04) P. 1B; Davidson, Paul

    Industry and government officials agree that the clearest sign of a patent system run amok is the surging number of patent lawsuits, which are becoming problematic for companies large and small and threaten to stifle technological innovation. Critics charge that many patents at the center of such lawsuits--of which those covering software and business methods are among the most notorious--are obscure or false, but are being approved by a U.S. Patent and Trademark Office (PTO) ill-equipped to thoroughly research the validity of vast numbers of patent applications. "Very bad patents are...draining millions of dollars that could be spent on finding a better mousetrap," notes lawyer Mark Banner. Approximately 180,000 patents were awarded by the PTO in 2002, while about 700 more patent examiners need to be hired to manage a backlog of 500,000 applications, according to officials. Bad patents are bad news for small companies that lack the financial resources to weather lawsuits, while larger companies must divert research funds to build patent portfolios that protect them from opportunistic patent holders. Critics contend that the patent system has given rise to a new kind of company that exists solely to stockpile patents and collect royalties, without injecting anything into the economy. PTO deputy undersecretary for intellectual property Jon Dudas counters that most patents are legitimate, and insists that the patent system is a neutral institution. Meanwhile, Walker Digital's Jay Walker argues that the strong property rights of the United States are directly responsible for the country's world lead in innovation.
    Click Here to View Full Article

  • "To P2P or Not to P2P?"
    NewsFactor Network (01/13/04); Martin, Mike

    Hewlett-Packard principal research scientist Mary Baker argues that companies could use peer-to-peer (P2P) systems for useful and beneficial applications beyond the file-sharing or song-swapping that most P2P services are notorious for; Intel researcher Petros Maniatis lists data backup and Internet routing as examples of such applications. "In a corporate environment, P2P backup makes a lot of sense, because I essentially know that my colleague...has strong incentives to hold my backup copies," Maniatis explains, while Baker adds that "mutually suspicious" peers can support a more trustworthy, attack-resistant service. Maniatis notes that most P2P academic research has focused on algorithms to enhance efficiency, scalability, robustness, and security, while defining "P2P-worthy" applications has fallen by the wayside. Beneficial P2P applications can only be designed and deployed if software designers climb a decision tree modeled after actual and proposed P2P systems. Baker says that a true P2P virtual environment must be self-organizing, in that peers can find one another within it; it must promote peer equality, also known as "symmetric communication"; and it must follow a decentralized schematic, with peers given ample autonomy. Climbing the decision tree means traversing five branches, budget being first and foremost. The next two branches are relevance and rate of system change, with Maniatis defining relevance as the likelihood that a peer wants to see data from fellow peers, allowing cooperation to evolve naturally but gradually. Trust and criticality are the last two branches: Baker points out that mutually distrustful peers could still be relevant to the problem at hand, but cautions that the problem's criticality to users could rule out a P2P solution if they demand centralized control regardless of technical merits.
    Click Here to View Full Article
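    The five-branch decision tree Baker and Maniatis describe can be sketched as a simple screening function. The branch names come from the article; the thresholds, parameter names, and ordering details below are illustrative assumptions, not the researchers' actual criteria:

```python
def p2p_worthy(budget_constrained: bool,
               peer_relevance: float,      # 0..1: how likely a peer wants another peer's data
               system_change_rate: float,  # 0..1: how quickly membership and data churn
               mutual_trust: bool,
               demands_central_control: bool) -> bool:
    """Walk the five branches: budget, relevance, rate of change, trust, criticality."""
    if not budget_constrained:
        return False  # with ample budget, a centralized service is usually simpler
    if peer_relevance < 0.5:
        return False  # peers rarely want each other's data; cooperation won't evolve
    if system_change_rate > 0.9:
        return False  # extreme churn undermines self-organization
    if demands_central_control:
        return False  # criticality: users insisting on central control overrides P2P
    # Mutual suspicion alone is not disqualifying; per Baker, mutually
    # suspicious peers can even make the service more attack-resistant.
    return True
```

    Corporate P2P backup, for instance, would pass: budgets are tight, colleagues want each other's backups held, churn is modest, and no one demands a central server.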

  • "No Safety Net for Programmers"
    Salon.com (01/12/04); Mieszkowski, Katharine

    U.S. software programmers may not be eligible for the wage insurance and other assistance that is routinely granted to workers in the manufacturing sector. The Trade Adjustment Assistance Reform Act of 2002 was meant to offset the financial loss to workers whose jobs producing "articles" were displaced by globalization and free trade, but the U.S. Department of Labor has repeatedly denied laid-off software programmers assistance because it classifies software as a service and not an article. Attorney Michael G. Smith is bringing a class-action lawsuit against the government on behalf of a number of former technology workers who lost their jobs to foreign outsourcing and were denied retraining, tax credits, health insurance, job-search assistance, and the temporary wage assistance available to workers over 50. University of California, Santa Cruz, economics professor Lori G. Kletzer says there is no justification for excluding white-collar workers from the assistance program, since they bear the brunt of increasing foreign competition and economic restructuring just as their blue-collar colleagues do. Some economists think the issue of comprehensive wage insurance will become more important as U.S. workers understand the threat posed by globalization. Smith says the Department of Labor's rulings against tech workers are not even consistent, and notes at least five cases where workers received benefits compared to more than 40 where workers were denied assistance. Not all labor activists are united behind the wage insurance issue: Washington Alliance of Technology Workers organizer Marcus Courtney says that efforts should be directed instead at changing policies to better protect American technology jobs.
    Click Here to View Full Article
    (Access for paid subscribers only.)

  • "Bugs Taking Over Robot Guidance"
    Wired News (01/14/04); Sandhana, Lakshmi

    Insect vision is the inspiration for a new type of unmanned aerial vehicle (UAV) being developed by scientists under the Controlled Biological and Biomimetic Systems program of the Defense Advanced Research Projects Agency. The sensor-based navigation systems employed by large-scale, high-flying UAVs are not well-suited for smaller vehicles with shorter wingspans that operate at lower altitudes. Insects can navigate extremely well by vision despite drawbacks such as low-resolution eyes, tiny brains, and limited depth perception, and scientists attribute this gift to a phenomenon known as optic flow. "The principle is simply that, if the insect flies along a straight line, objects that are near it appear to whiz by much more rapidly in the eye than objects that are far away," explains M.V. Srinivasan of the Australian National University. "Thus, the distance to an object can be inferred in terms of the velocity of its image in the eye--the greater the velocity, the nearer the object." A research team coordinated by Srinivasan and Javaan Chahl of the Australian Defense Science and Technology Organization is developing aircraft equipped with small video cameras that send signals to a ground station that computes optic flow via image analysis, and then relays back the proper commands. Meanwhile, Centeye CEO Geoffrey L. Barrows is working on optic flow sensors capable of concurrent image capture and processing; the imaging chips Barrows has developed partially "digest" an image before sending the information to a backend processor. Barrows has built miniature aircraft outfitted with complete vision systems that are lightweight and power-efficient, and that maintain a constant altitude in flight, ascend or descend, and avoid collisions. Insect vision technology has many potential applications, including interplanetary exploration, intelligent toys and vehicle systems, autonomous robots, and panoramic imaging systems.
    Click Here to View Full Article
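    For the simplest geometry, a feature viewed at right angles to a straight flight path, the optic-flow rule Srinivasan states reduces to distance = forward speed / angular image velocity. A minimal sketch of that relation; the function name and numbers are illustrative, not taken from the researchers' system:

```python
def range_from_optic_flow(forward_speed: float, angular_velocity: float) -> float:
    """Distance to a feature seen abeam of a straight-flying craft.

    A feature at distance d, viewed at right angles to the flight path,
    sweeps across the eye at angular rate omega = v / d, so d = v / omega.
    """
    if angular_velocity <= 0:
        raise ValueError("feature shows no flow; range cannot be inferred")
    return forward_speed / angular_velocity

# Flying at 2 m/s, a feature whizzing by at 4 rad/s is only 0.5 m away,
# while one creeping at 0.1 rad/s is 20 m away: faster flow means nearer.
near = range_from_optic_flow(2.0, 4.0)
far = range_from_optic_flow(2.0, 0.1)
```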

  • "You Are What You Watch"
    CIO (01/12/2004); Bass, Alison

    A recent story in Communications of the ACM indicates that new data mining software that can build profiles of TV watchers and deliver targeted advertising to them through digital personal video recorders such as TiVo is being tested. These systems would be a boon to a TV industry facing eroding commercial programming profitability as TiVo and similar devices, which offer viewers the luxury of skipping commercials, grow in popularity. The TiVo would record the user's viewing habits, while software within the recorder would construct a demographic profile through statistical modeling; once the profile is complete, the system would select ads from a stock inventory and download them to the viewer's video recorder. Duquesne University's William Spangler, the lead author of the CACM article, notes that the ads would run before the program, in the same way that commercials are played before a movie at a cinema. Before the software can be commercially applied, privacy issues must be resolved: Spangler says that an opt-in policy, in which advertisers get permission from viewers to send them commercial material, is the only workable approach. "As part of getting a better price on their [personal video recorder] or some other incentive, people would agree to share their viewing behavior and perhaps provide additional information," he explains. "And then they'd be able to view more relevant ads." However, CRM executive editor Alison Bass points out that the entertainment industry has long preferred opt-out policies, whereby users must request that companies stop sending them ads.
    Click Here to View Full Article

    "Using Data Mining to Profile TV Viewers" was featured in the December 2003 issue of Communications of the ACM. Click Here to View Article

  • "Making the Grid Transparent to Users"
    Innovations Report (01/13/04)

    The IST GridLab project, which consists of 11 collaborating institutions led by Poland's Poznan Supercomputer and Networking Center, aims to supply EU Grid users with a simple yet powerful environment for developing Grid applications by producing a set of Grid services that include dynamic resource brokering, data management, and monitoring for both developers and end users. Users can avail themselves of such services through the Grid Application Toolkit (GAT), which allows applications to employ whatever Grid resources are available at the beginning of a particular programming task. "Users don't have to worry about which service or resource they are accessing--they use the same API," notes project coordinator Jarek Nabrzyski. "GAT chooses the best resource available automatically." He explains that GAT's major advantage to users is that it enables them to employ the Grid in the least complicated way. GAT comprises an API, a library, and a set of Grid middleware that connect applications to Grid resources while keeping the complexity of the Grid hidden from programmers. GridLab researchers are also devising and assessing Grid applications on testbeds built from linked supercomputers and other global resources. Several large user communities, a European astrophysics network and U.S.-funded consortia among them, are conducting tests.
    Click Here to View Full Article
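    The "same API, best available resource" idea Nabrzyski describes is essentially an adapter layer between the application and interchangeable backends. A minimal illustration of the pattern; the class and function names here are hypothetical and are not the real GAT interface:

```python
class LocalCopy:
    """Fallback backend: always usable."""
    name = "local"
    def available(self) -> bool:
        return True
    def copy(self, src: str, dst: str) -> str:
        return f"{self.name}: copied {src} -> {dst}"

class GridFTPCopy:
    """A remote backend that may or may not be reachable at run time."""
    name = "gridftp"
    def __init__(self, reachable: bool = False):
        self.reachable = reachable
    def available(self) -> bool:
        return self.reachable
    def copy(self, src: str, dst: str) -> str:
        return f"{self.name}: copied {src} -> {dst}"

def gat_copy(src: str, dst: str, backends) -> str:
    """One call for the application; the toolkit picks the first usable backend."""
    for backend in backends:
        if backend.available():
            return backend.copy(src, dst)
    raise RuntimeError("no Grid resource available")

# The application never names a backend; the toolkit falls back automatically.
result = gat_copy("in.dat", "out.dat", [GridFTPCopy(reachable=False), LocalCopy()])
```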

  • "Growing Up With Lucy"
    Slashdot (01/13/04); Wilcox, Sue

    The goal of researcher Steve Grand is to build an android that can develop a mammal-like intelligence. His pursuit of this objective thus far is detailed in his book, "Growing Up With Lucy," in which he discusses, among other things, the neurological principles behind a machine designed to resemble an orangutan in form and function. Lucy, as the robot is called, is only capable of emitting grunts and picking up a banana upon request, but she is one of the most sophisticated research robots in the world. Even if Grand runs out of money before his dreams can be fully realized, the knowledge about human brain functions he has accrued in the course of his experiments could be spun off into other useful technological advances. A key phase in Lucy's development is software that replicates neurological systems, which Grand outlines in his book; central to this vision is his belief that the human brain is composed of "general purpose building blocks," each of which is modified from a fundamental design. The scientist argues that "being of one mind does not imply that all the information passes through a single controlling structure," while free will may actually be an illusion. Grand also strongly believes that the development of intelligence hinges upon emotion and imagination, with the latter being essential to how the brain simulates and anticipates the world's behavior so that people can respond accordingly. Grand further theorizes that dreaming is the brain's way of sustaining its linkages and infrastructure during sleep, and subscribes to a model of a cortical map that generates its own thoughts in the absence of outside influences, which explains daydreaming and internal monologues. The author explains that his experiments with Lucy seek to bypass what he considers to be a serious impediment to the progress of current artificial intelligence research.
    Click Here to View Full Article

  • "Is That Customer Service Rep Real or Virtual?"
    Investor's Business Daily (01/13/04) P. A7; Krey, Michael

    Virtual humans (V-humans), which are currently employed as Web site and call-center customer service reps, are computer programs with a human-like presence, either visually (with a face and body) or simply audibly (a disembodied voice). The catch is that most V-humans are boring, and consultant and author Peter Plantec wants to make such computerized agents interesting and more interactive by giving them personalities. He believes V-humans, which work 24/7, do not draw a salary, and can be more knowledgeable than average customer service reps, will revolutionize business operations; in fact, he says studies indicate that V-humans are capable of answering roughly 60 percent of all customer questions. Plantec notes that most companies usually select stiff, uninvolving V-humans that leave people cold because they are less expensive to develop than more engaging V-humans. He adds that V-humans can make much more efficient educators, since they can adjust to students' learning styles on the spur of the moment. He thinks such a trend is inevitable, since V-human teachers are a compelling economic investment for school districts in tough financial straits. What is keeping V-humans from reaching their full potential is artificial intelligence researchers' lack of understanding of the psychological aspects of human-machine interaction. But the technology also has a dark side: Plantec does not dismiss the possibility that V-humans could be employed by hackers and other miscreants for more nefarious purposes.

  • "Random Acts of Spamness"
    Wired News (01/13/04); Delio, Michelle

    Inserting gibberish in junk email in an attempt to thwart antispam filters--especially those that employ Bayesian analysis--has become more and more commonplace among spammers, though many experts agree that this technique is more likely to backfire. Bayesian filters determine whether each incoming message is spam by analyzing the email's content, and they are adaptable to user requirements in that they scan all mail to ascertain what terminology is likely to show up in permitted email and what is not. Spammers are tossing in strings of hundreds of words not common to email sales pitches in the hopes that the filters will misinterpret them as indications of personal correspondence, and also so they might degrade the filters' checklists by forcing them to incorrectly label innocuous words as spam cues. Spamhaus' Steve Linford and SpamBayes developer Anthony Baxter strongly doubt that a well-trained Bayesian filter can be fooled by such deception; Baxter notes that spam messages must contain a lot of "good" words to circumvent the filter, but such good words often vary from person to person. Linford adds that random gibberish, which is known as a hash buster, is a telltale sign of spam. Meanwhile, Baxter points out that spammers cannot put too much gibberish in the message because they still need room for their sales pitch, which is impossible to disguise. Some spammers mask hash busters from the naked eye by formatting them in white text on a white background, but filters can be trained to see through such tricks as well. "You just train your Bayesian filters to look for the presence of white noise, and treat that as a sure sign that the message is spam," says Outblaze's Suresh Ramasubramanian.
    Click Here to View Full Article
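    The experts' skepticism falls out of the arithmetic of Bayesian combining: words the filter has never seen (or sees equally in good and bad mail) carry a probability near 0.5, which multiplies both the spam and non-spam products equally and cancels out, while the sales-pitch words still give the message away. A toy sketch of Graham-style probability combining; the per-word probabilities here are invented for illustration:

```python
import math

def combined_spam_score(word_probs):
    """Naive Bayes combination: P = prod(p) / (prod(p) + prod(1 - p))."""
    # Work in log space to avoid floating-point underflow on long messages.
    log_spam = sum(math.log(p) for p in word_probs)
    log_ham = sum(math.log(1.0 - p) for p in word_probs)
    return 1.0 / (1.0 + math.exp(log_ham - log_spam))

pitch = [0.99, 0.97, 0.95]   # words typical of a sales pitch
gibberish = [0.5] * 200      # hash-buster words the filter treats as neutral

# Neutral words multiply both products by the same factor, so the score
# barely moves: the pitch alone still marks the message as spam.
score_plain = combined_spam_score(pitch)
score_padded = combined_spam_score(pitch + gibberish)
```

    Real filters such as SpamBayes use more refined combining formulas, but the cancellation effect is the same, which is why padding a pitch with random words tends not to help the spammer.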

  • "The Roots of Failure in Software Development Management"
    Computerworld (01/09/04); Walton, Bill

    Software development has been compared to many things because managers cannot adequately explain why they use their current approach without relying on some abstract analog, writes consultant Bill Walton. In the mid-1950s, SAGE air-defense system software developer Herbert Benington said his pioneering software development group used hardware engineering expertise as the basis for their efforts. To better understand and define software development production, it is helpful to compare it to hardware production; likening software development to other activities such as gardening, writing a novel, or even the game Kerplunk is not useful. At the simplest level, the models of production for both hardware and software are simply defined by the product, design and development, and making stages. At the making stage of hardware production, parts are assembled to create the final product. There is no analogous component for parts in software production, except for precompiled DLLs--instead, source code components are assembled in the design stage, while the making stage in software production entails copying code to servers or onto CDs. Every other aspect of hardware and software production is basically the same: In software development, source code statements exist as part designs within the framework/algorithm, or product design; compiling and linking is akin to the making stage in hardware production. Given the evident similarities between hardware and software production, it is useful to ask hardware manufacturers whether their company's IT projects use the same approach as their mainline product production.
    Click Here to View Full Article

  • "Is the Tide Turning in Battle Against Hackers?"
    IT Management (01/04); Robb, Drew

    Although the Internet and computer systems appear to be under constant assault by ever craftier hackers, security safeguards are progressing faster, as demonstrated by a documented slowdown in 2003 of the exponential damage increases seen in previous years. According to a joint Computer Security Institute/FBI report, the percentage of companies experiencing unauthorized computer use fell from 60 percent in 2002 to 56 percent in 2003; furthermore, significant security incident totals remained about the same, but financial losses reported by respondents fell from $455 million in 2002 to $202 million in 2003. The greatest losses in 2003 were attributed to theft of proprietary information, but damages were again significantly lower than in the previous year. However, a decline in the number of organizations experiencing denial-of-service attacks was offset by an increase in damages, from $18 million in 2002 to $66 million in 2003; the third-biggest threat was viruses, whose collective damage last year totaled $27 million, almost half the previous year's total. Symantec's most recent Internet Security Threat Report indicates significant growth in the number of blended threats and a shrinking interval between the discovery of vulnerabilities and the launch of exploits. Odds are more favorable toward network security right now because companies are taking threats more seriously, according to the results of a Business Software Alliance/Information Systems Security Association poll released last December. Seventy-eight percent of respondents claimed their companies were better fortified against major attacks than they were 12 months earlier. However, these positive reports are not an excuse for companies to relax their vigilance or their deployment of cyber-defenses, given the increasing sophistication and speed of hacks, as well as indications that such attacks are the work of organized groups sponsored by enemy governments.
    Click Here to View Full Article

  • "Carnegie Mellon University Technology Will Help Prepare Students for High-Stakes Tests"
    AScribe Newswire (01/12/04)

    The U.S. Department of Education has granted $1.4 million to Carnegie Mellon University, Carnegie Learning, and Worcester Polytechnic Institute to test the Assistment system, a Web-based tutoring program designed to help middle-school students prep for rigorous standardized mathematics tests without cutting into instruction time. The system quickly predicts how a student will score on such tests to help teachers recognize gaps in students' knowledge, adjust their lessons accordingly, and provide individualized tutoring for each student. Kenneth R. Koedinger, principal investigator and associate professor of human-computer interaction in Carnegie Mellon's School of Computer Science, says the system will not be used strictly for evaluation, but will help students learn. He declares, "It's not just going to say 'these kids aren't learning fractions.' It will teach them fractions." An important model for the system is Carnegie Mellon's highly successful Cognitive Tutor, a computer-based educational program currently used in 1,500 schools across the United States. The Assistment system is slated to be tested by eighth-grade teachers in the Worcester, Mass., Public School District this spring as they ready pupils for the Massachusetts Comprehensive Assessment System. Koedinger says that district was chosen partly because of its high percentage of poorly performing minority and low-income students, and points out that the Assistment system can be modified for use in other states. With the Department of Education grant, researchers will be able to draw a comparison between the performance of students who use the Assistment system and those who do not.
    Click Here to View Full Article

  • "The Next Big Thing for Wireless?"
    Business Week (01/19/04) No. 3866, P. 78; Reinhardt, Andy

    IT vendors and network service providers are backing a new wireless technology that can transmit data at up to 70 Mbps for as far as 30 miles. IEEE 802.16 or WiMax products are under development at Intel, Nokia, Alcatel, and a host of other firms that see tremendous opportunity in so-called "broadband wireless." Among other things, WiMax would be able to reach homes and offices that are still not wired for broadband Internet; the technology would also change the service provider market by adding an entirely new infrastructure so that companies do not have to lease infrastructure from one another. Despite the benefits, WiMax adoption is still contingent on continued vendor enthusiasm and rapidly declining equipment prices: Winstar and Teligent failed during the dot-com era because each subscriber set-up cost as much as $1,200. Initial WiMax deployments, as an alternative to DSL or cable broadband connections, would probably cost about $400, with prices falling rapidly as standardization takes effect; Intel's incorporation of WiMax in a laptop processor, similar to its Wi-Fi-enabled Centrino chips, would push the price of set-up down much further. Analysts expect an Intel WiMax chip in laptops by 2006, while Pyramid Research projects that WiMax and other broadband wireless services could generate as much as $2.1 billion annually by 2008. A number of network service providers are positioning themselves for the new market, including Nextel Communications, which has recently been buying up broadband wireless licenses around the country. In developing nations, WiMax poses an even greater opportunity since it would allow previously unwired areas to quickly enable broadband services; China Unicom and Serbia's Telekom Srbija are already installing wireless broadband equipment, and Pyramid analyst John Yunker says WiMax could close the digital divide.
    Click Here to View Full Article

  • "Displays Go For Sharper Image"
    Computerworld (01/12/04) Vol. 32, No. 2, P. 28; Robb, Drew

    Organic light-emitting diodes (OLEDs) and 3D display technology offer improved images, but these new technologies are no threat to the dominance of CRT and liquid-crystal display (LCD) screens, at least for now. OLEDs, which have just begun to show up in small electronic devices and are characterized by slow market penetration, are nevertheless expected to bring in $3 billion by 2009, says analyst Kimberly Allen. Unlike LCDs, OLEDs directly emit light with no need of backlighting, which allows them to be thin and capable of being employed in flexible displays; in addition to being theoretically more energy-efficient, they also generate sharper and brighter colors than both CRTs and LCDs, and can reproduce motion with no smearing. Experts vary in their predictions as to when, if ever, OLEDs will break out of their small device niche market into the mainstream, but limiting factors include short lifetimes due to chemical instability and significantly higher cost than similar-size LCDs. 3D display technologies such as Sharp's Actius RD3D notebook boast two diode sets compared to a traditional active-matrix display's one: an array arranged along a wire grid behind the LCD glass to support pixels, and a second matrix known as a parallax barrier. The parallax barrier is invisible while the device is in 2D mode, but switching to 3D mode causes the switching LCD to send alternate pixels to each of the user's eyes, generating a 3D effect without the need for special glasses. A 3D vision display such as the RD3D costs only slightly more than a conventional 2D LCD because the manufacturing processes are similar. Drawbacks include lower resolution and brightness in 3D mode and limited optimum viewing angles, but these are outweighed by the 3D effect, according to developer Robin Nixon. Ian Matthew of Sharp Systems of America believes RD3D will initially be used by the automotive, pharmaceutical, chemical, and architectural industries to enhance their design work with virtual reality systems.
    Click Here to View Full Article

  • "Ultra Wideband's Destiny Up in the Air"
    Network World (01/12/04) Vol. 21, No. 2, P. 1; Cox, John

    Progress on a high-speed, physical-layer standard for wireless multimedia transfer has stalled in an IEEE task group; the technology will allow for high-speed wireless connections akin to wired USB links. Participants are in an approximately 60-40 deadlock over two competing proposals, with neither side likely to grant the other the necessary 75 percent majority. The IEEE 802.15.3a Task Group narrowed a list of 23 candidates down to just two ultra wideband (UWB) wireless proposals that differ in the modulation techniques used to spread radio signals. The 36-member MultiBand OFDM Alliance (MBOA) backs a proposal using orthogonal frequency-division multiplexing (OFDM) that would split the spectrum up and allow the use of CMOS silicon technology, according to MBOA co-founder and Staccato Communications vice president Mark Bowles. OFDM is touted for its efficiency in capturing radio energy and low interference. The other proposal, Direct Sequencing, has about 40 percent support in the IEEE 3a group and was created by XtremeSpectrum, now owned by Motorola. At the Consumer Electronics Show, MBOA member Samsung displayed UWB streaming of HDTV signals using the XtremeSpectrum technology. Participant companies are forging ahead with UWB products despite the lack of standards progress; a standard could emerge from marketplace competition, some observers note. Intel's Stephen Wood says the company will begin building on an MBOA UWB specification to be published next month, while Motorola's John Barr says there are customers who need UWB technology now. Barr says no MBOA members have produced working products for customer testing. IEEE veteran and 3a Task Group chair Bob Heile says the six-month-long dispute is not significant compared to other standards battles, such as the one over 802.11g, and that there is hardly any market currently since the United States is the only country to support free use of UWB.
    Click Here to View Full Article

  • "Get Mean, Go Green"
    Network Magazine (01/04) Vol. 19, No. 1, P. 37; Greenfield, David

    A team led by Michael Frank at the University of Florida's College of Engineering is using a Semiconductor Research Corporation (SRC) grant to develop a reversible computing power supply as a step toward an adiabatic computer system. In its basic form, reversible computing is computing that maintains almost constant entropy; in other words, no information is thrown out, or at least the amount of data that is ever deleted is reduced. Switches in node voltage from positive to negative yield heat, but Frank's power supply gradually pushes the charge from one node to the next, so only a minuscule amount of energy is lost on each oscillator transition. New chips are another essential ingredient of adiabatic computers, and such chips were designed as far as the proof-of-concept phase at MIT's Reversible Computing group, of which Frank was a member. The corporate world's resistance to the redesign of hardware, software, and development tools in order to boost computing performance may ultimately break down as it becomes increasingly difficult to dissipate produced heat as chipmakers boost the number of gates per chip--conventional computer system performance is expected to hit a wall by 2030, while Frank predicts that the usefulness of current chips could end even sooner thanks to other factors such as thermal noise and leakage. Quantum computers, which are also reversible computers, are being probed as an alternative computing method, but Frank notes that certain puzzles about reversible computing must be solved before a quantum computer can be developed. Adiabatic systems offer an order-of-magnitude improvement that would allow enterprise network vendors to bundle their switching and routing functions with more advanced computer operations without causing packet delay; other benefits of adiabatic systems include improved battery life, and portable devices are well suited to adiabatic techniques. But such energy-efficient, non-stop computing systems can only be realized by trading speed for power: lowering the clock speed by a factor of two reduces power by at least a factor of four.
    Click Here to View Full Article
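    The clock-speed/power tradeoff in the closing sentence follows from the standard adiabatic charging relation: ramping a node's charge gradually over a time T dissipates roughly (RC/T)·CV² per transition instead of the conventional ½CV², so energy per operation scales with clock frequency and total power scales with its square. A back-of-the-envelope check; the component values below are illustrative, not from Frank's design:

```python
def conventional_power(f_hz, c_farads, v_volts):
    """Abrupt CMOS switching dissipates ~0.5*C*V^2 per transition."""
    return f_hz * 0.5 * c_farads * v_volts**2

def adiabatic_power(f_hz, c_farads, v_volts, r_ohms):
    """Gradual charging over T = 1/f dissipates ~(RC/T)*C*V^2 per transition."""
    energy_per_op = (r_ohms * c_farads * f_hz) * c_farads * v_volts**2
    return f_hz * energy_per_op

# Halving the clock halves the adiabatic energy per operation AND halves the
# operations per second, so power falls by a factor of four, as the article notes.
f, C, V, R = 1e9, 1e-15, 1.0, 1e3
ratio = adiabatic_power(f / 2, C, V, R) / adiabatic_power(f, C, V, R)
```

    With these sample values the adiabatic supply also dissipates far less than abrupt switching at the same clock, which is the draw of the approach in the first place.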

  • "Why Machines Should Fear"
    Scientific American (01/04) Vol. 290, No. 1, P. 37; Gibbs, W. Wayt

    Northwestern University cognitive scientist Donald A. Norman, author of "Emotional Design," sees value in investing computers and software with emotional or affective systems so that the machines can be more reliable and effective. He notes that people who cannot display emotions due to brain damage are decision-impaired, which leads to the conclusion that emotions, like cognition, are used to process information. Unlike cognition's time-consuming process of understanding and interpreting the world, emotion's job is to make quick judgments of a thing's inherent value, good or bad. Norman points out that a feeling of anxiety leads to deeper concentration on solving problems, while happiness boosts the creative process. However, applying that theory to computers is a tricky business: Former Sun Microsystems user-interface expert Jakob Nielsen cautions that it could be open to misinterpretation by designers, who could see it as an excuse to prioritize form over function. Norman also thinks that affective systems should be used not just to evoke emotional responses in users, but to produce emotions in the machines as well, a notion that is at odds with many of his contemporaries in the field of human-computer interaction. Stanford University researcher B.J. Fogg reports that giving computers faux-emotions may improve human-computer interaction, but it raises an ethical dilemma. Norman explains that machines should be imbued with "weak methods" such as boredom, curiosity, and fear so that the devices can respectively pursue productive goals, explore unfamiliar environments, and prevent themselves from being damaged or getting into accidents.
    Click Here to View Full Article

  • "Back to the Future"
    The World in 2004 (01/04); Dyson, James

    Future domestic appliances will be networked to provide more efficient and convenient service to owners, as well as cost savings and better information for manufacturers, according to appliance designer James Dyson. Political trends will lead to more congested urban areas where living space is at a premium, making low-cost, energy-efficient homes necessary. The realization of this future is not a networked refrigerator, which is an asinine invention that brings little benefit to the consumer and no value to the manufacturer; instead, interoperable networking components integrated into the core of every appliance would begin to offer real benefits to consumers and vendors. Dyson's self-named appliance firm is developing an X020 motor that will be able to communicate directly with the Dyson service center via binary code transmitted over the phone: Instead of reporting name, address, and appliance serial number, customers would be able to press a button on their machine and put the receiver up to a speaker, allowing the string of beeps to communicate the appliance's serial number, diagnostics, and usage information. This basic type of connectivity could be upgraded in the future to allow for remote software updates, such as gentler wool wash cycles for washing machines. An even more advanced stage of appliance connectivity would allow for advanced diagnostics: These communication links would be best set up using a home wireless network and central domestic server, possibly the home PC. It will be necessary for appliance manufacturers to join together on uniform standards for connectivity so that companies can serve customers' holistic appliance needs. Dyson predicts homes in the future will not have many separate rooms but will be reminiscent of lofts where people can more easily see each other and communicate; a "central machine pod" providing all appliance needs would serve this home.
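    The beep-over-the-phone report Dyson describes amounts to packing the serial number, a diagnostic code, and usage data into one short frame that can survive a noisy audio channel. The frame layout, field widths, and checksum below are invented for illustration; the article does not specify the X020's actual encoding:

```python
# Hypothetical sketch of an audio-reportable diagnostic frame: serial
# number, fault code, and usage hours packed into a digit string with a
# simple checksum. Field widths and layout are invented for illustration.
def encode_report(serial: int, fault_code: int, usage_hours: int) -> str:
    # Fixed-width decimal fields: 8-digit serial, 3-digit fault, 5-digit hours.
    body = f"{serial:08d}{fault_code:03d}{usage_hours:05d}"
    checksum = sum(int(d) for d in body) % 10  # one check digit
    return body + str(checksum)

def decode_report(frame: str):
    body, check = frame[:-1], int(frame[-1])
    if sum(int(d) for d in body) % 10 != check:
        raise ValueError("corrupted frame")
    return int(body[:8]), int(body[8:11]), int(body[11:16])

frame = encode_report(serial=12345678, fault_code=42, usage_hours=1500)
print(frame)
assert decode_report(frame) == (12345678, 42, 1500)
```

In practice each digit of such a frame would be sent as a tone (much like DTMF dialing), which is why a string of beeps played into a phone receiver suffices to carry it.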
    Click Here to View Full Article

  • "The Web's New Currency"
    Technology Review (01/04) Vol. 106, No. 10, P. 28; Huang, Gregory T.

    Firms such as Peppercoin are expecting to hit it big by offering consumers secure, reliable electronic micropayment systems for low-cost Web content. MIT computer scientists and Peppercoin founders Ron Rivest and Silvio Micali have devised an innovative, highly efficient micropayment scheme designed to cut the overhead cost of electronic payments by using encryption and statistics to avoid charging vendors a fee for every sale of a particular item. The Peppercoin technology processes only a statistical sample of transactions, so users' credit card accounts are billed roughly once every 100 transactions. In beta tests, both the purchaser's and the vendor's computers shield interactions from intruders with special encryption software, while each transaction carries an encrypted serial number indicating how many purchases the customer has made over time, for how much money, and from whom. Rivest explains that the statistical method promotes efficiency, while cryptography secures the random selection process and maintains its honesty. But technology is only one ingredient of Peppercoin's success: Its business model may actually be its biggest strength. Unlike online person-to-person payment firms, micropayment companies allow e-tailers to sell inexpensive digital items rather than physical products, and their target market is much smaller than, say, PayPal's customer base of eBay subscribers. Peppercoin and other micropayment startups have adopted a strategy in which they collaborate with Web merchants to determine the kind of content to be sold, and build a brand name with which to court bigger distributors. Skeptics doubt that micropayment systems will be adopted widely, given that they are mainly restricted to niche markets such as MP3 listeners and Web comics enthusiasts; but the spread of low-cost digital content, consumers' readiness to purchase such content, and the growing number of e-tailers signing contracts with micropayment companies are hopeful signs.
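    The statistical sampling the article describes can be sketched in a few lines. The real Peppercoin scheme uses cryptography to make the selection unpredictable to the parties yet verifiably fair; this toy version uses an ordinary seeded random draw just to show why the expected charges balance out while processing fees shrink:

```python
# Toy model of lottery-style micropayment aggregation: instead of
# processing every ten-cent sale, process ~1 in 100 and charge 100x
# the price when selected, so expected revenue is unchanged.
# Prices and rates here are illustrative, not Peppercoin's actual terms.
import random

PRICE = 0.10        # price of one download, dollars
SAMPLE_RATE = 100   # roughly one transaction in 100 is actually processed

def process(n_transactions, rng):
    """Return the total amount actually charged through the card network."""
    charged = 0.0
    for _ in range(n_transactions):
        if rng.randrange(SAMPLE_RATE) == 0:   # selected with prob. 1/100
            charged += PRICE * SAMPLE_RATE    # one aggregated charge
    return charged

rng = random.Random(0)
total = process(1_000_000, rng)
# Expected charge is 1_000_000 * 0.10 = 100,000 dollars; the sampled
# total converges to this while incurring ~1/100 of the per-transaction fees.
print(total)
```

This is why the approach cuts overhead: the merchant pays the card network's fixed per-transaction fee about a hundredth as often, while in expectation collecting the same revenue, and the cryptographic layer keeps either side from gaming which transactions get selected.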
    Click Here to View Full Article