       HP is the premier source for computing services, products and solutions. Responding to customers' requirements for quality and reliability at aggressive prices, HP offers performance-packed products and comprehensive services.


ACM TechNews is intended as an objective news digest for busy IT Professionals. Views expressed are not necessarily those of either HP or ACM. To send comments, please write to [email protected].
Volume 5, Issue 572:  Monday, November 17, 2003

  • "Fast Track for Science Data"
    Wired News (11/17/03); Kahney, Leander

    The first leg of the National LambdaRail (NLR), a superfast network for scientific research that will ultimately extend across 10,000 miles of unused fiber-optic cable, will go live on Nov. 17 as Chicago's TeraGrid facility is connected to the Pittsburgh Supercomputing Center. NLR's purpose is to lash together hundreds of U.S. research facilities into a dedicated, high-speed optical network so that these institutions can easily and quickly share the massive amounts of data generated by complex scientific experiments. "The amounts of calculation and the quantities of information that can be stored, transmitted and used are exploding at a stunning, almost disruptive rate," stated a January report on cyberinfrastructure from the National Science Foundation. "Powerful data-mining techniques operating across huge sets of multidimensional data open new approaches to discovery." The report went on to say that global networks can connect these resources and sustain a greater degree of interactivity and collaboration, and NLR board member Ron Johnson believes the NLR is a critical first step toward such networks. Without making any promises, Johnson stresses that NLR could emerge as the next-generation Internet, while Carnegie Mellon University computer science professor David Farber notes that projects such as NLR will benefit the computer and communications industry, not to mention the industrial base of academic research. One of the first objectives will be a system-wide deployment of 10 Gigabit Ethernet, which NLR CEO Tom West says will allow researchers to easily plug their computers into the network; Johnson adds that "extreme multimedia" such as "real telepresence" will be supported by the NLR. The NLR is expected to be fully deployed by the end of next year.
    Click Here to View Full Article

  • "DARPA to Overhaul Supercomputing Benchmarks by 2006"
    EE Times (11/14/03); Merritt, Rick

    The Defense Advanced Research Projects Agency (DARPA) is readying a new set of supercomputer benchmarks intended to give a more comprehensive and nuanced measure of supercomputer performance, as well as to allow designers to experiment with novel approaches. DARPA's High Productivity Computing Systems (HPCS) program has already set supercomputer makers Sun, Cray, and IBM racing to build the next generation of supercomputing hardware infrastructure, and is now working on the benchmarks that would test those computers. The new benchmarking effort is emblematic of recent changes in supercomputer design. After NEC's proprietary Earth Simulator in Japan took the top spot, another big shift was heralded by the 10.28-teraflop Apple G5 cluster built at Virginia Tech; the system cost just $5.2 million to build, compared with about $400 million for the Earth Simulator. The system is expected to claim the third-fastest spot on the upcoming Top 500 list to be announced at the ACM's Supercomputing (SC2003) conference later this week in Phoenix, and it also marks the first entry of an Apple-based system on the supercomputing list, according to Top 500 list administrator and University of Tennessee professor Jack Dongarra. SC2003 Chairman and Lawrence Livermore's Institute for Scientific Computing Research deputy director Jim McGraw says the last 18 months have seen tremendous challenges to incumbent supercomputing design architecture. He says no architecture is a clear winner, with each system having different strengths and weaknesses. For example, clusters do not work well on unpredictable global memory-address patterns, irregular meshes in some simulations, and large graph problems, according to an HPCS source who asked to remain anonymous (a toy illustration of such memory-access behavior follows below).
    Click Here to View Full Article
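
    The article does not detail what the new benchmarks will measure, but the cluster weakness cited above is easy to demonstrate. The hypothetical C micro-benchmark below times sequential versus pseudo-random updates to a large table; random updates defeat processor caches on one machine, and on a cluster they would translate into fine-grained remote traffic.

      /* Hypothetical micro-benchmark: sequential vs. random table
         updates. Not part of any HPCS suite; for illustration only. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      #define N (1 << 24)                    /* 16M entries, ~128 MB */

      int main(void)
      {
          long *table = malloc(N * sizeof *table);
          if (!table) return 1;

          clock_t t0 = clock();
          for (long i = 0; i < N; i++)       /* sequential: prefetch-friendly */
              table[i] = i;
          clock_t t1 = clock();

          unsigned long x = 88172645463325252UL;  /* cheap PRNG state */
          for (long i = 0; i < N; i++) {     /* random: a cache miss per update */
              x = x * 6364136223846793005UL + 1442695040888963407UL;
              table[x % N] ^= i;
          }
          clock_t t2 = clock();

          printf("sequential: %.2fs  random: %.2fs\n",
                 (double)(t1 - t0) / CLOCKS_PER_SEC,
                 (double)(t2 - t1) / CLOCKS_PER_SEC);
          free(table);
          return 0;
      }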

  • "Encryption Revolution: The Tantalizing Promise of 'Unbreakable' Codes"
    Associated Press (11/16/03); Bergstein, Brian

    Supposedly uncrackable quantum encryption has begun to emerge in the wake of two decades of research, as signified by a new system MagiQ Technologies began to sell commercially this month. MagiQ CEO Bob Gelfond says the new system, dubbed Navajo, offers a major advantage over current encryption schemes: In addition to using individual photons to transfer encryption keys--which are highly sensitive to interference or monitoring attempts--Navajo changes the keys 10 times every second, making the keys useless to anyone who acquires them. Navajo consists of black boxes that produce and read quantum-encrypted signals over a fiber-optic line across a maximum distance of 70 miles. Similar efforts are underway in the United States, Europe, and China: Switzerland's id Quantique has a Navajo-like system in the pilot phase; IBM researchers are investigating how to shrink quantum systems so they can mesh more smoothly with existing computing and communications networks; and Britain's QinetiQ and the Los Alamos National Laboratory are exploring the wireless transmission of quantum keys. Quantum encryption rests on Heisenberg's Uncertainty Principle, which decrees that subatomic particles exist in multiple potential states simultaneously until something interacts with them; an eavesdropper therefore unavoidably disturbs the photons being observed, revealing the intrusion (a toy simulation of the key-sifting step appears below). Researchers expect to be able to harness these states and interactions to build a quantum computer, which would boast exponentially more power than current supercomputers; Peter Shor of AT&T Labs demonstrated in the 1990s that quantum computers would be able to decrypt any code--except one produced via quantum cryptography.
    Click Here to View Full Article
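
    The article does not name the protocol behind systems like Navajo; the best-known quantum key distribution scheme is BB84, assumed here purely for illustration. The toy C sketch below simulates only the classical "sifting" step: sender and receiver pick measurement bases at random and keep just the bit positions where the bases happen to match (with an eavesdropper present, mismatches would also surface as detectable errors).

      /* Toy, purely classical simulation of BB84-style key sifting.
         Assumes an ideal channel: no eavesdropper, no noise. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      #define N 16                          /* photons sent */

      int main(void)
      {
          srand((unsigned)time(NULL));
          int key[N];
          int kept = 0;

          for (int i = 0; i < N; i++) {
              int bit     = rand() % 2;     /* sender's random key bit   */
              int basis_a = rand() % 2;     /* sender's encoding basis   */
              int basis_b = rand() % 2;     /* receiver's reading basis  */
              /* Matching bases recover the bit exactly; mismatched bases
                 yield a random result, so those positions are dropped. */
              if (basis_a == basis_b)
                  key[kept++] = bit;
          }

          printf("sifted key (%d of %d bits): ", kept, N);
          for (int i = 0; i < kept; i++)
              printf("%d", key[i]);
          printf("\n");
          return 0;
      }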

  • "Gadgets Help Baby Boomers Manage Old Age"
    USA Today (11/17/03) P. 1A; Bayles, Fred

    As the baby boomer generation makes the transition to old age over the next three decades, researchers and marketers are developing and testing gadgets designed to ease their autumn years. The large share of the population that aged baby boomers will represent by 2030, not to mention the segment's anticipated wealth, is fueling a rush to create and market such devices; meanwhile, the way the elderly grow markedly frailer with each five-year span past age 65, together with growing cutbacks in elderly care, establishes a need for such technology. Home-based health monitoring is a major area of research and development: Ideas being investigated include virtual pets that remind people to take their medication and "die" if their owners fail to keep up their regimen. Another concept is a health card that stores and updates its owner's medical data so that any physician or health care provider can instantly access that information; MIT's AgeLab thinks such a device could also be used to help shoppers select foods based on their medical histories. Some transportation technologies designed to help the aged are already on the market while others await testing: examples of the former include Cadillac and Lincoln cars with optional night-vision windshield displays, and examples of the latter include vehicles that can automatically parallel park and "smart" intersections that use sensors and radio transmitters to warn drivers of stop signs, red lights, or pedestrians. Home safety is the biggest area of R&D concentration: Efforts in this segment include a sensor-saturated environment at the University of Florida that tracks the movements and habits of residents and alerts family and caretakers when problems crop up, and a smart phone through which residents can control doors, windows, and temperature by voice command. "We need answers today for what faces [the baby boomer] generation because we are running out of time to prepare for what's coming," declares AgeLab director Joseph Coughlin.
    Click Here to View Full Article

  • "More Consumers Reach Out to Touch the Screen"
    New York Times (11/17/03) P. A1; Harmon, Amy

    Computers are taking over a growing share of day-to-day human interactions as consumers bypass the cashier or ticket agent and pay directly at a kiosk. These touch-screen machines are taking over at gas pumps, airline ticket counters, subway and train stations, and grocery stores, and in the financial sector as ATMs, as the public's faith in technology grows. The machines are also becoming much easier to use, with bright color touch-screens replacing monochrome displays and grimy keypads. Research group International Data predicts the number of retail self-checkout systems will double by the end of this year. The machines, which at their core are customized PCs, herald the "roboticization" of the service sector, say analysts. Stanford University communications professor Clifford Nass says customer surveys show people now trust ATMs at their banks to be more accurate than tellers, the opposite of what surveys showed in previous years. Besides speed and convenience, experts also point to a more worrying driver of self-service: consumers want to avoid interactions with sometimes disgruntled workers. The result could be to shield consumers from having to think about how low-wage workers are getting by, and to foster isolation. Another worry is that these machines are so flexible and cost-effective that they are displacing a tremendous number of human workers across a wide variety of industries. Although businesses that deploy self-service machines insist they are responding to customer demand for better service, the financial advantage of a $10,000 kiosk over a $20,000 cashier is unmistakable; experts also note that self-service machines differ from earlier mechanistic assaults on human jobs in that they are widely applicable and the first able to take over functions requiring cognitive skills.
    Click Here to View Full Article
    (Access to this site is free; however, first-time visitors must register.)

  • "Software Speeds Calculations on Supercomputers"
    Newswise (11/14/03)

    Software developed by Dhabaleswar Panda of Ohio State University and Pete Wyckoff of the Ohio Supercomputer Center accelerates supercomputer calculations by establishing compatibility between message passing interface (MPI) software and InfiniBand technology. The open-source MPI for InfiniBand on VAPI Layer (MVAPICH) software has been downloaded by over 65 organizations worldwide to develop applications: Sandia National Laboratories and Los Alamos National Laboratory have employed MVAPICH to power a 128-node supercomputer and a 256-node supercomputer, respectively. Panda notes that supercomputers were once built as highly expensive, large-scale mainframes, but a cluster scheme based on desktop nodes has become more commonplace; however, such cluster configurations have trouble supporting complex scientific visualizations, which is where MVAPICH can help. "At some point, adding nodes to a cluster doesn't make the calculations go any faster, because it introduces communication and synchronization overheads, and researchers have to rely on software to manage communication between nodes effectively," says Panda. "MVAPICH takes that software a step further by connecting it with the emerging InfiniBand network technology." The MVAPICH software supports a Macintosh-based supercomputer from Virginia Tech that is expected to be listed as the third-fastest supercomputer in the world at a Nov. 16 conference. Meanwhile, Intel and Mellanox Technologies are using MVAPICH to boost calculations for their TeraFlop-Off-the-Shelf (TOTS) supercomputer, which Panda thinks signifies a new age of commodity systems that will enable research labs and small enterprises to take advantage of supercomputing technology. The National Science Foundation, the Energy Department, and Sandia provided primary funding for MVAPICH research. (A minimal MPI sketch follows below.)
    Click Here to View Full Article
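
    Because MVAPICH implements the standard MPI interface, application code written for an ordinary Ethernet cluster runs unchanged over InfiniBand. The sketch below is a minimal standard MPI program (nothing in it is MVAPICH-specific); the compile and launch commands, mpicc and mpirun -np 2, are the usual wrappers and may differ by installation.

      /* Minimal standard MPI program: rank 0 sends an integer to rank 1.
         The same source links against MVAPICH or any other MPI library. */
      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv)
      {
          int rank, value = 0;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          if (rank == 0) {
              value = 42;                   /* data produced on node 0 */
              MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
          } else if (rank == 1) {
              MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                       MPI_STATUS_IGNORE);
              printf("rank 1 received %d\n", value);
          }

          MPI_Finalize();
          return 0;
      }

    Linked against MVAPICH, the same MPI_Send travels over InfiniBand's low-latency transport rather than TCP, which is where the speedup described above comes from.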

  • "Spammers Target Instant Message Users"
    TechNews.com (11/13/03); McGuire, David

    Unsolicited commercial messages are not restricted to email or pop-up ads; spammers are now exploiting instant messaging to annoy users with an even more intrusive form of advertising called "spim." Spim is more aggravating because it can appear at any time, and it carries more risk of embarrassment because, unlike email, instant messages cannot be checked at the user's leisure. Although unmasking the culprits behind spim and the methods they use is no easy task, users can take solace in the fact that IM spammers will find it difficult to send such messages in bulk, as AOL, Yahoo!, and other companies are already taking action to ensure that spim never becomes as overwhelming as spam. "I don't think IM spam has become anything on the scale of the problem that regular spam is," says AOL's Andrew Weinstein, whose company employs rate limiting and other kinds of spim-blocking measures. Though IM does not produce much revenue directly, companies have a vested interest in curbing unwanted content such as spim, since IM is often regarded as a gateway service that helps attract customers to paid Internet offerings. Patricia Faley of the Direct Marketing Association reports that her organization intends to adopt an IM marketing policy within the next six months, and says there is little interest among established vendors in using IM for marketing purposes. FTC staff attorney Brian Huseman says the commission is keeping tabs on spim, even though consumer complaints so far have been few. Grant Toomey, a representative of CAN-SPAM Act sponsor Sen. Conrad Burns (R-Mont.), says the senator is also tracking the spim problem and may pursue legislation in 2004.
    Click Here to View Full Article

  • "Romanians Become Latest Tech Rivals for Off-Shore Jobs"
    Wall Street Journal (11/17/03) P. B1; Gomes, Lee

    The eastern European nation of Romania is home to perhaps the largest pool of programmers eager for offshore work after India, thanks to the heavy emphasis on engineering and industrialization throughout the country during its Communist regime. Math is still heavily emphasized among Romanian students, and PCs, broadband, and Internet cafes have established a solid foothold. Romanian IT freelancers typically land small-scale jobs such as setting up Web sites for small American companies or assisting with game development, though Romanian firms are attempting to win bigger clients. Romanians have a better chance than Indian or Russian programmers of getting help-desk work from European firms because of their knowledge of European languages. Programming jobs can pay $10 to $20 an hour, far more than the average Romanian wage. Most Romanian programmers cannot be choosy about jobs, since relatively few have acquired high ratings from previous satisfied customers. Programmer Catalin Ionescu of Bucharest says that professionals must gain specialized skills in order to enjoy the luxury of job selection.

  • "ASIMO's Steps Are a Giant Leap for Robotics"
    Seattle Post-Intelligencer (11/14/03); Armstrong, Doree

    One of Honda Motor's advanced humanoid robots will be on display at the Pacific Science Center in Seattle this weekend. The ASIMO (Advanced Step in Innovative Mobility) humanoid robot, which looks like the robots seen in the Steven Spielberg movie "A.I.: Artificial Intelligence," has been on loan in the United States as part of a 15-month tour to educate students about science and robotics. The 4-foot-tall, 115-pound ASIMO, which resembles an astronaut in full gear, moves like a human using technology Honda calls prediction motion control, which allows the robot to shift its balance as it predicts its next move. ASIMO moves at a maximum speed of 1 mile per hour, and can perform 26 types of movements involving its head, shoulders, elbows, wrists, groin, knees, and feet. "Our world is full of stairs and sidewalks and uneven surfaces and doorknobs and cupboards, and if we ever want something to function in that world, it has to be designed like a human is designed," says Stephen Keeney, project leader for the ASIMO North American educational tour. Honda, which has been developing ASIMO for 17 years, sees its humanoid robot finding use in homes in about 10 years. Honda uses voice commands to control ASIMO, but its researchers are working to integrate visual recognition technology into the robot. Keeney notes that "it takes a lot of different sciences to create a humanoid robot...mathematics, physics, computer science, mechanical engineering, electrical engineering, even anatomy or physiology."
    Click Here to View Full Article

  • "IT Does Matter"
    CIO (11/10/03); Tanaszi, Margaret

    Discussion of the value of IT in business needs to consider perceived customer value, writes International Data analyst Margaret Tanaszi: End users may be fluid and demanding, but they will consistently reward companies that provide valuable services. Businesses that view their operations in light of providing perceived customer value will never lack ways to harness technology. The Harvard Business Review article "IT Doesn't Matter" by Nicholas Carr accurately points out that IT tools are now ubiquitous and have become part of the base cost of doing business; however, Tanaszi notes that just because technology is widespread and basically the same across companies does not mean that it is unimportant to business success. Just as Carr now says technology is a commodity and no longer key to competitive advantage, many people once thought the value of information would be lessened by networking and communications technologies, Tanaszi observes. In fact, many companies now employ knowledge management strategies with the understanding that merely having information is not the same as using it and accessing it easily when needed. Businesses that use information in the best ways have a significant competitive advantage. Tanaszi points out that Carr's argument--that many businesses that spend conservatively on IT are successful--wrongly equates spending with results: Companies that spend less on IT but implement it well are going to perform better than companies whose IT budgets are large but whose implementation is sloppy. Tanaszi argues that in order to fully take advantage of IT, businesses need to think beyond how IT works in relation to existing business processes and practices; instead, they should think about how to change those processes and practices to best unlock IT's value--namely, delivering perceived customer value. Companies that treat IT as non-strategic will miss the opportunity to do business better.
    Click Here to View Full Article

  • "China's Internet Revolution"
    Online Journalism Review (11/13/03); Glaser, Mark

    During a recent interview, Xiao Qiang--the former executive director of Human Rights in China and current head of the Berkeley China Internet Project at the University of California at Berkeley's Graduate School of Journalism--addressed Internet use in China. Specifically, Xiao discussed how people in China are using Internet technology to exchange information and news in new ways. The Chinese government recently shut down almost 50 percent of the country's Internet cafes, and nine or more online writers in China have reportedly faced arrest. Xiao reported that from the earliest phases of Internet development in China, the government began monitoring Internet news Web sites, but he predicted that maintaining such tight control will become more difficult as the Internet grows increasingly popular among middle-class urbanites. Commenting on Internet technology, Xiao said that "it's started to have input from the bottom up." Though police units dedicated to the Internet are forming in provinces and cities across China, Xiao said the Internet is still managing to significantly affect Chinese media, with many foreign-based sites providing news to Chinese users, sometimes by roundabout means. "They depend on what I call human proxies, people who send their friends emails in China to get around the firewall," Xiao explained. Bulletin boards and forums are particularly popular channels for such communication, while users also turn to commercial portals, chat rooms, email, and other services. Xiao also noted how short messaging service--very popular in China--allowed communication about the SARS outbreak before the government reported on it. He also commented on the Global Internet Freedom Act, self-censorship in China, and the increased popularity of Weblogs.
    Click Here to View Full Article

  • "Cell Computing the Foundation of IBM Prototype"
    SiliconValley.com (11/14/03); Takahashi, Dean

    IBM's prototype Blue Gene/L supercomputer will be the first to employ a cell computing chip architecture, in which operations are split up among dual-processor chips. Whereas most supercomputers have enormous space requirements, the Blue Gene prototype packs 1,024 PowerPC microprocessors on 512 chips into a package the size of a 30-inch television. Blue Gene can currently process 2 trillion floating-point operations per second (2 teraflops), outpacing a standard PC by a factor of 5,000. In its final form, Blue Gene will be capable of running at 360 teraflops, and will reside at Lawrence Livermore National Laboratory, where it will be used for a variety of scientific projects. "What we see emerging is a third generation of intelligent systems," declared IBM's Irving Wladawsky-Berger. "This is the first stage, and what is fascinating here is next-generation technology will be based on consumer technologies." Wladawsky-Berger forecasts that future computer systems will be more biologically inspired, while IBM expects such systems to rely on beehive-like chip architectures that function interactively. IBM's more ambitious agenda is to build a petaflop computer that can process 1,000 trillion operations per second, and to sell its processors in massive volumes; such processors would be embedded in many devices, including cell phones, game consoles, and supercomputers.
    Click Here to View Full Article

  • "The Virus at 20: Two Decades of Malware"
    silicon.com (11/11/03); Sturgeon, Will

    Twenty years after the first computer virus was created by U.S. student Fred Cohen as a Unix research project, malware has become an established--if unfortunate--part of the IT landscape. MessageLabs' Alex Shipp, Computer Associates' Simon Perry, Sophos' Graham Cluley, and Roger Levenhagen of Trend Micro say the spread of personal computing, the Internet, and the growing technical sophistication of viruses and worms have marked two decades of malware development. Cluley cites the first PC virus, Brain, as a significant milestone, as well as Tequila and Concept, the first multipartite and document-infecting viruses, respectively. After those, Melissa was the first truly successful email virus, while the Love Bug and Kournikova email viruses established social engineering tactics. TruSecure's Bruce Hughes says the ability of viruses such as Nimda to spread via multiple vectors was a significant advance in malware, while Levenhagen says more recent viruses such as SQL Slammer have shown how fast some malware can spread worldwide--to the point of clogging Internet traffic and even affecting ATM networks. Clearswift's Peter Simpson says hybrid variants have been an important malware milestone because they allow viruses to accept updates in the field and sometimes operate beneath anti-virus radar. The continuing SoBig attacks have also been significant, as they signal malware technology joining with illegal activities such as spam, identity theft, and denial-of-service attacks. Looking to the next 20 years, Perry believes a major war or terrorist attack will include a serious computer-based component. Cluley says security technology is also getting much better and is learning to use the Internet to its advantage, while Shipp foresees new, costly technology that can eliminate most security threats but may also exclude poorer nations.
    Click Here to View Full Article

  • "Where Is the Real Matrix?"
    Salon.com (11/11/03); Shoham, Shy; Hall, Sam

    The notion of connecting brains directly to computers, as seen in "The Matrix" movie trilogy, is not fantasy to some engineers and scientists, write Princeton scientist Shy Shoham and analyst Sam Hall. In fact, since the 1950s researchers have developed real-life human-computer interfaces, or neuroprostheses--medical devices designed to connect directly with the human brain, spinal cord, or nerves. Engineers and scientists involved in neuroprostheses view the technology as a way to help people with disabilities see, hear, and live a normal life. Fully immersive virtual interfaces may come in the future, but even today's neuroprostheses rank among the most "futuristic" devices under development. Cochlear implants, which place a microphone near the ear and convert sound into weak electrical currents that activate auditory nerve endings for the brain, became the first commercially available neuroprostheses in the United States upon FDA approval in 1984. In addition to making gains in sensory prostheses--technology used to compensate for lost or diminished senses--researchers are focusing on motor prostheses to help people regain the use of paralyzed muscles, as well as on using the technology to modulate brain activity in ways that could counteract neurological diseases. Some observers believe advances in microelectronics, such as the ability to place hundreds of millions of transistors on a single chip in devices such as cell phones, will be a boon for neuroprosthetics in the years to come. However, the FDA approved only eight implantable neuroprosthetic devices during the 1990s. Observers say the current regulatory environment makes it difficult to raise capital for developing such technologies, and add that Medicare's reimbursement policies underpay for the cost of medical devices.
    Click Here to View Full Article
    (Access to full article available to paying subscribers only.)

  • "Corporate Trademarks and the Future of Domain Disputes"
    TechNewsWorld (11/11/03); Halperin, David

    Cybersquatters took advantage of the rush for domain names from the start, registering domain names associated with established firms and companies in order to profit from the resale of such names; ICANN's Uniform Domain-Name Dispute-Resolution Policy (UDRP) was created to settle such disputes, and roughly 80 percent of UDRP cases end with a ruling for the complainant. Karl Auerbach, a former member of ICANN's board, calls the UDRP structure "highly biased, in that it induces the UDRP decision-maker to decide in favor of the plaintiff, because the plaintiff is the one who picks the decider." The UDRP allows the owner of a trademark to lodge a complaint against a respondent for registering a name that duplicates or closely resembles the trademark; if the complainant can show that the respondent has no claim to the trademark and has registered and used it in bad faith, the trademark holder wins judgment. Critics such as Auerbach worry that such a system violates the basic principle of "innocent until proven guilty," and they point to concerns over the fact that the complainant may select the venue. "If you have a church or a god or a school or a theater company or whatever, you don't have a trademark in it, and you can't use the UDRP to protect your rights," says Auerbach, who worries about corporate entities using the process against smaller independent ones. Others, such as Francis Gurry of the World Intellectual Property Organization (WIPO), disagree, arguing that the UDRP is relevant only when someone is taking advantage of a trademarked name with no legitimate rights to it. Gurry also contends the system is efficient and inexpensive, and therefore useful to small entities working to protect their names.
    "Jury Still Out on E-Voting"
    Federal Computer Week (11/10/03) Vol. 17, No. 39, P. 50; Hardy, Michael

    Despite promises that electronic voting systems are secure, experts remain unconvinced. Critics argue that users of touch-screen voting systems cannot verify that their votes are being accurately registered and tallied because there is no voter-verified paper ballot, while some computer scientists believe elections could be rigged through the exploitation of Diebold software security flaws uncovered by Johns Hopkins University researcher Aviel Rubin. Yet states are still rushing to purchase and deploy these unproven systems out of eagerness to comply with the Help America Vote Act of 2002: The state of Maryland, for example, has opted to implement Diebold machines despite Rubin's warnings, after a second analysis from Science Applications International Corp. (SAIC) concluded that many of the flaws Rubin outlined could be fixed or mitigated through changes in how the system is deployed and in election-process controls and procedures. The SAIC researchers did not, however, certify that the machines are safe, and they called for rigorous security measures. Ken Carbullido of Election Systems & Software insists that the security of e-voting products is bolstered both by the products' features and by the election processes organized around them. Rice University's Dan Wallach thinks Maryland's decision to place responsibility for ensuring e-voting security and reliability on election officials and poll workers is a bad strategy, because they are not technology specialists. Wired News reporter Kim Zetter put this assertion to the test by witnessing an e-voting training session in California's Alameda County, only to note a profound lack of security as well as a "cavalier attitude" toward these deficiencies. Rep. Rush Holt (D-N.J.) introduced legislation in May calling for a voter-verified paper ballot to be added to e-voting machines, but the measure has stalled in a House committee with no Republican support, according to Holt staffers.
    Click Here to View Full Article

    To read about ACM's concerns regarding e-voting, visit http://www.acm.org/usacm/Issues/EVoting.htm.

  • "Spam Nation"
    InformationWeek (11/10/03) No. 963, P. 59; Claburn, Thomas

    Between 25 percent and 60 percent of all email is spam, and an October Pew Internet & American Life Project survey found that 70 percent of email users dislike spam. Though national laws such as the CAN-SPAM Act and state statutes such as the recently enacted California anti-spam law are designed to target bulk commercial emailers, tracking them down and prosecuting them is difficult; for one thing, spammers often obscure their identities through various techniques and operate outside the United States, beyond the reach of anti-spam enforcement. This tendency to hide also puts spam trackers at a disadvantage for lack of insight into spammers' motivations, notes Brightmail's Francois Lavaste. EPrivacyGroup.com chief privacy officer Ray Everett-Church places spammers into two camps: naive Internet users who think spamming is a fast route to easy riches and become quickly discouraged, and "professional criminals." Laura Atkins of the Word to the Wise anti-spam software and consulting firm says some spammers are in it for the challenge, while others believe they have the right to market to anyone, regardless of recipients' wishes. There are, however, email marketers who take offense at being classified and hounded as spammers: OptInRealBig.com owner Scott Richter argues that his company is legitimate because, unlike spammers, it does not hide its existence, and it generates profits as a direct result of email marketing. Richter goes on to say that many people who complain of spam have given marketers permission to email them without realizing it, by registering for prizes at Web sites, for instance. Companies such as CNet, which retain lists of customers for communication and marketing purposes, demonstrate clear value to clients and work closely with ISPs to stay in their good graces, says CNet's Markus Mullarkey.
    Click Here to View Full Article

  • "Wireless Mesh Networks Boost Reliability"
    Network World (11/10/03) Vol. 20, No. 45, P. 43; Jordan, Bob

    The decentralized wireless mesh network topology distributes intelligence across network nodes in a way that supports scalability, reliability, self-configuration, and self-repair--distinct advantages over traditional hub-and-spoke networks. Current wireless LAN mesh networks employ standardized 802.11a/b/g, but the topology can be applied to UltraWideband and other radio-frequency technologies. Network intercommunication in a mesh topology requires the nodes' self-discovery features to ascertain their roles as wireless device access points, traffic backbones, or both. The individual nodes then use discovery query/response protocols, consuming no more than 1 percent to 2 percent of available bandwidth, to locate neighboring nodes. Node recognition is followed by the measurement of path data such as received signal strength, throughput, error rate, and latency, with minimal bandwidth consumption, so that each node can choose the optimal route to its neighbors and the highest quality of service can be obtained at any time (a simple cost-based route-selection sketch follows below). Every node maintains an up-to-date list of neighboring nodes and frequently recomputes optimal routes in order to support network failover whenever nodes are removed from the network. Individual nodes are self-managing, yet the overall network can be managed and configured as a whole from a centralized point. Traffic is secured through encrypted tunnels, while standardized methods such as the Advanced Encryption Standard (AES) guarantee that only authenticated wireless devices and nodes are linked and encrypted appropriately.
    Click Here to View Full Article
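
    As a rough illustration of the route-selection step described above, the hypothetical C sketch below folds the four measured path metrics into a single link cost and picks the lowest-cost neighbor as the next hop. The weights are invented for illustration, not any vendor's actual formula.

      /* Hypothetical per-neighbor path scoring for a mesh node. */
      #include <stdio.h>
      #include <float.h>

      struct neighbor {
          const char *id;
          double signal_dbm;    /* received signal strength (dBm) */
          double throughput;    /* measured throughput (Mbit/s)   */
          double error_rate;    /* fraction of frames lost        */
          double latency_ms;    /* round-trip latency (ms)        */
      };

      /* Lower cost is better: penalize latency and loss, reward
         throughput and signal strength. Weights are illustrative. */
      static double link_cost(const struct neighbor *n)
      {
          return n->latency_ms
               + 100.0 * n->error_rate
               - 0.1   * n->throughput
               - 0.05  * n->signal_dbm;
      }

      int main(void)
      {
          struct neighbor table[] = {
              { "node-A", -55.0, 24.0, 0.01, 3.0 },
              { "node-B", -70.0, 36.0, 0.05, 5.0 },
              { "node-C", -60.0, 12.0, 0.02, 2.0 },
          };
          int best = 0;
          double best_cost = DBL_MAX;

          for (int i = 0; i < 3; i++) {
              double c = link_cost(&table[i]);
              if (c < best_cost) { best_cost = c; best = i; }
          }
          printf("next hop: %s (cost %.2f)\n", table[best].id, best_cost);
          return 0;
      }

    In a real mesh node this scoring would be re-run as metrics are re-measured, which is what lets the next hop shift automatically when a neighbor degrades or disappears.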

  • "Now Who's in the Driver's Seat?"
    New Scientist (11/08/03) Vol. 180, No. 2420, P. 28; Graham-Rowe, Duncan

    Intense competition in the automotive industry is pushing the development of drive-by-wire systems, in which steering and braking are placed directly under computer control so that cars are safer and more fuel-efficient. Though drivers will notice little difference between operating conventional cars and drive-by-wire cars, there will be subtle changes, including smoother acceleration, less skidding, and faster corrective responses to tire blowouts. Drive-by-wire components have begun to appear on the road and in showrooms, although fully drive-by-wire vehicles will probably not reach consumers for at least half a decade. One of the barriers to the technology's rollout is safety concerns, which manufacturers plan to address by incorporating backup systems, according to David Ward of the Motor Industry Software Reliability Association. However, the trade-off for all the extra computing power and sensory equipment needed to ensure fault tolerance is reduced vehicle performance due to added weight. The biggest challenge drive-by-wire designers face is anticipating and testing for every possible driver reaction to the car's behavior so as to close loopholes that could allow accidents--an impossible task, says MIT aeronautical software expert Nancy Leveson. Ward thinks drivers of drive-by-wire vehicles could undergo additional training to mitigate this problem. The question of who is responsible in the event of a drive-by-wire-related accident is also hard to answer, partly because of the difficulty of precisely replicating fault conditions in complex systems.

 
 