
ACM TechNews sponsored by AutoChoice Advisor. Looking for a NEW vehicle? Discover which ones are right for you from over 250 different makes and models. Your unbiased list of vehicles is based on your preferences and years of consumer input.
ACM TechNews is intended as an objective news digest for busy IT Professionals. Views expressed are not necessarily those of either AutoChoice Advisor or ACM. To send comments, please write to [email protected].
Volume 6, Issue 649:  Wednesday, May 26, 2004

  • "Viruses Nip Russia After the Cold War"
    IDG News Service (05/25/04); Blau, John

    The end of the Cold War and the collapse of the Soviet Union have opened Russia's borders to the Internet, which in turn has given rise to massive computer virus infections. Security experts expect things to get worse now that network intrusions and the authoring of viruses are no longer the sole province of hobbyists motivated by politics or prestige, but a tool for organized crime. One hacker turned security expert observes that there is money to be made from hacking and virus-writing, while Mi2g Chairman DK Matai points out that "The Mafia, which has been using the Internet as a communication vehicle for some time, is using it increasingly as a resource for carrying out mass identity theft and financial fraud." Russia's economy is an ideal climate for hacking, as highly skilled but cash-strapped Russian tech professionals direct their talent toward scanning corporate networks for security holes, crafting malware for stealing financial data, setting up illegal spam farms by hijacking infected computers, or ransoming companies' livelihoods by threatening to launch distributed denial-of-service attacks against their networks or publicize sensitive information online. Another factor is relatively lenient attitudes toward cybercrime in a nation where violent crime is rampant, according to Sergey Bratus of Dartmouth College's Institute for Security Technology Studies. Also complicating enforcement is the increasingly global nature of cybercrime, which makes its perpetrators difficult to trace, and differing views on cybercrime's definition. Gus Hosein of the London School of Economics and Political Science predicts that "policies will be developed to enhance the investigation of viruses in order to trace virus makers and other perpetrators of cybercrimes, only to see those same powers used for different purposes, such as pursuing copyright crime and 'indecent' communications."
    Click Here to View Full Article

  • "Linux Contributors Face New Rules"
    CNet (05/24/04); Ricciuti, Mike

    The Open Source Development Labs (OSDL) is working to avoid future conflicts over the origin of Linux source code by establishing a system for tracking and documenting changes to the kernel of the operating system. "As Linux becomes more mainstream, and more companies and governments are involved in Linux, there are certain things that they would like to see as part of the documentation, as part of the process," remarked OSDL CEO Stuart Cohen. He maintained that the SCO Group's lawsuit against IBM for allegedly embedding proprietary Unix code in Linux had little to do with his group's decision to set up a Developer's Certificate of Origin (DCO). The DCO is designed to guarantee that derivative works, contributing developers, and those who receive submissions and transfer them to the upper levels of the kernel tree unaltered are acknowledged. Cohen said this will be of significant help to future releases, including version 2.7 of the Linux kernel, due out in approximately 12 months. OSDL indicated that the DCO will require all contributors to a specific submission to "sign off" on it prior to its consideration for insertion into the kernel. Linux creator Linus Torvalds sent a message to a Linux discussion group in which he called for a more explicit process to "document not only where a patch comes from...but the path it came through" so that incidents such as the SCO lawsuit can be averted.
    Click Here to View Full Article

  • "The Rise of the Talking Machines"
    Financial Times (05/26/04) P. 11; Budden, Robert

    Machine-to-machine (M2M) communication technology is expected to proliferate dramatically and may eventually be embedded into everyday household appliances. M2M systems are usually employed in one of three sectors: remote control and monitoring, financial transaction processing, and security. Examples of the first application include a vineyard deployment that alerts winegrowers to falling temperatures that could lead to frost damage. M2M technology is also making its way into vending machines, parking meters, and other devices in a scheme that enables consumers to communicate with the devices as well as carry out financial transactions via their mobile phones. Security applications for M2M include enhancements to a wide array of devices, including alarms and remote cameras that notify users of intruders through a cell-phone connection. Wavecom's Olivier Beaujard forecasts that the number of globally distributed active M2M modules and modems will surpass 7 million by 2005 and top 20 million the following year. However, mobile operators are unlikely to reap any major revenues from M2M because only a small amount of data traffic will be produced by most M2M devices: "For operators [M2M solutions] are low-churn, long relationships with low cost of ownership," notes Nokia's Mark Dampster. Nevertheless, wide adoption of M2M could still cultivate significant financial returns for operators, because the data traffic generated by M2M-augmented home appliances becomes substantial when aggregated across millions of households. Key to the success of M2M is how well operators and vendors can persuade businesses that the technology is cost-effective, according to Orange's Mike Newnham: "We need to ensure that [customers] get a return on their investment and we get one as well," he declares.
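A remote-monitoring deployment like the vineyard example above reduces to a simple loop: poll sensors, compare against a threshold, and push an alert over the cellular link. The sketch below is a toy illustration of that pattern; the threshold value, sensor names, and alert format are assumptions for the example, not details from any vendor's product.

```python
# Minimal sketch of an M2M frost-alert check, in the spirit of the
# vineyard deployment described above. Threshold and sensor names are
# illustrative assumptions only.

FROST_THRESHOLD_C = 1.0  # alert before temperatures actually reach freezing

def check_frost_risk(readings_c, threshold=FROST_THRESHOLD_C):
    """Return alert messages for any sensor reporting a frost-risk temperature."""
    alerts = []
    for sensor_id, temp in readings_c.items():
        if temp <= threshold:
            alerts.append(f"FROST RISK: sensor {sensor_id} reports {temp:.1f} C")
    return alerts

# Example: two of three vineyard sensors have dropped near freezing.
readings = {"north-slope": 0.4, "river-block": 3.2, "hilltop": -0.5}
for message in check_frost_risk(readings):
    print(message)  # a real M2M module would send this over GSM/SMS instead
```

In a deployed system the loop would run on the M2M module itself, with the cellular modem as the alert transport; the low per-message traffic is exactly why the article notes operators earn little per device.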
    Click Here to View Full Article

  • "When Business Imitates Life"
    Business Week (05/25/04); Meyer, Christopher; Davis, Stan

    Just as with the industrial and information economies before it, the molecular economy will mean dramatic changes in the way people live and companies operate, according to the new book, "It's Alive: The Coming Convergence of Information, Biology, and Business," written by business and technology experts Christopher Meyer and Stan Davis. The molecular economy is just reaching the adolescent stage, 50 years after the discovery of DNA's structure, while a better understanding of chemical and biological molecular processes is converging with super-small manufacturing so that scientists and companies can manipulate basic molecular building blocks. The New York Times recently called this new field of science NBIC, short for nanotechnology, biotechnology, information technology, and cognitive sciences. There are a number of similarities between the rise of the information economy and today's development of the molecular economy: Just as the Xerox PARC laboratory pioneered computer science innovations, similar research centers around the world are accomplishing fantastic new feats, such as creating an optical microscope illuminated by a single fluorescent molecule or injecting jellyfish bioluminescence genes into rhesus monkeys. The electronics industry is also pushing the envelope in terms of nanoscale manufacturing capability, and parallels can be drawn between the early development of computers and future "molectech." Today's developments can be compared to the introduction of the transistor, which eventually produced pocket radios, then the microprocessor, and the emergence of the modem. In the same way, our lives and economy will be changed by greater understanding and manipulation of biological molecules and basic matter. A more comprehensive understanding of evolution, especially, will change the way companies are organized and do business.
    Click Here to View Full Article

  • "CIA's Spy Tools Make Maxwell Smart's Look Like Toys"
    USA Today (05/26/04); Maney, Kevin

    Reports about intelligence failures related to Sept. 11, 2001, and the recent scandal surrounding an alleged Iraqi double-agent have done little to dispel the CIA's image of technological incompetence. CIA Director George Tenet told the 9/11 Commission that information analysis technology will be the agency's salvation, although he estimated that another five years of work must pass "to have the kind of clandestine service our country needs." Clues as to what kind of technology the CIA wants and needs to accomplish this feat can be found in startups funded by In-Q-Tel, the agency's investment arm. One such startup is Tacit Knowledge Systems, a developer of software that could monitor all CIA agents' outgoing email for clusters of words that describe each agent's knowledge and associations, which could help facilitate more effective information sharing among CIA agents or between the CIA and the FBI. Information written in multiple languages is a tough challenge the CIA is trying to meet through companies such as Language Weaver, which produces a technology that uses a statistical model of machine translation that is reportedly faster and more accurate than the older, grammar-based approach. The CIA is in desperate need of a sorting mechanism for the massive volume of information it must handle, and the solution will probably be found in multiple technologies. MetaCarta's geographical sorting products, Non-Obvious Relationship Awareness technology from Systems Research & Development, and PiXlogic's image sorting techniques are some potential candidates. Dust Networks is an example of one of the CIA's more radical investments: The company is working on minuscule sensors that could presumably be deployed on enemy ground to track troop movements based on vibrations.
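The expertise-mining idea attributed to Tacit above can be illustrated with a toy version: build a keyword profile from each agent's outgoing mail, then look for overlap between two agents' profiles as a cue that they should be talking to each other. This is a bare word-count sketch, not Tacit Knowledge Systems' actual (proprietary) algorithm; the stopword list and sample messages are invented for the example.

```python
# Toy expertise profiling: count keywords in outgoing mail and find the
# topics two agents have in common. Illustrative only -- not the real
# Tacit Knowledge Systems approach.
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "on", "for", "is"}

def keyword_profile(messages):
    """Count non-stopword terms across a list of outgoing messages."""
    counts = Counter()
    for text in messages:
        for word in text.lower().split():
            word = word.strip(".,;:!?")
            if word and word not in STOPWORDS:
                counts[word] += 1
    return counts

def shared_topics(profile_a, profile_b, top_n=5):
    """Terms appearing in both profiles, ranked by combined frequency."""
    common = set(profile_a) & set(profile_b)
    return sorted(common, key=lambda w: profile_a[w] + profile_b[w], reverse=True)[:top_n]

agent_1 = keyword_profile(["Update on the courier network in Hamburg",
                           "Hamburg courier confirmed the meeting"])
agent_2 = keyword_profile(["Financial trail runs through a Hamburg front company"])
print(shared_topics(agent_1, agent_2))  # ['hamburg']
```

A production system would add synonym handling, phrase extraction, and privacy controls, but the core signal is the same: overlapping vocabulary in what people write.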
    Click Here to View Full Article

  • "Computer Science Luminaries Examine Future Interaction Design for Children"
    ACM SIGCHI (5/26/04)

    Seymour Papert, Marvin Minsky, and Alan Kay will open the Third International Interaction Design and Children Conference (IDC2004) as members of a keynote panel pondering how best to design and implement educational tools for children and what they have learned from past efforts. The annual conference, to be held June 1-3 at the University of Maryland, draws a worldwide audience of designers, educators, and students to exchange information and test new ideas. "The digital cultural revolution is beginning to transform the life and learning of children," says Papert, MIT Lego Chair. IDC2004 chair Allison Druin explains that the focus of the conference will be to understand "how the world of education should change the technologies we make. It is not adequate to merely use computers in the classroom as just another media." The conference is sponsored by ACM's Special Interest Group on Computer-Human Interaction, along with the National Science Foundation, the Institute of Museum and Library Services, Fisher-Price, LeapFrog, and Organa.

    For more information about IDC2004, or to register, visit http://www.IDC2004.org.

  • "Video Tracking Software Enters the Game"
    EE Times (05/25/04); Johnson, R. Colin

    University of Calgary researcher Jeffrey Boyd has developed software that employs automatic segmentation and object recognition technology to monitor sporting events, take note of the highlights, and represent them as a schematic or diagram that can be viewed on a cell phone or other small handheld. Boyd's Camera Markup Language is programmed to "watch" a video stream while describing its content as a continuous XML document, an operation it shares with the standard MPEG-7 format; however, Boyd's software establishes two-way communication between the video server and the camera so that the server can instruct the camera to carry out actions such as "follow-the-ball." "Whereas MPEG-7 is primarily a one-way description--you have some video, you describe it and that's it--we were looking more at interacting with the camera," explains Boyd. "As the video is being produced we can tell the camera to do different things." The client/server architecture facilitates the processing of video and its XML documents, object description, and the generation of both television video and graphical representations for cell phones. The software is only capable of tracking discrete objects, but Boyd plans to enable the software to furnish more intricate descriptions that recognize both objects and their associated activities. Boyd has built a demo using a miniature hockey rink with plastic players, and he expects to set up a testbed in a real hockey rink by September. Other potential applications for Camera Markup Language include security surveillance, while in the meantime the software is employed as a traffic-monitoring tool in Calgary.
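The "continuous XML description" idea can be pictured with a small sketch: each tracked object emits a timestamped XML event that a server consumes, and the server can answer with a camera command, closing the two-way loop the article contrasts with MPEG-7. The element and attribute names below are invented for illustration; the article does not give Camera Markup Language's actual schema.

```python
# Sketch of the kind of per-frame XML event a system like Camera Markup
# Language might emit for a tracked object. Tag and attribute names are
# assumptions made for this example, not the real schema.
import xml.etree.ElementTree as ET

def describe_event(obj_id, x, y, t):
    """Build one XML event element for a tracked object's position."""
    event = ET.Element("event", {"t": f"{t:.2f}"})
    obj = ET.SubElement(event, "object", {"id": obj_id})
    ET.SubElement(obj, "position", {"x": str(x), "y": str(y)})
    return event

# A server consuming this stream could reply with a camera instruction,
# e.g. a <command action="follow"> element, making the description two-way.
xml_text = ET.tostring(describe_event("puck", 12, 7, 3.5), encoding="unicode")
print(xml_text)
```

Streaming one such element per frame yields the continuous document the article describes, while the reverse channel carries commands like "follow-the-ball" back to the camera.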
    Click Here to View Full Article

  • "Waving Goodbye to the Blue Screen"
    IST Results (05/26/04)

    Information Society Technologies' DBench project administered a series of dependability benchmarking tests on three Microsoft Windows systems--Windows 2000, Windows NT4, and Windows XP--and the results were published in two reports. DBench coordinator Karama Kanoun says testing focused on the application layer to compare the systems' anomalous behavior: The setup consisted of a computer running an operating system under test and another machine running the workload, both of which were connected to a remote Benchmark Controller that performed diagnosis and data collection whenever the OS suffered an aberration. Kanoun says the dependability benchmark deals with the complementary metrics of robustness, OS reaction time, and OS restart time, which collectively describe how a system behaves when application software flaws are present. The DBench coordinator says the test results demonstrated similar behavior between the three systems in terms of robustness, while OS reaction and restart times showed a clear divergence: Windows XP had the shortest reaction and restart times, followed by Windows NT4 and then Windows 2000. Kanoun is hopeful that the results of the experiment will help impel the establishment of international dependability benchmarking standardization entities, perhaps within half a decade. She says the DBench results have spurred companies such as IBM and Intel to conduct dependability benchmarking on Web and e-commerce applications, while Linux systems are also being explored by DBench partners.
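The two timing metrics above have a simple shape: stamp the moment a fault is injected, then measure how long the system takes to react and how long until it is back up. The sketch below simulates the target in-process with sleeps; the real DBench setup used separate machines and a remote Benchmark Controller, so this is only a schematic of the measurement logic.

```python
# Simplified sketch of the timing side of a dependability benchmark:
# measure reaction time and restart time from a single fault-injection
# instant. The "target" here is simulated with sleeps, purely to
# illustrate the measurement; the real benchmark ran on separate machines.
import time

def measure_reaction_and_restart(inject_fault, wait_for_reaction, wait_for_restart):
    """Return (reaction_time, restart_time) in seconds for one fault injection."""
    t0 = time.monotonic()
    inject_fault()
    wait_for_reaction()                 # e.g. OS returns an error code
    t_react = time.monotonic() - t0
    wait_for_restart()                  # e.g. OS is reachable again
    t_restart = time.monotonic() - t0
    return t_react, t_restart

# Simulated target: reacts after ~10 ms, fully restarted ~30 ms later.
react, restart = measure_reaction_and_restart(
    inject_fault=lambda: None,
    wait_for_reaction=lambda: time.sleep(0.01),
    wait_for_restart=lambda: time.sleep(0.03),
)
print(f"reaction: {react*1000:.0f} ms, restart: {restart*1000:.0f} ms")
```

Robustness, the third metric, is categorical rather than timed: the controller records whether the OS hung, crashed, returned an error, or carried on.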
    Click Here to View Full Article

  • "When It Comes to Robots, Japan Has a Leg Up on Rivals"
    Asahi Shimbun (05/25/04)

    Japanese robots, which tend to follow a humanoid form factor, are much more appealing to consumers than European or American machines, which are primarily utilitarian devices. Japan's Patent Office estimates that the number of applications for robot technologies submitted by Japanese companies between 1990 and 1999 was 14,500, compared to 1,900 for Europe and 1,000 for the United States. Planned to make its market debut next year is Mitsubishi Heavy Industries' Wakamaru, a 1-meter-high, 30-kilogram humanoid bot designed to watch the home for intruders and take care of the elderly and invalids: Its security technology includes a camera that can be accessed by mobile phone, while in its caretaker mode Wakamaru can contact people by phone and email if the person it is monitoring is unresponsive. In addition, Wakamaru can speak and is programmed to understand 10,000 spoken words. Entertainment is another area where Japanese robots are being designed to excel: Toyota unveiled prototype mechanical trumpet players in March and is prepping an automated orchestra for the 2005 World Exposition in Aichi Prefecture. An official at the Japan Robot Association reports that U.S. firms lead the rest of the world in terms of software programs used in robotic systems, while Japan's primary area of excellence is the "physical" technology that gives robots the ability to move and manipulate objects. Humanoid robot developers face some formidable technological challenges: Honda's bipedal Asimo robot, for instance, cannot yet traverse uneven surfaces, and its walking and stair-climbing ability are preprogrammed, not spontaneous, actions. Robotics expert Takeo Kanade says government assistance will be critical to the continued development of robot technology.
    Click Here to View Full Article

  • "Web Viewed As in Baby Stage of Its Development"
    Investor's Business Daily (05/25/04) P. A4; Tsuruoka, Doug

    IBM chief Internet scientist Stuart Feldman says less than 10 percent of the Internet has been "formed" in the first 10-plus years of its life, while the next few decades will see the Net's computing and communications capability reaching virtually infinite levels and sparking dramatic changes in practically every type of industry. Key to the Net's growth is the establishment of open, flexible standards, and Feldman foresees the convergence of four new standards--Web services, grid computing, "semantic" Web, and corporate data-sharing--that will help facilitate the next stage of the Internet's development. In the next five years, Feldman predicts that such standards will enable people to perform the myriad chores involved in moving from one part of the country to another in a vastly shorter amount of time, for example. Ten years out, Feldman believes it will be possible to employ the Web to track and monitor useful things through networked sensors that will be cheap thanks to nanotechnology. Examples include remote health monitoring and diagnosis. The IBM researcher projects that the Net will usher in major changes to the enterprise, specifically a shift in the meaning of industrial sectors. Feldman lists the airline industry as an example, explaining that in two decades, "it will be harder to say if airlines are the people that fly planes, or those who do the computer processing that make the flight possible, or the people that market these flights."

  • "RPI Study Eyes Sick Computers"
    Associated Press (05/25/04); Hill, Michael

    The National Science Foundation is funding a project at Rensselaer Polytechnic Institute that probes the parallels between biological virus and computer virus epidemics in order to find ways to obstruct the latter. For instance, malware's infection mechanism often takes the form of seemingly innocent emails with seductive subject lines, in much the same way that disease bacteria can invade cells by appearing harmless. NSF grant recipient and RPI professor Biplab Sikdar notes that certain viral infections and computer virus outbreaks follow similar patterns: The spread of highly contagious diseases characterized by short incubation periods usually begins with a small infected population before skyrocketing exponentially, reaching a peak, and fading away at a more gradual rate. Sikdar postulates that routers could be programmed to identify sudden protracted increases in instability and other factors as signs of cyberattacks, and then isolate the virus. The RPI professor believes this measure could eliminate the need for computers with antivirus software to update their programs, and even shield computers that lack virus protection. Symantec senior research director Steve Trilling points out that a lot of recent computer security research is focused on behavior-based threat identification instead of reliance on a database of known threats. Vincent Gullotto, director of Network Associates' McAfee Anti-Virus Emergency Response Team, is skeptical that drawing similarities between biology and the Internet will yield effective antivirus measures. Sikdar's five-year NSF grant also covers research into the life expectancies of wireless networks and how minor router bugs can lead to more complex problems.
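The epidemic pattern the article describes, a slow start, exponential rise, peak, and gradual decline, is the classic curve of the SIR (susceptible/infected/recovered) model from epidemiology. A discrete-time sketch with illustrative parameters (the article does not specify Sikdar's model) reproduces that shape:

```python
# Discrete-time SIR epidemic sketch: the infected count rises
# exponentially from a small seed, peaks, then decays -- the pattern the
# article says short-incubation diseases and computer viruses share.
# Parameters (beta, gamma, population) are illustrative assumptions.

def sir_curve(population=10_000, infected=10, beta=0.4, gamma=0.1, steps=120):
    """Return the infected count at each step of a simple SIR simulation."""
    s, i, r = population - infected, float(infected), 0.0
    history = []
    for _ in range(steps):
        new_infections = beta * s * i / population  # contact-driven spread
        recoveries = gamma * i                      # gradual recovery/cleanup
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        history.append(i)
    return history

curve = sir_curve()
peak_step = max(range(len(curve)), key=lambda t: curve[t])
print(f"outbreak peaks at step {peak_step} with {curve[peak_step]:.0f} infected")
```

Sikdar's router idea amounts to detecting the steep early portion of this curve in network traffic and quarantining before the peak arrives.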
    Click Here to View Full Article

  • "Can Linux Take Over the Desktop?"
    Wall Street Journal (05/24/04) P. R1; Bulkeley, William M.

    Linux developers and industry backers say the time is right for Linux on the desktop, given price advantages and increasing compatibility with Windows applications. Linux can be obtained for free and requires fewer processor and memory resources than Microsoft Windows, and Linux proponents see a huge opportunity in 2006 when the Longhorn Windows version comes out. At that time, organizations that do not want to upgrade their hardware for Longhorn will decide instead to use Linux, according to advocates. IDC researcher Al Gillen says Linux should have 6 percent of the desktop market by 2007, up from 2.7 percent in 2002. Xandros chief technology officer Fredrick Berenstein says one hotel customer equipped its 150 reservation clerks with Linux operating systems for just $5,100--as opposed to a Microsoft upgrade that would have cost more than $20,000 in software and $115,000 in hardware costs. The open-source operating system has gained support from a number of major IT vendors recently, including Hewlett-Packard, which has begun shipping Linux PCs; Sun Microsystems, which promotes the Open Office suite of desktop applications; and IBM, which is looking into moving its 320,000 employees to Linux desktops. Novell has also become a Linux powerhouse after acquiring Linux desktop firm Ximian and distributor SuSE Linux, while Microsoft has responded to the Linux threat by introducing more flexible pricing schemes and making view-only modules for Word and PowerPoint applications, available on the Windows XP operating system. Cap Gemini chief technologist John Parkinson says there are still significant hidden costs with Linux, including user familiarity and the host of third-party applications using Microsoft Excel macros, which can cause incompatibility problems with Linux.

  • "Group Dynamics Play Out in VR"
    Technology Research News (05/26/04); Smalley, Eric

    Kyoto University researcher Hideyuki Nakanishi has created software that enables programmers to construct virtual spaces modeled after actual locations where software agents and large groups of people represented by avatars can interact. Developers crafting applications for the software, dubbed FreeWalk, can map photos of real environments onto 3D models to build the virtual space. Agents and avatars are represented as human figures that direct their interactions toward one another and exhibit group behavior through lip movement, gestures, and other social cues; a user's words are translated into text by speech recognition software, and agents and avatars produce vocalizations via speech synthesis software. The server running the simulation does not interact with client systems except to insert or remove agents and avatars, while a peer-to-peer architecture is employed to transmit voice, posture, and gesture cues directly between clients. Nakanishi says the system frees up programmers to focus on high-level behavior by programming agents and avatars with basic actions such as walking and collision avoidance. By rendering the virtual environment realistically, FreeWalk taps into people's inclination to respond to computer-generated spaces and humans as if they were real. Nakanishi has developed a demo system for simulating disaster situations with actual social interactions as a cost- and labor-saving technique for carrying out evacuation drills; an upgraded version will be more detailed, adding elements such as fire and smoke to make the disaster model seem even more accurate. The researcher explains that real-world human behaviors must be recorded, examined, and simulated to enhance the environment's realism, while the tool's practical applications could be five to 10 years away.
    Click Here to View Full Article

  • "How Are Script Kiddies Outwitting I.T. Security Experts?"
    NewsFactor Network (05/19/04); Valentine, Lisa

    Teenage virus writers known as "script kiddies" are having an effect on the IT industry, but network security experts and antivirus vendors say their impact is not as great as is believed--most of them are not very good at virus writing. Even badly written viruses require corporate users to spend time downloading virus updates, but in addition to causing nuisance, script kiddies serve antivirus vendors by finding vulnerabilities for which the vendors must then write protections. Gartner vice president Richard Stiennon notes that this makes things more difficult for professional hackers who would prefer to keep the vulnerabilities unknown. He adds that since so few hackers are caught, it is hard to tell how many viruses are written by professionals. Trend Micro director David Perry says that most script kiddies' viruses never infect computers--they send them directly to antivirus companies to go on detection lists, about which the teens can then brag. These are called "zoo" viruses because they are never released "into the wild," and make up approximately 74,000 of the 75,000 known viruses. Another group of viruses are "intended viruses" that are so poorly written they do not function; virus-protection firms still create defenses against these attacks should they be fixed in the future. Antivirus vendors are improving their ability to detect viruses before they hit, even as the capabilities of virus toolkits improve.
    Click Here to View Full Article

  • "Technology Advances Take 3D Displays to the Masses"
    Opto & Laser Europe (05/04); Van den Berg, Rob

    Now that 3D display technologies have progressed to the point where 3D images can be viewed without special eyewear, the market is showing signs of maturation. Sharp makes stereoscopic liquid crystal displays (LCDs) that have been incorporated into cell phones and notebook computers. Sharp Laboratories Europe's Ian Thompson says the notebook LCD can switch between 2D and 3D viewing modes with no loss of image brightness thanks to two LCD panels, a front panel that contains a set of image pixels for each eye and a "parallax barrier" that beams two slightly different images into the user's eyes; the viewer has to be positioned at a certain angle to optimally see the 3D effect. Potential applications for the Sharp displays include medical imaging and gaming, according to Thompson. So that users' viewing angles are not as limited, StereoGraphics has developed the SynthaGram, a multi-view 3D display that splits images into nine views rather than a pair for the left and right eyes, and that features five viewing zones where the 3D effect can be perceived. Meanwhile, Actuality Systems' Perspecta Spatial 3D System displays multi-angle 3D views of images in a transparent hemisphere. The 3D image is built out of 2D cross-section images focused onto a spinning screen by a stationary projector employing an array of mirrors. "A group of three mirrors relays the imagery to the rotating screen in a manner that ensures accurate focus regardless of the screen's angle," explains Perspecta inventor Gregg Favalora, who thinks the tool could be used for security, medical, and pharmaceutical applications.
    Click Here to View Full Article

  • "Laying Siege to the Grid"
    New Scientist (05/22/04) Vol. 182, No. 2448, P. 28; Cohen, David

    Ever since the debut of the SETI@home project several years ago, hackers have been harnessing distributed computing to attack Web targets, gather sensitive information, and send spam; now, hackers are targeting supercomputer grids such as those run at national laboratories and university computing centers. A group of academic supercomputers were compromised in April and taken offline for several days, an incident Counterpane Internet Security expert Bruce Schneier says was most likely hackers flaunting their capabilities. Breaking into supercomputers requires special knowledge of grid systems and is much more complex than breaking into Windows or Linux machines. Although the April incidents did not lead to major damage, experts warn hackers could create online havoc with supercomputers, which are much easier to control than distributed networks of home PCs. With massive amounts of processing power, hackers would be able to figure out passwords from cryptographic hashes, operate optimized illegal file-sharing networks, or even try to crack digital rights management schemes protecting CDs and DVDs. In March 2003, the University of Glasgow's ScotGrid network was taken over by hackers who used it to steal more passwords; administrators were alerted to the break-in only because the workload drove the temperature in the processing room too high. Since then, university officials have implemented a digital certificate system that limits the capabilities of users and makes them easier to manage. The ScotGrid hackers, however, avoided hacking the certificate system since they stole the password of a remote user from Geneva's CERN particle physics laboratory.
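The password-cracking scenario above is worth making concrete: recovering a password from its hash is nothing more than hashing guesses until one matches, so raw processor time translates directly into guesses per second. A toy dictionary attack against a SHA-256 digest illustrates the mechanic (real systems salt and stretch their hashes, which this sketch deliberately ignores):

```python
# Toy dictionary attack: hash each candidate and compare against the
# stolen digest. More compute means more candidates per second, which is
# exactly what makes hijacked supercomputer time attractive to attackers.
import hashlib

def crack(target_hash, wordlist):
    """Return the word whose SHA-256 digest matches target_hash, or None."""
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

stolen_hash = hashlib.sha256(b"hunter2").hexdigest()
guesses = ["password", "letmein", "hunter2", "qwerty"]
print(crack(stolen_hash, guesses))  # hunter2
```

Salting and deliberately slow hash functions are the standard defenses, precisely because they multiply the compute cost per guess.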

  • "Trumping Tape"
    Computerworld (05/24/04) Vol. 32, No. 21, P. 28; Mearian, Lucas

    Certain applications that rely on robotic tape autoloaders for storing large amounts of archival information could switch to disk-based backup systems using massive arrays of idle disks (MAID) technology. MAID systems employ clusters of Advanced Technology Attachment (ATA) disk drives that attempt to extend media life by powering down when inactive, and spinning up only to read or write data. This lowers power consumption and reduces heat, so ATA drives can be packed closer together; they also provide faster access to stored data than tape does. Copan Systems says MAID arrays can periodically power up all drives to keep the mechanics lubricated and avoid problems stemming from disks that are kept idle for prolonged periods. One potential customer of Copan's 200T MAID array is Yahoo!, which currently stores archival tapes underground; Yahoo! architect for storage systems Steve Curry wants to have a MAID array set up at the backup facility and use a Fibre Channel or Fibre Channel-over-IP connection to archive to it directly. Horison Information Strategies President Fred Moore comments that digital tape libraries have a lower cost per gigabyte than MAID arrays, while their ruggedness and portability are also advantages. Kevin Hsu of MAID vendor Exavio counters that MAID is a cheaper option in terms of total cost of ownership. Exavio and Copan are developing portable MAID systems, but IDC analyst Robert Amatruda warns that "You drop some of that stuff and there could be data integrity issues."
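The core MAID policy, spin down after an idle timeout, spin up on access, can be sketched in a few lines. The timeout value and timing units below are illustrative assumptions, not figures from Copan's or Exavio's products:

```python
# Sketch of the MAID power policy described above: a drive spins up on
# access and powers down after a period of inactivity. The 300-second
# timeout is an illustrative assumption, not a vendor specification.

IDLE_TIMEOUT = 300  # seconds of inactivity before a drive powers down

class MaidDrive:
    def __init__(self):
        self.spinning = False
        self.last_access = None

    def access(self, now):
        """Read/write request: spin up if needed and record the access time."""
        self.spinning = True
        self.last_access = now

    def tick(self, now):
        """Periodic check: power down if the drive has been idle too long."""
        if self.spinning and now - self.last_access >= IDLE_TIMEOUT:
            self.spinning = False

drive = MaidDrive()
drive.access(now=0)    # archive write: drive spins up
drive.tick(now=100)    # recently used: stays spinning
print(drive.spinning)  # True
drive.tick(now=400)    # idle past the timeout: powers down
print(drive.spinning)  # False
```

The periodic "exercise" spin-up Copan describes would be one more rule in `tick`: power a long-idle drive up briefly to keep its mechanics lubricated.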
    Click Here to View Full Article

  • "The Inevitability of Blade"
    CIO (05/15/04) Vol. 17, No. 15, P. 68; Edwards, John

    Space-saving, cost-cutting blade servers are becoming more and more appealing to enterprises drawn not only to the technology's lower power consumption, reliability, and streamlined maintenance and administration, but also to its track record. "People have had enough experience, at least the early adopters have, that they can see examples of how blades can really work effectively to help cut costs," notes Rebecca Wettemann of Nucleus Research. Arizona State University's Mars Space Flight Facility employs a BladeRack System from RackSaver to convert thermal emission imaging system data collated by satellites into detailed images, which were used to find the most suitable landing sites for the Martian rovers last January. Continental Airlines currently uses two dozen Hewlett-Packard ProLiant BL40p blades as Web servers, which enable customers to view, price, and book flights while also delivering scalable performance, space savings, and simplified management to the airline. Blade servers' drawbacks include heat generation, which often requires the installation of enhanced cooling systems; incompatibility between hardware; and limited software selection. Intense competition between blade providers is driving down prices on the one hand, but on the other is impeding standardization, according to Continental systems technology VP Bob Edwards, who thinks enterprises should purchase blade server technology to handle a particular, immediate application. Another area of concern is improved blade management, which is being complicated by the fact that customers often get management tools from the hardware vendors. Despite these problems, blade servers could grow even more popular with the emergence of "diskless blades" and grid computing, which is regarded by many as a complementary technology.
    Click Here to View Full Article

  • "Robots and the Rest of Us"
    Wired (05/04) Vol. 12, No. 5, P. 116; Sterling, Bruce

    The retirees who populate San Remo on the Italian Riviera could very well benefit from some mechanical assistance, but roboticists attending the First International Symposium on Roboethics in the resort town in January were more concerned with whether machines should have an ethical code, writes Bruce Sterling. The former mansion of Alfred Nobel was an ironic setting for the gathering, given that the man who contributed so much to science was also a notorious arms dealer. Thinkers have envisioned human-like robots for at least 80 years, but technology experts today know that automatons pose serious challenges to humankind. The use of autonomous weapons in war, and the technology's potential to augment the brain, the physical body, and our social reality, all raise ethical questions about robots. Sony's Qrio is technically ready, but the human-shaped robot has no ethics, cannot reason, and knows and cares about nothing; such a robot could be programmed to go on a killing spree, set fire to buildings, or detonate bombs in crowded areas while shouting political messages. Man continues to pursue his technological buddy, but the question remains whether we can get a robot to behave when we are unable to do so ourselves, writes the author.
    Click Here to View Full Article

  • "Retooling the Global Positioning System"
    Scientific American (05/04) Vol. 290, No. 5, P. 90; Enge, Per

    As Global Positioning System (GPS) technology currently stands, civilian users can perform geolocation to within five to 10 meters, while military users with more expensive equipment can track their location to within five meters--or half a meter with differential GPS (D-GPS). Civilian users are mostly restricted to the L1 frequency band, as L2 satellite signals can be disrupted by lower satellite altitudes or even minor objects. Beginning next year, GPS satellites will start to broadcast new signals (two for the military and one for civilians) designed to make GPS services more robust and to eliminate errors caused by charged particles in the ionosphere, refining geopositioning accuracy. This will be followed about three years later by additional civilian signals in the L5 frequency band--with four times the power of current signals--emitted by more sophisticated satellites. Multiple frequencies will also benefit D-GPS users, sharpening geopositioning accuracy to a range of 30 to 50 centimeters. GPS integrity machines will ensure the technology's reliability by supplying legitimate error bounds in real time. GPS transmissions will be further inoculated against radio-frequency interference through implementations such as the FAA's wide-area augmentation system, while smart antennas that can selectively direct and receive broadcasts, along with integration with TV and mobile phone networks, are also on the way. The advanced civilian services offered by these upgrades should benefit the farming, mining, transportation, telecommunications, electric power, mapping, and construction industries, while military gains will include automatic landings of aircraft in zero-visibility conditions.
    Click Here to View Full Article
    (Access to the full article is available to paid subscribers only. Option to purchase current issue also available.)
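The error cancellation behind the new civilian frequencies can be illustrated with the standard dual-frequency, ionosphere-free combination: ionospheric delay scales roughly as 1/f-squared, so a weighted difference of range measurements on two frequencies removes it to first order. The L1 and L5 carrier frequencies below are the published values; the range and delay numbers are hypothetical, chosen only to demonstrate the cancellation:

```python
F_L1 = 1575.42e6  # L1 carrier frequency, Hz
F_L5 = 1176.45e6  # L5 carrier frequency, Hz

def ionosphere_free(p1, p5, f1=F_L1, f5=F_L5):
    """First-order ionosphere-free pseudorange from two frequencies.
    Weights each measurement by the square of its frequency so the
    1/f^2 ionospheric delay terms cancel."""
    g1, g5 = f1 ** 2, f5 ** 2
    return (g1 * p1 - g5 * p5) / (g1 - g5)

# Illustrative check: build two pseudoranges carrying a known
# ionospheric delay (hypothetical values) and confirm the
# combination recovers the true range.
true_range = 20_200_000.0   # metres, hypothetical satellite range
iono_l1 = 5.0               # metres of ionospheric delay at L1, hypothetical

p_l1 = true_range + iono_l1
p_l5 = true_range + iono_l1 * (F_L1 ** 2 / F_L5 ** 2)  # delay scales as 1/f^2

recovered = ionosphere_free(p_l1, p_l5)
```

This is why a second civilian frequency matters: a single-frequency receiver must model the ionospheric delay, while a dual-frequency receiver can measure and cancel it directly.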


 