
ACM TechNews sponsored by AutoChoice Advisor -- Looking for a NEW vehicle? Discover which ones are right for you from over 250 different makes and models. Your unbiased list of vehicles is based on your preferences and years of consumer input.
ACM TechNews is intended as an objective news digest for busy IT Professionals. Views expressed are not necessarily those of either AutoChoice Advisor or ACM. To send comments, please write to [email protected].
Volume 6, Issue 732: Friday, December 17, 2004

  • "P2P Battle Reaches FTC"
    Wired News (12/16/04); Grebb, Michael

    The Federal Trade Commission officially stepped in yesterday to mediate a contentious battle between copyright owners and software companies by hosting a two-day session on peer-to-peer (P2P) software, although the institution of federal P2P marketing regulations is by no means certain at this point. Representatives of content interests claimed that P2P companies' chief source of revenue comes from enabling consumers to pirate copyrighted content, and argued that their goal is to instill responsibility in P2P networks. Stanley Pierre-Louis of the Recording Industry Association of America (RIAA) said that 99 percent of P2P network traffic involves copyrighted works, and accused the networks of intentionally taking no action against such piracy, thus "offloading" accountability onto consumers. He also noted that the Supreme Court's recent decision to hear the Grokster case could further illuminate the issue. P2P company representatives, meanwhile, charged that industry groups have been deliberately feeding the FTC misinformation about P2P technology. P2P United executive director Adam Eisgrau said members of his organization intend to post online warnings that purchasing premium versions of P2P software does not give the user the right to download copyrighted works. P2P United earlier announced a "cybersafety" initiative to disseminate consumer advisories on copyright-infringement liability, data security, spyware, malware, and pornography. James Miller, a lobbyist working for the RIAA, said the FTC should develop specific regulations for P2P companies to guarantee that consumers are aware of the alleged risks, require "simple filters" to halt copyright file-trading, institute trade-regulation rules encompassing P2P software claims, and sue bad P2P actors that engage in unfair trade practices.
    Click Here to View Full Article

  • "Is the Internet Truly Global?"
    CNet (11/15/04); Chai, Winston

    Methods to input Web addresses have not kept pace with the swelling volume of multilingual material on the Internet. The Roman alphanumeric script, or ASCII characters, upon which the Internet rests can be a burden for many non-English-speaking users who have few options apart from memorizing numerical IP addresses or the English spellings of Web sites. This problem has sparked a debate among internationalized domain name (IDN) advocates as to whether teaching English to non-English-speaking Web users or refining the domain name system to accept multiple languages is the simpler solution. Initiatives to realize IDNs began as far back as 1996, but it was only last March that ICANN released IDN standards developed by the Internet Engineering Task Force (IETF). The standards map out how vernacular characters can be converted to Unicode characters that are likewise encoded in ASCII so as to keep the Internet's infrastructure stable; the Unicode system accommodates about 40 languages including Russian, Japanese, Chinese, and Korean. VeriSign and I-dns.net allow IDNs that reportedly comply with the IETF standards to be registered, and users are required to set up plug-ins before they can enter native characters in the address bar. Foreign IT vendors are pressing ahead on language-localization programs, but top-level IDN deployment and educational efforts have been infrequent, indicating a shortage of enthusiasm. With the online population expected to total 1 billion in 2005, accelerating IDN implementation could help drive the next stage of the Internet's expansion and spawn significant opportunities in areas such as e-commerce and e-government.
    Click Here to View Full Article
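
    The conversion those IETF standards define can be tried directly in Python, whose standard library ships an "idna" codec implementing the RFC 3490 scheme: native-script labels are encoded into an ASCII-compatible form that the existing DNS infrastructure can carry unchanged. The domain below is illustrative only.

        # Round-trip a native-script domain through Python's built-in
        # "idna" codec (RFC 3490). The ASCII form is what the DNS sees.
        ascii_form = "bücher.example".encode("idna")
        print(ascii_form)                  # b'xn--bcher-kva.example'

        # Decoding restores the native characters for display, e.g. in
        # a browser address bar.
        print(ascii_form.decode("idna"))   # bücher.example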

  • "Ubiquitous Computing Research Spreads"
    InternetNews.com (12/15/04); Singer, Michael

    The Palo Alto Research Center (PARC) and Fujitsu have joined forces in a multi-year collaborative effort to advance ubiquitous computing. The ubiquitous computing movement aims to unobtrusively network nearly any device or object into a single, universally accessible and always available Internet architecture by integrating current network technologies, wireless computing, Internet capability, voice recognition, and artificial intelligence. Fujitsu and PARC say they will concentrate on a "meta standard" interconnect technology that establishes interoperability between different devices on the network. The partners will also focus on developing less complicated and more secure wireless networking, improved ad-hoc sensor network technology, simplified software architecture for compatibility, and sophisticated data visualization technologies. "Taking advantage of these strengths, we will be able to make our cutting-edge technology concepts and visions a reality," declared PARC President Mark Bernstein. The partners stated that social science will also be applied to spawn new business-to-business and business-to-customer opportunities. Research areas expected to play key roles in plans to create simple, secure technologies include protecting personal data in global electronic health care systems; networking intelligent transportation systems; connecting businesses and consumers through personalized information services; and enhancing local disaster-response systems by enabling data networks to reconfigure and locally adapt to changes.
    Click Here to View Full Article

  • "Hidden Agenda"
    The Engineer (12/09/04); Fisher, Richard

    Supporters of the ubiquitous computing movement envision an environment in which households, workplaces, and cars are all networked by hidden computer intelligence, thus ushering in the "third wave" of technology predicted by Xerox PARC researcher Mark Weiser in the 1990s. However, critics are uncertain that this wave can be achieved because of engineering challenges--challenges that companies are ignoring while hyping unrealistic expectations, argues Cambridge University professor Andy Hopper. "The field is working on science fiction ideas but few people are concentrating on concrete, engineering-based projects that deliver something reliable, scalable and dependable, and that interprets information correctly," he says. Size and power issues prevent current technology from delivering the context awareness component so vital to ambient intelligence, so the development of inexpensive, embedded wireless sensors, actuators, and displays is essential. Sensor networks need to be dramatically enhanced, given the heavy processing workload of tracking people in three dimensions while filtering out undesirable data. And sensor and actuator networks are expected to be in continuous operation, which calls for cheap, reliable sensor nodes that can either siphon power from their surroundings or maintain enough power until they are replaced. Meanwhile, Giles Lane of U.K. think-tank Proboscis says mapping the environment to integrate with sensor data is a tough challenge, given that the world is messier and more complex than researchers tend to assume. Wireless data transmission is still highly susceptible to interference, and signals need to span disparate networks without interruption for ambient intelligence to be truly realized.
    Click Here to View Full Article

  • "Research at Penn State McKeesport Focuses on Human-Web Interaction"
    Penn State Live (12/16/04)

    Penn State McKeesport professor Guangfeng Song is working to ease Web navigation by enhancing Web browsers with the ability to record and learn from individual users' activities in order to tailor the user experience. Song is researching the development of an active learning Web browser that notes the user's behavior, analyzes the data, and retains that experience for later and/or shared use. Such a browser can convert the history of past Web use into written instructions, sparing users the burden of recalling how they previously interacted with the Web when they wish to return to a site of interest. Collaborative research could also be simplified by the browser's memory, which would allow individuals to better communicate with their partners about how they interact online. Song is pondering how his work can facilitate the recognition of similar features or characteristics in Web sites, and he hopes to learn what blend of elements allows an individual to detect these similarities by studying human perception of Web site design. Song is also analyzing how Web pages are segmented: Effective documentation of user experience is beyond the capabilities of the Web's current architecture, and Song thinks that improvements to Web site design will enable the computer to aggregate written instructions on earlier Web-site experience with less difficulty. Personal privacy is a key issue in such aggregation, and the deployment of this kind of human-computer interaction will hinge on whether the issue is adequately addressed for Web users. Song's doctoral dissertation at Purdue University, "A Method to Reuse Web Browsing Experience to Enhance Web Information Retrieval," formed the basis of his current work at Penn State.
    Click Here to View Full Article
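
    The article does not describe Song's implementation, but the core idea--recording a browsing trail and converting it into shareable written instructions--can be sketched roughly as follows. All class, method, and action names here are hypothetical.

        # Hypothetical sketch only: record a browsing trail, then emit
        # numbered written instructions that can be replayed or shared.
        from dataclasses import dataclass, field

        @dataclass
        class BrowsingRecorder:
            steps: list = field(default_factory=list)

            def record(self, action: str, target: str) -> None:
                """Log one user action, e.g. ("visit", url)."""
                self.steps.append((action, target))

            def to_instructions(self) -> str:
                """Turn the recorded trail into human-readable steps."""
                verbs = {"visit": "Go to", "click": "Click", "search": "Search for"}
                return "\n".join(f"{i}. {verbs.get(a, a)} {t}"
                                 for i, (a, t) in enumerate(self.steps, 1))

        r = BrowsingRecorder()
        r.record("visit", "http://www.psu.edu")
        r.record("click", "the 'Research' link")
        r.record("search", "'human-web interaction'")
        print(r.to_instructions())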

  • "Tomorrow's Chips, Naturally"
    IST Results (12/16/04)

    Researchers in the IST-funded POEtic (phylogenesis, ontogenesis, epigenesis) project have recently demonstrated an autonomous computing systems platform that digitally imitates the characteristics of organic tissues. Project coordinator Juan-Manuel Moreno explains that POEtic chips bring together for the first time three self-organizing biology models--development, learning, and evolution--on a single piece of hardware. The first POEtic chips were equipped with a special microprocessor that runs evolutionary algorithms, along with a basic programmable unit. Moreno compares the unit's electronic substrate to an organic tissue composed of cells, each capable of communicating with its neighbors via bi-directional channels and with the environment via sensors and actuators, thus performing a function. The individual cells share the same fundamental structure, but they can assume different functionalities thanks to a three-tiered architecture composed of a genotype plane that describes the chip in much the same way a genome describes an organism; a configuration plane that converts the genome into a configuration string; and a phenotype plane whose processing unit is directly controlled by the configuration string. Moreno says tests demonstrated the viability of hardware that is adaptive and dynamic: "They are adaptive because we can modify the system's basic parameters and structure," he notes. "And they are dynamic because these changes can be done autonomously and in real time." Eighty final POEtic chips will be delivered for testing early next year, and Moreno is confident that the tests will establish the chips' evolutionary and learning capabilities; he concludes that the next project will focus on using complex algorithms to make the chips more dynamic and supportive of analog functionalities.
    Click Here to View Full Article
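
    As a loose software illustration of that three-plane organization (not the POEtic hardware design itself), the sketch below differentiates cells that all share one "genome" into distinct functions via a configuration step; the roles and functions are stand-ins.

        # Conceptual illustration, not the POEtic hardware design: every
        # cell carries the same "genome," and a configuration step maps
        # each cell's position to the function it will express.
        GENOTYPE = ["sense", "process", "actuate"]   # shared by all cells

        def configure(genotype, position):
            # Configuration plane: differentiate cells by position, as
            # identical genomes yield different cell types in an organism.
            return genotype[position % len(genotype)]

        FUNCTIONS = {                     # phenotype plane (stand-ins)
            "sense":   lambda x: x + 1,   # reading a sensor
            "process": lambda x: x * 2,   # local computation
            "actuate": lambda x: -x,      # driving an actuator
        }

        signal = 3
        for pos in range(4):              # cells share a genome but differ
            role = configure(GENOTYPE, pos)
            signal = FUNCTIONS[role](signal)
            print(f"cell {pos}: {role:8s} -> {signal}")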

  • "Thought Powers Computer"
    Seattle Post-Intelligencer (12/16/04); Paulson, Tom

    Scientists in Seattle, Wash., have developed a system that enables 19-year-old epilepsy sufferer Tristan Lundemo to play a video game using mental impulses. University of Washington neurosurgeon Dr. Jeff Ojemann and physics and computer science graduate student Kai Miller are studying how people can learn to operate devices using brainpower. Some 72 electrodes were implanted inside Lundemo's skull, not within the brain tissue itself but on the surface of the brain; the electrodes are wired to a computer, allowing Lundemo to control the movement of a cursor on a screen so that he can play a game of Pong by thought alone. The electrode array is also used to record the patient's epileptic seizures. "It's a two-way learning process," says Miller. "The computer is adapting to him just as he is adapting to the computer." Lundemo mastered the operational principles of the game quickly, and Miller and Ojemann think their approach may have played a role. Ojemann notes that the active regions in Lundemo's brain have shrunk in size as the patient's concentration has become more focused. Miller and Ojemann say the brain-computer interface discipline is a small but growing field whose initial goal is to develop technology that can make paralytics and amputees more independent. The researchers are collaborating on the study with Dr. Gerwin Schalk of New York's Wadsworth Center.
    Click Here to View Full Article
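
    The article does not specify the team's algorithm, but the flavor of such two-way adaptation can be illustrated with a toy linear decoder that fits electrode features to intended cursor velocity via a least-mean-squares update. All features and numbers below are invented.

        # Toy illustration of the adaptation loop; the study's actual
        # method is not described in the article. A linear decoder maps
        # four invented electrode features to cursor velocity and adapts
        # by a least-mean-squares update while the "user" supplies intent.
        import random

        weights = [0.0] * 4
        LEARN_RATE = 0.05

        def decode(features):
            return sum(w * f for w, f in zip(weights, features))

        def adapt(features, intended_velocity):
            """Nudge the weights toward the user's intended movement."""
            error = intended_velocity - decode(features)
            for i, f in enumerate(features):
                weights[i] += LEARN_RATE * error * f

        random.seed(1)
        for trial in range(200):
            features = [random.gauss(0, 1) for _ in range(4)]
            intended = 0.8 * features[0] - 0.3 * features[2]  # simulated intent
            adapt(features, intended)

        # The decoder settles near the pattern the user is producing,
        # approximately [0.8, 0.0, -0.3, 0.0].
        print([round(w, 2) for w in weights])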

  • "World Wide Web Consortium Issues 'Architecture of the World Wide Web, Volume One' as a W3C Recommendation"
    Business Wire (12/15/04)

    The architectural principles of the World Wide Web's operations documented by the World Wide Web Consortium's (W3C) eight-member Technical Architecture Group (TAG) have been published as a W3C Recommendation entitled "Architecture of the World Wide Web, Volume One." TAG co-Chair and W3C director Tim Berners-Lee remarks that all TAG members have contributed to the Web's design, noting that in the Architecture document they demonstrate what Web properties must be retained when developing new technological innovations. "They notice where the current systems don't work well, and as a result show weakness," he says. The TAG compiled Web design issues and discussed them in an open forum, thus recording and elucidating long-enduring, widely deployed principles that future generations of developers will need to refer to in order to ensure the continued evolution of the Internet. The first volume of the Web Architecture promotes cutting-edge technology, validating widely accepted, well-understood precepts whose viability has been practically established. Furthermore, the TAG is monitoring principles currently undergoing testing in rapidly maturing sectors, and subsequent TAG publications will use lessons learned from meshing Web services, the mobile Web, and the Semantic Web to add to Volume One. The W3C Recommendation represents the first time that the Web's general design principles have been coherently outlined in a single document by a group of established authorities, and reviewed in detail by the Web community.

  • "UC Berkeley Researchers Field Testing Low-Altitude Robo-copters"
    UC Berkeley News (12/15/04); Yang, Sarah

    The UC Berkeley Aerial Robot (BEAR) group has successfully test-flown model helicopters that demonstrate cutting-edge autonomous flight and obstacle-avoidance technology. "Our BEAR group is the first to successfully develop a system where autonomous helicopters can detect obstacles, stationary or moving, and recompute their course in real time to reach the original target destination," notes BEAR research engineer David Hyunchul Shim. This breakthrough is a significant step toward unmanned aerial vehicles that can fly safely through urban or rural environments, and the Defense Advanced Research Projects Agency is supporting the BEAR group's work as part of its Unmanned Combat Armed Rotorcraft program for developing low-altitude autonomous flight capabilities. Each model helicopter is equipped with a pair of onboard computers running the QNX operating system, which supports real-time computation; inertial navigation and global positioning systems for maintaining the vehicle's stability; wireless modems and Ethernet systems for communicating with ground-based computers; and laser scanners to survey terrain in three dimensions. The BEAR researchers have also advanced the use of computer vision in flight and landing tests. The group's collision-avoidance technology employs nonlinear model predictive control principles, and BEAR researchers note that the addition of a sensor could help reduce airplane accidents. Potential civilian applications for the research include firefighting, search-and-rescue, power-line inspection, and monitoring for pests in agricultural installations. In addition, the BEAR group has developed and successfully tested battery-powered robotic helicopters, which Shim says could find use in long missions because they can be automatically recharged at solar energy stations.
    Click Here to View Full Article
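
    A toy version of the detect-and-recompute behavior can be sketched with breadth-first search on a grid; this is only a stand-in for BEAR's laser-scanning and nonlinear model predictive control approach.

        # Toy detect-and-recompute demo: plan a grid path with BFS, then
        # replan from the current cell when a new obstacle is detected.
        from collections import deque

        def plan(start, goal, blocked, size=6):
            frontier, came = deque([start]), {start: None}
            while frontier:
                cur = frontier.popleft()
                if cur == goal:                    # walk back to start
                    path = []
                    while cur is not None:
                        path.append(cur)
                        cur = came[cur]
                    return path[::-1]
                x, y = cur
                for nxt in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                    if (all(0 <= c < size for c in nxt)
                            and nxt not in blocked and nxt not in came):
                        came[nxt] = cur
                        frontier.append(nxt)
            return None                            # no route exists

        blocked = set()
        path = plan((0, 0), (5, 5), blocked)
        print("initial plan:", path)
        blocked.add(path[3])                    # sensor spots an obstacle
        print("replanned   :", plan(path[2], (5, 5), blocked))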

  • "Smart Dust Advances in Russia"
    Gateway2Russia (12/16/04); Robinson, Bill; Starkell, Natasha

    Russian engineers are working to develop smart dust technologies in conjunction with global IT partners such as IBM and Hewlett-Packard in an effort to make Russia a leader in wireless sensor networks. Smart dust technology is still in its infancy, but the foundations are emerging for a distributed-sensing and intelligent network that would dramatically enhance otherwise mundane services. When cars and fueling stations are equipped with smart dust sensors, for example, drivers could automatically receive recommendations to change their oil or increase tire pressure when needed. The major issues facing smart dust development are cost, standardization, hardware, software, and business applications. In terms of applications that fulfill marketplace needs, businesses around the world are starting to deploy RFID in their supply chain operations, and smart dust would take these capabilities further and enable new services that affect the customer, such as the "silent commerce" concept in the fueling-station example. "Smart dust is...a distributed computing environment with a lot of artificial intelligence and all these representative agents running around doing your work for you," says Vasiliy Suvorov, a top scientist with IBS Group's Luxoft Labs in Russia. Software plays an increasingly important role in technological development; Suvorov notes that today's hardware is largely software-defined. Nano-crystals could enable revolutionary new applications such as smart dust nodes small enough to be injected into the bloodstream, says Ukrainian Academy of Science scientist Alla Klimovskaya, who is developing such crystals.
    Click Here to View Full Article

  • "A Land of Wasted Web Opportunity"
    Age (AU) (12/14/04); Turner, Adam

    Australia can exert more influence over the development of the Web, rather than merely using the standards that U.S. interests fight over, said Ivan Herman, head of the World Wide Web Consortium (W3C) international office, during a conference at the Distributed Systems Technology Center in Brisbane this week. In principle, any organization can voice its wishes to the W3C. Liz Armstrong, head of the W3C Australian Office and DSTC director of technology transfer and special projects, said that having only five Australian members is a problem. "I think a lot of organizations don't realize they could join the W3C and participate in the development of recommendations for the Web and its future," she noted. Herman said Australia has an opportunity to shape the direction of the semantic Web, in which computers can automatically share data such as stock prices, plane routes, and GPS coordinates. The semantic Web, Herman said, is a matter of improving the structure of data to offer more context about what is being referred to, rather than of incorporating artificial intelligence into the Internet. "It is basically adding metadata to various resources on the Web in an intelligent manner so it can be used by all kinds of programs and [software] agents," he added.
    Click Here to View Full Article
    (Access to this site is free; however, first-time visitors must register.)

  • "When Shots Ring Out, a Listening Device Acts as Witness"
    New York Times (12/16/04) P. E9; Farivar, Cyrus

    Dr. Theodore Berger, director of the Center for Neural Engineering at the University of Southern California, has developed the Smart Sensor Enabled Threat Recognition and Identification (Sentri) system, which combines video cameras, microphones, computers, and software modeled after neural sound processing to identify gunshots, pinpoint their location, and relay the coordinates to a command center. Sentri is produced by Safety Dynamics, which Berger co-founded with Bryan Baker, who serves as the company's CEO and chief scientist. The system's software uses wavelet analysis to split sound into small fragments and match each fragment to established audio-wave patterns, while also analyzing the incoming noise as a whole to ensure that its individual elements are heard in the correct order even if one is masked by other sounds. "The gunshot has a particular unique signature, where you have the sound of the explosion when the firing pin hits the bullet and the sound of the bullet as it expels from the gun," notes Baker. "You've got a little blip, and then a drop, and then a big blip, and with the training we've done now, we've picked it up so that both blips are part of what it has learned." The system employs four microphones, and the difference in the time the sound takes to reach each one allows it to quickly ascertain a gunshot's location; a computer then signals the video camera to zoom in on the location, and this information and the video are piped directly to the command center. Berger says the military has considered using his technology to monitor for noises indicative of security breaches, while the Los Angeles County Sheriff's Department plans to test Sentri devices around the county. Berger envisions an upgraded Sentri system that can identify different sounds simultaneously as well as track shooters visually.
    Click Here to View Full Article
    (Articles published within 7 days can be accessed free of charge on this site. After 7 days, a pay-per-article option is available. First-time visitors will need to register.)
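
    The four-microphone localization step lends itself to a back-of-the-envelope sketch: given the measured differences in arrival time, search for the point whose predicted differences fit best. The layout and source position below are invented, and Sentri's neural processing is far more sophisticated.

        # Back-of-the-envelope source localization: a grid search finds
        # the point whose predicted time differences of arrival best
        # match the measured ones. Layout and source are invented.
        import math

        SPEED_OF_SOUND = 343.0                        # m/s
        MICS = [(0, 0), (40, 0), (0, 40), (40, 40)]   # positions, meters

        def arrival_times(src):
            return [math.dist(src, m) / SPEED_OF_SOUND for m in MICS]

        t = arrival_times((31.0, 12.0))      # a shot at (31, 12)
        tdoa = [ti - t[0] for ti in t]       # measured deltas vs. mic 0

        best, best_err = None, float("inf")
        for x in range(41):
            for y in range(41):
                p = arrival_times((x, y))
                err = sum((pi - p[0] - d) ** 2 for pi, d in zip(p, tdoa))
                if err < best_err:
                    best, best_err = (x, y), err

        print("estimated source:", best)     # (31, 12) on this clean grid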

  • "Race for the Ultimate Car Hacks"
    Technology Review (12/16/04); Delio, Michelle

    The increasing complexity of onboard electronic systems is generating a new breed of car enthusiasts who tweak factory-set controls and add computer modifications. The aftermarket industry generates $25 million in annual revenue selling third-party computer chips and other upgraded components, but serious hackers are not just ratcheting up horsepower or making other simple changes. "Car makers definitely make their share of stupid or annoying user-interface decisions, such as requiring the ignition key be turned to engine-run position before the power windows will work," says a car hacker known as Hobbit, who designed a simple switch to operate windows without the key. Hackers also modify or change system configurations because there is no other way to get a certain functionality, says Damien Stolarz, CEO of in-car computer firm Carbot; conventional mechanic shops do not add videoconferencing or voice-activated instant-messaging capabilities, for instance. The situation is especially limited for owners of U.S. cars, since American manufacturers are reluctant to install features that could distract drivers and lead to lawsuits. European car makers offer factory-configured add-ons such as navigation packages that provide directions on a dashboard LCD screen, and those too can be modified with TV tuners or ports that allow people to plug in video game systems. Other potential modifications are even more ambitious, such as a self-diagnosing car fitted with sensors and a GPS unit so that drivers can track performance at different locations; this data could be used to optimize performance in accordance with a person's driving habits. Even experienced car hackers advise caution when making changes, however, because modifications can void warranties or violate state regulations.
    Click Here to View Full Article

  • "Beyond the World Wide Web"
    Intelligent Enterprise (12/04/04) Vol. 7, No. 18, P. 44; Hudson, Michael J.

    The full impact of the Semantic Web will be similar to that of the original World Wide Web: difficult to explain until it is seen and experienced, writes software architect Michael J. Hudson. The World Wide Web Consortium (W3C) has been working on the Semantic Web since 1998 and so far has produced the Resource Description Framework (RDF) and the Web Ontology Language (OWL) to make it work. The basic idea is to standardize the creation and deployment of metadata so that Web information can be acted upon by computers without human interpretation or narrow operating guidelines. Currently, a foretaste of the Semantic Web can be seen in offerings such as Amazon.com's product-recommendation feature; under the Semantic Web, Web browsers would be able to offer similar contextual recommendations outside the framework of a single Web site. Users who see a vacation description currently have to manually copy the dates and contact information into their calendar and address-book applications, but with the Semantic Web such information would automatically be recognized as dates and phone numbers, for example. Applications could act on Web information with little more prompting than a single mouse click. As the portion of the Internet that uses Semantic Web technologies grows, the Semantic Web will become more powerful, much as the Web grew in functionality and usefulness as more users were added. Not only would the Web offer an unprecedented wealth of information, but it would also create a massive set of inference rules that computers could use to make reasoned decisions. Hudson says that although this functionality is not technically artificial intelligence, it is a starting point for new types of applied machine intelligence.
    Click Here to View Full Article
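
    Hudson's vacation example can be made concrete with machine-readable triples of the kind RDF standardizes; the vocabulary and data below are invented for illustration.

        # Invented vocabulary, RDF-like in spirit: facts published as
        # (subject, predicate, object) triples that a program can act on
        # without scraping or human interpretation.
        triples = [
            ("trip42", "rdf:type",        "ex:Vacation"),
            ("trip42", "ex:startDate",    "2005-01-10"),
            ("trip42", "ex:endDate",      "2005-01-17"),
            ("trip42", "ex:contactPhone", "+1-555-0100"),
        ]

        def objects(subject, predicate):
            return [o for s, p, o in triples
                    if s == subject and p == predicate]

        # An agent now recognizes the dates *as* dates and files them away.
        if "ex:Vacation" in objects("trip42", "rdf:type"):
            start = objects("trip42", "ex:startDate")[0]
            end = objects("trip42", "ex:endDate")[0]
            phone = objects("trip42", "ex:contactPhone")[0]
            print(f"Add to calendar: vacation {start} to {end}")
            print(f"Add to address book: {phone}")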

  • "Supernova Collapse Simulated on a GPU"
    EE Times (12/13/04) No. 1351, P. 53; McCormick, Patrick

    The Los Alamos National Laboratory Advanced Computing Lab is studying techniques that could apply GPU capabilities to hardware-accelerated visualization and to general-purpose processing. Graphics processing technologies have advanced quickly, driven by commodity graphics hardware for the entertainment industry. GPUs today have roughly an order of magnitude more computing power and memory bandwidth than CPUs, and Los Alamos researchers have created a development environment to help scientists leverage these capabilities when creating images from large sets of raw scientific data. Visualization plays an important role as scientific data sets become larger, because the full implications of results are often difficult to discern in purely numerical studies. Los Alamos researchers used their Scout development environment to harness an Nvidia Quadro 3400 card and improve computational rates 20-fold over a 3-GHz Intel Xeon processor running without streaming SIMD extensions; the hardware-accelerated system was used to model results from a Terascale Supernova Initiative simulation. Although GPUs can increase visualization and general-purpose computation capabilities, serious obstacles remain, such as the limited use of PCI Express for moving data between GPU and CPU; a lack of floating-point precision in GPUs, which seriously affects particular calculations; small memory sizes; and a restrictive programming model for GPUs, which the Scout language addresses by masking many of these problems from the end user. The Los Alamos Advanced Computing Lab is also investigating clustered GPUs, in which hundreds of these processors would work in parallel. Research on GPUs presages future CPU architectures, such as multicores, multithreading, and parallelism; GPU cores are likely to be found in future CPUs, or GPUs could be used for more general-purpose computing.
    Click Here to View Full Article
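
    The data-movement obstacle is easy to quantify with rough arithmetic; every figure below is an illustrative assumption, not a Los Alamos measurement.

        # Illustrative assumptions only, not Los Alamos measurements: a
        # 20x GPU compute speedup is realized only if bus transfers do
        # not swamp it.
        DATA_GB      = 0.5     # working set shipped each way
        BUS_GB_PER_S = 1.0     # assumed effective bus bandwidth
        CPU_SECONDS  = 20.0    # assumed CPU time for the pass
        GPU_SPEEDUP  = 20.0    # compute-only advantage

        transfer = 2 * DATA_GB / BUS_GB_PER_S        # upload + readback
        gpu_total = CPU_SECONDS / GPU_SPEEDUP + transfer
        print(f"CPU: {CPU_SECONDS:.1f}s  GPU incl. transfer: {gpu_total:.1f}s")
        print(f"effective speedup: {CPU_SECONDS / gpu_total:.1f}x")
        # Here: 20.0s vs. 2.0s, so the 20x chip delivers only 10x end to end.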

  • "Saving Stan"
    EDN Magazine (12/07/04) Vol. 49, No. 25, P. 29; Lynch, Joan

    Medical Education Technologies' (METI) Stan D. Ardman, or Stan, is a highly realistic, interactive, computer-controlled mannequin designed to simulate numerous disease symptoms and medical conditions for the benefit of doctors in training. Stan can blink its eyes, dilate its pupils, bleed, inflate and deflate its chest, vomit, produce a pulse and an audible heartbeat, and perform other convincing physiological functions with the help of three computers--a G4 Mac that serves as a control system, a Linux unit within the rack, and a simulation driver developed in-house. METI chief technology officer John J. Anton says a medical scenario for Stan starts out with mathematical equations that mirror the respiratory and cardiovascular systems, after which an equation for a drug is worked out and validated against the medical literature; the effects of the drug on Stan's respiratory and cardiovascular systems then accurately mimic those in humans. "The equations are coded in C++ and positioned as code to be instantiated into our [Unix-based] software," Anton notes. Data for emulating about 35 unique patient profiles is preloaded by METI onto the G4, which the instructor uses to run the scenario; Stan's responses to treatment can be controlled either through the main computer or through a wireless handheld. The instructor employs the Scenario Editor application to view and tweak over 70 scripted patient settings, and can layer common medical problems on top of more critical ones. In addition, the instructor can change the script on the fly so students learn how to respond to sudden events in medical emergencies. METI patient simulators have been used by the U.S. military in Iraq and Afghanistan, and Stan was recently reprogrammed to simulate medical conditions in zero gravity as part of a training program for future astronauts.
    Click Here to View Full Article
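
    METI codes its validated equations in C++; as a drastically simplified illustration of the general approach, the sketch below steps a one-compartment drug model forward in time and couples it to heart rate. All constants are invented.

        # Drastically simplified, all constants invented: one-compartment
        # drug elimination, dC/dt = -k * C, stepped with Euler's method
        # and coupled to a heart-rate response.
        DOSE = 10.0          # mg given at t = 0
        K_ELIM = 0.3         # 1/min, assumed elimination rate
        BASE_HR = 70.0       # beats/min at rest
        SENSITIVITY = 2.5    # assumed HR rise per unit concentration

        dt, conc = 0.1, DOSE
        for step in range(201):                 # simulate 20 minutes
            if step % 50 == 0:                  # report every 5 minutes
                hr = BASE_HR + SENSITIVITY * conc
                print(f"t={step * dt:4.1f} min  conc={conc:5.2f}  HR={hr:5.1f}")
            conc += dt * (-K_ELIM * conc)       # Euler integration step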

  • "Colleges Face Rising Costs for Computer Security"
    Chronicle of Higher Education (12/17/04) Vol. 51, No. 17, P. A1; Foster, Andrea L.

    A Chronicle of Higher Education survey indicates an increase in information security spending levels over the last two years for more than 50 percent of U.S. colleges and universities that responded to the poll. Almost all respondents reported that their institutions were targeted by worms and viruses in the past year: 73 percent reported an acceleration in cyberattacks; 53 percent said attempts were made in the past year to cripple their campus networks; and 41 percent said their systems were successfully breached. Colleges are adopting a more strategic approach to computer security, examples of which include hiring information security officers (a tactic adopted by 42 percent of the survey respondents), assigning staff members to the issue full-time, educating users, and formalizing campus network security plans (which 46 percent of participating institutions have done). All of the colleges reported using antivirus software, 98 percent said they employ firewalls, and 96 percent indicated they have deployed spam filters. Colleges have a tendency to centralize the security of systems distributed across different departments while subcontracting other technology chores, and tight budgets have spurred some institutions to simply shift staff and capital over to security instead of raising IT budgets. However, the network access restrictions colleges are implementing, such as ejecting network users with vulnerable systems, can be harsh for scholars who rely on collaboration and fast access to data. One of the factors driving the hiring of information security officers is federal legislation such as the Gramm-Leach-Bliley Act, which requires the appointment of such personnel by colleges and other institutions in order to ensure compliance. The survey registers more active campus information security initiatives among fully-fledged universities than among two- and four-year colleges, which computer experts attribute to the greater threat universities face if sensitive data is compromised.

  • "They've Got Your Number..."
    Wired (12/04) Vol. 12, No. 12, P. 92; Newitz, Annalee

    Bluetooth-enabled mobile phones and Voice over IP (VoIP) systems are vulnerable to hacking, a practice that has earned its practitioners the moniker "phreakers" (phone hackers). Despite demonstrations of phreaking techniques that would allow phones and the sensitive data they carry to be compromised, manufacturers and phone service providers are dismissing them as minor threats and continuing to sell products whose vulnerability is well established. Bunker CSO Adam Laurie notes that it is relatively easy for a phreaker to sniff out Bluetooth signals from nearby phones and quickly determine their level of vulnerability with a "bluesnarfing" program, after which the phreaker can infect and hijack target phones, which in turn broadcast the virus to other susceptible Bluetooth phones within their transmission range; if the infected phones are used to make micropayments, the malware could, for instance, commandeer the phones' SMS system and employ reverse SMS to steal money. Another technique, bluebugging, can be used to take over Bluetooth-equipped mobile phones and have them call other phones, allowing eavesdroppers to listen in on sensitive conversations. Cell phone companies contend that such hacks are of limited concern because Bluetooth only works over short distances, so hackers must be close to their targets to be effective; but at least one piece of equipment, the Bluetooth "sniper," circumvents this limitation by sending and receiving signals at more than 1,000 yards. Bluebugging and bluesnarfing take advantage of Bluetooth users' tendency to leave Bluetooth on all the time. Internet telephony, meanwhile, is an inviting target for hacking, making it relatively easy for hackers to spoof caller ID for the purpose of identity theft. Another method has allowed a phreaker community to set up a free VoIP service atop an open-source private branch exchange system, which one phreaker says gives people the luxury of running "chemistry experiments" on the phone system.
    Click Here to View Full Article

  • "Patent Prescription"
    IEEE Spectrum (12/04) Vol. 41, No. 12, P. 38; Jaffe, Adam B.; Lerner, Josh

    Brandeis University professor Adam Jaffe and Harvard Business School professor Josh Lerner write that Congress' establishment of the U.S. Court of Appeals for the Federal Circuit in 1982, and its 1992 mandate that the U.S. Patent and Trademark Office (PTO) fund itself through application and maintenance fees, have made it simpler to acquire patents, enforce them against others, and win major financial awards from such enforcement, while simultaneously making it harder for alleged patent infringers to challenge such claims. Observers such as the National Academy of Sciences and the Federal Trade Commission contend that the costs of obtaining, promoting, and defending patents are rising, while the increasing volume of patent applications threatens to overwhelm examiners and raise the risk of sloppy reviews, massive backlogs, or both. Patent experts concur that the patent system must be reformed so as to improve patent quality, lower the uncertainty surrounding the innovation process, and rein in the costs of patent application, maintenance, and litigation. Jaffe and Lerner present three proposals for reforming the patent system: the creation of incentives and opportunities for parties to challenge the novelty and nonobviousness of an invention before the PTO rules on patentability; the institution of a multiple-level review process that avoids spending money on detailed analysis of trivial patents while ensuring rigorous examination of high-stakes ones; and the replacement of juries, in cases where claims of patent invalidity rest on the existence of prior art, with judges who could consult outside experts or "special masters." The first two proposals would ensure that most patents, which rarely become significant, receive only a cursory examination, while the most potentially important patents would be reviewed thoroughly. The third proposal addresses the fact that mistakes are inevitable in even the best-designed PTO framework.
    Click Here to View Full Article


 