Volume 8, Issue 887: January 9, 2006
- "Which Is the Most Innovative Country in Europe?"
IDG News Service (01/06/06); Sayer, Peter
In a recent survey, Eurostat used several methods to track European leadership in IT, with Malta coming in first in the proportion of export revenue derived from technology, Germany garnering the most patents, and the Scandinavian countries boasting the most sophisticated telecommunications networks. Italy is home to the most high-tech manufacturers with 30,651, though Eurostat counts makers of motor vehicles, bulk chemicals, and machinery in that category. Germany has only about half as many such manufacturing companies as Italy, but its firms tend to be larger and more productive. Germany also devotes a larger percentage of its manufacturing companies to technology than most European nations, followed closely by France and Britain. The decline of heavy manufacturing has fueled British interest in high technology, though the nation is oriented more toward services than manufacturing. European businesses sponsored 54 percent of the continent's research and development spending, compared with 63 percent in the United States and 75 percent in Japan. The European Commission hopes that European spending on research and development will reach 3 percent of GDP by 2010, with industry financing a full two-thirds of it. Scandinavian countries led the way in proportionate research and development spending in 2004, with Sweden devoting 3.74 percent of its GDP to R&D, followed by Finland at 3.51 percent and Denmark at 2.63 percent. The number of patents awarded may be a more accurate measure of innovation than research spending. From 2000 to 2004, the European Patent Office awarded Germany the most patents with 714, while France was a distant second with 466, though the United States and Japan each garnered more than 1,000 over the same period.
- "U.S. Engineers Hold Their Own"
Philadelphia Inquirer (01/08/06) P. C5; Johnson, Kristina M.
A recent study at Duke University found that alarmist comparisons between the 70,000 engineering graduates produced annually in the United States and the 350,000 graduates in India and 644,000 in China are misleading, writes Kristina M. Johnson, dean of the Pratt School of Engineering at Duke University. Duke executive-in-residence Vivek Wadhwa and sociology professor Gary Gereffi led the study, which scrutinized the metrics behind each figure and found that the Indian count includes graduates who completed only two- or three-year programs of study. When equivalent criteria are applied, the Duke team found that the United States graduates roughly the same number of engineers as India, with only a third of the population. The researchers also found that the figure for China was skewed, though they were unable to pin down exact numbers. A basic technical education is not enough to create a superior engineer in the modern job market, notes Cisco's John Chambers, adding that candidates must possess multidisciplinary knowledge and communication skills to excel in today's workforce. The figure of 70,000 U.S. graduates includes only those who have completed four-year, calculus-based programs. Math and science education in secondary school is still insufficient: U.S. students rank among the top five internationally in fourth grade but fall to 21st by the time they graduate. Not requiring math in each of the four years of high school is a major contributor to the looming shortage of engineers, which, while not yet a crisis, threatens to undercut the United States' global leadership in development and innovation if left unaddressed, Johnson says.
- "Debate Looms for GPL 3 Draft"
eWeek (01/08/06); Galli, Peter
Richard Stallman and Eben Moglen will unveil the first draft of the third version of the GNU General Public License (GPL) next week at MIT; it will be the license's first update in 15 years. The new draft of the GPL, which governs Linux and many other open-source projects, is expected to address emerging issues such as the licensing of intellectual property, the handling of software delivered over a network, and trusted computing. Copyleft, the mechanism for keeping all versions of a program free, is expected to be one of the many controversial provisions in the new draft. "I do not believe that we will reach a consensus on this front, so I believe the license will have to accommodate options as to the question of Web services, but this must be squared with the ideological pursuit of freedom," said Moglen. He also said the next version of the Lesser GPL will be released shortly after GPL 3. The developers want to have the new GPL in place by the beginning of next year, at roughly the same time as the first shipment of Windows Vista.
- "Electronic Eye Grows Wider in Britain"
Washington Post (01/07/06) P. A1; Jordan, Mary
Britain has expanded its cutting-edge surveillance system with a national database that compiles data from an ever-growing number of traffic cameras and can determine whether a car has been stolen, is uninsured, or was near a crime scene. The technology, already employed on the M25, the motorway that circles London, will use thousands of cameras set up on poles or in traveling police vans, effectively eliminating the possibility of a criminal traveling on the road without being recorded. While British citizens have thus far accepted the nation's extensive video-monitoring initiatives, privacy advocates are protesting the measure as a giant leap toward a Big Brother dystopia. The cameras are supported by computers capable of tracking 3,600 license plates per hour. Police van cameras recently alerted officers to a 1998 Volvo with no insurance that was found to contain $180,000 worth of heroin, while another vehicle flagged near a crime scene had an abducted child in the trunk. With more than 4 million closed-circuit cameras and a $1 billion-a-year industry, Britain's surveillance system is the most expansive in the world; a typical Londoner is recorded roughly 300 times per day. The current system can store traffic photos for up to two years, though it may soon be able to store images for five years.
- "Using Color Codes to Browse the Web"
Newsweek International (01/16/06); Caryl, Christian
The bar code has inspired researchers to devise a color version that could ultimately be used to whisk people away to a Web site for more information on companies and their products and services. Several years ago, Korean computer scientists developed a color code that could be embedded in company logos or other graphic designs, and now a number of companies are working to give consumers immediate access to Web-based information by way of such codes. Colorzip Media in South Korea has already seen its technology used to let television viewers vote on the outcome of a show. Scanners can read color codes more easily than bar codes, and from longer distances and at poorer resolutions. Moreover, a Colorzip code is designed to serve as an index to content stored on a server, whereas bar codes must hold the data themselves. Colorzip plans to give mobile phone users the ability to point a camera lens at anything bearing a color code, such as a print ad in a newspaper, and receive the latest stock prices and news on the company on their display. There is still some concern about whether the technology is suitable for financial transactions, because the ability to copy or photograph the codes makes them vulnerable to hacking.
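The key technical point is that the color code carries only a short identifier, and the actual content is resolved on a server. As a minimal sketch of that lookup pattern, assuming a hypothetical directory table and an already-decoded code ID (Colorzip's real decoding and directory service are proprietary and not reproduced here):

    # Hypothetical sketch: a scanned color code decodes to a short ID,
    # and a directory server maps that ID to the actual content URL.
    CODE_DIRECTORY = {
        "A1B2": "https://example.com/company/stock-and-news",
        "C3D4": "https://example.com/show/vote",
    }

    def resolve_color_code(code_id: str) -> str:
        """Return the content URL registered for a decoded color-code ID."""
        url = CODE_DIRECTORY.get(code_id)
        if url is None:
            raise KeyError(f"Unknown code: {code_id}")
        return url

    if __name__ == "__main__":
        print(resolve_color_code("A1B2"))

Because the code is only an index, the content behind it can be updated on the server without reprinting the code itself.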
- "The Translator's Blues"
Slate (01/09/06); Browner, Jesse
Machine translation has grown into an $8 billion industry, and the federal government even relies on the technology to defend the country. However, context remains the major problem with computer translation software: the technology would need a knowledge base akin to the human brain's in order to understand, for example, that "the con is in the pen" refers to a criminal who is in jail. In a test of eight translation programs, ranging from basic free versions to expensive professional software, only one program, which is used by the Department of Defense and law enforcement, was able to correctly translate a simple sentence from the Vatican Web site. Available machine translation technology currently falls into three categories: basic machine translation programs; memory-based systems that draw on a broad database of already-translated sentences and phrases; and statistical/cryptographic systems that examine other texts to calculate probabilities and then translate based on the similarities. Professional human translators do not consider statistical/cryptographic systems to be as accurate as they are billed to be. Although the technology is not meant to replace human translators, Mike Collins of MIT believes "machine learning" could take it to the next level. Experts say the holy grail of machine translation is fully automatic, high-quality translation (FAHQT), but for now users must accept what is known as "gisting"--roughly 80 percent accuracy that conveys the general meaning of the text.
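To give a very rough flavor of the statistical approach, the sketch below estimates word-translation probabilities from a tiny, made-up parallel corpus by simple co-occurrence counting; real statistical systems use vastly larger corpora and more sophisticated alignment models, so treat this purely as an illustration of "calculating probabilities from other texts," not as any vendor's method.

    from collections import defaultdict

    # Tiny made-up English-Spanish parallel corpus (illustrative only).
    corpus = [
        ("the house", "la casa"),
        ("the book", "el libro"),
        ("a house", "una casa"),
    ]

    # Count how often each foreign word appears in sentences paired with
    # each English word, then normalize the counts into rough probabilities.
    cooccur = defaultdict(lambda: defaultdict(int))
    for en_sent, es_sent in corpus:
        for en_word in en_sent.split():
            for es_word in es_sent.split():
                cooccur[en_word][es_word] += 1

    def translation_probs(en_word):
        counts = cooccur[en_word]
        total = sum(counts.values())
        return {es: n / total for es, n in counts.items()}

    print(translation_probs("house"))  # "casa" comes out most probable

Even this toy example shows why such systems produce "gisting" rather than polished prose: they rank likely word choices but have no notion of context.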
- "Government Web Sites Follow Visitors' Movements"
CNet (01/05/06); McCullagh, Declan; Broache, Anne
The practice of tracking visitors to U.S. government Web sites has come under scrutiny recently with the revelation that the Pentagon, Air Force, and Treasury Department are still using Web bugs or cookies to monitor visitors, despite federal rules that restrict the practice. More recently, the National Security Agency was caught using permanent cookies to monitor visitors to its site, and stopped when questioned by the Associated Press. "It's evidence that privacy is not being taken seriously," says Peter Swire, a law professor at Ohio State University. "The guidance is very clear." A 2003 government directive plainly states that "agencies are prohibited from using" Web bugs or cookies to track Web visitors; however, not all monitoring of Web visitors is banned. There is an exception for federal agencies that have a "compelling need," disclose the tracking, and obtain approval from the agency head. The directive does not apply to state government Web sites, court Web sites, or sites run by members of Congress. Government agencies' failure to adhere to the rules is nothing new: Back in 2001, the Defense Department's Inspector General reviewed the agency's 400 Web sites and uncovered 128 with "persistent" cookies. The White House Office of Management and Budget first issued guidelines for federal Web sites under the Clinton administration in June 1999; the 2003 directive revised them, keeping the restriction on permanent cookies but allowing temporary ones that last only as long as the browser window stays open.
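The temporary-versus-permanent distinction the directive draws comes down to whether a cookie carries an expiration attribute. As a minimal illustration (not any agency's actual code), the sketch below builds both kinds of Set-Cookie header with Python's standard library; a cookie with no Expires/Max-Age vanishes when the browser session ends, while one with a far-future expiry persists across visits.

    from http.cookies import SimpleCookie

    # Session (temporary) cookie: no Expires/Max-Age attribute, so the
    # browser discards it when the window or session is closed.
    session = SimpleCookie()
    session["visit_id"] = "abc123"

    # Persistent cookie: Max-Age keeps it on disk across visits, which is
    # the kind of long-lived tracking the 2003 directive restricts.
    persistent = SimpleCookie()
    persistent["visit_id"] = "abc123"
    persistent["visit_id"]["max-age"] = 60 * 60 * 24 * 365  # one year

    print(session.output())     # Set-Cookie: visit_id=abc123
    print(persistent.output())  # Set-Cookie: visit_id=abc123; Max-Age=31536000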
- "Over the Holidays 50 Years Ago, Two Scientists Hatched Artificial Intelligence"
Pittsburgh Post-Gazette (01/02/06); Spice, Byron
Carnegie Tech scientists Herbert Simon and Allen Newell conceived the first artificial intelligence program over the Christmas holiday 50 years ago. Eight months later, they ran that program, the Logic Theorist, on Rand's JOHNNIAC computer, revolutionizing the way people thought about what computers could do. "For Newell and Simon to make this giant room of vacuum tubes be thought of as something with intelligence was a pretty big leap," said Randal Bryant, dean of Carnegie Mellon's School of Computer Science. While much of their original research has since been eclipsed, Bryant maintains that their work paved the way for today's machine translation, speech recognition, and robotics. Newell brought a mathematical background to computer development, having worked on simulations of radar maps before teaming up with Simon. That experience sparked the idea of using computers to manipulate symbols, which eventually led the pair to develop a chess program relying on heuristics, the rules of thumb that people invoke to solve problems such as those in geometry. The notion that information and knowledge could be written into computer code as symbols became one of two prevailing schools of thought in artificial intelligence, competing with the formal-logic approach. In the past decade, statistical tools have propelled artificial intelligence, ultimately leading to a computer that defeated the world's top chess player in 1997, a feat Simon had predicted four decades earlier.
Allen Newell and Herbert A. Simon received ACM's A.M. Turing Award in 1975.
http://www.acm.org/awards/citations/newell.html
http://www.acm.org/awards/citations/simon.html
- "Sony Snafu Brings DRM to the Fore"
SD Times (01/01/06) No. 141, P. 6; Handy, Alex
The recent discovery that millions of Sony CDs contained anti-piracy software that exposes PCs to Trojans and other malware has sent shockwaves through an entertainment industry grappling with how to balance security with digital rights management (DRM). The Sony vulnerability, the Extended Copy Protection software developed by First 4 Internet to prevent content from being ripped and converted into a user's iTunes library, was quickly found to collect information from users and relay it to Sony's headquarters. The vulnerability had infected half a million computers by November, causing widespread concern among businesses and Justice Department officials and prompting a lawsuit from the state of Texas. Sun Microsystems' Glenn Edens is developing an open source DRM system, known as DReaM, that he says offers security comparable to proprietary applications. Some analysts caution entertainment companies to be forthright about their DRM systems, a belief shared by Sun, which has already publicly released two open source projects with another two on the way. Due out next spring, DReaM has already attracted the interest of many open source developers eager to contribute. Science fiction author Cory Doctorow believes that the fundamentals of cryptography make DRM inherently flawed, since every DRM system hands attackers the ciphertext, the cipher, and the key required to descramble the content. Edens believes that while it may not be perfect, DReaM can still work, provided it does not restrict normal use of the CD or DVD it is designed to protect.
- "Better Than People"
Economist (01/06/06) Vol. 377, No. 8458, P. 58
Japanese scientists continue to pursue research on human interfaces in an effort to make robots that interact more easily with people or have human features. A nanny robot was on display at last year's world exposition in Aichi prefecture, and Sony and Honda continue to make progress on their robots, QRIO and ASIMO, respectively. For Japan, the "service robot" is seen as a way to cope with its peaking and aging population, freeing young people to fill science, business, and other creative or knowledge-intensive jobs. The market for such robots could grow to $10 billion within a decade, according to a government report released in May. The Japanese have embraced the idea of having robots care for the sick, pick up trash, guard homes and offices, and give directions on the street, and leaders believe rolling out humanoid robots makes more sense than bringing in foreign workers. However, the Japanese may be more comfortable around robots because they have difficulty dealing with other people, especially strangers. They have trouble communicating with others "because they always have to think about what the other person is feeling and how what they say will affect the other person," says Karl MacDorman, a researcher at Osaka University. What is more, Japan does not appear concerned about the impact that humanoid robots will have on society.
- "Skills That Will Matter"
InformationWeek (01/02/06) No. 1070, P. 53; McGee, Marianne Kolbasuk
A new report from the Society for Information Management is another indication of how important business and industry knowledge, along with communication and negotiating skills, have become for IT workers. Of the 100 IT managers who identified the most important talents to keep in-house through 2008, more than 77 percent cited project planning, budgeting, and scheduling; 75 percent listed functional-area process knowledge; and 71 percent noted company-specific knowledge. Three technical skills topped 60 percent: systems analysis at 70 percent, systems design at 67 percent, and IT architecture and standards at 61 percent. And 55 percent of IT executives mentioned security skills. "The higher you go in the IT organization, the more you need to know about business," says Stephen Pickett, the new SIM president and CIO of transportation company Penske, noting that firms rely increasingly on off-the-shelf software and outsourced IT. Pickett describes IT as an umbrella that allows someone with IT skills to see more of a company. The survey also reveals that more employers are starting to demand business-technology professionals with "customer-facing, client-facing" skills and understanding.
- "Standards on the Way for Encrypting Data on Tape, Disk"
Network World (12/15/05); Connor, Deni
The IEEE has drafted new standards for protecting data on disk or tape that may be used in products as soon as this year, according to industry experts who say the standard technology could be significant in defending organizations and their clients from the dangers of lost or stolen disks and tapes. The proposed standards for data encryption on disk and tape are IEEE P1619 and P1619.1, the Standard Architecture for Encrypted Shared Storage Media. "For businesses in regulated industries or that store personal financial information, encryption may very well be a requirement," says Forrester Research's Stephanie Balaouras. "For other businesses it's a matter of managing risk, and encryption is one of many options that business must consider." The proposed standards deal with encrypting data at rest on disk or tape, in contrast to protocols such as IPSec, Secure Sockets Layer (SSL), and Secure Shell (SSH), which encrypt data in transit. The IEEE's Security in Storage Working Group is refining the standards and expects them to be approved in the coming months. The standards define three algorithms and a key-management technique intended to guarantee the compatibility and interoperability of different storage formats across different types of storage gear. The specification proposes the Liskov, Rivest, and Wagner Advanced Encryption Standard (LRW-AES) cryptographic algorithm for disk encryption, and the National Institute of Standards and Technology's (NIST) AES Galois/Counter Mode (AES-GCM) and AES Counter with CBC-MAC Mode (AES-CCM) standards for tape encryption. Cisco, HP, IBM, McData, and the U.S. Army are just a few of the members of the standards working group.
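For a rough sense of what AES-GCM encryption of data at rest looks like in practice, here is a minimal sketch using the AESGCM primitive from the widely used Python cryptography package; the P1619.1 record formats and key-management scheme are defined by the standard itself and are not reproduced here, so this is only an illustration of the underlying algorithm.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Generate a 256-bit key; in a real deployment the key would come from
    # the key-management technique the standard specifies.
    key = AESGCM.generate_key(bit_length=256)
    aesgcm = AESGCM(key)

    record = b"backup record destined for tape"
    nonce = os.urandom(12)  # 96-bit nonce, unique per record

    # AES-GCM both encrypts and authenticates the record.
    ciphertext = aesgcm.encrypt(nonce, record, None)

    # Decryption fails loudly if the ciphertext was tampered with.
    assert aesgcm.decrypt(nonce, ciphertext, None) == record

The authentication built into GCM is part of the appeal for storage: a corrupted or altered tape record is detected at decryption time rather than silently returned.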
- "Where's the Real Bottleneck in Scientific Computing?"
American Scientist (01/06) Vol. 94, No. 1, P. 5; Wilson, Gregory V.
While scientific computing can help solve a broad range of complex problems, many scientists have never been taught how to program efficiently, and as a result they labor under a "computational illiteracy." Ignorance of fundamental tools, such as a version control system that tracks the changes made to any file so that different versions can be compared, is a critical impediment to unlocking the potential of today's advanced processing power. Many scientists have also grown skeptical of the revolving door of programming trends and have come to regard computer science as fickle and given to fads that offer little practical benefit. Many are also reluctant to wade into new languages because of a lack of uniformity in such elements as the regular expressions used to retrieve data from text files. Increasingly, however, the limiting factor in scientific research is how quickly scientists can translate their ideas into working code. Once they get over their initial skepticism, most scientists readily embrace techniques such as version control, systematic testing, and the automation of repetitive tasks for the time they save when applied correctly. To instill this mentality throughout the scientific community, journals should subject computational science to the same rigorous standards that apply to work performed in the laboratory. At the same time, there should be more journals devoted to publishing articles about the development and functionality of scientific software, so that the community can capitalize on faster chips and more elaborate algorithms.
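As a small, generic illustration of the "systematic testing" habit the article advocates (not code from the article itself), a scientist might pair a numerical routine with an automated check that can be rerun after every change:

    import math

    def trapezoid(f, a, b, n=1000):
        """Approximate the integral of f over [a, b] with the trapezoid rule."""
        h = (b - a) / n
        total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
        return total * h

    def test_trapezoid_sine():
        # The integral of sin(x) from 0 to pi is exactly 2; the rule should
        # come very close, and this check reruns automatically after edits.
        assert abs(trapezoid(math.sin, 0.0, math.pi) - 2.0) < 1e-5

    if __name__ == "__main__":
        test_trapezoid_sine()
        print("all tests passed")

Kept under version control alongside the code it exercises, a test like this catches regressions the moment they are introduced instead of weeks later in a published result.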
- "How Click Fraud Could Swallow the Internet"
Wired (01/06) Vol. 14, No. 1, P. 138; Mann, Charles C.
The lucrative nature of pay-per-click (PPC) advertising, in which search engines charge advertisers a fee every time a surfer clicks on their links, has spawned an insidious form of exploitation called click fraud, in which scammers click the ads the search engine sends to their own Web pages. Indeed, observers such as search marketing consultant Joseph Holcomb believe click fraud threatens the entire PPC industry. To shore up customer confidence in the face of click fraud's growing sophistication, search engines may have to add more transparency. Click farms run by unsavory companies in India employ people who click on ads around the clock, while companies targeting rivals may practice "impression fraud," repeatedly reloading a search engine page hosting a competitor's ad so that the ad might be eliminated. Click fraud software that can supposedly mask clicks' point of origin is available for download on the Internet, while scammers are setting up bogus blogs that automatically produce content by incessantly duplicating snippets from other Web sites and inserting popular keywords, then signing up the hodgepodge as a search engine affiliate. Owners of zombie networks have also recently started practicing click fraud by programming hijacked machines to generate clicks at random from all over the world, which complicates detection. Symantec security architect Elias Levy notes that the damage to the PPC industry would be far worse if scammers targeted the entire PPC system rather than individual advertisers. Most companies that promote anti-click-fraud services simply outsource the job of studying internal logs for signs of duplicity, and PPC inventor Bill Gross thinks advertisers will eventually move from PPC to cost-per-action, a model in which advertisers are charged only when a click leads to a purchase or other specified outcome.
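The log analysis mentioned above typically starts with simple heuristics. As an illustrative sketch only (real fraud-detection pipelines are far more elaborate and also weigh timing, geography, and conversion data), the code below flags IP addresses that account for a suspiciously large number of clicks on the same ad:

    from collections import Counter

    # Each record: (ip_address, ad_id). A real system would parse these
    # from server logs along with timestamps, user agents, and referrers.
    clicks = [
        ("203.0.113.5", "ad-42"), ("203.0.113.5", "ad-42"),
        ("203.0.113.5", "ad-42"), ("198.51.100.7", "ad-42"),
        ("192.0.2.9", "ad-17"),
    ]

    def suspicious_sources(click_log, threshold=3):
        """Return (ip, ad) pairs whose click count meets the threshold."""
        counts = Counter(click_log)
        return [pair for pair, n in counts.items() if n >= threshold]

    print(suspicious_sources(clicks))  # [('203.0.113.5', 'ad-42')]

Zombie networks defeat exactly this kind of check by spreading clicks across thousands of hijacked machines, which is why the article treats them as an escalation.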
- "Are Authors and Publishers Getting Scroogled?"
Information Today (12/05) Vol. 22, No. 11, P. 1; Kupferschmid, Keith
The controversy surrounding the Google Print Library Project culminated in two suits, filed on behalf of five publishers and the Authors Guild, claiming that Google violates copyright law by scanning millions of books to make them freely accessible online without permission from the authors or publishers. Other digitization projects have proceeded with considerably less acrimony, such as the broadly supported Open Content Alliance and Microsoft's partnership with the British Library, which will make 100,000 books accessible online, provided that authors and publishers have consented to the scanning of copyrighted works. The U.S. Library of Congress Digital Preservation Program, Project Gutenberg, and Carnegie Mellon's Million Book Project have all taken a similarly collaborative approach. Google, by contrast, has undertaken its Print Library Project in partnership with the libraries of the University of Michigan, Harvard University, Stanford University, and Oxford University, as well as the New York Public Library, without obtaining the consent of publishers and authors, unlike its noncontroversial Print Publisher Program. Many details of the program remain unclear, such as how much of a book will be available and what security measures will be in place. Google maintains that its scanning activities are protected by fair use, though the sheer volume and variety of the materials in question makes fair use difficult to prove. Evaluating fair use requires weighing a host of factors, including the purpose of the use, the nature of the work, how much of the work is used, and the effect on the market for the work. Google also claims it has implied permission to digitize content because content holders who do not wish their works to be reproduced must say so explicitly. Though unlikely to prevail, if that argument were to hold up, a whole host of material, such as personal letters and photographs, could become fair game for open online distribution.
- "Teaming Up to Tackle Risk"
Optimize (12/05) No. 50, P. 26; Schmidt, Howard A.
The increased likelihood of a blended assault against organizations, with multidimensional consequences, calls for a collaborative risk-management strategy that is synchronized, coordinated, and integrated by an interdisciplinary council, according to R&H Security Consultants President and former US-CERT chief security strategist Howard Schmidt. The CIO and CISO/CSO should play major roles in the council, with the goal of cultivating a sense of shared responsibility for security and risk among all critical decision-makers. Schmidt writes that each session of the council should yield collaborative tasks if the group is to have any chance of producing a plan that effectively reduces or mitigates risk, and these joint tasks must be prioritized. Governance, assessment, and crisis risk management are the three core areas on which the council should focus. Schmidt says the governance domain needs to be both action- and consensus-oriented: the first focus holds each member responsible for contributing and setting up practical policies that uphold the organizational mission, while the second is designed to avoid unproductive bureaucratic drudgery. Risk-management assessment is an unending and perpetually fluctuating job, and Schmidt recommends that each critical discipline owner present a list of risk and security issues along with how each issue affects his or her performance; the accumulated list should then be discussed, examined, and consensually prioritized. In the area of operational and crisis risk management, Schmidt explains that existing operational units should be responsible for putting the council's policies into effect, which underscores the need for the pertinent operational executives to participate in policy development and risk assessment.
- "Cyberinfrastructure and the Future of Collaborative Work"
Issues in Science and Technology (12/05) Vol. 22, No. 1, P. 43; Ellisman, Mark H.
New forms of cyberinfrastructure-enabled collaborative research will emerge as the cost of basic tools for cutting-edge research grows beyond the means of many institutions, as more scientific and engineering fields become data- and computation-intensive, and as online sharing of data and computing power becomes easier and less expensive. The momentum of collaborative research in turn fuels the development of innovative IT tools and capabilities. Harnessing these tools productively and establishing an institutional and policy framework to support cyberinfrastructure-based research collaboration is a formidable challenge, but initiatives such as the National Institutes of Health's Biomedical Informatics Research Network (BIRN) program demonstrate the potential of this approach. BIRN is building a data-sharing environment that presents biological information stored at far-flung locations as a single, harmonized database, and the effort is quickly generating tools and technologies that move data from practically any lab's research program into the BIRN data system. Cyberinfrastructure-supported environments could make basic scientific research more cost-effective and less of a burden for government funding agencies; reduce costs stemming from the duplication of facilities; increase the effectiveness of interdisciplinary teams tackling long-term research goals; and let researchers at smaller institutions participate in cutting-edge projects. Fulfilling cyberinfrastructure's potential depends on solving a number of problems: data must be integrated on the fly in a visual and understandable manner, the adoption of more efficient networking architectures must be encouraged through regulatory action, and a broad base of network and data-storage investments must be built.