Read the TechNews Online at: http://technews.acm.org
ACM TechNews
February 6, 2006


Welcome to the February 6, 2006 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Sponsored by Information, Inc.

http://www.infoinc.com/sponsorthenews/contactus.html


HEADLINES AT A GLANCE:

 

'Exotic' Programming Tools Go Mainstream
eWeek (02/06/06) Coffee, Peter

New releases of programming tools such as LISP and PROLOG have brought what were previously considered exotic applications out of obscurity and closer to mainstream Web-facing technologies. A recent test of Franz's Allegro Common Lisp 8.0 far exceeded the performance speeds of previous versions, with its source editor, debugger, and other coding tools rivaling the most advanced Java environments. With its rigidly consistent syntax and incremental compilation, Allegro CL offers Perl-compatible regular-expression parsing, database interface drivers, and XML parsing. AllegroCache is the gem of version 8.0, however, supporting both freestanding and client/server transactional database applications. Developers are also harnessing the practical capabilities of neural nets, PROLOG, and genetic algorithms, technologies whose previous incarnations had been the untenable province of artificial-intelligence hype. Aimed at creating extensible and adaptive frameworks, these tools rapidly produce imperfect solutions that are nevertheless of practical use in today's environment. Researchers are currently using PROLOG for speech-recognition applications; SICStus Prolog, for example, underlies the Clarissa speech-recognition program that helps facilitate communication among crew members of the International Space Station. The Regulus spoken-dialog processor, which is built on SICStus Prolog, brings the swift application of statistics to speech recognition, says Manny Rayner of NASA's Ames Research Center. "You can develop a command grammar fairly quickly, without having to collect a huge amount of data," he said. Science Applications International's Larry Deschaine has used the technology to glean meaning from data sets rather than the spoken word, running code on a Web page in milliseconds that would have taken weeks on a remote server.
Click Here to View Full Article
to the top


Torvalds Says DRM Isn't Necessarily Bad
CNet (02/03/06) Shankland, Stephen

Linus Torvalds recently issued a posting to the Linux kernel mailing list contending that the restrictions on digital rights management (DRM) proposed in the update to the GPL threaten to compromise security. "Digital signatures and cryptography aren't just 'bad DRM.' They very much are 'good security' too," Torvalds wrote. The Free Software Foundation has explicitly repudiated the use of DRM in tandem with GPL software in the draft update, though Torvalds maintains that DRM is useful for signing software with secret keys, or for enabling computers to run only versions of software that are demonstrably authorized. Torvalds has already announced that Linux will continue to operate under the existing version of the GPL, in a move that is seen as a slight to the Free Software Foundation. Although the DRM provision is intended to stop companies such as TiVo from allowing only authorized versions of Linux to run on their hardware, Torvalds believes that the market, rather than software developers, should dictate the behavior of hardware companies, noting that programmers who object to a hardware company's proprietary provisions can shop elsewhere. Torvalds claims the proposed GPLv3 oversteps its bounds in the name of a crusading ideology, whereas GPLv2 simply offered a level playing field where all source code is equally accessible. As far as the content of movies is concerned, Torvalds suggests that people use an open license from an organization such as Creative Commons, which would eventually render DRM encryption obsolete if enough content were licensed in that fashion.
Click Here to View Full Article
to the top


Swedes Go High-Tech to Crack Stradivari Code
Washington Post (02/06/06) P. A6; Gugliotta, Guy

For the almost 270 years since the death of the great violin-maker Antonio Stradivari, craftsmen have toiled endlessly and without success to replicate the essence of his unique creations, of which roughly 650 survive. A team of Swedish scientists has jumped into the effort to duplicate the Stradivarius, proposing to create a computer model of the violin and adjust it until the timbre is a perfect match, rather than trying to replicate the original part by part. While measuring the geometry, vibration, and frequency of the violin is relatively easy, according to Mid Sweden University's Mats Tinnsten, it is more difficult to duplicate the properties of wood, which are inherently unique. Shaving and sculpting the wood is an intricate process, one which Stradivari conducted by ear, and which Tinnsten and his team have been using a computer to perfect. "Violin-makers reduce the thickness of the wood with a knife, and do it in different places until they are satisfied," said Tinnsten. "We use the same method, but in the computer. We take an electronic blank and carve it." The Swedish team's approach is to craft two tops, using the first as a test to calibrate the parameters, which are then loaded back into the computer to produce a schematic for a second top. Tinnsten has yet to try out his two-top method, though his research met with a warm response when presented at the International Congress on Sound and Vibration, and he hopes to apply the technique to duplicating a real Stradivarius. While violins that outplay Stradivari's creations exist, he was an acknowledged master of consistency, producing world-class violins through a prescribed formula that no one has been able to define.
Some have suggested that the wood that he used was the product of unique climatic conditions, while others point to the wood's absence of organic materials, which could mean that replicating the original would require a special treating process, confounding the efforts of Tinnsten's computer simulations.
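The calibrate-then-recompute idea can be sketched in miniature. The toy model below assumes, purely for illustration, that a plate's tap-tone frequency rises linearly with its thickness; the real simulation is far more elaborate, and none of these names or numbers come from the Swedish team's work.

```python
# Toy illustration of calibrating a model against one physical test piece,
# then inverting the calibrated model to produce a carving target.
# Assumed (invented) model: tap-tone frequency f = k * thickness.

def calibrate(measured_thickness, measured_freq):
    """Fit the single model parameter k from one measured test top."""
    return measured_freq / measured_thickness

def thickness_for(target_freq, k):
    """Invert the calibrated model to get the thickness to carve to."""
    return target_freq / k
```

One measured test top pins down the model parameter; inverting the calibrated model then yields the schematic for the second top, mirroring the team's two-top workflow in the simplest possible form.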
Click Here to View Full Article
to the top


Increasingly, Internet's Data Trail Leads to Court
New York Times (02/04/06) P. A1; Hansell, Saul

The Justice Department's recent request to four major Internet companies--America Online, Yahoo!, Microsoft, and Google--for data about their users' search queries has drawn attention to the issue of Internet privacy. Although America Online, Yahoo!, and Microsoft have complied with the request, Google has refused it. The case does not involve information that can be linked to individuals, but it has cast new light on what privacy, if any, Internet users can expect for the data trail they leave online. In many cases, the answer is clouded by ambiguities in the law that governs electronic communications such as telephone calls and email. Under the 1986 Electronic Communications Privacy Act, a court order is generally required for investigators to read email, although the law is inconsistent on this, treating unopened items differently from opened ones. However, the law is unclear about what standard is required to force Internet companies to turn over search information to criminal investigators or civil litigants. "The big story is the privacy law that protects your email does not protect your Google search terms," said Orin Kerr, a professor at the George Washington University Law School and a former lawyer in the computer crime section of the Justice Department. Other lawyers contend that the law that provides protection for email content, or even the Fourth Amendment protection against unreasonable searches, could be applied to data about Web searching, although the issue has not been tested in court.
Click Here to View Full Article
to the top


Vision Through Sound
Toronto Star (02/05/06) Steed, Judy

A veteran of Xerox's storied PARC team and a recently named senior researcher on Microsoft's international research team, Bill Buxton is focusing his efforts on improving the computer's user interface in an attempt to humanize technology. Buxton came to technology through his love of music and his study of the scientific properties of sound. Buxton saw his first computer in 1969, which he describes as being as large as 10 refrigerators; the National Research Council (NRC) scientists who developed and maintained it were devoted to understanding human-computer interaction in the belief that computers would one day be significant. While the NRC system remains obscure, Buxton credits it with providing the origins of Alias Systems, Sheridan College, and the rest of Canada's leading computer animation industry. Joining the University of Toronto's computer science department with dubious qualifications, Buxton went on to raise $250,000 to fund his "Structured Sound Synthesis Project," which led to the development of one of the earliest digital synthesizers. Drawing on his experience at PARC, Buxton thinks as much about assimilating his ideas into a company's culture and transforming that culture as he does about the ideas themselves. "It's not what you know, it's how you adapt to changing circumstances," Buxton said. "A company that can't adapt is not intelligent and will eventually fail." After a stint at Alias where his research helped the company's 3-D animation software net a 2003 Oscar, Buxton was drawn to Microsoft by an interdisciplinary research environment that he says boasts talent exceeding even the PARC brain trust. Buxton is now working on tweaks to Microsoft's user interfaces that will be amenable to consumers.
Click Here to View Full Article
to the top


IT Employees Recapturing Power of 1990s
TechNewsWorld (02/04/06) Koprowski, Gene

After six straight quarters of hiring increases, the expanding IT economy is beginning to reclaim some of the strength it exhibited in the 1990s, putting power back in the hands of employees. Drawing on data collected from more than 1,400 CIOs, Robert Half Technology estimates a 12 percent hiring increase in the first quarter of this year, with the steepest growth coming in the Mountain states. The most sought-after skill set remains Microsoft administration, followed by wireless network management and SQL server management. Among technology specialties, 22 percent of CIOs reported the greatest demand for networking, 13 percent identified help desk/end-user support, and 11 percent named applications development. These factors combine to create a rosy picture for job seekers, said Robert Half's Katherine Spencer Lee. "Competition among employers for the most highly skilled candidates means a more favorable employment market for job seekers," she said; many applicants are receiving multiple offers, prompting managers to expedite the hiring process. The survey found that 23 percent of the CIOs intend to increase their staff, while just 2 percent expect reductions. In addition, the rising costs of outsourcing are undermining its popularity as companies figure out how to leverage more productivity from their in-house employees, accompanied by the realization that online tools, far from being the time wasters they were originally considered, can actually increase productivity, since workers can conduct personal business without having to step out of the office.
Click Here to View Full Article
to the top


The Future of Speech
PC Magazine (01/27/06) Peterson, Robyn

In a recent interview, IBM researchers David Nahamoo and Roberto Sicconi discussed their speech recognition technology, capable of understanding the subtleties of the English language, translating in real time, and producing subtitles for television broadcasts. Nahamoo notes that speech-recognition technology is still in its early stages, and that machines must be exposed to a greater variety of speech, such as different contexts, applications, and vocabularies, to produce better models. He hopes that IBM's Superhuman Speech Recognition program will be able to match a human's ability to transcribe conversation in five years, though an actual understanding of the meaning of language is well beyond the transcription stage. Speech recognition technology will enable cross-linguistic communication for basic conversations, such as asking directions or ordering a meal in a restaurant, far sooner than it will see application in a business setting. Speech recognition programs also have difficulty conveying inflections and conversational concepts, such as sarcasm and humor. Consumer expectations of speech recognition technology also far outpace its ability, as machines often miss words and, by Sicconi's estimation, converse on the level of a 1- or 2-year-old child. In noisy environments audio information is not always sufficient for speech recognition devices, so the researchers supplement their devices with the ability to detect visual information, such as recognizing when a speaker opens his mouth, though that capability only exists in prototypes right now. Human lip readers only have accuracy rates between 30 percent and 40 percent, so the likelihood of an all-visual speech recognition device is remote. As with many of IBM's other research initiatives, Nahamoo and Sicconi are developing the speech recognition technology with open-source standards.
Click Here to View Full Article
to the top


Making Friends and Influencing People Made Easier by Talking Semantics
IST Results (02/03/06)

The IST-funded VIKEF project is working on a Semantic Web-based architecture and software development environment to help participants at trade fairs and scientific conferences pick and choose whom to socialize with, and help organizers better synchronize their events to the latest market and research trends. "By creating a software environment in which ontologies [the meanings and relationships among terms and concepts in a domain] can be applied semi-automatically to information, searching for and obtaining the information you are looking for becomes easier and more precise," says project coordinator Ruben Riestra. "Prior to a trade fair, potential participants would be able to browse the catalog of exhibitors semantically to find people, organizations, and products that are of interest to them without having to trawl through mountains of information." The application of semantics gives the system explicit rather than implicit information, facilitating more intelligent results, Riestra explains. The VIKEF partners are concurrently pursuing the use of semantic technologies for e-learning and the augmentation of information sharing between different actors in the automotive industry.
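The difference between implicit keyword matching and explicit semantics can be illustrated with a minimal sketch: once facts and a concept hierarchy are stated explicitly, a query for a broad interest can find exhibitors filed under narrower concepts. Everything below, including the exhibitor names and the tiny ontology, is invented for illustration and has nothing to do with VIKEF's actual ontologies or code.

```python
# Minimal sketch of semantic matchmaking over explicit
# subject-predicate-object facts. All names are made up.

facts = {
    ("AcmeRobotics", "exhibits", "industrial_robot"),
    ("VisionCo", "exhibits", "machine_vision_camera"),
    ("industrial_robot", "is_a", "robotics"),
    ("machine_vision_camera", "is_a", "computer_vision"),
    ("computer_vision", "is_a", "robotics"),  # toy concept hierarchy
}

def broader(concept):
    """Yield the concept and every ancestor in the is_a hierarchy."""
    seen, stack = set(), [concept]
    while stack:
        c = stack.pop()
        if c in seen:
            continue
        seen.add(c)
        yield c
        stack.extend(o for (s, p, o) in facts if s == c and p == "is_a")

def exhibitors_for(interest):
    """Find exhibitors whose products fall anywhere under the interest."""
    return sorted(
        s for (s, p, o) in facts
        if p == "exhibits" and interest in broader(o)
    )
```

A keyword search for "robotics" would miss the camera vendor entirely; with the hierarchy made explicit, the query finds it, which is the "explicit rather than implicit information" Riestra describes.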
Click Here to View Full Article
to the top


Leading Scientists Help Guide New Nationwide Networking Infrastructure
PRNewswire (02/01/06)

National LambdaRail (NLR) has formed the NLR Science Research Council (NSRC) to focus on making the consortium's resources available to a wider range of researchers. NLR chief scientist David J. Farber, a professor at Carnegie Mellon University, will chair the NSRC. Other members of the NSRC include Charlie Catlett of the Argonne National Laboratory, James Cordes of Cornell University, Kelvin Droegemeier of the University of Oklahoma, Mark Ellisman of the University of California at San Diego, Harvey Newman of the California Institute of Technology, Ed Seidel of Louisiana State University, and Larry Smarr of the California Institute for Telecommunications and Information Technology. "Providing active scientists an integral role in NLR will help ensure it remains responsive to the needs of researchers across a wide range of scientific disciplines," says Smarr, who is also a professor in the Jacobs School's Department of Computer Science and Engineering at the University of California at San Diego. The universities and private companies behind NLR are working to provide an optical, Ethernet, and IP networking infrastructure across the country. The Extensible Terascale Facility and OptIPuter projects supported by the National Science Foundation, the UltraScience project of the U.S. Department of Energy, and the Hybrid Optical Packet Infrastructure (HOPI) project of Internet2 are all using the optical networking capabilities of the NLR infrastructure.
Click Here to View Full Article
to the top


Millions Required for RFID Research
RFID Journal (02/03/06) Roberti, Mark

The RFID Academic Convocation drew 100 leading end users and academics involved in RFID last week. The participants learned about RFID collaboration opportunities around the globe, established fundamental research areas that would meet industry RFID requirements, and laid out a plan for market opportunities and technologies. "Those of us in the industry came away with a better understanding of the research being done around the world, and I think the researchers came away with a better understanding of the needs of the various industries represented at the event," says Ted Ng, director of emerging technology at McKesson. Network protocol standards, specialized tags for airplane and auto parts, applications for micro- and nano-manufacturing technologies, and new bio and material sciences development in packaging were identified as research areas that need funding. Over the next five years, more than $100 million could be needed for such research areas, according to Stephen Miles, a researcher at the MIT Auto-ID Labs and chair of the RFID Academic Convocation conference committee. An "Internet of Things" could result from such research efforts, says John Williams, director of the MIT Auto-ID Labs, host of the gathering. "The Internet of Things to make billions of physical objects visible over the Web will require a secure and scalable infrastructure that is more challenging to build than the original Internet," he says.
Click Here to View Full Article
to the top


Those Cables Behind the Television May Become Obsolete
New York Times (02/06/06) P. C2; Markoff, John

A group of IBM researchers this week is expected to report that they have used standard chip-making materials to create a high-speed wireless technology that could eliminate the bulky cables that now connect electronic devices in the living room. High-frequency wireless technology has previously relied on exotic semiconductor materials such as gallium arsenide that are expensive and hard to miniaturize. The new technology would be perfect for moving HDTV video signals around the home wirelessly in the unlicensed 60 GHz portion of the radio frequency spectrum, according to researchers. This is called the "millimeter wave band," and it is capable of carrying more data than other portions of the spectrum. The high-frequency portion of the radio spectrum usually does not penetrate walls, so it may be more acceptable to Hollywood and the cable and DSL telecommunications industry, which have been worried about the risks of piracy posed by some wireless technologies, says Envisioneering consultant Richard Doherty. IBM researchers say although millimeter wave technology would have a short range in the home, it could have significant applications as an inexpensive alternative in point-to-point communications systems that are popular as data links on corporate campuses.
Click Here to View Full Article
to the top


The Human Code
University of Texas at Austin (01/30/06) Green, Tim

Kazushige Goto of the University of Texas at Austin's Texas Advanced Computing Center is successfully boosting the speed and efficiency of supercomputers with his handwritten code, which can often outclass complex programs. "I write down the code on the paper and try to find the best way for the specific architecture," he explains. Goto's techniques optimize the way in which the supercomputers' chips carry out certain groups of calculations or math kernels: The researcher rigorously programs the chips to schedule the given calculations in the most efficient order, and then reconfigures the order in which basic linear algebra subroutines (BLAS) are performed to increase their efficiency. Goto's GotoBLAS software is usually an improvement on the BLAS software provided by the companies that design and build the supercomputers, and he builds his software into a portable BLAS library that scientific programmers can employ to speed up their applications. Researchers can use GotoBLAS without altering their own applications and reduce their overall execution time. Goto's code is used to benchmark the performance of four of the world's 11 fastest supercomputers. Linpack performance can be boosted by several percent with Goto's BLAS, and performance in certain scientific applications can be raised by up to 50 percent.
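The flavor of this kind of scheduling can be shown with a toy example in plain Python (this is illustrative only, not Goto's hand-tuned code): the two functions below compute exactly the same matrix product, but the second reorders the loops so the inner loop walks memory row-contiguously, the sort of architecture-aware reordering a tuned BLAS kernel performs at much finer grain.

```python
# Same math kernel, two schedules. The ijk order strides down B's
# columns in the inner loop; the ikj order accesses B and C row by
# row, which is friendlier to hardware caches.

def matmul_ijk(A, B):
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            s = 0.0
            for k in range(m):
                s += A[i][k] * B[k][j]   # column-strided access to B
            C[i][j] = s
    return C

def matmul_ikj(A, B):
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            a = A[i][k]
            for j in range(p):
                C[i][j] += a * B[k][j]   # row-contiguous access to B and C
    return C
```

Because both schedules produce identical results, application code need not change to benefit from a faster kernel, which is exactly why programmers can swap in GotoBLAS without altering their own programs.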
Click Here to View Full Article
to the top


When Music and Technology Merge
The Ring--University of Victoria (02/06) Lironi, Maria

The University of Victoria sees its relatively new joint program in computer science and music as an opportunity to draw students to technology who might have some fears about pursuing computer-intensive studies. Launched in September 2004, the program is only the second of its kind in Canada. The combined major introduces students to the fundamentals of computer science and music, but eliminates private lessons in voice or an instrument. Students take courses in music, science, computers, recording techniques, acoustics of music, audio signal processing, and music information retrieval, and also take a computer music seminar. "Today, pretty much the whole process of recording, distributing, and producing music is done through computers," says Dr. George Tzanetakis, a computer science professor who teaches music information retrieval, which involves analyzing music collections in digital format. Computer technology is also used to present live music performances. UVic also offers electrical engineering students a computer music option, and both programs have attracted 43 students, including Ben Rancourt, a computer science major who switched to the joint program, even though he will spend two more years working towards his degree. "But if you have a very strong interest in technology and music, this is definitely worth taking," says Rancourt.
Click Here to View Full Article
to the top


College Receives Training Grant for Cybersecurity
Maryland Gazette (02/01/06) Sedam, Sean R.

Montgomery College is set to form a partnership with other area community colleges, universities, high schools, and the Metropolitan Washington Council of Governments to develop and operate a regional cybersecurity center called the CyberWATCH (Cybersecurity: Washington Area Technician and Consortium Headquarters) project. The project will be funded by a $3 million grant from the National Science Foundation over a period of four years. "There is a demand in this area for skilled cybersecurity technicians who can protect our nation's information against intrusion," says CyberWATCH director David Hall. One of the center's goals will be to address the shortage of cybersecurity technicians and training programs in the area. Montgomery College will build and maintain a remote information technology security lab, create a program in cybersecurity training, and develop internships for students and training and externships for faculty. The consortium, which will allow students from specific schools to log on to the lab and learn router, switch, firewall, workstation, and server security, includes the University of Maryland, College Park, George Mason University, and George Washington University, among others.
Click Here to View Full Article - Web Link to Publication Homepage
to the top


Esther Dyson's Perspective
IT Manager's Journal (02/03/06) Amis, Rod

One of the more encouraging trends in IT today is that more users are rejecting the choices being offered to them and are taking control, says Esther Dyson, former interim chairperson of ICANN. Users are refusing to be positioned as only a segment to be marketed to, and are being more selective about products and services. As a result, the software industry, for example, has responded by offering more intuitive interfaces, and enterprises are providing more user-oriented features and functions such as blogs, Dyson says. On the issue of collaborating with businesses in Eastern Europe and Russia, Dyson, who remains active in IT development around the world, says CIOs must keep in mind that they are dealing with other cultures in which people may not embrace our "in your face" style. "They are more suspicious of authority, frankly, and often don't have the habit of doing or starting things on their own," says Dyson, currently editor of Release 1.0 and organizer of the PC Forum conference. She says open source software has not caught on in the region, given the "price," adding that the open source community will need to offer training similar to the local initiatives set up by companies such as Microsoft and Oracle. Dyson's focus for the future is to aid the industry in the development of a more friendly, human, and manageable infrastructure.
Click Here to View Full Article
to the top


Morphing the Mainframe
Computerworld (01/30/06) P. 29; Mitchell, Robert L.

The mainframe's future may be one of absorption into the distributed computing sector or its emergence as a distinct platform, and critical to the latter outcome is big iron's successful incorporation of technologies such as Fibre Channel, Unix, InfiniBand, and Java. The mainframe still dominates complex environment applications, but distributed Unix- and Windows-based systems are chipping away at the mainframe installed base's low end, while the war for the midrange application domain is raging. Mainframe hardware and software costs must become more competitive and more agile software architectures must be successfully implemented on mainframe systems at scale if big iron is to survive, and this requires the adoption of industry-standard technologies. Unisys general manager Chander Khanna says the mainframe's hardware platform is losing relevance. "It's more of what's in the operating environment and what's in the middleware," he notes. The merging of mainframes with additional open architectures entails a heavy reliance on virtualization technology, according to 451 Group analyst John Abbott. Proprietary mainframe operating systems are advantageous in that they provide a trustworthy key management platform, along with "efficiency, isolation, the address spaces, the encryption, and...an efficient clustering model," reports IBM fellow Guru Rao. He adds that the ultimate fate of more than four decades' and over $1 trillion worth of legacy mainframe code is the most formidable challenge.
Click Here to View Full Article - Web Link May Require Free Registration
to the top


Automated Capture of Thumbnails and Thumbshots for Use by Metadata Aggregation Services
D-Lib Magazine (01/06) Vol. 12, No. 1; Foulonneau, Muriel; Habing, Thomas G.; Cole, Timothy W.

A project at the University of Illinois at Urbana-Champaign (UIUC) seeks to make heterogeneous resources available on the UIUC CIC metadata portal more understandable through the embedding of thumbnails and thumbshots of image and Web page resources in the context of the Open Archives Initiative (OAI) Protocol for Metadata Harvesting. UIUC supplements thumbnails provided by partner data suppliers with a process that automatically generates thumbnails and thumbshots from the Web page resources referred to by the metadata records. The CIC metadata portal collects metadata describing over 500,000 primarily digital resources from 11 Midwestern universities, and renders the metadata searchable; each participating university has deployed at least one OAI data provider, and links or references to thumbnails were originally absent from the metadata. UIUC deployed distributed thumbnail and remotely captured thumbnail processes with Thumbgrabber, an open-source application developed for thumbnail/thumbshot aggregation and maintenance. Thumbgrabber can generate thumbnails based on the largest image found, the portion of the Web page that can be displayed in a window of the size specified, or the first image or Web page portion that fulfills the dimensional requirements. The challenge is to make Thumbgrabber accommodate the various technologies and potential instabilities of the Web environment in order to dependably produce consistent image surrogates from URLs for both images and Web pages as pointed to by aggregated metadata records. The thumbnails exist as an external document held in a distinct location, and the need to synchronize information in metadata records, the resources they refer to, and the thumbnails/thumbshots presents the conundrum of the best way to interconnect these elements and asynchronously support updates. Collaboration between data and service providers is critical to the deployment of thumbnails and thumbshots in the context of metadata harvesting.
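Selection strategies of the kind described can be sketched simply. The functions below are a hypothetical illustration of such heuristics, not Thumbgrabber's actual code; the names and the (url, width, height) representation are invented for the example.

```python
# Hypothetical thumbnail-source selection heuristics: given the images
# found on a page as (url, width, height) tuples, pick a source either
# by largest pixel area or by first image meeting minimum dimensions.

def pick_largest(images):
    """Return the URL of the image with the largest area, or None."""
    if not images:
        return None
    return max(images, key=lambda im: im[1] * im[2])[0]

def pick_first_fitting(images, min_w, min_h):
    """Return the first image URL meeting the minimum size, or None."""
    for url, w, h in images:
        if w >= min_w and h >= min_h:
            return url
    return None
```

In practice a tool must also cope with unreachable URLs, redirects, and malformed pages, which is precisely the Web-environment instability the project identifies as its main challenge.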
Click Here to View Full Article
to the top


Make Your Development Process More Transparent
Software Test & Performance (01/06) Vol. 3, No. 1, P. 26; Ragan, Tracy

Everyone from developers to quality assurance managers to business managers can understand the software development process irrespective of their IT expertise through the Eclipse Foundation's Application Lifecycle Framework (ALF) project, writes Catalyst Systems CEO Tracy Ragan. The project will facilitate communications between tools that support the application life cycle process via SOAP transactions through a unified communications framework, allowing tools to exchange information that testers ought to obtain in anticipation of release delivery, such as the requirements initiating the new release of the software; the features added to the release and the person who requested them; the person who approved the release to production; and the impact of the changes to the application in general. A common vocabulary between application life cycle tools, as well as "service flows," will be defined by the Eclipse ALF project, instilling a degree of transparency in the software development process. The end result will be tools that can share critical data no matter who their vendor is, according to Ragan. No single team will employ all of the project's tools independently; all teams will use the tools to share information. The Eclipse ALF project will help companies comply with new IT governance regulations. "As application life cycle tools become more integrated with ALF, you will be able to gather information that will help you determine the quality of the software even before you execute the first test case," Ragan concludes.
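The kind of SOAP traffic envisioned can be sketched with nothing but a standard library. The event and element names below (ReleaseApproved, ReleaseId, ApprovedBy) are invented for illustration; they are not part of ALF's actual vocabulary, which the project is still defining.

```python
# Toy construction of a SOAP envelope carrying a hypothetical
# application-lifecycle event, using Python's standard library.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def release_approved_event(release_id, approver):
    """Build a SOAP message announcing that a release was approved."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    event = ET.SubElement(body, "ReleaseApproved")
    ET.SubElement(event, "ReleaseId").text = release_id
    ET.SubElement(event, "ApprovedBy").text = approver
    return ET.tostring(env, encoding="unicode")
```

Because any tool can parse such a message regardless of who produced it, a shared vocabulary of this sort is what lets testers learn, say, who approved a release without asking the vendor of the approval tool.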
Click Here to View Full Article
to the top


To submit feedback about ACM TechNews, contact: technews@hq.acm.org

To unsubscribe from the ACM TechNews Early Alert Service: Please send a separate email to listserv@listserv.acm.org with the line

signoff technews

in the body of your message.

Please note that replying directly to this message does not automatically unsubscribe you from the TechNews list.

ACM may have a different email address on file for you, so if you're unable to "unsubscribe" yourself, please direct your request to: technews-request@acm.org

We will remove your name from the TechNews list on your behalf.

For help with technical problems, including problems with leaving the list, please write to: technews-request@acm.org

to the top

© 2006 Information, Inc.


© 2006 ACM, Inc. All rights reserved. ACM Privacy Policy.
