Timely Topics for IT Professionals
About ACM TechNews
ACM TechNews is published every week on Monday, Wednesday, and Friday.
ACM TechNews is intended as an objective news digest for busy IT Professionals. Views expressed are not necessarily those of either HP or ACM.
To send comments, please write to email@example.com.
Volume 5, Issue 452: Friday, January 31, 2003
- "Consortium Pushes for Cybersecurity R&D"
IDG News Service (01/30/03); Gross, Grant
The Institute for Information Infrastructure Protection (I3P), a consortium of 23 security research institutions funded by the National Institute of Standards and Technology, released a report Thursday in which it recommended that the U.S. government and private sector increase research and development in a number of key areas. Coinciding with the report's release was a kick-off event in Washington, D.C., where security experts clarified I3P's agenda. The report specifies that more R&D is needed in the general areas of secure system network response and recovery; wireless security; traceback, identification and forensics; trust among distributed autonomous parties; metrics and models; enterprise security management; discovery and analysis of security traits and flaws; and legislation, policy, and economics. I3P member Wayne Meitzler of Pacific Northwest National Laboratory told attendees at the kick-off that he wants to see more research into vulnerability scanners that can find flaws in object code and source code, and added that the consortium would welcome any software development models that boost security, open-source being one of them. Sandia National Laboratories' Bob Hutchinson said that R&D should also be channeled into solutions for wireless security problems such as distributed denial-of-service attacks. Meanwhile, Victoria Stavridou of SRI International's System Design Laboratory stressed that computers need better early-warning systems so that super-fast intrusions will have an equally rapid counter. I3P Chairman Michael Vatis and others hope the consortium's findings will lead to more congressionally authorized cybersecurity R&D funding. Vatis noted that the I3P plans to release follow-up reports covering both solutions and new problems, as well as establish a common facility where cybersecurity products can be tested.
- "In Net Attacks, Defining the Right to Know"
New York Times (01/30/03) P. E1; Hafner, Katie; Biggs, John
Last weekend's Slammer worm attack and the network slowdowns it caused rekindled a number of controversial issues among security experts, most notably the responsibility of companies to publicly disclose hacker intrusions to consumers. Few security breaches are reported, and those that are disclosed usually involve widespread attacks affecting thousands of systems. Roman Danyliw of the Computer Emergency Response Team (CERT) Coordination Center notes that many companies are reluctant to admit their security has been compromised--both to their customers and to law enforcement officials--out of fear that it could hurt their reputations or give their rivals a strategic advantage. Another factor hindering full disclosure is that successful breaches are often the result of organizations' failure to deploy basic safeguards or patches for well-known flaws, which is what allowed the Slammer worm to cause so much mischief. Harvard researchers Michael Smith and Stuart Schechter argue in a paper they presented at a recent cryptography conference that organizations or individuals can reduce the likelihood of hacker attacks if they share information about intrusions. However, Alfred Huger of Symantec Security Response has doubts about such a theory, and points out that many attacks, even those focused on specific targets, are launched by hackers who are "trophy hunting." He cites his own company as an example, noting that Symantec is the target of between 3,000 and 4,000 hack attacks every day. Meanwhile, some security experts are pushing for federal legislation that would require institutions to report intrusions: In its draft of the National Strategy to Secure Cyberspace, the President's Critical Infrastructure Protection Board recommends that a centralized, national online system be set up where private companies and federal agencies can share information about break-ins.
(Access to this site is free; however, first-time visitors must register.)
- "Bush Proposes Antiterror Database Plan"
CNet (01/29/03); McCullagh, Declan
In the latest move by the White House to boost data-sharing between U.S. police and spy agencies, President Bush used Tuesday's State of the Union Address to announce the Terrorist Threat Integration Center (TTIC), a government database that would compile information about suspected terrorists from federal and private sources. "The TTIC will ensure that terrorist threat-related information is integrated and analyzed comprehensively across agency lines and then provided to the federal, state and local officials who need it most," declared Attorney General Ashcroft after the president's speech. "We will be able to optimize our ability to analyze information, form the most comprehensive possible threat picture and develop the plans we need to prevent terrorist attacks." However, the plan has drawn fire from critics who see parallels between it and the Total Information Awareness (TIA) project; some have posited that the announcement is an attempt to avoid the controversy engendered by the TIA. The TTIC will team up with the FBI and the Homeland Security Department, and have access to "all information" available to the government, including data compiled by the Defense Intelligence Agency and the National Security Agency (NSA). Electronic Privacy Information Center general counsel David Sobel noted that there is as yet no indication about any constraints the TTIC's data collection activities would be subject to. Center for Democracy and Technology executive director Jim Dempsey said that, essentially, the FBI, the CIA, or NSA would gather information on people under the orders of the TTIC. Meanwhile, the center could be affected by a bill to regulate "data-mining technology" proposed by Sen. Russ Feingold (D-Wis.).
- "A Big Test for Linux"
CNN/Money (01/28/03); Hellweg, Eric
SCO Group's Jan. 22 announcement that it will create a licensing division and recruit lawyer David Boies to investigate and protect its intellectual property has stirred up worry within the Linux and open-source communities. At stake are SCO-owned Unix patents, some of which have been incorporated into Linux strains, although coders may not necessarily be aware of this. "In order to get compatibility with Unix systems on Linux, people grab our [technology] and haven't realized this wasn't appropriate," says SCO's Chris Sontag. SCO, an open-source software provider whose revenue base has been eroding for the last several years, intends to charge a $149-per-CPU licensing fee to anyone who is using its intellectual property. This could slow down the development and corporate adoption of open-source products. Giga Information Group analyst Rob Enderle notes that ensuring that Linux does not infringe on copyrights is impossible, given that so many people contribute to its development. The usual strategy when copyrighted programs are discovered is for coders to reverse-engineer them and remove the original code, but this may not be enough to avert license fees or lawsuits. On the other hand, if SCO's initiative fails, then "it may validate the [Linux] platform as well," according to Enderle.
- "Dispute Could Silence VoiceXML"
ZDNet (01/29/03); Festa, Paul
The VoiceXML 2.0 specification is nearly ready, as evidenced by the World Wide Web Consortium's (W3C) candidate recommendation issued on Wednesday. However, the standard's implementation could be hindered because of an intellectual property dispute. VoiceXML 1.0 was created by the VoiceXML Forum, which was chartered under a RAND policy that allowed companies that contributed technologies to the specification to retain intellectual property rights to those technologies under "reasonable and nondiscriminatory" terms. Although this policy was revised to a royalty-free version in 2002, several contributors--Philips Electronics, Avaya, and Rutgers University among them--do not want to cede their intellectual property claims. As a component of the W3C's Voice Browser Activity, VoiceXML is designed to help users interact with Web content and applications using natural and synthetic speech, touch-tone keypads, and prerecorded audio, while later applications could include Web access for drivers and the visually impaired. Jim Larson of the W3C explains that VoiceXML coordinates the interaction of other voice-browsing specs--Speech Synthesis Markup Language, Speech Recognition Grammar Specification, and Semantic Interpretation for Speech Recognition--to make it possible for a user to hold a "conversation" with a computer. The W3C plans to establish a patent advisory group in the hopes of settling the intellectual property dispute so that VoiceXML can be moved from candidate recommendation to full recommendation status.
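To make the "conversation with a computer" idea concrete, here is an illustrative (not normative) VoiceXML-style fragment: a single form with one field, whose prompt the voice browser would speak aloud and whose answer the speech recognizer would fill in. The document content and element usage are a simplified sketch of the 2.0 draft; a real document would also declare the VoiceXML namespace and reference a speech grammar. Python's standard XML parser is used here only to show the structure.

```python
# Hypothetical, simplified VoiceXML 2.0 fragment for illustration.
import xml.etree.ElementTree as ET

VXML = """\
<vxml version="2.0">
  <form id="order">
    <field name="drink">
      <prompt>Would you like coffee or tea?</prompt>
    </field>
    <block>
      <prompt>Your order has been placed.</prompt>
    </block>
  </form>
</vxml>"""

root = ET.fromstring(VXML)
# A voice browser walks the form, speaking each field's prompt and
# filling the field from recognized speech.
fields = [f.get("name") for f in root.iter("field")]
```

The point of the markup is the same division of labor as HTML forms on the visual Web: the document declares what to ask and what to collect, while the browser (here, a voice browser combining speech synthesis and recognition) handles the interaction.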
- "Total Information Awareness: Down, But Not Out"
Salon.com (01/28/03); Manjoo, Farhad
The development of the Total Information Awareness (TIA) system may have hit a snag with the Senate's unanimous decision that the Defense Department conduct a cost-benefit analysis in order to study the project's potential impact on Americans' privacy and civil liberties, but this has not halted its progress. TIA, which aims to track down terrorists by combing databases for personal data, has drawn the ire of civil libertarians, politicians, and scientists, and adding fuel to their criticism is a recently disclosed report from the Defense Advanced Research Projects Agency's (DARPA) Information Systems Advanced Technology (ISAT) panel that discussed methods to protect private data in information systems. This study--and nothing else--was what the Defense Department furnished in response to a request from the Electronic Privacy Information Center (EPIC) for all information pertaining to TIA's privacy ramifications; ISAT study participants, including Barbara Simons, co-chair of the U.S. Public Policy Committee of the Association for Computing Machinery, are bewildered that this is all that DARPA provided, even though the report states that it is "not a review for Total Information Awareness." Simons says, "I'm just not convinced that the TIA will give us tools for catching terrorists that we don't already have or that could be developed with far less expensive and less intrusive systems." Among the security techniques and technologies the ISAT panel suggests is "selective revelation," in which computers withhold personal information from analysts unless they obtain legal authorization, and the construction of databases that leave an audit trail of any user abuses. Next month, DARPA is expected to award a three-year, $1 million grant to Palo Alto Research Center researcher Teresa Lunt to develop a "privacy appliance" to be incorporated into TIA's Genisys component. Former Rep. Bob Barr (R-Ga.) backed a 2002 bill calling for a "privacy impact statement" from the federal government every time it starts programs that could negatively affect civil liberties. However, Barr sees the recent Senate curbs on TIA as temporary, and says "chances are overwhelming" that the executive branch will revive the project.
For more information about ACM's U.S. Public Policy Committee (USACM), visit http://www.acm.org/usacm.
- "The Lord of the Webs"
Washington Post (01/30/03) P. E1; Walker, Leslie
World Wide Web inventor Tim Berners-Lee has been working on a new schema for the Internet called the Semantic Web for the past four years. As head of the World Wide Web Consortium based at MIT, he coordinates scientific research and standards-setting for the project. Berners-Lee says the Semantic Web will be formed by attaching new code to information that lets computers read and understand its meaning. Much in the same way he used HTML, HTTP, and URL descriptions as the foundations for the current Web, Berners-Lee plans to use OWL (Web Ontology Language), RDF (Resource Description Framework), URI (uniform resource identifier), and DAML (DARPA Agent Markup Language) to enable the Semantic Web. With the new system, users would be freed from having to search and analyze Web information themselves, instead leaving that task to their computers. Computers could also use information tags to display data in new ways beyond the Web browser, such as by color code or geographic maps. And despite his insistence that the Semantic Web is complementary and not in competition with current Web services efforts, Berners-Lee describes scenarios that allow users to automate Web tasks, which is the explicit goal of Web services. IBM director of Web services Bob Suter says the Semantic Web faces the hard task of getting people to tag their data with new descriptors. Berners-Lee admits that his idea faces the same obstacle that his original conception of the Web faced in 1989. "There's this mental leap involved," he explains.
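The tagging scheme at the heart of RDF can be sketched in a few lines: information is expressed as subject-predicate-object triples, and a program answers questions by matching patterns over them. The triples and names below are invented for the example; real RDF identifies subjects and predicates with URIs.

```python
# Toy triple store illustrating the Semantic Web data model
# (subject, predicate, object) -- all data here is hypothetical.
triples = {
    ("TimBernersLee", "worksAt", "W3C"),
    ("W3C", "basedAt", "MIT"),
    ("TimBernersLee", "invented", "WorldWideWeb"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a wildcard."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

# "Where is Berners-Lee's employer based?" -- answered by chaining
# two pattern matches, with no human reading of Web pages required.
employers = {o for (_, _, o) in query(s="TimBernersLee", p="worksAt")}
locations = {o for e in employers
             for (_, _, o) in query(s=e, p="basedAt")}
```

This is exactly the kind of machine-followable inference chain the article describes: once data carries meaning-bearing tags, the computer, not the user, does the searching and combining.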
- "Project Seeks to Balance Power, Performance in Embedded Computers"
Virginia Tech computer engineering professor Sandeep Shukla intends to develop strategies for optimizing performance and power usage in embedded computers. He says embedded computers already are pervasive in our everyday lives and that they will become even more common and more functional. Shukla is specifically working with networked wireless devices and using a probability analysis tool developed by Marta Kwiatkowska and her students at the University of Birmingham in England. He says small embedded computers often have to operate with low power requirements, which can adversely affect the processing speed of a handheld computer or signal quality of a cell phone, for example. By analyzing the usage frequency and performance requirements, Shukla intends to formulate strategies for balancing power usage with performance. The result will resemble the traffic light system highway engineers use to facilitate traffic flow. A cell phone could be set to go into "sleep" mode when usage is not expected, and be at the ready when calls are expected. Shukla also anticipates linking the capabilities of different embedded systems in the future, such as allowing vehicle systems to alert drivers to when they need an oil change, and where a service station is through GPS. He says, "Eventually, companies will use probability design in developing embedded computers for everything from small wireless devices to large-scale computer networks."
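A minimal sketch of the probability-driven power policy described above: the device sleeps in hours when the chance of a call is low and stays ready otherwise, trading a little responsiveness for a large cut in average draw. The usage profile, power figures, and threshold are all invented for illustration, not taken from Shukla's work.

```python
# Hypothetical power draw in each mode, in milliwatts.
ACTIVE_MW, SLEEP_MW = 120.0, 5.0

def plan(call_prob_by_hour, threshold=0.10):
    """Choose 'sleep' or 'ready' for each hour based on call probability."""
    return ["sleep" if p < threshold else "ready"
            for p in call_prob_by_hour]

def average_power(modes):
    """Mean draw over the planning horizon, in milliwatts."""
    draw = {"ready": ACTIVE_MW, "sleep": SLEEP_MW}
    return sum(draw[m] for m in modes) / len(modes)

# Invented 24-hour profile: calls unlikely overnight (hours 0-6),
# likely during the day.
profile = [0.02] * 7 + [0.4] * 17
modes = plan(profile)
```

The "traffic light" analogy maps onto the threshold: below it the light is red (sleep), above it green (ready); a real policy would be derived from the probabilistic analysis tools the article mentions rather than a fixed cutoff.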
- "Standard May Boost Chip Bandwidth"
ZDNet (01/29/03); Kanellos, Michael
The HyperTransport 2.0 specification, which will be incorporated into Advanced Micro Devices' (AMD) upcoming Opteron and Athlon 64 processors, is expected to be released by the HyperTransport Consortium in late 2003 or early 2004, according to consortium president Gabriele Sartori. HyperTransport 2.0 will be able to transfer data between chips at a rate of 20 Gbps for a 16-bit link or 40 Gbps for a 32-bit link. In comparison, current HyperTransport links boast a maximum data transfer rate of 6.4 Gbps for 16-bit devices and 12.8 Gbps for 32-bit devices. The advantages of HyperTransport servers include the elimination of a central system bus, which results in a significant reduction in memory latency, note analysts and AMD executives. This week saw the release of HyperTransport 1.05, an enhanced version of the HyperTransport 1.0 specification, while a program to help other companies make HyperTransport-interoperable products was also launched. Brian Wong of Primarion says that HyperTransport's speed could be accelerated even further with optical technology, and his company plans to propose an optical version of the technology before the year is out. AMD says that HyperTransport technology will help it to penetrate the mainstream market, while currently 39 HyperTransport-compatible products are commercially available.
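The quoted figures are internally consistent: in both generations the aggregate rate scales linearly with link width, implying a fixed per-bit-lane rate (1.25 Gbps per lane for HyperTransport 2.0, 0.4 Gbps per lane for current links, taking the article's numbers at face value). A quick check:

```python
# Sanity check on the bandwidth figures quoted in the article.
def per_lane_gbps(total_gbps, width_bits):
    """Aggregate link rate divided across its bit lanes."""
    return total_gbps / width_bits

ht2_16 = per_lane_gbps(20, 16)    # HyperTransport 2.0, 16-bit link
ht2_32 = per_lane_gbps(40, 32)    # HyperTransport 2.0, 32-bit link
ht1_16 = per_lane_gbps(6.4, 16)   # current links, 16-bit
ht1_32 = per_lane_gbps(12.8, 32)  # current links, 32-bit
```

So doubling the link width doubles throughput at a given signaling rate, and the 2.0 specification's gain comes from roughly tripling the per-lane rate.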
- "The Apache of the Future"
NewsFactor Network (01/27/03); Brockmeier, Joe
Groups using the Apache Web Server to run their Web sites are unlikely to move quickly toward the new 2.0 version because the 1.3 version works fine and has more robust module support. Dirk Elmendorf of Rackspace Managed Hosting says version 1.3 has all the functionality his operations currently need, and that version 2.0 could possibly introduce some problems. He specifically cites the lack of official support for key modules such as mod_perl and PHP in version 2.0. Nevertheless, Apache Software Foundation Chairman Greg Stein says some high-powered sites have already switched over to version 2.0, and that new Linux distributions such as Red Hat 8.0 contain Apache 2.0. Elmendorf also says that many customers using Windows have switched to Apache 2.0 because the new version offers much stronger support for non-Unix operating systems such as Windows. Still, Stein estimates that version 2.0 will not run on the majority of Apache Web Servers until 2005, after all major vendors have adopted it. And because version 1.3 is so robust, he expects that many organizations will not feel the need to switch until 2010. Stein points out that, like many open-source projects, the Apache Software Foundation is not a commercial entity, and therefore is not under pressure to push new products before users want them.
- "Security Clearinghouse Under the Gun"
CNet (01/29/03); Lemos, Robert
NGS Software managing director David Litchfield fired off an email this week sharply criticizing Carnegie Mellon's Computer Emergency Response Team (CERT) Coordination Center for what he terms "a betrayal of trust" in its disclosure of security vulnerabilities NGS submitted. He charged that CERT gave paying sponsors early warning of the flaws reported by NGS prior to alerting IT workers, and this is not the first time the center has been taken to task over this policy. Most security experts agree that the responsible thing to do is to help the software creator fix the flaw, then have the disclosure of the vulnerability coincide with the vendor's release of a patch. CERT Coordination Center manager Jeffrey Carpenter countered that the group has always made it clear that paying Internet Security Alliance members are given priority, but insisted that they must sign a nondisclosure form in which they agree not to leak any information they receive in order to maintain the security of the Internet. "We have tried to take a reasoned, middle-of-the-road approach to vulnerability information," he explained. "We do want critical-infrastructure and system operators to have a chance to take critical steps to defend their systems prior to a general release of information." Meanwhile, many other security companies delay reporting flaws to CERT until they are ready to publicly reveal them, and Chris Wysopal of @Stake said this situation undermines the CERT center's status as the major clearinghouse for security information.
- "Uniting with Only a Few Random Links"
Newswise (01/31/03); Ackerman, Jodi
Gyorgy Korniss of Rensselaer Polytechnic Institute is conducting research that could yield significantly improved parallel-computing simulation methods by employing "small-world" networking. Scientists often use large-scale computer networks to simulate complex systems, but Korniss' solution is designed to solve slowdowns caused by one system collecting data faster than another. "Enormous amounts of additional time or memory are required for computers to keep track of information they need from each other to create accurate simulations," he notes. Korniss' approach, which is detailed in the Jan. 31 issue of Science, involves connecting a computer to its closest neighbor and a few other computers at random. These individual units ensure they are synchronized by randomly "checking in" with each other. Korniss explains that his strategy applies the concept of six degrees of separation, in which any single person is linked to another through only a few other people, to intricate problem-solving network systems. His research receives funding from the National Science Foundation, the U.S. Department of Energy, and the Research Corporation.
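The effect can be demonstrated with a toy simulation in the spirit of the scheme described above (a sketch, not Korniss' actual model): each "processor" on a ring carries a local virtual time and may only advance when it is not ahead of the nodes it checks in with. Adding one random long-range link per node, the small-world trick, keeps the spread of local times far smaller than on the plain ring. All parameters here are arbitrary.

```python
import random

def simulate(n=100, steps=60000, shortcuts=False, seed=1):
    """Return the spread (std. dev.) of local virtual times after `steps`."""
    rng = random.Random(seed)
    # Each node checks in with its two ring neighbors...
    peers = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    if shortcuts:
        # ...plus, in the small-world variant, one random distant node.
        for i in range(n):
            peers[i].append(rng.randrange(n))
    tau = [0.0] * n
    for _ in range(steps):
        i = rng.randrange(n)
        # Conservative rule: advance only if not ahead of any peer.
        if all(tau[i] <= tau[j] for j in peers[i]):
            tau[i] += rng.expovariate(1.0)
    mean = sum(tau) / n
    return (sum((t - mean) ** 2 for t in tau) / n) ** 0.5

ring_width = simulate(shortcuts=False)
sw_width = simulate(shortcuts=True)
```

On the plain ring the spread of local times keeps growing, because a slow region drags down everything near it; with a few random links, every node is only a couple of hops from everywhere else, so the time surface stays nearly flat, which is the "six degrees of separation" effect applied to synchronization.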
- "Red Light, Green Light: A 2-Tone L.E.D. to Simplify Screens"
New York Times (01/30/03) P. E8; Austen, Ian
A surprise discovery by University of Amsterdam graduate student Steve Welter may lead to simpler and more flexible displays. While testing experimental polymer-based organic light-emitting diodes (OLEDs) created at Philips Research under the direction of scientist J.W. Hofstraat, Welter found that one diode sample could be induced to glow red or green; this color change is effected when the direction of the electrical current running through the OLED is reversed. The component that makes the color switch possible is the metal additive dinuclear ruthenium, which was originally incorporated into the OLED polymer to increase the amount of visible light the diode yields when glowing red. The advantages of OLEDs include greater power efficiency and brightness than liquid-crystal displays, and they can be printed out onto plastic. The dual-color breakthrough could pave the way for full-color displays that use only a repeating pattern of paired elements, as well as three-color organic diodes that glow red, green, and blue. George G. Malliaras of Cornell University believes the latter could theoretically be furnished with a variation of the dual-color diode that uses a second, higher voltage in addition to the reversing current. Hofstraat projects that displays made from the color-switching diodes will not emerge for three to five years. Meanwhile, Princeton electrical engineering professor Stephen R. Forrest has created via vacuum deposition an OLED-based pixel that displays the three primary colors and features stacked diodes; however, Malliaras says this sandwich configuration can complicate manufacturing, while building displays from the dual-color Philips OLED will require electronics that switch pixels on and off.
(Access to this site is free; however, first-time visitors must register.)
- "IEEE 802.16 Spec Could Disrupt Wireless Landscape"
EE Times (01/30/03); Wirbel, Loring; Mannion, Patrick
An IEEE committee researching the 802.16 wireless metropolitan-area network (MAN) specification has approved that technology for use in the 2- to 11-GHz range. Committee chairman and National Institute of Standards and Technology wireless director Roger Marks says 802.16 networks could act as a wireless backbone connecting 802.11 LAN hotspots throughout a city. He also said industry debate was inevitable as to the best possible use of 802.16 technology. Wireless MANs running on 802.16 operate on three levels: a single-carrier mode for special-purpose networks, a mainstream 256-carrier orthogonal frequency-division multiplexed (OFDM) mode, and a 2,048-carrier multiple-access mode. Other 802.16 task groups are working on extending the functionality of the technology, including Task Group C, which is adding mobility to 802.16. Invariably, people who hear of mobile 802.16 assume it will supplant nascent 3G efforts, but the actual work focuses on users who are roaming at slow speeds and does not assume the type of coverage afforded by 3G. Support for 802.16, which has been under official IEEE study since 1999, is found in the WiMax Forum and leading wireless companies such as Intel. Intel's Sriram Viswanathan said 802.16 would serve as a second disruption in the wireless sector, after 802.11. Through his group, Intel has invested in smaller companies working on components and production of 802.16 products. Viswanathan says 802.16 would be suitable to serve as an 802.11 backbone and as a last-mile connector in cases where there is no wired infrastructure.
- "Why Voice over IP Is on Hold"
NewsFactor Network (01/30/03); Ryan, Vincent
The slow adoption of Voice over Internet Protocol (VoIP) technology in the enterprise can be attributed to various factors, according to experts. One of them is the economic slump, while another is little awareness of the technology's benefits among corporate customers, says Ralph Santitoro of Nortel Networks, who adds that his company has initiated a network assessment program in which business partners check the suitability of a company's Ethernet network for IP telephony. Vocaltec VP for the Americas Bob VanSickle notes that most corporate adopters of VoIP are devoting their efforts to voice virtual private network (VPN) applications, which offer a relatively quick return on investment. However, Santitoro points out that the adoption of some VoIP applications may be hindered because they do not support certain services, such as emergency response. Slowing down the acceptance of IP devices that enable users to place calls over the Internet are technical difficulties--compression challenges, packet jitter, etc.--as well as security issues, according to VanSickle. Security is also a concern for VoIP handsets, which obviate the need for hackers to set up a physical link to eavesdrop. Meanwhile, interoperability between VoIP equipment is also critical to the technology's adoption, notes Frost & Sullivan's Jon Arnold. "Until you can throw everything into the pot and have it all work, no carrier in their right mind is going to have a large-scale deployment," he says.
- "No Hiding Place"
Economist (01/25/03) Vol. 366, No. 8308, P. 5
A surveillance-based society is emerging, thanks to people's increasing access to the Internet and the proliferation and advancement of technologies that can be monitored or are used for monitoring, including digital cameras, face-recognition software, and mobile phones. Opinion polls show that people are generally against near-constant surveillance, but the public is split between those who do not believe that it will become a reality, and those who feel powerless to prevent it from happening. Complicating the issue is the fact that privacy is subjective and difficult to define, while information-gathering by governments and by corporations each has its own character; most people are worried about the former group abusing such powers, although the latter group's hunger for data may make it the bigger threat to privacy in a networked society. Government legislation cannot sustain privacy alone because many national privacy laws could be rendered ineffective in a wired world, while the evolution of law is always several steps behind the evolution of technology. Technological solutions can be a problem as well, because individuals can only use such products and services by giving up information about themselves. Meanwhile, trusting companies that collect information to regulate themselves does not sit well with consumers, since many firms have more to gain by exploiting their customers' private data; market solutions by themselves are also likely to fail because they cannot keep up with increased public surveillance and expanding government databases. Other possible solutions include one proposed by physicist and sci-fi writer David Brin, who suggests that everyone be given database access, while another calls for the deployment of a biometric ID system that can tell exactly what kind of people are accessing databases. Unfortunately, neither solution seems very popular.
- "Intelligent Storage"
Computerworld (01/27/03) Vol. 37, No. 4, P. 28; Mearian, Lucas
Storage devices imbued with intelligence, also known as object-based storage devices (OSDs), allow for limitless system scalability since they assume the low-level storage management duties previously handled by the storage server. Because read/write block operations are handled on the device rather than passed through the file server, input-output configurations are much more efficient and the file server is no longer a bottleneck in the system. Scott A. Brandt, assistant professor at the University of California, Santa Cruz's Storage Systems Research Center, says storage devices can be added to OSD systems just like hard drives are added to a PC. He notes that streamlined communications between file servers and storage devices result in fewer errors as well. The Storage Networking Industry Association is working with the International Committee for Information Technology Standards' T10 Technical Committee on specifications for object-based storage. Storage vendor EMC has already released what experts say is the first true object-based storage array, called Centera. Besides more efficient networking and greater scalability, OSD systems also offer better security because it is assigned to each individual object instead of to the device as a whole.
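The division of labor described above can be sketched as follows: the file server holds only the namespace (names mapped to object IDs), while the device manages its own low-level layout and enforces security per object. The interface below is invented for illustration and is not the T10 committee's draft interface.

```python
class ObjectStorageDevice:
    """Toy OSD: owns block layout and per-object access control."""
    def __init__(self):
        self._objects = {}   # object id -> stored bytes
        self._caps = {}      # object id -> required capability
        self._next_id = 0

    def create(self, capability):
        oid = self._next_id
        self._next_id += 1
        self._objects[oid] = bytearray()
        self._caps[oid] = capability       # security travels with the object
        return oid

    def write(self, oid, capability, data):
        if self._caps[oid] != capability:
            raise PermissionError("bad capability")
        self._objects[oid] += data         # device chooses the blocks itself

    def read(self, oid, capability):
        if self._caps[oid] != capability:
            raise PermissionError("bad capability")
        return bytes(self._objects[oid])

class FileServer:
    """Holds only the namespace; no block traffic passes through it."""
    def __init__(self, device):
        self.device, self.names = device, {}

    def create(self, name, capability):
        self.names[name] = self.device.create(capability)
```

Clients resolve a name once at the file server, then transfer data directly to and from the device, which is why the file server stops being a bottleneck and why devices can be added like drives in a PC.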
- "Recycling Tax Plan for PCs Due for Debate"
eWeek (01/27/03) Vol. 20, No. 4, P. 1; Carlson, Caron
In an effort to promote the recycling of electronic waste, Rep. Mike Thompson (D-Calif.) will likely introduce a bill next month that adds a maximum recycling tax of $10 to the purchase price of PCs, laptops, and monitors, according to aides. The EPA would channel this tax into grants for organizations that recycle, reuse, or resell computers, or refine them into raw materials. The proposal is an updated version of an unsuccessful bill Thompson supported last summer. The new measure is unlikely to get much support from U.S. computer manufacturers, who would rather wait to see how successful their own voluntary recycling programs are, explains Heather Bowman of the Electronic Industries Alliance. Meanwhile, several state legislatures are expected to debate their own recycling bills this year, but this could lead to a quagmire of differing state laws, a possibility that has made the industry amenable to some kind of federal regulation. Thompson insisted last week that a recycling tax would not affect the competitiveness of American electronics companies. Corporate computer users are concerned not so much with the tax itself, but rather with the potential cost and effort needed to dispose of obsolete equipment. "I would be willing to pay the $10 fee if I knew the product were going to be disposed of in a responsible manner," says Kevin Baradet of Cornell University's S.C. Johnson School of Management.
- "Building the Nanofuture with Carbon Tubes"
Industrial Physicist (01/03) Vol. 8, No. 6, P. 18; Ouellette, Jennifer
Carbon nanotubes offer many potential applications that run the gamut from flat-panel displays to super-strong fabrics to fuel cells to synthetic muscles, but the emergence and growth of the nanotube industry will depend on the development of a simple and cheap technique for mass production. When combined, nanotubes can be up to 100 times stronger than steel, according to Rice University's Richard Smalley; their structural perfection is evident at the atomic level; their hollow configuration makes them lightweight; they are highly conductive, can absorb ultraviolet light, and are transparent to visible light; and they can self-assemble. Nanotubes are grouped into two basic categories: Single-walled nanotubes (SWNTs) that have few structural defects and could be employed in computer circuitry, and multi-walled nanotubes (MWNTs) that are easier and cheaper to fabricate than SWNTs and could find use as structural reinforcement material. Academic institutions currently own the lion's share of nanostructural carbon production, but companies such as NanoDevices, Carbon Nanotechnologies, Hyperion Catalysis, and others have developed or are working on mass-production methods. MWNTs are already being used in lithium-ion batteries; the next probable nanotube application will be field-emission flat-panel displays featuring SWNTs. Also on the horizon are inexpensive chemical sensors made from nanotubes, portable low-power X-ray machines, and molecular-scale transistors. Smalley says "the single biggest limiting factor" to nanotube commercialization is the availability of high-quality material. Other technical hurdles to be overcome include nanotube agglomeration and their inability to remain ordered over long periods.