
ACM TechNews sponsored by AutoChoice Advisor: Looking for a NEW vehicle? Discover which ones are right for you from over 250 different makes and models. Your unbiased list of vehicles is based on your preferences and years of consumer input.
ACM TechNews is intended as an objective news digest for busy IT Professionals. Views expressed are not necessarily those of either AutoChoice Advisor or ACM. To send comments, please write to [email protected].
Volume 7, Issue 747: Friday, January 28, 2005

  • "New Search Tool Ranks I.T. Research Funding"
    Sci-Tech Today (01/27/05); Martin, Mike

    Researchers at Penn State University (PSU) have developed an automated technique for ranking funding agencies based on source acknowledgments in scientific papers indexed on the CiteSeer digital library of IT research. "We speculate that these measures could be used to evaluate the efficacy of funding agencies and programs both at the national and international level," explains CiteSeer developer and PSU professor of information sciences and technology C. Lee Giles. Giles and doctoral student Isaac Councill devised a measurement tool that assesses funding agencies' impact based on citations and acknowledgments in documents. PSU's Margaret Hopkins says the researchers' acknowledgment-extraction algorithms were based on machine-learning methods for identifying and classifying data, and applied to 335,000 CiteSeer entries. The researchers determined that out of the 15 most-cited funding agencies, the National Science Foundation (NSF) was number one with 12,287 acknowledgments, while the Defense Advanced Research Projects Agency (DARPA) was number two with 4,712. However, in terms of the ratio of citations to acknowledgments (C/A), DARPA led the way with a C/A of 17.12, followed by the Office of Naval Research and then the NSF; Giles suggests that DARPA only funds well-entrenched researchers with a track record of high-impact work. UCLA electrical engineering professor Mikhail Simkin notes that "an interesting opportunity" may lie in a third metric uncovered by Giles and Councill's efforts. "If one can learn dollar amounts of grants, the ratio of this amount to the total number of citations in papers funded by those grants would be the 'average price of a citation in dollars,'" he says; thus, the PSU research can help extract a quantitative correlation between symbolic and economic capital.
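
    As a rough illustration of the two rankings described above, the following Python sketch orders agencies by raw acknowledgment count and by the citations-to-acknowledgments (C/A) ratio; only the acknowledgment counts and DARPA's 17.12 ratio come from the summary, while the citation totals are invented placeholders.

        # Hypothetical sketch of the two rankings discussed above; citation
        # totals are placeholders, not figures from the CiteSeer study.
        agencies = {
            # name: (acknowledgments, citations in acknowledging papers)
            "NSF":   (12287, 60000),   # placeholder citation total
            "DARPA": (4712,  80669),   # 80669 / 4712 is roughly 17.12
            "ONR":   (1500,  20000),   # placeholder values
        }

        by_acknowledgments = sorted(agencies, key=lambda a: agencies[a][0], reverse=True)
        by_ca_ratio = sorted(agencies, key=lambda a: agencies[a][1] / agencies[a][0], reverse=True)

        print("Most acknowledged:", by_acknowledgments)
        print("Highest C/A ratio first:", by_ca_ratio)
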
    Click Here to View Full Article

  • "Grid Computing Takes the Linux Route"
    eWeek (01/26/05); Vaas, Lisa

    The recently launched Globus Consortium is significant because the group is committed to promoting open-source deployments of grid computing standards through the advancement of the Globus Toolkit, an open-standards-based array of tools that can facilitate enterprise-level grid implementations. Grid computing's migration from academic labs to the enterprise entails a number of challenges. The enterprise systems into which code must be embedded support a diverse set of interfaces, and Globus Consortium board member Ian Foster notes that security, enterprise work roles, and identity management have their own unique quirks that code will have to address differently. He says end users are clamoring for the open advancement of grid computing through open-source rather than proprietary standards, reporting that "there are people realizing there's a need to define appropriate protocols so these systems can interoperate and people don't have to deploy a single platform across the enterprise if they don't want to." Analyst Nick Gall describes the Globus Consortium as grid computing's answer to the Open Source Development Labs, which is readying Linux for enterprise use. Meanwhile, the purpose of the Enterprise Grid Alliance (EGA), as defined by Sun Microsystems' Peter ffoulkes, is to function as a "body that is an advocate for the needs of the commercial enterprise computing user and to accelerate the process by which grid technologies become usable by the commercial, for-profit enterprise customer." He says the EGA has been attempting to entice end-user customers while trying to fulfill community, not just vendor, requirements, and has started to establish technical working groups to address unified communication and other specific marketplace needs.
    Click Here to View Full Article

  • "W3C, IETF Stick With 'Web Glue' Standards"
    InternetNews.com (01/26/05); Boulton, Clint

    A new standard and specification created by the World Wide Web Consortium (W3C) and the Internet Engineering Task Force (IETF) are designed to expedite users' exploitation of Web resources by allowing Uniform Resource Identifiers (URIs) and Internationalized Resource Identifiers (IRIs) to be tapped more efficiently. The IETF's STD 66 URI: Generic Syntax standard (published as RFC 3986), which was authored by W3C director Tim Berners-Lee, Day Software's Roy Fielding, and Adobe Systems' Larry Masinter, boasts a number of advantages over its predecessor, RFC 2396. Those advantages include support of domain names throughout the Web, a spruced-up "Normalization and Comparison" section, determination of equivalent URIs, accommodation of security considerations, and a rule for absolute URIs with optional fragments. Meanwhile, the RFC 3987 IRIs spec, written by the W3C's Martin Durst and Microsoft's Michel Suignard, permits users and content developers to identify Web sources in their own language. The W3C says the IRI spec, which was co-developed by the consortium's Internationalization Working Group, will benefit XML, XHTML, and other specs through its support of international characters. The consortium expects the switch from URI to IRI to be smooth because every URI is essentially already an IRI, in the sense that "URI users do not need to do anything differently in order to find what they need on the Web." URIs are described by the W3C as the "glue that holds the Web together."
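
    The statement that every URI is already an IRI reflects the RFC 3987 mapping, in which any non-ASCII characters in an IRI are UTF-8 encoded and percent-escaped to yield an equivalent ASCII-only URI. A minimal Python sketch of that mapping, using a hypothetical address, looks like this:

        # Map an IRI containing a non-ASCII character to its equivalent URI by
        # UTF-8 encoding and percent-escaping; the address is hypothetical.
        from urllib.parse import quote

        iri = "http://example.org/r\u00e9sum\u00e9"        # IRI containing the letter "é"
        uri = quote(iri, safe=":/?#[]@!$&'()*+,;=")        # leave URI-legal characters untouched
        print(uri)   # http://example.org/r%C3%A9sum%C3%A9
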
    Click Here to View Full Article

  • "Google's Search for Meaning"
    New Scientist (01/28/05); Graham-Rowe, Duncan

    The employment of search engines to uncover the meanings of words may pave the way for truly intelligent computers, and Google and other search engines are taking steps in this direction by scanning and indexing massive volumes of text. "The Web might make all the difference in whether we make an artificial intelligence or not," notes Cyc project scientist Michael Witbrock, who thinks the Web will facilitate computers' acquisition of a comprehensive knowledge base. A word's meaning can be inferred from the words used around it, and this technique formed the basis of a project undertaken by Rudi Cilibrasi and Paul Vitanyi of Amsterdam's National Institute for Mathematics and Computer Science, who used Google searches to measure the relationship between words. The researchers devised a statistical indicator that gauges the logical distance between two words based on the number of hits returned, which Cilibrasi and Vitanyi call the normalized Google distance (NGD). A small NGD indicates that words are very closely related. Vitanyi says repetition of this process for numerous word pairs can yield an NGD map from which the computer can deduce meaning, which "could well be the way to make a computer understand things and act semi-intelligently." The researchers say the method has successfully discriminated between numbers, colors, religions, and Dutch painters. Meanwhile, the Cyc project aims to build a vast electronic encyclopedia of fundamental human understanding that includes facts, rules of thumb, and word meanings.
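
    The published formula behind the NGD is simple enough to state directly: with f(x) and f(y) the hit counts for each term, f(x,y) the count for pages containing both, and N the total number of pages indexed, the distance is a normalized difference of their logarithms. A minimal Python sketch, using hypothetical hit counts, follows:

        from math import log

        def ngd(fx, fy, fxy, n):
            """Normalized Google distance; smaller values mean more closely related terms."""
            return (max(log(fx), log(fy)) - log(fxy)) / (log(n) - min(log(fx), log(fy)))

        # hypothetical hit counts for two related terms, and the total index size N
        print(round(ngd(fx=46_700_000, fy=12_200_000, fxy=2_630_000, n=8_058_044_651), 3))  # ~0.44
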
    Click Here to View Full Article

  • "Adaptive Lights Organize Traffic"
    Technology Research News (02/02/05); Patch, Kimberly

    Free University of Brussels researcher Carlos Gershenson has developed a method that enables traffic lights to configure themselves to improve traffic flow, modeled after the self-organization principles of social insects. The technique builds on a U.K. scheme for coordinating traffic at remote intersections, in which a red light notes how many cars approach and multiplies that number by time increments; the light is programmed to change once a certain threshold is reached, so that the light turns green faster as the number of cars approaching or waiting at a red light increases. Gershenson adapted the technique to work in heavy traffic by inserting a rule that a green light cannot change to red before a minimum period has elapsed. Rather than have the lights communicate with each other directly, Gershenson's scheme facilitates indirect traffic coordination by having the lights respond to local traffic density, using the environment to organize themselves in the manner of social insects. "There is no need of the central command center, communication between agents, nor hierarchies," notes the Free University researcher. In simulation, Gershenson's system lowered wait times and raised average speeds compared with traditional traffic management systems. The next challenge involves refining the method to account for pedestrians and left-hand turns, and then running a pilot study and comparing performance with that of current adaptive traffic management schemes. Gershenson estimates that a practical application of this system could be ready in a few years.
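
    A minimal Python sketch of the counting rule described above; the threshold and minimum-green values, and the single-intersection simplification, are illustrative assumptions rather than Gershenson's exact parameters:

        class Intersection:
            """One intersection; exactly one of its two directions is green at a time."""
            def __init__(self, threshold=40, min_green=10):
                self.green = "north-south"   # direction currently green
                self.counter = 0             # cars times time steps, accumulated at the red light
                self.green_time = 0          # time steps the current green phase has lasted
                self.threshold = threshold
                self.min_green = min_green

            def step(self, cars_at_red):
                """Advance one time step using only locally sensed traffic."""
                self.green_time += 1
                self.counter += cars_at_red
                # switch once enough demand builds up at the red light, but never
                # before the current green phase has lasted the minimum period
                if self.counter >= self.threshold and self.green_time >= self.min_green:
                    self.green = "east-west" if self.green == "north-south" else "north-south"
                    self.counter = 0
                    self.green_time = 0
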
    Click Here to View Full Article

  • "Bridging India's Digital Divide With Linux"
    Asia Times (01/28/05); Devraj, Ranjit

    With backing from the Indian government, open-source Linux software is bridging the digital divide in India, as well as providing for some interesting innovations in the handheld computing space. The Indian army plans to equip its soldiers with a Linux-run handheld device that will help them coordinate actions on the battlefield: Called SATHI (situational awareness and tactical hand-held information), the device is a derivative of the Indian Institute of Science in Bangalore's Simputer, which was meant to provide cheap desktop computing to the masses. Indian government officials are looking to Linux and other non-proprietary software to help deal with rampant software piracy. The government is working with IBM to help transform public IT, so that academic and government institutions operate using Linux software. IBM's Linux Center of Competency in Bangalore provides consulting, education, and certification, and will play a "significant role in the worldwide Linux community," says IBM South and Southeast Asia Linux general manager Jyoti Satyanathan. Delhi Linux Users Group member and professor Edwin Wells says single-disc Linux distros that allow people to run Linux without installing it on the hard drive will help spur adoption, and he notes that such capability is not possible with Windows. Indian public-sector firm Bharat Electronics has developed a mobile version of the Simputer that includes wireless capability; the computer has a slot for smart cards so that shopkeepers in rural areas can rent out the device, with each user only having to invest in a smart card housing their personal profile and data. The World Social Forum held in Brazil as a developing-world counterpoint to the World Economic Forum is using open information technology this year, with some 1,000 desktop computers using free software and the organization's Web site designed using the open-source PHP language.
    Click Here to View Full Article

  • "Designed to Learn Faster by Better Reading Online"
    IST Results (01/28/05)

    Researchers in IST's E-Tracking project tracked people's eye movements to study methods for gauging an e-learning system's functionality, usability, and acceptability. "Where eyes go to on a page, how long they gaze at a specific area, where and when they shift to other areas, as well as someone's attention level--these parameters provide rich information, acting as objective and quantitative indicators of a text's level of difficulty," notes E-Tracking project coordinator and professor Daniela Zambarbieri. The project employed the EyeGaze, a system that uses a camera to record how a person's eyes move while studying a computer screen, together with a monitor that affords real-time observation of the eye. Software developed by the project partners obtained signals from EyeGaze as well as from certain keyboard and mouse movements, while additional algorithms analyzed the scan path of volunteers' eye movements. The effort has yielded a viable scheme for evaluating material rendered in an electronic format, as well as common-sense guidelines for e-learning system development. Zambarbieri says the guidelines stress the importance of a clean layout, which makes electronic pages easier to explore visually; scroll-free pages, unified text and image organization, and a clear explanation of a course's structure at the outset are other recommendations. The results of the project will be distributed via e-learning specialist Giunti, which Zambarbieri says has outlined a marketing strategy for the E-Tracking software. The software could enhance not only the development of e-learning systems, but also the design of Web sites and vehicle instrument consoles.
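
    One of the simplest of the parameters mentioned above, how long the eyes dwell on a given area of the page, can be computed directly from timestamped gaze samples. The Python sketch below is purely illustrative; the screen regions and the sample format are invented, not the E-Tracking project's actual data layout:

        # Sum dwell time per screen region from (timestamp_ms, x, y) gaze samples.
        regions = {"text": (0, 0, 600, 800), "figure": (600, 0, 1024, 800)}   # (x0, y0, x1, y1)
        samples = [(0, 120, 300), (20, 130, 310), (40, 700, 400), (60, 710, 420)]

        def region_of(x, y):
            for name, (x0, y0, x1, y1) in regions.items():
                if x0 <= x < x1 and y0 <= y < y1:
                    return name
            return None

        dwell_ms = {}
        for (t0, x, y), (t1, _, _) in zip(samples, samples[1:]):
            area = region_of(x, y)
            if area:
                dwell_ms[area] = dwell_ms.get(area, 0) + (t1 - t0)
        print(dwell_ms)   # {'text': 40, 'figure': 20}
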
    Click Here to View Full Article

  • "Academics Unite to Debug Speech Systems"
    Computerworld Australia (01/27/05); Crawford, Michael

    Jan. 27 will mark the official launch of the Human Communication Science Network (HCSNet), an initiative led by Australia's University of Western Sydney and Macquarie University with the goal of developing automatic speech recognition systems that can adapt to a user's psychological state, as well as identify and respond appropriately to agitated callers. "This network will break the conventional interdisciplinary research links and forge new collaborations across fields as diverse as psychology, computing, linguistics and language development, acoustic science and auditory neuroscience," explains HCSNet steering committee chairman and professor Dennis Burnham. HCSNet convenor and professor Robert Dale says the network will coordinate research into more "human-like" human-machine communication rather than artificial human development. Project participants include 17 Australian universities, public and private Australian companies, and Japanese, European, American, Canadian, Asian, and British research institutes. "Not only will we provide the vehicle for unique collaboration, but we will also train and mentor younger, less experienced researchers so they can continue the quest for more effective communication tools in years to come," declares professor Kate Stevens, chair of HCSNet's training and development committee. Other research areas the initiative could concentrate on include booking-system machine translation, the integration of IT with auditory science, and boosting the efficiency of data mining and information retrieval. The network is sponsored by a five-year, $2 million Australian Research Council grant.
    Click Here to View Full Article

  • "The United States' Battle to Secure Cyberspace"
    CNet (01/26/05); Lemos, Robert

    Departing Homeland Security Department assistant secretary Robert Liscouski says in an interview that the department's cybersecurity efforts are more significant and positive than critics claim, and cites a list of achievements that include an approximately seven-fold budget increase in less than two years; the launch of the National Cyber Alert System, whose direct subscriber base is 270,000 strong; raised situational awareness in the cyber-community via the US-CERT Web site; and the setup of a round-the-clock cybersecurity readiness and response system that tracks incident and trend data. Liscouski disputes the argument that people are more concerned with physical threats than cyberthreats, arguing that the visibility of physical threats and responses to those threats in the media creates the assumption that cybersecurity, which is not as visible, is being downplayed. He says the DHS has taken coordinated action to mitigate, probe, and gain insights on cyberattacks, and says with confidence that "we're surely not going to turn a blind eye to cyberspace so we can have a 9-11 version of a cyberwar." Liscouski ascribes the departure of key personnel in the DHS cybersecurity division, including himself, to "regular government turnover." He says he is leaving because he has fulfilled his two-year commitment. Liscouski says the private sector needs to take its responsibility toward the user community more seriously and address its own cybersecurity issues, and he promises to continue lobbying for increased cybersecurity awareness and education among users both inside and outside the government. Liscouski contends that the private sector, working through the National Cyber Security Partnership, is fulfilling its role alongside the DHS, and notes that the software assurance and governance working groups have made an especially significant contribution.
    Click Here to View Full Article

  • "Consortium Attracts Techno-Experts"
    Scoop (NZ) (01/27/05)

    The Human Interface Technology Laboratory New Zealand (HIT Lab NZ) will play host to over 200 attendees from the fields of science and education at the international Virtual Worlds Consortium in February. The theme of the two-day conference is "Industry Creativity Research: Partners in Innovation," and its focus will be the forging of alliances between creativity, research, and industry, which are the ingredients of successful collaboration. Discussions will revolve around how future technologies can be developed from partnerships between industry and academia, collaboration between artists and technologists, next-generation Internet, mixed reality gaming, and other topics. Keynote speakers will include University of Saskatchewan computer science professor Carl Gutwin, who has wide-ranging expertise in human-computer interaction and computer-supported cooperative work, with particular emphasis on how groupware systems can better support the fluent and natural interaction of face-to-face cooperation. A keynote address will also be delivered by MIT professor of media arts and sciences Hiroshi Ishii, who directs the MIT Media Lab's Tangible Media group and co-directs its Things That Think Consortium. Ishii is a frequent collaborator in projects that meld diverse arts, scientific, and design fields. Cutting-edge HIT Lab technologies and applications will be highlighted at the consortium. HIT Lab NZ collaborates with its American counterpart at the University of Washington in Seattle in the development of radical new interfaces designed to revolutionize human-computer interaction.
    Click Here to View Full Article

  • "Battle Bot: The Future of War?"
    Christian Science Monitor (01/27/05) P. 14; Lamb, Gregory M.

    Military strategy could undergo a dramatic transformation if the majority of fighting is carried out by robots, a vision that will be one step closer to reality when semi-autonomous "battle bots" that fire weapons are deployed in Iraq this spring. U.S. armed forces are hoping robots can reduce casualties, lower costs, and improve mission performance. Robots already in use in Iraq include the missile-firing Predator unmanned reconnaissance aircraft and bomb-disposal machines such as the PackBot, which can climb stairs, negotiate rough terrain, and grip objects. Research and development is also proceeding on Robot Extraction Vehicles that can retrieve injured soldiers from the front lines, and automated machines that can perform reconnaissance, supply transportation, and perimeter protection. Battle-bot technology is being primarily driven by the simplification and improvement of off-the-shelf components such as global positioning systems, which are cheap enough to make proprietary system development increasingly unnecessary. Battle bots cannot become fully autonomous until their vision systems are advanced enough to both perceive and interpret their surroundings, and the Defense Advanced Research Projects Agency plans to spur progress in this field through a "grand challenge" to develop unmanned vehicles that can successfully traverse a 175-mile off-road course on their own. Analyst John Pike notes that fewer soldiers could be needed in the field with robots in place, but warns that the machines could emotionally distance their operators from the business of warfare. He also wonders whether war would become easier, especially if other countries develop battle bots of their own. He says, "What would it look like if millions of Chinese robots came crawling out of the Pacific Ocean and started storming across California?"
    Click Here to View Full Article

  • "The IT Industry and the Nigeria Computer Society"
    allAfrica.com (01/27/05); Nwannenna, Chris C.

    The Nigeria Computer Society (NCS) will continue its campaign to develop indigenous IT development skills this year, with a host of conferences, a revised syllabus for computer professionals certification, and support of national IT legislation. Creating IT capacity within the country itself is the only way to ensure Nigeria will be a developer of IT products and services, not only a consumer, writes NCS President Chris Nwannenna. Accordingly, establishing the country's Software Development Institute is a primary goal for NCS this year: The institute would provide development skills as well as promote best practices in software engineering. The National Assembly is set to deliberate on an IT Policy Bill this year, with express support from Senate President Adolphus Wabara; the legislation will review relevant laws and explore new opportunities to develop indigenous IT capabilities. NCS also sees banking reform as a significant opportunity and is set to work with the Central Bank: A workshop will be held to disseminate expertise on banking automation and systems integration that are key to supporting banking reform. NCS will also host the second annual Software Developers Summit that last year drew more than 1,000 participants, with this year's conference focusing on Web applications, ERP systems, and standards, while also featuring a software competition for computer science students. Nigerian students and professionals will benefit from a syllabus review for NCS' computer professional examinations this year, while a Computer Science Curriculum Conference is set to draw academics, government officials, and other experts to discuss computer science education. NCS' eighth international conference will feature international experts, including a professor from the Georgia Institute of Technology, a NASA computer expert, and a consultant to the Chinese government.
    Click Here to View Full Article

  • "Faculty Members Brief Industry Partners at CNS Research Review"
    University of California, San Diego (01/26/05); Ramsey, Doug

    Researchers from the University of California, San Diego's Center for Networked Systems (CNS) described developments in their wireless and grid computing projects at the center's first formal research review. About a dozen graduate students are working on seven projects in CNS, funded in part by five industry members who attended the review--AT&T, Alcatel, Hewlett-Packard, QUALCOMM, and Sun Microsystems. CNS director and computer science professor Andrew Chien briefed attendees on his work with the San Diego Supercomputer Center to model large-scale dynamic Internet and grid behavior, as well as another project creating models and resource management functions for enterprise and grid infrastructures. Electrical and computer engineering professor Anthony Acampora is leading a small team to create free-space optical technologies that will help solve the last-mile problem of providing small businesses with high-speed network access. "I believe this is one of the last true remaining problems in modern telecommunications," he said. Computer science professor Stefan Savage is working with the NSF's Collaborative Center for Internet Epidemiology and Defenses to create technology that could automatically characterize emerging Internet-borne malware activity. CNS plans to choose another seven projects this summer and recruit more industry members as it expands, said Chien. The program offers internship opportunities for students, as well as collaborative opportunities between CNS projects, industry members, and other university research groups such as the California Institute for Telecommunications and Information Technology (Calit2).
    Click Here to View Full Article

  • "It's Not All in Your Head"
    Wired News (01/27/05); Dotinga, Randy

    The Virtual Reality Medical Center in San Diego is one of roughly 10 such clinics in the United States that treat phobics, disaster victims, and soldiers with post-traumatic stress disorder by putting them in virtual-reality scenarios. "Exposure therapy" does not eliminate people's fundamental fears, but helps them cope with situations by learning to control their actions under pressure, while falling prices for computing power and virtual-reality equipment mean that psychological treatment using virtual reality is changing rapidly. Researchers intend to enhance virtual reality simulations by adding touch sensation and more interactive capabilities, eventually enabling the type of immersive "holodeck" scenarios shown in Star Trek. There are still serious obstacles to reaching that goal, including relatively cheap but limited virtual reality helmets that University of Washington researcher Hunter Hoffman describes as akin to looking into a neighbor's backyard through a crack in a fence. Some simulations feature human figures that look less detailed than characters in the Sims game, but graphics realism is improving quickly, such as with an airport scenario at the San Diego treatment center that uses digital photos and real audio from the San Diego International Airport. Patients who fear airports can learn to deal with situations by going through the airport ticket counter, food court, and security checkpoints. Doctors are on hand to monitor breathing rates, pulse, and perspiration, and pinpoint the foundations of people's fears. Hoffman says current virtual reality treatments do not need to be ultra-realistic in order to be effective since the obvious computer simulation helps patients tolerate the scenario.
    Click Here to View Full Article

  • "Narrowing the Search"
    Computerworld (01/24/05) P. 23; Hildreth, Sue

    Enterprise search technology has diversified tremendously in the last five years to provide businesses with a number of customization options: it now covers taxonomy classification, text analytics, and behavioral analytics to help companies refine their search strategies. In terms of the search capability itself, enterprise search differs from public Internet search in that it relies more on content than on comparisons of source links; this is because users have a better idea of what they are looking for in enterprise search, says Stanford Linear Accelerator Center (SLAC) experimental support professional Douglas Smith, whose group uses the open-source Swish-e search tool for its internal newsgroup and another more sophisticated tool for unstructured enterprise content. SLAC uses a search engine from Ultraseek tailored to avoid "black holes" that offer no valuable content but trap indexing spiders. Search vendors also provide tools that give businesses more control over how their content is categorized and even weighted during the ranking process. Each enterprise search implementation is unique to the organization's needs, says IDC analyst Susan Feldman. People's Bank in Connecticut uses behavioral analytics to boost the accuracy of its results and produce more "conversions," in which a search leads to the purchase of a financial product; People's Bank senior information architect Ross Jenkins was able to create special "landing pages" for searches determined to be high-value, for example. The World Book Encyclopedia's Web encyclopedia is a great example of how enterprise search works with multimedia, as the publication's site houses thousands of map images, audio clips, and videos. Industrial manufacturing portal ThomasNet.com pulls outside content from vendor Web sites, and then combines it with content from its own database.
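
    A hypothetical Python sketch of the kind of weighting described above, in which a base relevance score is adjusted by an administrator-assigned boost and by a behavioral signal such as a past conversion rate; the field names and weights are invented for illustration, not any vendor's scoring model:

        def rank(documents, relevance):
            """Order documents by relevance adjusted with boost and behavioral signals."""
            return sorted(
                documents,
                key=lambda d: relevance(d) * d["admin_boost"] + 0.5 * d["conversion_rate"],
                reverse=True,
            )

        docs = [
            {"title": "Checking accounts", "admin_boost": 2.0, "conversion_rate": 0.12},
            {"title": "Branch history",    "admin_boost": 1.0, "conversion_rate": 0.01},
        ]
        print([d["title"] for d in rank(docs, relevance=lambda d: 1.0)])
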
    Click Here to View Full Article

  • "Sensors Everywhere"
    InformationWeek (01/24/05) No. 1023, P. 32; Ricadela, Aaron

    Wireless sensor-network technology is finding application in fields as diverse as homeland security, remote equipment monitoring, and supply-chain management, and it has the potential to constitute the next multibillion-dollar tech market. Battery-powered sensor "motes" are composed of a circuit board with networking and software, interfaces that read environmental fluctuations, and a wireless radio to transmit this information; furthermore, "mesh networking" software promotes power efficiency by enabling each sensor to activate only when it has to transmit data, and then send that data to neighboring motes rather than to a remote base station. Harbor Research estimates that the population of wireless sensors currently in use could expand from about 200,000 to 100 million between now and 2008, while the worldwide wireless sensor market could be worth more than $1 billion by 2009. The most immediate profits to be realized from the sensor net technology will likely come from replacing corporate operational monitoring systems with wireless sensor nets. A number of challenges must be met in order to create new markets, including the establishment of additional industry standards so software vendors have a unified methodology for compiling data from sensor nodes of disparate manufacture and of varying capability. Another challenge is to insert software in sensors to enhance their selectivity of what data to transmit or to condense data as a power-saving measure. There is also an absence of software tools that can program entire sensor nets in a single instance, while Microsoft researcher Feng Zhao reports that writing software "will probably be a stumbling block between sensors and killer apps."
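
    A minimal Python sketch of the multi-hop idea described above: a mote wakes only when it has a reading to send and relays it to the neighbor that is fewest hops from the base station rather than transmitting to a distant base directly. The class and field names are illustrative assumptions, not any vendor's API:

        class Mote:
            def __init__(self, name, hops_to_base):
                self.name = name
                self.hops_to_base = hops_to_base   # learned when the mesh forms
                self.neighbors = []                # motes within radio range

            def send(self, reading):
                """Wake, relay the reading one hop toward the base station, then sleep."""
                if self.hops_to_base == 0:
                    print("base station received", reading)
                    return
                next_hop = min(self.neighbors, key=lambda m: m.hops_to_base)
                next_hop.send(reading)

        base = Mote("base", 0)
        relay = Mote("relay", 1); relay.neighbors = [base]
        leaf = Mote("leaf", 2);   leaf.neighbors = [relay]   # the base is out of direct radio range
        leaf.send({"temperature_c": 21.4})
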
    Click Here to View Full Article

  • "Innovation Ships Out"
    CIO (01/15/05) Vol. 18, No. 7, P. 62; Koch, Christopher

    Offshoring product research and development to lower-wage countries promises tremendous cost savings, but some economists warn that the trend could seriously erode U.S. companies' innovation and competitiveness. National Institute of Standards and Technology economist Gregory Tassey reports that the proportion of R&D money channeled into new innovation has been shrinking over the past dozen years, while incremental product development spending has been rising; similarly, government R&D spending has fallen behind that of many other nations. Companies that outsource R&D will require IT connections to facilitate improved collaboration among engineers, while CIOs will have to replace direct process oversight with automated monitoring and reporting for effective supply-chain management. Electronics outsourcing's phenomenal growth belies the fact that it is not a very profitable business, and offshore electronic manufacturing services (EMS) companies can get out of this rut through innovation. These companies are becoming increasingly capable of innovation as they take on more design work and expand their engineering groups. Tassey warns that traditional R&D powerhouses will suffer as a result, since "Real breakthrough product development usually requires manufacturing and research to be located together." An EMS that can handle all aspects of the product process--design, manufacturing, and shipping--combined with relatively cheap IT controls that allow original equipment manufacturers (OEMs) to confirm that the EMS is doing quality work, makes outsourcing all the more appealing to the OEM. "All of the [EMS] companies realize they have to get involved in the design effort at the early stage because that's how the business is won today," notes Needham & Co. managing director John McManus.
    Click Here to View Full Article

  • "Best-Kept Secrets"
    Scientific American (01/05) Vol. 292, No. 1, P. 78; Stix, Gary

    The emergence of quantum computing could spell the doom of traditional public-key cryptography, since quantum computers are theoretically capable of quickly carrying out the massively complex factorizations needed to break the encrypted secret key transferred between sender and receiver. This development would make quantum encryption one of the few practical security options. In a quantum cryptographic scheme, a series of quantum bits (qubits) represents the encryption/decryption key, and an eavesdropper's slightest attempt to intercept the message is made plain to both sender and receiver because of Heisenberg's uncertainty principle, which dictates that the measurement of one property in a quantum state will upset another. Early quantum-cryptographic key transmission was limited to a range of 30 centimeters, but both id Quantique and MagiQ Technologies sell products that extend that range, while NEC, which successfully transmitted quantum-encrypted data 150 kilometers, is expected to debut a product of its own as early as 2006. The Defense Advanced Research Projects Agency has sponsored the setup of a proof-of-concept multi-institution network that uses quantum-encrypted keys. The desire to extend quantum key transmission range even further has prompted researchers to investigate an alternative medium to optical fiber, with the ultimate goal being a quantum repeater that taps the phenomenon of photon entanglement. A successful quantum repeater will require a quantum memory that stores qubits without corrupting them before they are broadcast to a subsequent network node, and Georgia Institute of Technology researchers recently outlined a scheme that uses entangled atoms instead of photons to effect long-distance qubit transmission.
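
    The article does not name a specific protocol, but the disturbance-detection idea it describes underlies quantum key distribution schemes such as BB84. The toy Python simulation below illustrates only the classical bookkeeping of the sifting step (no quantum hardware is modeled): bits measured in mismatched bases are discarded, and an eavesdropper who measures in the wrong basis would randomize some of the remaining bits, which comparing a sample of the key would reveal.

        import random

        n = 32
        alice_bits  = [random.randint(0, 1) for _ in range(n)]
        alice_bases = [random.choice("+x") for _ in range(n)]
        bob_bases   = [random.choice("+x") for _ in range(n)]

        def measure(bit, prep_basis, meas_basis):
            # matching bases reproduce the bit; mismatched bases give a random outcome
            return bit if prep_basis == meas_basis else random.randint(0, 1)

        bob_bits = [measure(b, pb, mb) for b, pb, mb in zip(alice_bits, alice_bases, bob_bases)]

        # keep only the positions where the bases happened to agree (the sifted key)
        sift = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
        key_alice = [alice_bits[i] for i in sift]
        key_bob   = [bob_bits[i] for i in sift]
        assert key_alice == key_bob   # holds without an eavesdropper or channel noise
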
    Click Here to View Full Article

  • "Extensible Programming for the 21st Century"
    Queue (01/05) Vol. 2, No. 9, P. 48; Wilson, Gregory V.

    Future programming languages must offer extensible information support and processing through three integrated technologies: Compilers, debuggers, linkers, and other tools that are pluggable frameworks rather than monolithic command-line applications; programming languages that support extensible syntax; and programs that are stored as XML documents to enable uniform data representation and processing. Making debuggers and the like frameworks for plug-ins would give programmers a clearer picture of what those programs are doing. The general adoption of language extensibility has been hindered by a lack of familiarity among programmers (a problem that code generators and wizards are gradually addressing), scarce mainstream language support (which should erode as programmers familiarize themselves with program transformation tools), and a cognitive gap between what programmers write and what they must debug, which can only be scalably closed by allowing programmers to control the handling of generated code by linkers, debuggers, and other tools. Future programmers will make the switch from passive libraries that contain only code and data for the final program to active libraries that include final application content as well as content analysis, optimization, and debugging instructions for processing tools. The final component of next-generation programming systems is the storage of code as XML rather than flat text, which promises to boost languages' extensibility, streamline active library construction, and allow programmers to customize their software views while keeping the underlying model unchanged. The biggest obstacle to this breakthrough is social resistance, given security issues, concerns about subtle interactions between language features, and programmers' opposition to storing programs in a non-flat ASCII format.
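
    A hypothetical Python sketch of the "programs stored as XML" idea: the same XML model of a tiny assignment statement can be rendered in different surface syntaxes by different tools while the stored document stays unchanged. The element names are invented for illustration and do not correspond to any existing format:

        import xml.etree.ElementTree as ET

        program = ET.fromstring(
            "<assign><target name='x'/>"
            "<apply op='+'><var name='y'/><int value='1'/></apply></assign>"
        )

        def render(node):
            """One possible 'view' of the stored model, here as C-like text."""
            if node.tag == "assign":
                target, expr = node
                return target.get("name") + " = " + render(expr) + ";"
            if node.tag == "apply":
                left, right = node
                return render(left) + " " + node.get("op") + " " + render(right)
            return node.get("name") or node.get("value")   # <var> or <int> leaf

        print(render(program))   # x = y + 1;
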
    Click Here to View Full Article


 