Association for Computing Machinery
Welcome to the October 8, 2008 edition of ACM TechNews, providing timely information for IT professionals three times a week.

HEADLINES AT A GLANCE


Study of Data Mining for Terrorists Is Urged
New York Times (10/08/08) P. A20; Lichtblau, Eric

All U.S. government programs that use data-mining technology to search through databases for clues to terrorist activity should be evaluated to determine whether they are effective or even legal, concludes a new National Research Council (NRC) report. The U.S. government has used data-mining tools aggressively since the Sept. 11 attacks, as counter-terrorism officials in many intelligence agencies have sought to analyze records on travel habits, calling patterns, email, financial transactions, and other data to isolate possible terrorist activity. The National Security Agency's warrantless wiretapping of terror suspects, programs that screen airline passengers for suspicious behavior, and the Pentagon's now-defunct Total Information Awareness program have all relied on data mining. The NRC report warns that successfully using these tools to deter terrorism will be extremely difficult due to legal, technological, and logistical problems. The report says a haphazard approach to using data-mining tools threatens both Americans' privacy rights and the country's legitimate national security needs. Data mining has been shown to work in commercial settings to predict product trends and detect credit fraud, but there is little evidence that it can be used to find terrorists, the report says. Part of the problem is that the sample of known terrorists and actual attacks is so small that it is difficult to establish a pattern of suspicious behavior that can be used to find other terrorists. The rush to accumulate enormous amounts of information also increases the risk of a significant number of false leads that could implicate innocent people.
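
The report's concern about false leads is, at bottom, a base-rate argument, which a quick back-of-the-envelope calculation makes concrete. The numbers below are illustrative assumptions, not figures from the NRC report:

    # Illustrative base-rate arithmetic (all figures assumed, not from the report):
    # even a very accurate screening tool flags far more innocents than suspects
    # when the behavior being searched for is extremely rare.

    population = 300_000_000      # rough U.S. population, 2008
    actual_suspects = 3_000       # assumed number of genuine bad actors
    false_positive_rate = 0.001   # assumed: tool wrongly flags 0.1% of innocents
    true_positive_rate = 0.99     # assumed: tool catches 99% of genuine suspects

    innocents = population - actual_suspects
    false_alarms = innocents * false_positive_rate
    real_hits = actual_suspects * true_positive_rate

    print(f"False alarms: {false_alarms:,.0f}")   # ~300,000 innocents flagged
    print(f"Real hits:    {real_hits:,.0f}")      # ~2,970
    print(f"Share of flags that are real: {real_hits / (real_hits + false_alarms):.2%}")

Under these assumptions, fewer than 1 percent of all flags would point at actual suspects, which is the report's warning in numeric form.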


CERN Grid May Boost Drug and Climate Research
IDG News Service (10/03/08) Niccolai, James

The Worldwide LHC Computing Grid, developed for use with the Large Hadron Collider (LHC), is also being used to research new drugs and study the causes of climate change. Scientists at the European Organization for Nuclear Research (CERN), which operates the LHC, say that one-third of the experiments being conducted on the grid today are in non-physics areas such as life sciences and biomedical engineering. "The simulations that can be done on these resources is a kind of science that wasn't possible before," says LHC Computing Grid project leader Ian Bird. "Now we see the type of collaboration that can be generated on the fly." CERN IT department head Wolfgang von Ruden says the grid has opened up the digital infrastructure for science on an unprecedented scale. He says the grid could have as much impact on people's lives as the World Wide Web, potentially enabling people to send medical records for analysis on the grid. The grid uses a tiered structure that fans out from CERN to progressively smaller computing centers around the world. The LHC is expected to produce about 15 petabytes of data per year, which will be recorded locally at Tier 0, the CERN Computer Centre, then filtered and distributed to 11 Tier 1 computer centers: seven in Europe, three in North America, and one in Asia. The data will then be distributed to about 160 Tier 2 sites with enough computing and storage capacity for specific analysis tasks.
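
The tiered fan-out described above can be sketched in a few lines. The site names, counts per tier, and the assumption of an even split are placeholders for illustration, not CERN's actual configuration:

    # A minimal sketch of the Tier 0 -> Tier 1 -> Tier 2 fan-out (structure only;
    # site names and the even-share assumption are illustrative placeholders).

    TIER0 = "CERN Computer Centre"                   # records the raw ~15 PB/year
    TIER1 = [f"tier1-site-{i}" for i in range(11)]   # 11 national centres
    TIER2 = {t1: [f"{t1}/tier2-{j}" for j in range(15)] for t1 in TIER1}

    def distribute(dataset_pb: float):
        """Fan a yearly dataset out from Tier 0 across the lower tiers."""
        per_tier1 = dataset_pb / len(TIER1)          # filtered share per Tier 1 centre
        for t1 in TIER1:
            per_tier2 = per_tier1 / len(TIER2[t1])   # analysis slice per Tier 2 site
            yield t1, per_tier1, per_tier2

    for t1, share1, share2 in distribute(15.0):
        print(f"{t1}: {share1:.2f} PB -> {share2:.3f} PB per Tier 2 site")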


Chip Firms Embrace Low Power
Wall Street Journal (10/08/08) P. B3D; DiColo, Jerry A.

Texas Instruments, Analog Devices, STMicroelectronics, and many smaller companies are developing ultra-low-power and "no power" microchips capable of running on the minute amounts of energy generated by changes in temperature or structural vibrations. Developers believe these chips could be used in a wide variety of real-world applications, driven by changes in how industries gather data and transmit simple information. Low-power chips could be used to draw power from a patient's skin or from cars traveling over a road surface. "There are serious sources of power around us," says Texas Instruments director of research and development Martin Izzard. "We're able to make practical, useful systems down at the milliwatt or lower range. And at that range, it becomes possible to actually look around us and harvest some energy out of the environment." Although energy harvesting has been used for some time, researchers say the next phase involves remote networks of tiny sensors that use minute amounts of power to provide data. Such networks could be used to measure the stability of bridges and roads, or monitor blood pressure and other health information without having to hook a patient up to a machine. "All we need to be able to do...is harvest one thousandth of the power from a light bulb or from your body in order to run a practical application," Izzard says.
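
Izzard's "one thousandth of the power from a light bulb" remark translates into a simple power budget. The figures below are assumptions for illustration, not TI specifications:

    # Back-of-the-envelope power budget for an energy-harvesting sensor node
    # (all figures are illustrative assumptions).

    bulb_watts = 60.0                  # a common incandescent bulb, circa 2008
    harvest_fraction = 1e-3            # "one thousandth" of the bulb's power
    harvested_mw = bulb_watts * harvest_fraction * 1000

    sensor_budget_mw = 1.0             # "milliwatt or lower range," per the article

    print(f"Harvested: {harvested_mw:.0f} mW")    # 60 mW
    print(f"Headroom:  {harvested_mw / sensor_budget_mw:.0f}x the sensor budget")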


Catching Smugglers With 'Guilt' Technology
University of Bradford (10/02/08)

Researchers at the United Kingdom's University of Bradford (UB) want to increase security at border controls by developing a computer system capable of detecting guilt. UB professor Hassan Ugail is part of a team working to develop technologies that will assist border control agencies in identifying people attempting to smuggle contraband goods or narcotics. The project, which will start in December 2008, will last two years and is being funded by the Engineering and Physical Sciences Research Council. The main goal of the project is to develop technology capable of profiling people as they pass through border controls to help security agencies identify smugglers. The technology will use real-time dynamic passive profiling based on the modeling of facial expressions, eye movements, and pupil changes in both the visual and thermal domains, looking for hints of malicious intent in physiological signals such as facial blood flow, eye movement patterns, and pupil dilation. "What we are proposing to develop is essentially a passive lie detector," Ugail says. "We aim to automatically analyze people's facial expressions and eye movements in response to a series of questions through video images and computer-based intelligent algorithms." Ugail says the system will also work beyond the visual domain by using thermal imaging to study facial blood flow, which is extremely hard to control.


Multicore: It's the Software
Computing Community Consortium (10/07/08)

Developing effective programming abstractions and tools that hide the diversity of multicore chips and features while exploiting their performance for important applications is the major challenge facing software researchers, writes Microsoft's Dan Reed. Reed says a vibrant community of researchers is needed to explore diverse approaches to parallel programming, including languages, libraries, compilers, and tools, and their applicability to multiple application domains. He says Microsoft's researchers are investigating numerous approaches, from coordination languages for robots and distributed systems to programming models for mobile phones, desktops, and data center clouds. Looking further ahead, Reed says that spatial computing, in which real-time vision and speech processing are paired with knowledge bases, distributed sensors, and responsive objects, could enhance human activities in contextually relevant ways, adapting seamlessly to users' needs and behavior across home, work, and recreation environments while remaining otherwise unobtrusive.
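
The kind of abstraction Reed describes, one that hides how many cores exist and how work is scheduled across them, is easy to illustrate with a generic task pool. This is a minimal sketch of the general idea, not any of Microsoft's research systems:

    # The pool hides the core count and scheduling: the same code runs on a
    # 2-core laptop or a 32-core server without change.

    from concurrent.futures import ProcessPoolExecutor

    def simulate(cell: int) -> float:
        """Stand-in for an expensive, independent unit of work."""
        return sum(i * i for i in range(cell * 10_000)) % 97

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:    # worker count chosen for us
            results = list(pool.map(simulate, range(32)))
        print(results[:8])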


Boosting the Capabilities of Emergency Relief Efforts
ICT Results (10/06/08)

The European Union-funded STREAM project has developed a new IT platform that supports a variety of standards, enabling organizations speaking different languages and using different technologies to access information such as satellite images, photographs, and maps during emergency relief efforts. The platform consolidates information so it can be viewed from a single entry point by everyone who needs access to the data. STREAM project coordinator Hichem Sahli says the project has three main objectives. The first is enabling workflow management that ensures an emergency response headquarters can monitor who is doing what and where. The second is creating a harmonized description of what people are doing and collecting information that can be shared by different organizations in the field; Sahli says this is to prevent situations where two organizations are working side by side but not talking to one another. "Even if they do talk, the data they are collecting is not made use of by both organizations because they are not coded in the same way and don't have the same meanings, so there is a great deal of duplication of effort," he says. The third objective is archiving the data and providing decision makers and field workers with free access to it. Sahli says that archiving such data and making it freely available creates a new resource for anyone who could use it.
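
Sahli's "not coded in the same way" problem is essentially a schema-mapping problem. The toy sketch below shows the idea; the organization names, codes, and field names are all invented for illustration:

    # Two organizations report the same activity under different codings;
    # translating both into a shared vocabulary makes duplication visible.

    ORG_A_CODES = {"WTR": "water_supply", "MED": "medical_aid"}
    ORG_B_CODES = {"h2o-distrib": "water_supply", "first-aid": "medical_aid"}

    def harmonize(record: dict, code_map: dict) -> dict:
        """Translate an organization-specific record into the shared schema."""
        return {"activity": code_map[record["code"]],
                "location": record["loc"],
                "team": record["team"]}

    a = harmonize({"code": "WTR", "loc": "sector-4", "team": "A-12"}, ORG_A_CODES)
    b = harmonize({"code": "h2o-distrib", "loc": "sector-4", "team": "B-3"}, ORG_B_CODES)
    print(a["activity"] == b["activity"])   # True: both teams are doing the same work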


Symposium Features Latest Wearable Gadgets
Pittsburgh Tribune-Review (09/30/08) Heinrichs, Allison M.

At the 12th annual International Symposium on Wearable Computers, T-shirts serve a triple purpose for Carnegie Mellon University professor Asim Smailagic. Not only do the shirts cover the backs of attendees and advertise the conference, they light up to tell wearers whether a laptop computer can get a wireless Internet connection. "Clearly, it's an effective, smoothly integrated technology," Smailagic says. "Imagine if we embedded something like an MP3 player; you could have LED lights flash and vibrate in rhythm with the music." Nearly 100 scientists attended the symposium, where wearable computers ranged from mobile phones to electronic textiles. Alzheimer's patients and people with memory problems could carry a device that recognizes other people and reminds them of who they are, or records daily activities, says Carnegie Mellon Human-Computer Interaction Institute professor Daniel Siewiorek. Smailagic says the applications for wearable computers go far beyond medical situations. He wears a bulky green wristwatch, the eWatch, whose light sensor and microphone work together to determine whether he is engaged in a conversation, sitting in his office, or at the movie theater, and thus whether to interrupt him with a cell phone call or send the call to voice mail. The eWatch also has an accelerometer that allows Smailagic to access the watch's controls using movement, which could enable a driver to answer a phone or change radio stations without pressing buttons or taking their eyes off the road.
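
The eWatch's call-routing decision is a small piece of sensor fusion. The sketch below is a toy version of that behavior; the features, thresholds, and rules are invented for illustration and are not the eWatch's actual logic:

    # Combine simple sensor readings to decide whether to ring or divert a call.

    def route_call(light_lux: float, speech_detected: bool, moving: bool) -> str:
        if speech_detected:
            return "voicemail"     # likely mid-conversation
        if light_lux < 10 and not moving:
            return "voicemail"     # dark and still: a movie theater, say
        return "ring"              # otherwise an interruption is acceptable

    print(route_call(light_lux=300, speech_detected=False, moving=True))   # ring
    print(route_call(light_lux=5,   speech_detected=False, moving=False))  # voicemail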


Computer Imaging to Read Face to Detect Threat
IANS News Service (10/02/08)

A computer imaging process developed by computer scientists at Concordia University in Montreal could enable experts to study human facial expressions and determine whether an individual poses a public threat. The system, developed by professor Prabir Bhattacharya and graduate student Abu Sayeed Sohail, is designed to detect and classify the expressions on a human face, focusing on muscles near the eyes, nose, and mouth. "Bhattacharya and Sohail's system measures 15 key points on the face and then compares these measures against images of identifiable facial expressions," according to a statement from the university. The researchers also have identified seven basic facial expressions shared by all humans. The system could potentially be used to analyze photos of people at an airport or other high-traffic areas. "If one could take random photos of the crowd and process them fast enough, there is the potential to identify those individuals who might be problematic," the university statement says.
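
The description above (15 key-point measurements compared against reference expressions) maps naturally onto a nearest-neighbor classifier. The sketch below is that generic idea, not the Concordia system itself, and its reference vectors are random placeholders:

    # Represent a face as 15 key-point measurements and label it with the
    # closest of seven reference expressions (fake reference data throughout).

    import math
    import random

    random.seed(0)
    EXPRESSIONS = ["happiness", "sadness", "anger", "fear",
                   "surprise", "disgust", "neutral"]   # seven basic expressions

    # One reference vector of 15 measurements per expression (illustrative only).
    references = {e: [random.random() for _ in range(15)] for e in EXPRESSIONS}

    def classify(face: list) -> str:
        """Return the expression whose reference vector is closest (Euclidean)."""
        return min(references, key=lambda e: math.dist(face, references[e]))

    probe = [random.random() for _ in range(15)]
    print(classify(probe))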


To Protect Your Wireless Network, Break It Up
Computerworld Canada (10/01/08) Ruffolo, Rafael

Ryerson University professor Isaac Woungang says breaking electronic communications into multiple pieces could lead to greater security on mobile wireless networks. Woungang is developing a multi-path routing system that will divide and encrypt wireless emails and data files. "Most of the work on message security has been focused on developing application-specific cryptographic security schemes," he says. "This new methodology is embedding security into the message routing." Beyond breaking messages into small pieces for individual encryption, the system sends each message along a variety of paths to its destination, which Woungang says acts as a trust mechanism because it makes it difficult for malicious intruders to gain access to a meaningful portion of the message. Each encrypted piece and message delivery node is assigned a trust level based on specific criteria established by the network. As the message travels along its route, security is reexamined at every stage to ensure trustworthiness, and less trusted delivery nodes are given fewer message pieces to handle, Woungang says. Tech Research Group analyst Mark Tauschek says the project is very similar to a Defense Advanced Research Projects Agency project aimed at creating delay/disruption-tolerant networks. "It is quite brilliant from both a reliability and security perspective, and will likely evolve into the wireless mainstream over time as the technology is refined," Tauschek says.
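
The core idea (split a message, encrypt each piece, and give trusted paths more pieces than untrusted ones) can be sketched briefly. The XOR "cipher" is a stand-in for a real one, the trust values are invented, and this is not Woungang's actual scheme:

    import secrets

    def xor_encrypt(piece: bytes, key: bytes) -> bytes:
        # Stand-in for a real cipher; key distribution is out of scope here.
        return bytes(b ^ k for b, k in zip(piece, key))

    def split_and_route(message: bytes, path_trust: dict, piece_size: int = 8):
        """Split a message and allocate encrypted pieces in proportion to trust."""
        pieces = [message[i:i + piece_size] for i in range(0, len(message), piece_size)]
        total = sum(path_trust.values())
        ordered = sorted(path_trust.items(), key=lambda kv: -kv[1])  # most trusted first
        plan, i = {p: [] for p in path_trust}, 0
        for n, (path, trust) in enumerate(ordered):
            if n == len(ordered) - 1:
                share = max(0, len(pieces) - i)   # last path takes whatever remains
            else:
                share = round(len(pieces) * trust / total)
            for piece in pieces[i:i + share]:
                key = secrets.token_bytes(len(piece))
                plan[path].append((xor_encrypt(piece, key), key))
            i += share
        return plan

    plan = split_and_route(b"wireless email payload, to be kept confidential",
                           {"path-A": 0.9, "path-B": 0.6, "path-C": 0.2})
    for path, pieces in plan.items():
        print(path, "carries", len(pieces), "pieces")

An eavesdropper on any single path sees only a fraction of the encrypted pieces, which is the trust mechanism the article describes.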


The Skeleton of the European Virtual Human Is 'Made in Italy'
Biomed Town (09/29/08) Bandieri, Annalisa

The international consortium VPHOP will spend the next four years developing next-generation medical technology that will enable people with osteoporosis to receive 100 percent personalized health care. The medical technology will make use of patient-specific computer models for diagnosing and treating osteoporosis. Health care providers will be able to use the patient-specific computer models to assess the status of the disease, how it has evolved over time, and the best treatment for a particular patient. The European Commission is funding the project as part of the Virtual Physiological Human research initiative that was launched last year. The Rizzoli Orthopedic Institute in Bologna, Italy, is coordinating the efforts of the VPHOP consortium, which brings together 60 experts in informatics, bioengineering, medical physics, and medical research from 19 public and private organizations. "The aim of the project is to create, with all these technologies, an integrated solution to be used in the clinical practice and through which it will be possible to gather all available patient information into a unifying computer model capable of predicting on one side the fracture risk with an excellent accuracy, on the other one the effects that the various treatment options will have on that particular patient," says VPHOP consortium coordinator Marco Viceconti.


The A-Z of Programming Languages: C#
Computerworld Australia (10/01/08) Hamilton, Naomi

Anders Hejlsberg, leader of C# development at Microsoft, says in an interview that the primary motivation for developing the Common Language Runtime, the platform underlying C#, was the desire to construct a unified, modern development platform for multiple programming languages and application models. He says the development of C# itself was motivated by a need "to create a first-class modern language on this platform that would appeal to the curly braces crowd: the C++ programmers of the world at the time, and competitively, the Java programmers." Hejlsberg attributes the popularity of the C language base to a number of factors, including its succinctness and its appropriateness for its time: it gave operating system builders higher-level abstractions such as data types, yet supported the writing of efficient code thanks to its closeness to the machine. Hejlsberg believes the time has finally arrived for functional programming to penetrate the mainstream, "and F# [a fusion of C# and a functional language] is unique in being the first industrial-strength functional programming language with an industrial-strength tooling language behind it, and an industrial-strength platform underneath it." He also describes himself as a strong proponent of C# standardization, and cites three industry trends that are inspiring C# developers: a migration toward more declarative styles of programming, the revival of dynamic programming, and the search for better programming models for concurrency. Hejlsberg says he is particularly proud of C#'s role in significantly increasing the productivity of developers on the Windows platform.


Models Ready for Their Close-Ups
Government Computer News (09/29/08) Vol. 27, No. 24, Jackson, William

A $1.4 million National Science Foundation (NSF) grant will support a team of climate research centers working to develop the next generation of climate computer models, which will use powerful new supercomputers to track and predict climate change. NSF's Jay Fein says the limiting factor for more reliable, high-resolution climate models has been the computational capability to implement them, but supercomputers are starting to reach the capacity needed for that type of modeling. In anticipation of petaflop-scale computing becoming more widely available, researchers from several universities and atmospheric research centers will share the NSF grant to create new climate models. The National Center for Atmospheric Research (NCAR) has developed one of the most sophisticated fully coupled atmosphere-ocean models, the Community Climate System Model, an open source model that is used by the National Oceanic and Atmospheric Administration and available to any researcher. NCAR says that producing a picture of a single day's worth of the world's climate requires 700 billion calculations, yet even that yields a peak resolution of only 1 degree by 1 degree, an area of about 3,900 square miles per grid cell. Elevating climate modeling to the next level to include elements such as eddies in ocean currents, which are narrower than 100 miles, will require enormous improvements in resolution. Other important elements of climate modeling, such as measuring carbon dioxide levels and subtle feedback loops, will require even more processing power.
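
The resolution figures above follow from simple spherical geometry, which the short sketch below reproduces; the constants are standard, and the specific latitude chosen is an illustrative assumption, not NCAR's methodology:

    # Grid-cell area as a function of model resolution and latitude.

    import math

    KM_PER_DEGREE = 111.0        # approximate length of 1 degree of latitude
    SQMI_PER_SQKM = 0.3861

    def cell_area_sqmi(resolution_deg: float, latitude_deg: float) -> float:
        """Area of one grid cell at a given latitude, in square miles."""
        ns = resolution_deg * KM_PER_DEGREE
        ew = resolution_deg * KM_PER_DEGREE * math.cos(math.radians(latitude_deg))
        return ns * ew * SQMI_PER_SQKM

    print(f"{cell_area_sqmi(1.0, 35):,.0f} sq mi")   # ~3,900 sq mi at mid-latitudes
    print(f"{cell_area_sqmi(0.25, 35):,.0f} sq mi")  # 4x finer grid: 16x more cells

A 100-mile ocean eddy spans only about one and a half cells of a 1-degree grid, far too few points to resolve it, which is why eddy-scale modeling demands such large jumps in resolution and processing power.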


Microsoft, Xerox Invest in Innovation
eWeek (09/29/08) Taft, Darryl K.

Microsoft and Xerox were among the companies that described their research initiatives at the Massachusetts Institute of Technology's Emerging Technologies conference last week. Microsoft's Craig Mundie said that life-changing technologies take a long time to develop and noted that the company has invested heavily in its research division, including a new research facility in Massachusetts. The Microsoft Research New England lab will initially focus on combining core computer science, particularly new algorithms, with social sciences such as economics, psychology, and sociology. Microsoft believes that combining computer and social sciences will unite form and functionality in the context of how people use, or want to use, technology, with the goal of developing the technological experiences of the future. Mundie said he focuses on technology that is likely to have an impact in the three- to 20-year range, such as parallel computing platforms, live platforms, cloud computing, modeling, and robotics. Meanwhile, Xerox Innovation Group president Sophie Vandebroek said Xerox researchers are focused on helping customers deal with the information explosion. Vandebroek said information overload costs companies about $650 billion a year in lost productivity. Because so much information is created in an unstructured format, Xerox is trying to make documents smarter by leveraging natural language technology and adding intelligence and structure to documents.


Vint Cerf: Keeping the Internet Healthy
CIO Insight (09/25/08) Cone, Edward

Google chief Internet evangelist Vinton G. Cerf says in an interview that the highly distributed nature of the economy and government calls for a distributed approach to a national technology policy. He sees great value in "the reconstitution of bodies providing technical input to policy makers" on national, state, and perhaps even local levels. Cerf says network neutrality is a more complicated issue than it may seem, and that Internet access providers should not leverage their access position to interfere with people offering applications that compete with applications supplied by the underlying transport and access provider. The need to access data in obsolete formats will grow over time, and Cerf sounds a warning about the loss of such data to "bit rot" and calls for serious consideration of what must be done to guarantee that important information remains accessible over time. The new IPv6 address space is essential to the continued growth of the Internet, and Cerf says the business sector must brace itself for the coexistence of IPv6 and IPv4. Cerf finds it troubling that the United States is lagging behind other countries and Internet providers in terms of delivery capacities, which has hurt the effectiveness of broadband services. He says he no longer thinks that the problem can be addressed by intermodal competition, and argues that "we need other kinds of mechanisms for maintaining fair access to these resources, especially for value-added providers." Looking ahead several decades, Cerf envisions significantly more available connectivity via cloud computing, while interaction with online systems will advance to the point where engagement through gestures, conversation, and haptics will be routine.


Web Science: Studying the Internet to Protect Our Future
Scientific American (10/08) Vol. 299, No. 4, P. 76; Shadbolt, Nigel; Berners-Lee, Tim

The discipline of Web science is focused on understanding the emergent properties of the Web, how those properties might be tapped, what new trends might be on the horizon, and their potential ramifications for the human race, write researchers Nigel Shadbolt and Sir Tim Berners-Lee, who helped launch the Web Science Research Initiative. They say Web science will employ mathematics, physics, computer science, psychology, ecology, sociology, law, political science, economics, and other scientific fields to "model the Web's structure, articulate the architectural principles that have fueled its phenomenal growth, and discover how online human interactions are driven by and can change social conventions." Web science serves the dual functions of understanding and engineering the Web. Among the insights drawn from Web science's precursors are the realization that the relevance of a Web page is best understood in terms of the number and importance of the pages linking to it, and the observation that the Web's connectivity follows a power-law degree distribution. Shadbolt and Berners-Lee say a major focus of Web science will be investigating how a small technical advance can spawn a large social phenomenon, such as the blogosphere. The advent of the Semantic Web is an emerging phenomenon that is benefiting from concerted research, and among its promised advantages are much more fine-grained answers to questions. "We do not fully know what Web science is, so part of the new discipline should be to find the most powerful concepts that will help the science itself grow," note Shadbolt and Berners-Lee. Securing privacy and conveying trust are among the key issues that could be addressed with further Web science research, the authors write.
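
The insight that a page's relevance follows from the number and importance of the pages linking to it is the idea behind PageRank-style link analysis. The sketch below is a minimal power-iteration version of that general idea on a toy four-page web, not the authors' method:

    # Iteratively propagate "importance" along links until scores stabilize.

    links = {            # page -> pages it links to (toy graph)
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
        "D": ["C"],
    }
    DAMPING, N = 0.85, len(links)
    rank = {p: 1.0 / N for p in links}

    for _ in range(50):                          # iterate toward a fixed point
        new = {p: (1 - DAMPING) / N for p in links}
        for page, outlinks in links.items():
            for target in outlinks:
                new[target] += DAMPING * rank[page] / len(outlinks)
        rank = new

    for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
        print(page, round(score, 3))             # C collects the most link weight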


Abstract News © Copyright 2008 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Unsubscribe
Change your Email Address for TechNews (log into myACM)