Association for Computing Machinery
Welcome to the July 31, 2009 edition of ACM TechNews, providing timely information for IT professionals three times a week.


Seeking Efficiency, Scientists Run Visualizations Directly on Supercomputers (07/30/09) Lerner, Louise

Researchers at the U.S. Department of Energy's (DOE's) Argonne National Laboratory are using a method called software-based parallel volume rendering to turn the quadrillions of data points produced by simulations into visualizations more efficiently. The work is sponsored by DOE's Office of Advanced Scientific Computing Research. Scientists first split the data among numerous processing cores so that they can all work concurrently, and then the data is transferred to a series of graphics processing units (GPUs) that produce the final images. "It's so much data that we can't easily ask all of the questions that we want to ask: Each new answer creates new questions and it just takes too much time to move the data from one calculation to the next," says the Argonne Leadership Computing Facility's Mark Hereld. "That drives us to look for better and more efficient ways to organize our computational work." The researchers sought to determine whether they could improve performance by forgoing the transfer to the GPUs and instead executing the visualizations directly on the supercomputer. They tested the method on a set of astrophysics data and found that it improved efficiency. "We were able to scale up to large problem sizes of over 80 billion voxels per time step and generated images up to 16 megapixels," says Tom Peterka of Argonne's Mathematics and Computer Science Division. Because the Blue Gene/P supercomputer's main processors can visualize data as it is analyzed, Argonne researchers can explore physical, chemical, and biological phenomena with much greater spatial and temporal detail.
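The pattern described above — split the volume among workers, render each piece, then composite the partial images — can be sketched in a few lines. This is a hedged illustration only: the slab decomposition, the maximum-intensity compositing rule, and all function names are assumptions for clarity, not Argonne's actual pipeline.

```python
# Toy sketch of software-based parallel volume rendering: the volume is
# split into slabs, each slab is rendered to a partial image, and the
# partial images are composited into the final frame. Maximum-intensity
# projection is used here purely as a simple, order-independent example.

def render_slab(slab):
    # Project a 3-D slab (a list of 2-D layers) to one 2-D partial image
    # via maximum-intensity projection.
    rows, cols = len(slab[0]), len(slab[0][0])
    return [[max(layer[r][c] for layer in slab) for c in range(cols)]
            for r in range(rows)]

def composite(partials):
    # Merge the workers' partial images: per-pixel maximum across slabs.
    rows, cols = len(partials[0]), len(partials[0][0])
    return [[max(p[r][c] for p in partials) for c in range(cols)]
            for r in range(rows)]

def parallel_volume_render(volume, n_workers):
    # Split along the first axis, one slab per worker; in a real system
    # each render_slab call would run concurrently on its own core.
    step = (len(volume) + n_workers - 1) // n_workers
    slabs = [volume[i:i + step] for i in range(0, len(volume), step)]
    return composite([render_slab(s) for s in slabs])

# An 8x2x2 volume where voxel value = z + r + c:
volume = [[[z + r + c for c in range(2)] for r in range(2)] for z in range(8)]
image = parallel_volume_render(volume, n_workers=4)  # -> [[7, 8], [8, 9]]
```

Because the per-pixel maximum is associative, the composite of slab maxima equals the maximum over the whole volume, which is what makes this decomposition parallelizable.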

Software Development: Speeding From Sketchpad to Smooth Code
ICT Results (07/30/09)

The VIsualize all moDel drivEn (VIDE) programming project is a European research effort to make developing error-free software easier and less time-consuming. The project has produced a software design and development toolkit intended to make well-functioning, easily modified software faster and less expensive to build. VIDE's approach is largely based on Model Driven Architecture, a programming methodology developed by the Object Management Group. The idea is that each stage of software development needs its own formal model, and the VIDE team realized that creating and linking those models in a methodical manner could allow many steps in software development to be automated. A software developer could start by working with a client to determine what the program needs to do. The inputs, outputs, and procedures would be formalized through a computation independent model, which lays out what the program will do rather than how it will do it. "Models are usually considered just documents," says VIDE technical coordinator Piotr Habela. "Our goal was to make the models serve as production tools." With VIDE, much of the modeling is visual, in the form of flowcharts and diagrams that are intuitive enough for the client or domain expert to understand, yet formalized enough to serve as inputs to the software development process.

U.S. Government Soliciting Broadband Experts
ACM (07/31/09)

The Department of Commerce's National Telecommunications and Information Administration (NTIA) is seeking industry experts to evaluate grant proposals for its $4.7 billion Broadband Technology Opportunities Program. NTIA is looking for proposals directed at expanding broadband connectivity to unserved and underserved communities across the United States. Such projects should also serve to drive the demand for broadband, increase public computing center capacity, and enhance broadband education. Proposals must demonstrate lasting value to commerce, economic development, education, research, health care, and energy conservation. NTIA is looking for expert reviewers with a background in broadband-related activities, including engineering, business development, economics, research and development, and project management. Extensive experience in analysis and oversight of infrastructure projects is most welcome. Participants are asked to make a minimum time commitment of 20 hours between mid-August and mid-September. For more information about reviewer qualifications, or to apply as a reviewer, contact [email protected].

A Portrait of STEM Majors
Inside Higher Ed (07/30/09) Lederman, Doug

A new report from the U.S. Education Department examines how young students fare when they pursue science and technology degrees in college. The report, "Students Who Study Science, Technology, Engineering, and Mathematics (STEM) in Postsecondary Education," found that 54.9 percent of students who entered college in 1995-1996 and majored in a STEM field at some point through 2001 earned a degree or certificate, compared with 50.6 percent of students who did not choose a science or technology major. Among STEM majors, completion rates were highest in the physical sciences at 68.4 percent, followed by the natural sciences at 63.5 percent and mathematics at 61.4 percent; computer or information sciences had the lowest rate at 46.4 percent. Fewer than half of the students who chose a science or technology major earned a degree in a STEM field: 40.7 percent received a STEM degree or certificate, another 12 percent were still enrolled in one of those fields, 20.6 percent had switched to a non-STEM major, and 26.7 percent were no longer in college.

A Better Way to Shoot Down Spam
Technology Review (07/29/09) Kremen, Rachel

The Spatio-temporal Network-level Automatic Reputation Engine (SNARE) is an automated system developed at the Georgia Institute of Technology that can spot spam before it hits the mail server. SNARE scores each incoming email according to new criteria that can be gathered from a single data packet. The researchers say the system puts less pressure on the network and keeps the need for human intervention to a minimum while maintaining the same accuracy as conventional spam filters. Analysis of 25 million emails enabled the Georgia Tech team to compile characteristics that could be culled from a single packet of data and used to efficiently identify spam. They also learned that they could identify junk email by measuring the geodesic distance between the geographic locations of the sender's and receiver's Internet Protocol (IP) addresses, as spam tends to travel farther than legitimate email. The researchers also studied the autonomous system (AS) number associated with an email's sender. SNARE can spot spam in seven out of 10 instances, with a false-positive rate of 0.3 percent. If SNARE were deployed in a corporate environment, the network administrator could establish rules about the disposition of email according to its SNARE score. Northwestern University Ph.D. candidate Dean Malmgren questions the effectiveness of SNARE once its methodology is widely known, as spammers could use a bogus IP address close to the recipient's to fool the system. Likewise, John Levine of the Coalition Against Unsolicited Commercial Email warns that "spammers are not dumb; any time you have a popular scheme [for identifying spam], they'll circumvent it."
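The geodesic-distance feature mentioned above is easy to illustrate. The sketch below computes the great-circle distance between the geolocated sender and receiver and converts it to a spamminess score; the 4,000 km threshold and the linear scoring rule are invented for illustration, and the real engine combines several single-packet features, not just this one.

```python
# Hypothetical sketch of one SNARE-style feature: geodesic distance
# between sender and receiver locations. The study found spam tends to
# travel farther than legitimate email; threshold and weights here are
# illustrative assumptions only.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points on Earth, in kilometers.
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def distance_score(sender_loc, receiver_loc, threshold_km=4000.0):
    # Map distance to a spamminess score in [0, 1]: farther than the
    # threshold looks increasingly spam-like (hypothetical rule).
    d = haversine_km(*sender_loc, *receiver_loc)
    return min(d / threshold_km, 1.0)

# Example: a sender far from the recipient scores as maximally suspicious.
score = distance_score((33.7, -84.4), (48.9, 2.4))  # Atlanta -> Paris
```

In a deployed filter this score would be one weighted input among many (AS number, sender behavior over time, and so on), not a verdict on its own.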

Talking Paperclip Inspires Less Irksome Virtual Assistant
New Scientist (07/29/09) Campbell, MacGregor

The U.S. Defense Advanced Research Projects Agency (DARPA) has spent an estimated $150 million developing an artificially intelligent (AI) virtual assistant. DARPA's virtual helper was created to ease the U.S. military's bureaucratic load, but the technology behind the project could soon be used by civilians as well. Launched in 2003, the Cognitive Assistant that Learns and Organizes (CALO) project involved more than 60 universities and research organizations in the largest-ever nonclassified AI project. The project, which ends on July 31, has produced a virtual assistant capable of sorting, prioritizing, and summarizing emails, automatically scheduling meetings, and preparing briefing notes for meetings. Raymond Perrault of SRI International, the independent research organization in charge of the project, says the biggest priority was making CALO capable of "learning in the wild" and making it a genuinely useful assistant. Most software capable of learning requires many examples, but CALO needed to learn faster to be useful, so its developers built in techniques such as "transfer learning," which applies lessons from one domain to another. A spin-off of the project, an app called Siri, will become available on the iPhone later this year to help with mundane tasks such as checking online reviews to find a good restaurant and making a reservation. Users will be able to give Siri verbal instructions, and the app will use various Web services to fulfill the request.

Game Utilizes Human Intuition to Help Computers Solve Complex Problems
University of Michigan News Service (07/27/09) Moore, Nicole Casal

University of Michigan researchers have developed FunSAT, an online logic puzzle game that could be used to solve fundamental problems in computer hardware design. FunSAT could help integrated circuit designers choose and arrange transistors and connections on silicon microchips, as well as solve other hardware design problems. The game is designed to harness humans' abilities to strategize, visualize, and understand complex systems. "Humans are good at playing games and they enjoy dedicating time to it," says Michigan professor Valeria Bertacco. "We hope that we can use their strengths to improve chip designs, databases, and even robotics." By solving problems on the FunSAT board, players contribute to the design of complex computer systems. The game consists of rows and columns of green, red, and gray bubbles of various sizes, with buttons around the perimeter that can turn yellow or blue with a mouse click. The buttons' color determines the color of the bubbles. The objective is to use the buttons to turn all the bubbles green. The game solves satisfiability problems, which are highly complex mathematical questions that involve selecting the best arrangement of options. In the game, the bubbles represent constraints, and become green when those constraints have been satisfied. Once a puzzle has been solved, a computer scientist could look at the color of each button to find the solution to that particular problem.
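The satisfiability problems behind FunSAT can be modeled compactly: buttons are Boolean variables (say yellow=True, blue=False) and bubbles are clauses that turn "green" once satisfied. The brute-force solver below is an illustrative toy under those assumed rules, not the game's actual engine; real chip-design instances are far too large for exhaustive search, which is exactly why human intuition is valuable.

```python
# Toy SAT model of the FunSAT board: find a button assignment that
# turns every bubble (clause) green. Brute force, for illustration only.
from itertools import product

def solve_sat(n_vars, clauses):
    # Each clause is a list of literals: +i means variable i must be
    # True, -i means variable i must be False (variables numbered from 1).
    for bits in product([False, True], repeat=n_vars):
        assign = dict(enumerate(bits, start=1))
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assign  # every "bubble" is green
    return None  # unsatisfiable: no button setting works

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
solution = solve_sat(3, [[1, 2], [-1, 3], [-2, -3]])
```

Reading the solution off the solved board — the color of each button — corresponds to reading the returned assignment, which is how a computer scientist would extract the answer to the underlying design problem.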

How Wolfram Alpha Could Change Software
InfoWorld (07/30/09) McAllister, Neil

Wolfram Research's Wolfram Alpha software is described as a computational knowledge engine that employs mathematical methods to cross-reference various specialized databases and generate unique results for each query. Wolfram further claims that each page of results returned by the Wolfram Alpha engine is a unique, copyrightable work; its terms of use state that "in many cases the data you are shown never existed before in exactly that way until you asked for it." Works produced by machines are copyrightable, at least in theory. But for Wolfram Alpha to claim copyright protection for its query results, its pages must be such original presentations of information that they qualify as novel works of authorship. Although Wolfram says its knowledge engine is driven by exclusive, proprietary sources of curated data, many of the data points it works with are actually commonplace facts. If copyright applies to Wolfram Alpha's output in certain instances, then by extension the same rules apply to every other information service in similar cases. Even assuming that unique presentations based on software-driven manipulation of mundane data can be copyrighted, the question remains as to who retains what rights to the resulting works.

Scale-Free Networks: A Decade and Beyond
Science (07/24/09) Vol. 325, No. 5939, P. 412; Barabasi, Albert-Laszlo

Early network models assumed that complex systems were randomly interconnected, but research has shown that real networks exhibit perceptibly nonrandom features, collectively termed the scale-free property. Moreover, this network architecture has proven universal, appearing in all kinds of real networks regardless of their age, scope, and function. All systems regarded as complex are composed of an extremely large number of elements that interact through intricate networks. The scale-free nature of networks has been confirmed thanks to improved maps and data sets, as well as to the correspondence between empirical data and the analytical models that predict network structure. The scale-free property underscores that the structure and the evolution of a network are inseparable, and has forced researchers to accept that networks are in a constant state of flux as nodes and links arrive. The universality of topological traits such as motifs, degree distributions, degree correlations, and communities provides a platform for analyzing diverse phenomena and making predictions. By understanding the behavior of systems perceived as complex, researchers can anticipate such things as the Internet's response to attacks and cells' reactions to environmental changes, which in turn requires a grasp of the dynamical phenomena unfolding on networks.
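The growth mechanism the article describes — networks in constant flux as new nodes and links arrive — is captured by Barabási's own preferential-attachment model: each new node links to existing nodes with probability proportional to their degree, so well-connected hubs keep attracting links and a scale-free degree distribution emerges. The sketch below is a minimal illustration of that mechanism; the seed-graph choice and parameter names are implementation conveniences, not part of the model's definition.

```python
# Minimal preferential-attachment (Barabasi-Albert-style) sketch:
# grow a network to n nodes, each new node attaching m links to
# existing nodes chosen with probability proportional to degree.
import random

def preferential_attachment(n, m, seed=0):
    rng = random.Random(seed)
    # Seed graph: a small fully connected core of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # "targets" lists each node once per incident edge, so a uniform
    # draw from it is exactly degree-proportional selection.
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:          # m distinct, degree-biased picks
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets += [new, t]          # update degree bookkeeping
    return edges

edges = preferential_attachment(n=50, m=2)
```

For large n the resulting degree distribution follows a power law, which is the hallmark of the scale-free property discussed above.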

ICANN President Debunks Internet Economics (07/29/09) Thomson, Iain

New ICANN president Rod Beckstrom has proposed an economic model for valuing computer networks. In a speech at the recent Black Hat USA 2009 conference in Las Vegas, Beckstrom maintained that Metcalfe's Law, which posits that the value of a network is proportional to the square of the number of its nodes, is fundamentally flawed. As an alternative, Beckstrom suggested establishing the value of a network based on the transactions that occur on it. "The value of the network equals the net value added to each user's transactions, summed up for all users," he said. Beckstrom developed the model with Google's Vint Cerf. If adopted, it would change the way many companies view their value, and would help them quantify the work done by network administrators and security officers. Beckstrom argued that under this model, chief security officers can see that security is not an investment but a cost incurred to reduce loss. He argued that better security technology, stronger law enforcement, harsher legal penalties, and reducing the total value that can be gained from a specific target all discourage hacking by shrinking its profit margin.
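The two valuation models quoted above are simple enough to state as code. This is a hedged illustration of the formulas as the article summarizes them; the sample transaction data and the constant in Metcalfe's Law are invented for the example.

```python
# Metcalfe's Law vs. the Beckstrom model, as summarized in the article.

def metcalfe_value(n_nodes, k=1.0):
    # Metcalfe: value proportional to the square of the node count
    # (k is an arbitrary proportionality constant).
    return k * n_nodes ** 2

def beckstrom_value(transactions):
    # Beckstrom: "the net value added to each user's transactions,
    # summed up for all users." Each transaction is a (benefit, cost)
    # pair; value is total benefit minus total cost.
    return sum(benefit - cost
               for user in transactions
               for benefit, cost in user)

# Invented sample data: two users and their transactions on a network.
users = [
    [(10.0, 2.0), (5.0, 1.0)],   # user A: two transactions
    [(3.0, 0.5)],                # user B: one transaction
]
network_value = beckstrom_value(users)  # -> 14.5
```

Note how the cost term makes Beckstrom's point about security concrete: spending on security raises per-transaction cost, so it lowers network value unless it prevents an even larger loss.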

Louisiana Tech to Get $2.85 Million Cyber Grant
Shreveport Times (LA) (07/18/09)

The U.S. Air Force's Office of Scientific Research has awarded a $2.85 million grant to Louisiana Tech University to create the Cyberspace Research Lab. Advanced research and development functions, such as virtualization, visualization, high-performance computing, wireless sensor networks, and micro unmanned aerial vehicles, will be the primary objective for the lab. The new facilities will enable researchers to configure different environments, simulate and test real-life events in which security breaches could occur, and develop counter-strategies for security attacks. Louisiana Tech's Les Guice says the Cyberspace Research Lab will advance collaborative efforts with other cyberresearch centers around the world, and provide opportunities to develop and test new algorithms, software, equipment, and systems in realistic scenarios. "As evidenced by recent attacks [on] government computers, cyberthreats are more and more prolific, demonstrating a critical need for further R&D," Guice says. "We intend to play a major role in addressing these needs." He says the lab also will support Louisiana Tech's active research programs with the U.S. Department of Defense, other agencies, and private industry.

Haptics: The Feel-Good Technology of the Year
Computerworld (07/24/09) Elgan, Mike

Touchscreens are already widespread in cell phones and are expected to sweep through mobile and desktop computing, and a pair of Silicon Valley companies, Immersion and Apple, are planning haptic systems that artificially reproduce the feel of keyboards and other on-screen objects on touch devices. Earlier in July, Apple filed a U.S. patent application describing "systems, methods, computer-readable media, and other means for utilizing touch-based input components that provide localized haptic feedback to a user." The company's localized haptic feedback technology is designed to represent the boundaries of on-screen objects as users run their fingers over them, transforming the touchscreen from a flat, disengaging surface into something more intuitive. Two years ago, Apple filed a patent application for a "keystroke tactility arrangement on a smooth touch surface," which describes physical bumps and depressions in a screen that can be activated and deactivated according to what is displayed. Immersion's haptics products are already used by surgeons, providing tactile feedback for advanced surgical procedures performed by robots. Immersion chief technology officer Christophe Ramstein demonstrated the company's haptics technology at Fortune's Brainstorm Tech conference. The next-generation systems are designed to produce thousands of distinct sensations that users will immediately recognize, providing cues about on-screen action. High-fidelity haptics can help forge an emotional bond between people communicating electronically, as well as between humans and machines.

Abstract News © Copyright 2009 INFORMATION, INC.
Powered by Information, Inc.

To submit feedback about ACM TechNews, contact: [email protected]

Change your Email Address for TechNews (log into myACM)