Association for Computing Machinery
Welcome to the November 16, 2009 edition of ACM TechNews, providing timely information for IT professionals three times a week.

HEADLINES AT A GLANCE


Supercomputers With 100 Million Cores Coming By 2018
Computerworld (11/16/09) Thibodeau, Patrick

A key topic at this week's SC09 supercomputing conference, which takes place Nov. 14-20 in Portland, Ore., is how to reach the exascale plateau in supercomputing performance. "There are serious exascale-class problems that just cannot be solved in any reasonable amount of time with the computers that we have today," says Oak Ridge Leadership Computing Facility project director Buddy Bland. Today's supercomputers are still well short of exascale performance. The world's fastest system, Oak Ridge National Laboratory's Jaguar, reaches a peak performance of 2.3 petaflops. Bland says the U.S. Department of Energy (DOE) is holding workshops on building a system 1,000 times more powerful. The DOE, which is responsible for funding many of the world's fastest systems, wants two machines to reach approximately 10 petaflops by 2011 to 2013, says Bland. However, the next major milestone currently receiving the most attention is the exaflop, or a million trillion calculations per second. Exaflop computing is expected to be achieved around 2018, according to predictions largely based on Moore's Law. Yet the obstacles to reaching exaflop computing go well beyond chip advances. For example, Jaguar uses 7 megawatts of power, but an exascale system that uses CPU processing cores alone could take 2 gigawatts, says IBM's Dave Turek. "That's roughly the size of a medium-sized nuclear power plant," he says. "That's an untenable proposition for the future." Finding a way to reduce power consumption is key to developing an exascale computer. Turek says future systems also will have to use less memory per core and will require greater memory bandwidth.
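
For a rough sense of the power problem Turek describes, the back-of-envelope sketch below (an extrapolation for illustration, not a figure from the article) scales Jaguar's published peak performance and power draw linearly up to an exaflop; the naive answer lands in the gigawatt range, which is why power efficiency dominates the exascale discussion.

```python
# Back-of-envelope sketch: linearly scale Jaguar's published numbers to an
# exaflop. Real exascale designs would not scale this way -- that is the point.
jaguar_peak_pflops = 2.3    # petaflops (peak), per the article
jaguar_power_mw = 7.0       # megawatts, per the article
exaflop_in_pflops = 1000.0  # 1 exaflop = 1,000 petaflops

scale = exaflop_in_pflops / jaguar_peak_pflops
naive_power_gw = jaguar_power_mw * scale / 1000.0
print(f"scale factor: {scale:.0f}x, naive exascale power: {naive_power_gw:.1f} GW")
# Roughly 3 GW -- the same order of magnitude as the ~2 GW figure Turek cites,
# and comparable to a nuclear plant's output, hence the focus on cutting
# power per core rather than simply adding more cores.
```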


Disease-Matching Software Could Save Children
ICT Results (11/13/09)

Software developed by the Health-e-Child project is capable of comparing a variety of structured and unstructured data to help identify rare or life-threatening diseases in children and then model the potential progression of those diseases. The software can search and compare patient data from hospitals throughout Europe, allowing doctors to study how patients with similar data at other hospitals were treated and whether treatment was successful. The Health-e-Child system links anonymized databases of patient information from hospitals in Paris, Genoa, Rome, and London, and there are plans to extend the network to 25 hospitals. For unstructured data such as images, the project has developed tools that translate visual information into a machine-readable language. The project's three-dimensional (3D) registration tool for magnetic resonance imaging (MRI) scans and its MRI erosion-scoring system for juvenile idiopathic arthritis have been recognized as major advances. The project's CaseReasoner tool enables doctors to search thousands of disease diagnoses, treatments, and outcomes to find similar cases. And the CardioWiz tool can be combined with MRI scan measurements to rapidly generate animated 3D models of a patient's heart that can be used to simulate the effects of heart surgery or drug treatments.
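
The article does not describe CaseReasoner's internals, but the kind of "find similar cases" query it supports can be illustrated with a plain nearest-neighbor search over numeric patient features. The sketch below is purely illustrative; the feature names, records, and distance measure are invented, not Health-e-Child's.

```python
# Purely illustrative: a nearest-neighbor "similar cases" lookup. The feature
# names, records, and distance measure are invented, not Health-e-Child's.
import math

def distance(a, b):
    """Euclidean distance between two numeric feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def most_similar(query_features, cases, k=2):
    """Return the k stored cases whose features are closest to the query."""
    return sorted(cases, key=lambda c: distance(query_features, c["features"]))[:k]

# Toy anonymized records: (age, inflammation marker, joint-erosion score).
cases = [
    {"id": "paris-017",  "features": (9, 4.1, 2.0),  "outcome": "responded to drug A"},
    {"id": "genoa-042",  "features": (11, 3.8, 2.5), "outcome": "responded to drug B"},
    {"id": "london-105", "features": (6, 1.2, 0.5),  "outcome": "no treatment needed"},
]

query_patient = (10, 4.0, 2.2)
for case in most_similar(query_patient, cases):
    print(case["id"], "->", case["outcome"])
```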


ECS Researchers Present Learning Technologies in USA
University of Southampton (ECS) (11/11/09) Lewis, Joyce

Researchers from the University of Southampton's School of Electronics and Computer Science (ECS) presented the most recent developments in the school's Learning Societies Lab at a recent symposium at the IBM Thomas J. Watson Research Center. ECS' Mike Wald discussed new features for the lab's Web-based Synote program, including the ability to synchronize live notes taken via Twitter with lecture recordings and transcripts created through IBM's speech-recognition software. E.A. Draffan, also from the ECS Learning Societies Lab, gave a presentation on how people with disabilities will access Web 2.0 technologies as technology continues to evolve. Draffan's lecture focused on the need to enhance the knowledge of a wider network of informal experts and academic staff so that they can introduce disabled students to the many Web-based tools currently being developed. Doing so, Draffan says, would enable disabled students to further develop their skills and potentially become informal experts capable of sharing the strategies they have developed. "In the past, people used their assistive technologies mainly with desktop computer applications, now they are spending far more time online," Draffan says. "They also are collaborating and communicating via social networks, blogs, and wikis, which are not always accessible; however, often with the support of friends and tutors, they find workarounds and go on to build their own strategies."


Contact Lenses to Get Built-In Virtual Graphics
New Scientist (11/12/09) Venkatraman, Vijaysree

University of Washington researcher Babak Parviz has embedded nanoscale circuitry into a contact lens in an effort to create a new kind of heads-up display (HUD). The lens harvests radio waves to power a light-emitting diode (LED), which would be used to project floating images in front of a user's eyes. Parviz says one of the drawbacks of current HUDs is their narrow field of view, which a contact lens could greatly widen. The circuitry for the contact lens requires 330 microwatts, but does not need a battery. Instead, a loop antenna receives power from a nearby radio source. Parviz says future versions of the contact lens could harvest power from a user's cell phone, potentially as the phone sends information to the lens. Advanced lenses also will have more pixels and an array of microlenses to focus the image so it appears suspended in front of a user's eyes. He says the lens could be used to view subtitles when someone is speaking a foreign language, directions for an unfamiliar area, captioned photographs, or information for pilots. "A contact lens that allows virtual graphics to be seamlessly overlaid on the real world could provide a compelling augmented reality experience," says Human Interface Technology Laboratory director Mark Billinghurst.
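
The article gives the 330-microwatt power budget but no link details, and a real lens would more likely rely on near-field coupling than on a far-field link. Still, a standard Friis transmission estimate (every parameter below is an assumption, not one of Parviz's figures) gives a feel for why the radio source has to be close by.

```python
# Hedged back-of-envelope only: far-field Friis estimate of received power.
# Frequency, transmit power, and antenna gains are assumptions for illustration.
import math

def friis_received_power(p_tx_w, gain_tx, gain_rx, freq_hz, distance_m):
    """Far-field received power in watts, from the Friis transmission equation."""
    wavelength = 3e8 / freq_hz
    return p_tx_w * gain_tx * gain_rx * (wavelength / (4 * math.pi * distance_m)) ** 2

required_w = 330e-6  # 330 microwatts, per the article
for d in (0.25, 0.5, 1.0, 2.0):  # assumed distances in meters
    p_rx = friis_received_power(p_tx_w=1.0, gain_tx=1.0, gain_rx=1.0,
                                freq_hz=2.4e9, distance_m=d)
    status = "enough" if p_rx >= required_w else "short"
    print(f"{d:4.2f} m: {p_rx * 1e6:7.1f} uW ({status})")
# Even before rectifier and antenna losses, a 1 W source at 2.4 GHz only clears
# 330 uW within roughly half a meter -- consistent with needing a nearby source.
```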


Working Together to Design Robust Silicon Chips
EUREKA (11/10/09) Garcin, Philippe

Cooperation between researchers, chipmakers, and tool suppliers working on the EUREKA MEDEA+ microelectronics Cluster ROBIN project has led to improved design methods for silicon chips. The researchers say the project has created a process that solves problems far earlier in the design phase, enhancing integrated circuit design techniques. Project partners formalized the problems; specified software tools, models, and design flows with strong interoperability; and proposed test cases. The project focused on optimizing the design method for both existing 130 nm and 90 nm and future 65 nm and 45 nm technologies by defining the most efficient trade-offs between circuit robustness, in terms of yield and reliability, and the efficient use of technology affecting performance, specifically density and power consumption. For example, the project cut simulation time for inter-block couplings in critical radio-frequency circuits by a factor of four. The researchers say the benefits of the project have been demonstrated in automotive, telecommunications, and multimedia applications.


Intel Says Shape-Shifting Robots Closer to Reality
Computerworld (11/12/09) Gaudin, Sharon

Researchers at Intel and Carnegie Mellon University (CMU) say distributed computing and robotics could be used to make shape-shifting electronics a reality in the not-too-distant future. The researchers are working to take millions of millimeter-sized robots and enable them to use software and electromagnetic forces to change into a variety of shapes and sizes. CMU professor Seth Goldstein and Intel researcher Jason Campbell recently reported that they are able to demonstrate the physics needed to create programmable matter. "It's been pretty hard but we've made a lot of progress," Campbell says. "Optimistically, we could see this in three to five years." Programmable matter is called claytronics, and the millimeter-sized robots are called catoms. Each catom would contain its own processor, and would essentially be a tiny robot or computer with computational power, memory, and the ability to store and share power. The goal is to program millions of catoms to work together by developing software that focuses on a pattern or overall movement of the entire system of tiny robots. Each robot will be smart enough to detect its own place in the pattern and respond accordingly. Part of the research effort involves developing new programming languages, algorithms, and debugging tools to get these systems to work together.
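
As an illustration only, not Intel's or CMU's claytronics software, the toy simulation below captures the local-rule idea the researchers describe: each simulated catom knows only its own slot in a shared target pattern and nudges itself toward it each round, so the global shape emerges from purely local moves.

```python
# Illustrative sketch: pattern formation from local moves. Each "catom" is just
# a 2D point assigned one slot of a target pattern; nothing here is Intel/CMU code.
import random

def step(position, target, speed=1.0):
    """Move at most `speed` units toward the target along each axis."""
    return tuple(p + max(-speed, min(speed, t - p)) for p, t in zip(position, target))

# Target pattern: a 3x3 square of grid positions (one slot per catom).
pattern = [(x, y) for x in range(3) for y in range(3)]

# Catoms start scattered at random; catom i is assigned slot i of the pattern.
catoms = [(random.randint(-5, 5), random.randint(-5, 5)) for _ in pattern]

for _ in range(20):  # a few synchronous rounds of purely local moves
    catoms = [step(pos, slot) for pos, slot in zip(catoms, pattern)]

print(catoms)  # after enough rounds every catom sits on its assigned slot
```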


Stanford-Led Research Helps Overcome Barrier for Organic Electronics
Stanford University (11/10/09) Orenstein, David

Stanford University researchers have determined why some transistors made of organic crystals do not allow electricity to flow through them as easily as others do, a discovery that could help improve the performance of organic electronics. The researchers have shown that the alignment of the boundaries between individual crystals can make a 70-fold difference in how easily electrical charges pass through a transistor. Although organic semiconductors could greatly benefit the electronics industry, performance from transistor to transistor is far more inconsistent than in silicon-based chips. The researchers found that the "grain" boundaries between crystals can force electrical charges to follow an extremely inefficient path, often zigzagging back and forth. To test the importance of boundary alignment, the researchers grew crystals of an organic semiconductor using a process that ensured consistent alignment from crystal to crystal in a uniform direction. They then made transistors in which charges could flow through well-aligned molecules, and others in which the molecules were misaligned, and found that the well-aligned transistors performed far better. "By better understanding what goes on at these boundaries, and how detrimental they are, improvements can be made at the chemistry end as well as at the design and fabrication end of the process," says Stanford graduate student Jonathan Rivnay. "This way devices can be more reproducible and better performing."


How Secure Is Cloud Computing?
Technology Review (11/16/09) Talbot, David

The recent ACM Cloud Computing Security Workshop, which took place Nov. 13 in Chicago, was the first event devoted specifically to the security of cloud computing systems. Speaker Whitfield Diffie, a visiting professor at Royal Holloway, University of London, says that although cryptography solutions for cloud computing are still far off, much can be done in the short term to help make cloud computing more secure. "The effect of the growing dependence on cloud computing is similar to that of our dependence on public transportation, particularly air transportation, which forces us to trust organizations over which we have no control, limits what we can transport, and subjects us to rules and schedules that wouldn't apply if we were flying our own planes," Diffie says. "On the other hand, it is so much more economical that we don't realistically have any alternative." He says that securing cloud computations with today's cryptographic techniques would be so costly that it would negate any economic benefit gained by outsourcing computing tasks. Diffie says a practical near-term solution will require an overall improvement in computer security, including cloud computing providers choosing more secure operating systems and keeping those systems carefully configured. Security-conscious computing services providers would have to provision each user with their own processors, caches, and memory at any given moment, and would clean systems between users, including reloading the operating system and zeroing all memory.


What Computer Science Can Teach Economics
MIT News (11/09/09) Hardesty, Larry

Professor Constantinos Daskalakis of the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory is applying the theory of computational complexity to game theory. He argues that some common game-theoretical problems are so challenging that solving them would take the lifetime of the universe, and thus they fail to accurately represent what occurs in the real world. In game theory, a "game" is any mathematical model that associates different player strategies with different results. Daskalakis' doctoral thesis disputes the assumption that finding the Nash equilibrium for every game allows a system's behavior to be accurately modeled. In the case of economics, the system being modeled is the market. Daskalakis' thesis shows that for some games the Nash equilibrium is so difficult to calculate that all the world's computing resources could never find it in the universe's lifetime. In the real world, market rivals tend to calculate the strategies that will maximize their own outcomes given the current state of play, rather than work out the Nash equilibria for their specific games and then adopt the resulting tactics. If one player changes strategies, the other players change strategies in response, driving the first player to shift strategies again, and so on, with this feedback eventually converging toward equilibrium. Daskalakis contends, however, that such feedback will not find the equilibrium any faster than computers could calculate it.
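
The feedback process described above is essentially best-response dynamics. The sketch below is a generic illustration, not Daskalakis' construction: on a simple coordination game the two players settle into a Nash equilibrium almost immediately, while on matching pennies the very same process cycles forever, hinting at why convergence cannot be taken for granted.

```python
# Generic best-response dynamics on 2x2 games; payoffs[own_action][opp_action].
def best_response(payoffs, opponent_action):
    """Index of the action maximizing this player's payoff against the opponent."""
    return max(range(len(payoffs)), key=lambda a: payoffs[a][opponent_action])

def best_response_dynamics(p1_payoffs, p2_payoffs, rounds=10):
    """Players alternately switch to a best response; stop if nobody deviates."""
    a1, a2 = 0, 0  # arbitrary starting strategies
    history = [(a1, a2)]
    for _ in range(rounds):
        a1 = best_response(p1_payoffs, a2)
        a2 = best_response(p2_payoffs, a1)
        history.append((a1, a2))
        if history[-1] == history[-2]:  # no one wants to change: Nash equilibrium
            break
    return history

# Coordination game: both players simply want to match each other.
coord = [[2, 0], [0, 1]]
print(best_response_dynamics(coord, coord))            # settles immediately

# Matching pennies: player 1 wants to match, player 2 wants to mismatch.
pennies_p1 = [[1, -1], [-1, 1]]
pennies_p2 = [[-1, 1], [1, -1]]
print(best_response_dynamics(pennies_p1, pennies_p2))  # cycles; no pure equilibrium
```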


Tough Choices for Supercomputing's Legacy Apps
ZDNet UK (11/12/09) Jones, Andrew

The future of supercomputing holds several significant software challenges, writes Numerical Algorithms Group's Andrew Jones. The first challenge is the rapidly increasing degree of concurrency required. A complex hierarchy of parallelism, from vector-like parallelism at the local level, through multithreading, to massively parallel processing across numerous nodes, also presents unique challenges, Jones says. Additionally, supercomputing will have to handle a new wave of verification, validation, and resilience issues. Although petascale and exascale computing hold much promise, Jones says experts question whether some current applications will still be usable. Experts argue that some legacy applications are coded in ways that make evolution impossible, and that code refactoring and algorithm development would be more difficult than starting from scratch. However, Jones notes that disposing of old code also throws away extremely valuable scientific knowledge. Ultimately, he says that two classes of applications may emerge--programs that will never be able to exploit future high-end supercomputers but are still used while their successors develop comparable scientific maturity, and programs that can operate in the petascale and exascale arena. Jones says that developing and sustaining these two classes will require a well-balanced approach among researchers, developers, and funding agencies, who will have to continue to invest in scaling, optimization, algorithm evolution, and scientific advancements in existing code while diverting sufficient resources to the development of new code.


New 'finFETs' Promising for Smaller Transistors, More Powerful Chips
Purdue University News (11/10/09) Venere, Emil

Purdue University researchers are developing new transistors based on a fin-like structure that could enable engineers to develop faster and more compact circuits. Unlike conventional flat, silicon-based transistors, the fin-shaped devices, called finFETs (fin field-effect transistors), are made from indium gallium arsenide. The Purdue researchers say they are the first to create finFETs using atomic-layer deposition--a common production process--which could make the new finFET technique a practical alternative to silicon transistors. FinFETs could allow engineers to bypass silicon's limitations in keeping pace with Moore's Law, as shrinking electronic devices made from conventional silicon-based semiconductors is becoming increasingly challenging. In addition to enabling smaller transistors, finFETs can conduct electrons at least five times faster than conventional silicon transistors. "The potential increase in speed is very important," says Purdue professor Peide Ye. "The finFETs could enable industry to not only create smaller devices, but also much faster computer processors." The fin-like design is crucial to preventing current leakage, partly because the vertical structure can be completely surrounded by an insulator, whereas a flat device can have an insulator on only one side.


Jaguar Supercomputer Races Past Roadrunner in Top500
CNet (11/15/09) Ogg, Erica

The Jaguar Cray XT5 supercomputer posted a sustained performance of 1.75 petaflops and is now the fastest computer in the world. Jaguar switches places with IBM's Roadrunner, which saw its measured speed decline to 1.04 petaflops from 1.105 petaflops, apparently due to a repartitioning of the system, and is now in second place on the Top500 list of supercomputers. The list, which is compiled twice a year, will be unveiled Tuesday at the SC09 conference in Portland, Ore. The Kraken, another Cray XT5 system, has risen to third place from fifth by posting a performance of 832 teraflops, while IBM's BlueGene/P at Forschungszentrum Juelich in Germany is No. 4 with 825.5 teraflops. The Tianhe-1, at No. 5, marks the highest ranking ever for a Chinese supercomputer. Sandia National Laboratories' Red Sky, a Sun Blade system with a LINPACK performance of 423 teraflops, is the newcomer to the top 10. Hewlett-Packard accounted for 210 of the 500 fastest supercomputers, followed by IBM with 185. Eighty percent of the systems used Intel processors, and 90 percent used the Linux operating system.


It's All Semantics: Searching for an Intuitive Internet That Knows What Is Said--and Meant
Scientific American (11/09/09) Greenemeier, Larry

The push to develop the Semantic Web recently received fresh support through a National Science Foundation grant awarded to researchers at Rensselaer Polytechnic Institute. The $1.1 million grant will support the creation, by mid-2010, of a software programming tool kit that will allow scientists and researchers to make data from their work available to a larger audience. A Semantic Web would enable researchers to phrase their searches in a more natural way. A semantic interface would allow a researcher to visit a single research site, describe the desired information, and let ontologies and semantics find not only that information, but any relevant related information the researcher may have overlooked. "The Semantic Web has its own query language that takes advantage of meanings of concepts and their relationships," says Tom Narock, a faculty research assistant at NASA's Goddard Earth Sciences and Technology Center and the University of Maryland, Baltimore County. "You ask your question at a very high level, and it takes care of filling in the details for you."
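
The query language Narock refers to is SPARQL, the W3C standard for querying Semantic Web data. As a hedged illustration (the vocabulary and data below are invented and are not Rensselaer's planned tool kit), a few lines of Python using the rdflib library show how a relationship declared in the data lets a query pull in related results that a plain keyword match would miss.

```python
# Illustrative only: a tiny RDF graph and a SPARQL query over it with rdflib.
# The ex: vocabulary and the data are invented for this example.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()

# Datasets linked to the phenomena they describe, plus one relationship
# between phenomena that an ontology would normally supply.
g.add((EX.dataset1, EX.describes, EX.oceanTemperature))
g.add((EX.dataset2, EX.describes, EX.seaLevel))
g.add((EX.oceanTemperature, EX.relatedTo, EX.seaLevel))

# Ask for datasets about ocean temperature or anything related to it: the
# relatedTo link pulls in dataset2, which a keyword search alone would miss.
query = """
PREFIX ex: <http://example.org/>
SELECT ?dataset WHERE {
  { ?dataset ex:describes ex:oceanTemperature . }
  UNION
  { ex:oceanTemperature ex:relatedTo ?topic .
    ?dataset ex:describes ?topic . }
}
"""
for row in g.query(query):
    print(row.dataset)
```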


Abstract News © Copyright 2009 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]



