Association for Computing Machinery
Welcome to the November 22, 2013 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, for iPhones, and for iPads.

HEADLINES AT A GLANCE


Heterogeneous Systems Dominate the Green500
HPC Wire (11/20/13)

This year's Green500 list, announced at this week's SC13 Conference in Denver, shows heterogeneous supercomputing systems dominating the top 10 spots, continuing last year's trend. The list is topped by the Tokyo Institute of Technology's TSUBAME-KFC, with an efficiency of 4.5 gigaflops/watt. Each of TSUBAME-KFC's computational nodes combines two Intel Ivy Bridge processors and four NVIDIA Kepler graphics processing units (GPUs), and the pairing of Intel central processing units (CPUs) with NVIDIA GPUs is a model followed by all the systems in the top 10. Milestones for this year's Green500 include the first supercomputer to exceed the 4 gigaflops/watt mark, the first list on which all top 10 systems are heterogeneous, and the first decline in the average measured power consumed by Green500 systems relative to the previous list. "A decrease in the average measured power coupled with an overall increase in performance is an encouraging step along the trail to exascale," says Virginia Tech professor Wu Feng. In addition, the latest Green500 marks the first time the list's extrapolation to an exaflop supercomputer has fallen below 300 MW. The Green500 also has adopted new methodologies for measuring supercomputing system power, providing a more accurate representation of large-scale systems' energy efficiency.
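The exascale extrapolation above can be sanity-checked with simple arithmetic: dividing one exaflop by TSUBAME-KFC's measured efficiency gives a power draw well under 300 MW. A minimal Python sketch of that back-of-the-envelope calculation (illustrative only, not part of the Green500 methodology):

    # Power needed for an exaflop machine at the current leader's efficiency.
    exaflop = 1e18                      # flop/s
    efficiency = 4.5e9                  # flop/s per watt (4.5 gigaflops/watt)
    power_mw = exaflop / efficiency / 1e6
    print(f"{power_mw:.0f} MW")         # roughly 222 MW, below the 300 MW mark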


Most MOOC Users Well-Educated, Study Finds
The Wall Street Journal (11/20/13) Geoffrey Fowler

Massive open online courses (MOOCs) must overcome some barriers to achieve the goal of democratizing education, according to a University of Pennsylvania study published Wednesday. The survey included almost 35,000 students in more than 200 countries and territories who participated in the university's 32 MOOCs provided by Coursera. Most MOOC students are well-educated young men trying to gain new career skills, the survey found. More than 80 percent of the U.S.-based MOOC students already held college degrees, while among the general U.S. population only about 30 percent have degrees. However, in Brazil, Russia, India, China, and South Africa, the "educational disparity is particularly stark," with nearly 80 percent of MOOC students belonging to the richest 6 percent of the population. Men represented 56.9 percent of total MOOC students in the survey, but 64 percent in nations outside the Organization for Economic Cooperation and Development. Major MOOC providers say they are still determining how to improve educational outcomes, especially for more disadvantaged students. Coursera co-founder Andrew Ng in September noted that developing countries contribute 40 percent of the company's students, and he said Coursera is just starting to understand the impact of issues such as a lack of broadband access on educational outcomes.


SC13: Experts Debate Thorny Exascale Memory Issues
Network World (11/20/13) Jon Gold

Although addressing the problems posed by modern memory technology is essential to realizing exascale supercomputing performance, an expert panel at this week's SC13 Conference in Denver could not reach consensus on how to solve them. Intel fellow Shekhar Borkar says choosing the correct memory technology will be key to the exascale breakthrough, and argues that only DRAM and NAND are sufficiently mature. Meanwhile, Micron's Troy Manning cites the growing complexity of memory fabrication as an additional complication, noting that the sale price of memory components has not risen proportionately with the cost of cutting-edge manufacturing facilities. Opinion is divided on combining memory more closely with computing hardware: ARM's Andreas Hansson argues for a holistic system design strategy that integrates memory, interconnect, and compute, and University of Notre Dame professor Peter Kogge and IBM researcher Doug Joseph agree, pointing to hybrid stacked DRAM and non-volatile memories as a viable integration path. Borkar, however, disagrees, citing the difficulty of producing a generic method for integrating the two. Meanwhile, NVIDIA scientist Bill Dally suggests a grid-based array of small individual memory units on one large chip to facilitate highly efficient communication.


New Software to Allow More and Larger Images on Wikipedia
University of Southampton (United Kingdom) (11/20/13)

A pair of British researchers has developed new image-processing software that will dramatically increase Wikipedia's ability to host more and larger image files. Wikipedia had previously banned very large image files because loading them consumed too many resources. That has changed with the introduction of the new MediaWiki extension VipsScaler, created by the University of Southampton's Kirk Martinez and Imperial College London's John Cupitt. Martinez and Cupitt based VipsScaler on the fast, free VIPS image-processing system developed in the early 1990s. Whereas most image-processing systems load an entire image into memory before recreating it in a multi-step process that consumes large quantities of working memory, VIPS chops images into tiles that it passes through a system's processor cores before reassembling them. The researchers say this approach consumes a comparatively small amount of system memory and can be even faster on multi-processor systems. "In the early days, speeding up the processing of one 1GByte image from minutes to one minute was the aim," says Martinez. "Problems now are often to process millions of images or terabytes of images." He notes that VipsScaler also will be able to efficiently downscale large TIFF images.
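As a rough illustration of the streaming, tile-based processing style described above, the sketch below uses pyvips, the Python binding for libvips (the modern continuation of VIPS). The file names are placeholders, and the VipsScaler extension itself runs inside MediaWiki rather than through this API; this is only a sketch of the underlying library behavior.

    # Minimal pyvips sketch: with sequential access the library streams the
    # image through the pipeline in strips/tiles rather than decoding it all
    # into memory at once, which is why peak memory use stays small.
    import pyvips

    src = pyvips.Image.new_from_file("large_photo.tif", access="sequential")  # placeholder file name
    thumb = src.resize(0.1)                  # downscale to 10 percent of the original size
    thumb.write_to_file("thumbnail.jpg")     # tiles are pulled on demand as output is written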


Congress Is Told That Driverless Cars Are Coming--Sometime
The Washington Post (11/19/13) Ashley Halsey III

Witnesses offering testimony before the House Subcommittee on Highways and Transit on Tuesday agreed that autonomous vehicles will hit the mainstream, but suggested varying timelines for when that might take place. Automakers are already developing autonomous vehicles and have conducted road tests with model vehicles, and the requisite technology is being installed in production-line cars. "These vehicles could potentially be on the road by the end of the decade," says Eno Center for Transportation president Joshua L. Schank. "The benefits from autonomous vehicles are substantial, but the barriers also are substantial." Autonomous vehicles could significantly reduce crashes and fatalities. "It's not a question of if, but when" autonomous vehicles reach the roadways, says Carnegie Mellon professor Raj Rajkumar. "This technology will basically prevent human beings from hurting themselves." The technology also would ease congestion, because computer controllers and linked vehicles could maintain speed most of the time and minimize delays. In addition, autonomous cars would provide increased mobility for people with disabilities as well as elderly and young people. Current barriers to widespread rollouts include a vehicle cost that could initially be as much as $100,000, privacy and hacking concerns, and liability protections for automakers in the event of a collision.


Top500 Shows Growing Inequality in Supercomputing Power
Computerworld (11/20/13) Joab Jackson

Although the fastest supercomputers continue to get faster, midlevel systems are not realizing gains at the same pace, according to the most recent Top500 list of high-performance computers. Half of the total 250 petaflop/s (quadrillions of calculations per second) of supercomputing power on the list is attributed to the top 17 entrants, notes Top500 ranking organizer Erich Strohmaier. "The list has become very top heavy in the last couple of years," Strohmaier says. "In the last five years, we have seen a drastic concentration of performance capabilities in large centers." Governments and industry alike are purchasing fewer midsized systems and focusing on developing fewer, larger systems. This trend could gradually reduce the number of administrators and engineers skilled in running high-performance computers, although this might not be an issue because most of the largest systems are shared across multiple users. Alfred Uhlherr, a research scientist at Australia's Commonwealth Scientific and Industrial Research Organization, suggests the trend could be due in part to some organizations choosing not to participate in the Top500 when they know their supercomputers will not place toward the top. Another potential deterrent to participating is the time-consuming Linpack benchmark that supercomputers must run to be considered.
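For context, the Linpack benchmark measures the rate at which a system solves a large dense system of linear equations. The toy Python sketch below is not the official HPL code and uses a tiny problem size, but it illustrates the kind of measurement involved, using the standard (2/3)n^3 flop count for the solve.

    # Toy Linpack-style measurement: time a dense solve of Ax = b and convert
    # the elapsed time into a flop rate. Real HPL runs use enormous,
    # distributed problem sizes; this only illustrates the metric.
    import time
    import numpy as np

    n = 2000
    A = np.random.rand(n, n)
    b = np.random.rand(n)

    start = time.perf_counter()
    x = np.linalg.solve(A, b)               # LU factorization plus triangular solves
    elapsed = time.perf_counter() - start

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    print(f"~{flops / elapsed / 1e9:.1f} gigaflop/s on this machine")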


Carnegie Mellon Computer Searches Web 24/7 to Analyze Images and Teach Itself Common Sense
Carnegie Mellon News (PA) (11/20/13) Byron Spice

Carnegie Mellon University (CMU) researchers have developed the Never Ending Image Learner (NEIL), software that searches the Web for images, attempting to understand them on its own and, as it builds a visual database, to develop common sense. The researchers say NEIL draws on recent advances in computer-vision technology that enable it to identify and label objects in images, characterize scenes, and recognize attributes such as colors, lighting, and materials. They say the data NEIL generates will further enhance the ability of computers to understand the visual world. NEIL also makes associations between objects to obtain common sense information that humans often take for granted. Images contain a great deal of common sense information about the world, and "people learn this by themselves and, with NEIL, we hope that computers will do so as well," says CMU professor Abhinav Gupta. A computer cluster has been running the NEIL program since late July and already has analyzed 3 million images. "What we have learned in the last five to 10 years of computer vision research is that the more data you have, the better computer vision becomes," Gupta says.


Obama Puts $100-Million Behind Quest to Improve Tech Training
Chronicle of Higher Education (11/20/13) Katherine Mangan

President Barack Obama on Nov. 19 unveiled the Youth CareerConnect program, a $100-million grant competition to encourage educators and executives to work together to enable high school students to earn college and industry credentials. Schools designed around such technology objectives would produce students "better equipped for the demands of a high-tech economy," Obama says. The Pathways in Technology Early College High School (P-TECH), developed with the City University of New York and IBM in Brooklyn, N.Y., is a national model for this new type of school. Students at P-TECH begin in ninth grade and graduate four to six years later with associate degrees in applied science in computer systems technology or electromechanical engineering technology. Two schools similar to P-TECH opened in New York City this year and more are planned, while five similar schools opened in Chicago in partnership with firms such as Verizon and Microsoft. Youth CareerConnect could produce more such schools next year, as it begins to award 25 to 40 grants, each with a 25-percent matching requirement from the recipient. IBM International Foundation president Stanley S. Litow, speaking at a House hearing on the reauthorization of an existing $1-billion technical-training program, says the new programs are aligned with "real jobs and needed skills."


IU Digital Archaeologist to Unveil Ancient Roman Emperor's Villa, Virtually, on Friday
IU News Room (11/19/13) Stephen Chaplin

Indiana University archaeo-informaticist Bernie Frischer and an international team will unveil a virtual recreation of a second-century Imperial Roman villa in Washington, D.C., on Nov. 22. Frischer and his colleagues from Ball State University, the Italian Ministry of Culture, and several other organizations recreated the villa of the Roman Emperor Hadrian using the Unity gaming engine. It took the team several years to painstakingly recreate the details of the 250-acre estate, which comprised dozens of buildings, including temples, libraries, banquet halls, and the so-called Maritime Theater, a circular island surrounded by a moat that housed Hadrian's private quarters. Users can explore the virtual villa using several different avatars, taking on the role of anyone from a slave to a senator, and observe non-player characters recreating historically accurate activities such as worship ceremonies and court visits. The project is complemented by a website featuring information and images of the villa, a World Heritage Site, as it exists today. "The website makes it possible to study the state of the ruins today, including many sites on private land or in parts of the archaeological park closed to the public," Frischer says.


Internet Engineers Plan a Fully Encrypted Internet
Technology Review (11/18/13) David Talbot

In response to the U.S. National Security Agency's (NSA) Internet surveillance scandal, the Internet Engineering Task Force (IETF) wants to encrypt all Web traffic, and expects a revamped system to be ready to launch by the end of 2014. The IETF change would introduce encryption by default for all Internet traffic, and the work to make this happen in the next generation of the hypertext transfer protocol (HTTP), known as HTTP 2.0, is proceeding "very frantically," says Trinity College researcher Stephen Farrell. Some experts say the new protocol would make spying more difficult for agencies such as the NSA, which could force them to focus on specific national security targets rather than bulk data collection. "I think we can make a difference in the near term to have Web and email encryption be ubiquitous," Farrell says. The IETF also is working to improve security in email and instant-message traffic. Google's Tim Bray recently outlined a variety of other potential technical avenues for improving Internet privacy. "This is a policy problem [and] not a technology problem; but to the extent that anything can be done at the technology level, a lot of the people who can do it are [at the IETF]," Bray says.


Scientists Create Mega Quantum System Cluster
Computerworld Australia (11/18/13) Byron Connolly

Scientists at the University of Sydney, the University of Tokyo, and the Australian National University say they have built the world’s largest quantum circuitboard. The quantum circuitboard unifies 10,000 quantum systems in a single component, marking a threefold increase in magnitude over the closest competing design. "The scalability afforded by transistors enabled the explosion in computing technology we've seen in the last 65 years," says the University of Sydney's Nicolas Menicucci. "Similarly, this breakthrough promises scalable design of laser-light quantum computing hardware." Real-world quantum computers would enable scientists to solve challenging computational quandaries that are beyond the power of today's supercomputers. "Huge advances in telecommunications, physics, and counterintelligence are possible when we have devices with such immense computational power," Menicucci says, noting that precise control of tiny quantum systems and scalability are the two main obstacles to working quantum computers. "We have made a breakthrough in scalability for the basic 'circuitboard' of a quantum computer made out of laser light," Menicucci says. He says additional progress toward precise control of quantum systems will be necessary to leverage this scalability breakthrough.


Paving the Way for More Efficient, Video-Rich Internet
CORDIS News (11/18/13)

Online video accounts for 90 percent of Internet traffic, but the Web was not designed with video in mind and its architecture handles video traffic very inefficiently. The European Union's Multimedia Transport for Mobile Video Applications (MEDIEVAL) project was launched to design a new Internet architecture that can support the requirements of video traffic. To accomplish this, the researchers pursued specific enhancements to the way video data moves across the network, focusing on enhanced wireless access support to optimize video performance, a novel Internet Protocol mobility architecture adapted to the requirements of video traffic, transport optimization for video distribution, and network-aware video services that interact with the underlying layers. The result is technology designed to improve users' viewing experience. As of June 2013, the researchers had developed the new architecture solutions and validated them in three separate evaluation scenarios: Internet TV, personal broadcasting, and video-on-demand.


New Algorithms Improve Animations Featuring Fog, Smoke and Underwater Scenes
Phys.Org (11/18/13)

A team from Disney Research, Zurich has developed joint importance sampling, a method to more efficiently render animated scenes that involve fog, smoke, or other substances that affect the travel of light. The researchers say their method significantly reduces the time necessary to produce high-quality images or animations. Joint importance sampling helps identify the potential paths light can take through foggy or underwater scenes. Disney researcher Wojciech Jarosz says traditional methods for producing noise-free images of such complex scenes can take days, but the new algorithms can reduce that time by a factor of up to 1,000. Jarosz notes that faster rendering enables artists to focus on the creative process rather than waiting for the computer to process the data. Joint importance sampling is a type of Monte Carlo algorithm, a class of methods that work by analyzing a random sampling of the possible paths light might take through a scene and then averaging the results to create the overall effect. However, many of those randomly chosen paths contribute little, so computing them can waste time or introduce errors. The new algorithm overcomes these problems by choosing the locations along the random paths with mutual knowledge of the camera and light source locations.
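To make the Monte Carlo idea concrete, the generic Python sketch below (not Disney's joint importance sampling itself) estimates a simple light-falloff integral two ways: with uniform random samples and with samples drawn from a density proportional to the integrand. Both converge to the same answer, but the importance-sampled version has far less noise, which is the effect the new technique pursues for full volumetric light paths.

    # Generic importance-sampling illustration with a toy integrand f(x) = exp(-x).
    import math
    import random

    def f(x):
        return math.exp(-x)              # toy "light contribution" along a ray

    def uniform_estimate(n, length=10.0):
        # uniform samples on [0, length]; many land where f is nearly zero
        total = sum(f(random.uniform(0.0, length)) for _ in range(n))
        return length * total / n

    def importance_estimate(n):
        # samples drawn from p(x) = exp(-x), matching the integrand; each
        # weight f(x)/p(x) equals 1, so the noise collapses (the ideal case)
        samples = (random.expovariate(1.0) for _ in range(n))
        return sum(f(x) / math.exp(-x) for x in samples) / n

    # both print roughly 1.0, the analytic value of the integral
    print(uniform_estimate(1000), importance_estimate(1000))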


Abstract News © Copyright 2013 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe