Association for Computing Machinery
Welcome to the February 1, 2013 edition of ACM TechNews, providing timely information for IT professionals three times a week.

ACM TechNews mobile apps are available for Android phones and tablets, as well as for iPhones and iPads.

HEADLINES AT A GLANCE


Experts Discuss the Future of Supercomputers
HPC Wire (01/29/13) Tiffany Trader

Science Live recently hosted an online chat with University of Tennessee professor Jack Dongarra and Lawrence Berkeley National Laboratory deputy director Horst Simon. Dongarra and Simon discussed the coming class of exascale systems and the research into developing supercomputers that are hundreds of times faster than today's state-of-the-art systems. Dongarra says the benefits of exascale computing reach into every segment of technology and science, including energy research, life science, manufacturing, and entertainment. Simon says the major barrier to developing exascale systems is power consumption. "Extrapolation from today's technology to the exascale would lead to systems with 100 MW or more power requirements," he says. Breaking the exascale barrier will require a major effort on multiple fronts, including drastic changes to hardware, software, algorithms, and applications, say Dongarra and Simon. Another hurdle is the financial cost of developing exascale systems; Dongarra notes that building an exascale-class system by 2020 would cost about $200 million, not including the cost of the research that must come first.


3D Microchip Created
University of Cambridge (01/31/13)

University of Cambridge researchers have developed a type of microchip based on spintronics that allows information to travel in three dimensions. To create the microchip, the researchers used an experimental technique called sputtering, which involves layering cobalt, platinum, and ruthenium atoms on a silicon chip. The cobalt and platinum atoms store the digital information much as hard disk drives store data, while the ruthenium atoms act as messengers, communicating data between the layers of cobalt and platinum. The researchers then used a laser technique, the magneto-optical Kerr effect (MOKE), to probe the data content of the different layers. As the researchers switched a magnetic field on and off, they used the MOKE signal to watch the data climb layer by layer from the bottom of the chip to the top. "I find it amazing that by using nanotechnology not only can we build structures with such precision in the lab, but also using advanced laser instruments we can actually see the data climbing this nano-staircase step by step," says Cambridge professor Russell Cowburn. "This is the 21st century way of building things--harnessing the basic power of elements and materials to give built-in functionality."


Computer Scientists Find New Shortcuts for Infamous Traveling Salesman Problem
Wired News (01/30/13) Erica Klarreich

Researchers from Stanford and McGill Universities in 2011 discovered a way to improve upon a 35-year-old algorithm that, up until then, was the best solution to the vexing traveling salesman problem, and that breakthrough inspired other researchers to reexamine the problem and devise significantly better algorithms. The traveling salesman problem asks for the shortest route that visits every city in a group of cities linked by highways, then returns to the original city. Although it would appear the problem could easily be solved by checking each round-trip route to determine which is shortest, as more cities are added to the group, the number of round-trips quickly outpaces the ability of even the fastest computers to calculate the answer, reaching over 87 billion possible routes with just 15 cities. In 1976, professor Nicos Christofides developed an algorithm that finds a route no more than 50 percent longer than the shortest route, and that stood as the best solution until the Stanford-McGill team found a way to beat Christofides' algorithm. Following the Stanford-McGill team's breakthrough, other teams used rounding methods to produce even better algorithms, with the current record able to find a route no more than 40 percent longer than the shortest.
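The factorial blowup the article describes, and the brute-force approach it rules out, can be sketched in a few lines of Python. The 4-city distance matrix below is an invented toy instance, not from the article:

```python
from itertools import permutations
from math import factorial

# Fixing a start city, the remaining n-1 cities can be visited in any
# order, so there are (n - 1)! possible round-trips.
def route_count(n: int) -> int:
    return factorial(n - 1)

print(route_count(15))  # 87178291200 -- over 87 billion, as the article notes

# Brute force is still feasible on a tiny invented instance: symmetric
# distances between 4 cities.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

def tour_length(order):
    path = (0,) + order + (0,)  # start and end at city 0
    return sum(dist[a][b] for a, b in zip(path, path[1:]))

best = min(permutations(range(1, 4)), key=tour_length)
print(best, tour_length(best))  # shortest round-trip and its length
```

Checking every permutation works for 4 cities but collapses quickly; approximation algorithms like Christofides' trade exactness for a guaranteed bound on how far the answer can be from optimal.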


Computer Science Part of English Baccalaureate
BBC News (01/30/13) Sean Coughlan

Britain is adding computer science to its English Baccalaureate (EBacc), and will count the subject as a science option toward the qualification. Students pursuing the EBacc, one of the measures used in school league tables, must obtain good grades in core subjects such as English, math, sciences, humanities, and language. Last January, Education Secretary Michael Gove announced that he was replacing the information and communications technology curriculum in schools with a more challenging computer science curriculum, developed to fulfill the requirements of technology companies. A panel of technology experts, including representatives from Google and Microsoft, called for the inclusion of computer science in the EBacc in October. The announcement "marks a significant further investment in the next generation of British computer scientists," says a Google spokesperson. A spokesperson for the U.K. Department of Education says "having computer science in the EBacc will have a big impact on schools over the next decade. It will mean millions of children learning to write computer code so they are active creators and controllers of technology, instead of just being passive users."


Microsoft to Big Data Programmers: Try F#
InfoWorld (01/31/13) Paul Krill

Microsoft's F# functional-first programming language is designed for data-oriented programming, as well as parallel programming and algorithmic development. In an interview, F# developer and Microsoft researcher Don Syme notes that one primary difference between F# and other programming languages such as C#, C++, and Visual Basic is that F# is a functional-first language and, in many ways, a data-first programming language. "The construction of the language is carefully designed to facilitate data-oriented problem-solving and manipulation in a functional programming way," he says. "One of the key aspects of functional programming is to reduce the bug rate for doing routine manipulations over data structures." A key characteristic of a functional language is that data is immutable, meaning users work with fixed descriptions of the data rather than modifying it in place. F# reduces the time to deployment for analytical software components, Syme says. He notes that F# also is very good at parallel programming because it uses a functional, stateless approach. F# also can be used for server-side Web programming. "The Visual F# tools aren't particularly aimed at client-side Web programming, but there are other F# tools provided by third-party companies," Syme notes.
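The functional-first style Syme describes can be sketched outside F# as well; the Python snippet below (with an invented sensor-reading record) shows the core idea of transforming immutable data into new values rather than mutating it in place:

```python
from dataclasses import dataclass, replace

# frozen=True makes instances immutable, standing in for the
# immutable-by-default records of a functional-first language.
@dataclass(frozen=True)
class Reading:
    sensor: str
    value: float

readings = (Reading("a", 1.0), Reading("b", -2.0), Reading("c", 3.0))

# Each step builds new data instead of mutating the old, which rules
# out a whole class of aliasing and shared-state bugs.
calibrated = tuple(replace(r, value=r.value * 1.1) for r in readings)
positive = tuple(r for r in calibrated if r.value > 0)

print([r.sensor for r in positive])  # ['a', 'c']; readings is untouched
```

Because no step mutates shared state, pipelines like this can also be parallelized safely, which is the property Syme credits for F#'s strength in parallel programming.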


Lane-Swapping Helps Autonomous Vehicles Avoid Collisions, Study Finds
Inderscience Publishers (01/28/13)

Loughborough University researchers are developing algorithms that could help autonomous vehicle developers include escape maneuvers to allow vehicles to quickly and safely change lanes to avoid collisions. The researchers have developed a computer simulation that optimizes collision-avoidance parameters concurrently during a safety maneuver and shows how speed reduction and changing lanes could work with an autonomous vehicle. The optimal rapid lane change would be an aggressive, high-g maneuver that would destabilize the vehicle, requiring additional computing power to act quickly to correct understeer and other issues that arise during and after such a vehicle movement. The simulations show that at 70 miles per hour, braking alone will not always lead to a safe outcome, and in many situations a lane swap would almost certainly be needed. The present limitations of on-board computing power in autonomous vehicles and the need for high-speed data streams measuring real tire friction coefficients and other variables mean that the algorithm is currently limited to the simulation. The research could lead to the development of more powerful, safety-aware driving systems.
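A back-of-envelope calculation, not the researchers' model, illustrates why braking alone can fall short at highway speed; the friction coefficient and lane width below are assumed typical values:

```python
import math

MU = 0.8          # assumed tire-road friction coefficient
G = 9.81          # gravitational acceleration, m/s^2
LANE_WIDTH = 3.5  # assumed lane width, m

v = 70 * 0.44704  # 70 mph converted to m/s

# Braking: decelerate at mu*g until stopped; distance = v^2 / (2*a).
braking_distance = v**2 / (2 * MU * G)

# Lane change: hold a lateral acceleration of mu*g while moving one
# lane width sideways, and count the forward distance covered.
t_swerve = math.sqrt(2 * LANE_WIDTH / (MU * G))
swerve_distance = v * t_swerve

print(f"braking: {braking_distance:.0f} m, lane change: {swerve_distance:.0f} m")
```

Under these assumptions the swerve clears the obstacle in roughly half the road the stop requires, which is why a lane swap can succeed where braking cannot; the hard part the article points to is the real-time computation and tire-friction sensing needed to execute such a high-g maneuver safely.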


Cornell Engineers Solve a Biological Mystery and Boost Artificial Intelligence
Cornell University (01/29/13) Blaine Friedlander

Researchers at Cornell University, the University of Wyoming, and Pierre and Marie Curie University have used 25,000 simulated generations of evolution within computers to determine why biological networks tend to be organized as modules, a finding that will lead to a better understanding of the evolution of complexity. The discoveries also could help evolve artificial intelligence, enabling robot brains to develop more like animal brains. Researchers had assumed that modules evolved because entities that were modular could respond to change more quickly, and therefore had an adaptive advantage over non-modular competitors. However, the researchers found that evolution produces modules because modular designs have fewer and shorter network connections, which are costly to build and maintain. The researchers tested the theory by simulating the evolution of networks with and without a cost for network connections. "Once you add a cost for network connections, modules immediately appear," says Wyoming professor Jeff Clune. "Without a cost, modules never form. The effect is quite dramatic." The researchers say the simulation's outcome may help explain the almost universal manifestation of modularity in biological networks as diverse as neural networks and vascular networks, gene regulatory networks, protein-protein interaction networks, metabolic networks, and man-made networks such as the Internet.
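The effect Clune describes can be sketched with a toy fitness function, not the study's actual simulation; the performance scores and connection counts below are invented to show how a per-connection cost flips which design wins:

```python
# Fitness = task performance minus a price paid per network connection.
def fitness(performance, n_connections, cost_per_connection):
    return performance - cost_per_connection * n_connections

# Hypothetical networks: the tangled one performs slightly better on
# the task but uses three times as many connections.
modular = {"performance": 0.95, "connections": 20}
tangled = {"performance": 0.97, "connections": 60}

for cost in (0.0, 0.01):
    fm = fitness(modular["performance"], modular["connections"], cost)
    ft = fitness(tangled["performance"], tangled["connections"], cost)
    winner = "modular" if fm > ft else "tangled"
    print(f"cost={cost}: {winner} wins")
```

With zero connection cost the tangled network's slim performance edge wins; once each connection carries even a small price, selection favors the sparser, modular design, mirroring the "modules immediately appear" result.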


Streaming Video Over Temporary Networks
Research Council of Norway (01/28/13) Kristin Straumsheim Grønli

Norwegian researchers are developing software designed to make it easier to transmit video streams over networks in hard-to-access areas. The researchers are focused on Delay Tolerant Streaming Services systems, which are a type of network that tolerates disruptions and delays. They are studying what it will take to enable a mobile ad-hoc network to manage relatively large amounts of multimedia data, such as video, adequately in emergency and rescue operations. "First and foremost, we are concentrating on transmitting multimedia data over mobile ad-hoc networks with an eye to use in emergency and rescue operations in areas with no permanent data infrastructure," says University of Oslo professor Thomas Plagemann. The researchers have already developed several partial solutions, and the next step is to combine them into a complete system. One of the solutions utilizes a process called cross-layer optimization. Whereas the rule in network programming is to address only the layer directly under the layer being worked on, Plagemann's group has developed middleware that simultaneously processes all of the layers. "We don't want the applications programmer to have to worry about where the data are being sent or what type of device or operating system is in use," Plagemann says.


Rutgers Physics Professors Find New Order in Quantum Electronic Material
Rutgers University (01/31/13) Carl Blesch

Rutgers University professors have proposed an explanation for a new type of order in a rare material made with uranium, which eventually could lead to enhanced computer displays and data storage systems. Electrons behave like tiny magnets, and normally these magnets can point in any direction; however, when they flow through the new material, they come out with their magnetic fields aligned with the material's main crystal axis. Rutgers professor Piers Coleman says the effect stems from a new type of hidden order in the material’s magnetic and electronic properties. Changes in order are what make liquid crystals, magnetic materials, and superconductors function and perform useful tasks. "Our quest to understand new types of order is a vital part of understanding how materials can be developed to benefit the world around us," Coleman says. Recent experiments on the material at the National High Magnetic Field Laboratory at Los Alamos National Laboratory provided the researchers with data to refine their discovery. "This new category of order may open the world to new kinds of materials, magnets, superconductors, and states of matter with properties yet unknown," says Rutgers professor Premala Chandra.


Bandwidth-Sharing App Brings Connectivity to All
New Scientist (01/26/13) Hal Hodson

The Massachusetts Institute of Technology's Eyal Toledano has developed AirMobs, an app that would allow smartphone owners to share their mobile Internet connections, enabling others around them to avoid roaming charges and steep overage fees. For every kilobyte shared via the app, the smartphone owner will receive a data credit that can be used later. AirMobs shares a smartphone's data plan with others through its Wi-Fi signal. In locations where one carrier has coverage, the app can provide connectivity for all smartphones. Users will be able to choose how much data they want to share, and the app regularly checks the smartphone's battery life and strength of cellular connection. AirMobs also detects movement, and when conditions are right, the Wi-Fi transmitter switches on automatically, enabling others to connect. Toledano has tested the app, but concerns about objections from cellular carriers have stopped him from releasing it to the Google Play Store. "If networks decided to collaborate and let all devices roam freely, AirMobs would be less needed," Toledano says. "But where operators aren't collaborating, user-to-user collaboration can fix the situation."


Chicago Pilot Aims to Enhance STEM Education
Government Technology (01/30/13) Tanya Roscorla

Chicago has launched a new pilot program designed to introduce students to a potential career path in science, technology, engineering, and math (STEM). Chicago Public Schools, the City Colleges of Chicago, and the Starter League have partnered to offer Web development courses to students at select high schools and City Colleges next fall. The Starter League, formerly called the Code Academy, will train educators and work with schools and colleges to design the Web development courses. More than 12 teachers will receive training this summer. The Starter League teaches beginners how to code, design, and ship Web applications. "We know this is where society is evolving, where the workforce is evolving, and we need our students prepared," says Beth Swanson, deputy chief of staff for education in the Office of Mayor Rahm Emanuel. Although Chicago's schools already offer computer science courses, that does not mean that students will be prepared for jobs in the field, says Starter League CEO Neal Sales-Griffin. "I looked at all the classes that were being offered and programs that had been implemented, and nothing really focused on the practical nature of how software is built," Sales-Griffin says.


Wanted: 40 Trillion Gigabytes of Open Storage, Stat!
PC World (01/29/13) Jon L. Jacobi

IDC predicts a 50-fold increase in the total amount of digitally stored data between 2010 and 2020, which means that by the end of the decade the world's data footprint will reach 40 trillion gigabytes. Conventional storage devices such as hard drives, USB flash drives, solid-state drives, and optical drives will evolve to help reach this milestone. Standard hard drives will remain the dominant storage mechanism for the world's data in the near future, say Gartner analysts John Monroe and Joseph Unsworth. Hard drive performance and capacity will continue to see improvements in the coming years, such as helium-filled drives aimed at enterprise and cloud storage systems that offer a 40-percent increase in drive capacity and a 20-percent improvement in energy efficiency. In addition, heat-assisted magnetic recording and self-ordered magnetic arrays could deliver a 10-fold or greater increase in areal density. In the future, consumers will use optical media mostly as a physical delivery system for entertainment and software, and occasionally as emergency boot media. By 2016, the share of personal data stored in the cloud will rise to 36 percent, compared to just 7 percent today.
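A quick unit check puts the headline figure in more familiar terms; the 2010 baseline below is simply implied by dividing by the 50-fold growth factor:

```python
# 40 trillion gigabytes expressed in zettabytes
# (1 GB = 10^9 bytes, 1 ZB = 10^21 bytes, so 1 ZB = 10^12 GB).
gigabytes = 40e12
zettabytes = gigabytes * 1e9 / 1e21
print(zettabytes)  # 40.0 -- i.e., 40 zettabytes by 2020

# Implied 2010 baseline under the 50-fold growth figure.
baseline_zb = zettabytes / 50
print(baseline_zb)  # 0.8 ZB
```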


Meet the Programming Language That Uses Arabic Script
The Register (UK) (01/25/13) Neil McAllister

A computer scientist has written the first programming language based on Arabic script, chosen equally for its aesthetic and programming values. Named "heart" in Arabic, the new language is "the first programming language that is a conceptual art piece," says computer scientist Ramsey Nasser, who developed it. To create artistic code, he can change the lengths of the lines that connect the Arabic letters. Another reason Nasser chose Arabic is that he wanted to move away from the Euro-centric programming that presents an obstacle to people from other areas. "If we are going to really push for coding literacy, which I do, if we are going to push to teach code around the world, then we have to be aware of what the cultural biases are and what it means for someone who doesn't share that background to be expected to be able to reason in those languages," Nasser says. In addition to filling aesthetic and cultural voids in programming, he says the language also is Turing-complete and can execute any type of computation, including computing the Fibonacci sequence.
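The Fibonacci program the article cites as evidence of Turing-completeness cannot be reproduced here in its Arabic script, but the same computation, sketched in Python for reference, is:

```python
# Iterative Fibonacci: each term is the sum of the two before it.
def fib(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib(i) for i in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```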


Abstract News © Copyright 2013 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]