Welcome to the November 19, 2012 edition of ACM TechNews, providing timely information for IT professionals three times a week.
ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.
HEADLINES AT A GLANCE
As Boom Lures App Creators, Tough Part Is Making a Living
New York Times (11/17/12) David Streitfeld
The number of computer software engineers and app writers increased almost 8 percent in 2010 to more than one million, according to the latest available U.S. government data for that category, even as the overall economy continued to struggle. In fact, there are nearly as many software engineers as there are lawyers in the U.S., and coders now outnumber farmers. Many software developers are working to take advantage of the potentially lucrative app market, as the market for smartphones has exploded. However, despite the mobile apps boom, only a small minority of developers actually make a living by creating their own apps, according to surveys and experts. "Technology is always destroying jobs and always creating jobs, but in recent years the destruction has been happening faster than the creation," says Massachusetts Institute of Technology professor Erik Brynjolfsson. According to a recent TechNet study, 25 percent of respondents said they had made less than $200 in lifetime revenue from Apple, 25 percent had made more than $30,000, and 4 percent had made more than $1 million. "My guess is that very few developers make a living off their own apps," says 148Apps.com manager Jeff Scott.
A Leap Forward in Brain-Controlled Computer Cursors
Stanford University (11/18/12) Kelly Servick
Stanford University researchers have developed ReFIT, an algorithm that improves the speed and accuracy of neural prosthetics that control computer cursors. In a side-by-side comparison, the cursors controlled by the ReFIT algorithm doubled the performance of existing systems and approached the performance of a real arm. "These findings could lead to greatly improved prosthetic system performance and robustness in paralyzed people, which we are actively pursuing as part of the FDA Phase-I BrainGate2 clinical trial here at Stanford," says Stanford professor Krishna Shenoy. The system uses a silicon chip that is implanted in the brain. The chip records "action potentials" in neural activity from several electrode sensors and sends the data to a computer. The researchers want to understand how the system works under closed-loop control conditions in which the computer analyzes and implements visual feedback taken in real time as the user neurally controls the cursor toward an onscreen target. The system can make adjustments in real time while guiding the cursor to a target, similar to how the hand and eye work in tandem to move a mouse cursor. The researchers designed the algorithm to learn from the user's corrective movements, allowing the cursor to move more precisely than in other systems.
Computers Identify What Makes Abstract Art Move Us
New Scientist (11/16/12) Hal Hodson
University of Trento researchers have developed a machine-vision system that can measure how color and shapes are distributed in abstract art. The system also used data on how 100 volunteers responded to the paintings to determine the emotional impact of the artistic elements. To test the system, the researchers gave the program other works of art and asked it to predict the typical viewer's emotional response, ranging from extremely negative to extremely positive. About 80 percent of the time, the system was able to produce a score that matched the average response from a new group of 100 volunteers. The research could lead to using emotional data in the creation of more advanced machine art, says Penn State University professor James Wang. Similar machine-vision systems also could help improve the Painting Fool, an artificial intelligence-based program developed by Imperial College London researcher Simon Colton. He says that with the ability to identify those aspects of an image that elicit emotion, the Painting Fool could choose a basic theme for a piece, scan the Web to find strongly emotional images, and use the results to create original pieces.
World's Most Powerful Big Data Machines Charted on Graph 500
IDG News Service (11/16/12) Joab Jackson
Lawrence Livermore National Laboratory's Sequoia supercomputer ranked first in the most recent Graph 500 list, which tracks how well supercomputers handle big data-related workloads. The Graph 500 list is important because many high-performance computing (HPC) machines are being put to work on data analysis, instead of traditional tasks such as modeling and simulation. "Everyone has recognized that data is a new workload for HPC," says Georgia Tech professor David Bader. When compared to the Top500 supercomputer list, which tracks how effectively HPC systems execute floating-point operations, the Graph 500 places greater emphasis on how well a computer can search through a large data set. "Big data has a lot of irregular and unstructured data sets, irregular accesses to memory, and much more reliance on memory bandwidth and memory transactions than on floating-point performance," Bader says. Nine out of the top 10 systems on the most recent Graph 500 list are IBM BlueGene/Q models. Bader notes the Graph 500 is not a replacement for the Top500 list, but rather a complementary benchmark. He says other projects, such as the Green500 and the HPC Challenge, measure more aspects of supercomputing performance.
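The Graph 500 kernel that exercises those irregular memory accesses is a breadth-first search over a large synthetic graph, scored in traversed edges per second (TEPS). A minimal single-machine sketch of that measurement (illustrative only; the real benchmark generates a huge Kronecker graph and runs the BFS distributed across the whole machine):

```python
import time
from collections import deque

def bfs_teps(adj, source):
    """Breadth-first search over an adjacency list; returns the BFS parent
    tree and a traversed-edges-per-second (TEPS) rate, the Graph 500 metric."""
    parent = {source: source}
    frontier = deque([source])
    edges_traversed = 0
    start = time.perf_counter()
    while frontier:
        u = frontier.popleft()
        for v in adj.get(u, ()):
            edges_traversed += 1          # every edge inspection counts
            if v not in parent:
                parent[v] = u
                frontier.append(v)
    elapsed = time.perf_counter() - start
    return parent, edges_traversed / max(elapsed, 1e-12)

# Tiny example graph (undirected edges listed in both directions).
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
parents, teps = bfs_teps(adj, 0)
```

Unlike a dense floating-point kernel, almost all the work here is pointer chasing through memory, which is why Graph 500 rankings stress memory bandwidth and transaction rate rather than FLOPS.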
A World Without Limits
New technologies are blurring the boundaries between the real and virtual worlds, but it remains uncertain whether they can improve people's lives. For example, Barcelona University scientists are working to link a human brain to a robot using skin electrodes and video goggles so users feel as though they are actually in the android body. A remote avatar can be used to travel without leaving home, but several senses need to work together to make the experience more realistic. Touching virtual objects, and feeling their texture and weight, will make the digital world more natural and easier to live in. The goal of another European project is to use virtual models to change the real world, making it more accessible. Scientists are using cameras and sensors to study how physically impaired people move, and using the data to simulate how they cope with everyday tasks. The models could enable industrial designers to better understand how safe and convenient new products will be, and to better adapt them for people with physical limitations.
Do We Need Cyber Cops for Cars?
NextGov.com (11/13/12) Aliya Sternstein
The U.S. National Highway Traffic Safety Administration (NHTSA) has yet to devise standard safety guidelines for automobile electronics systems, and industry guidelines were presented to the administration last year by the Transportation Department's John A. Volpe National Transportation Systems Center. The center's experts told NHTSA that strong federal leadership is a necessity for creating sector-specific guidelines, and they recommended that the agency "get involved in the rule-making process early." NHTSA may be considering regulations according to its 2013 budget request, which includes $10 million to study cyberrisks. The proposed plan dovetails with a National Academy of Sciences study's recommendations for NHTSA to become more cyberproficient. "The kinds of things you worry about is either that your car is leaking information that you wish to be private, or that an adversary can control features of your car," says University of California, San Diego professor Stefan Savage. Former NHTSA administrator John Maddox suggests that given the continuous evolution of hacking, vehicle manufacturers should develop a voluntary regime of standard cybersecurity safeguards. "The industry would be more knowledgeable and more nimble than government can be in this area," he says.
UMass, Georgia Tech to Share $6.2 Million Grant for Computer Education
The Republican (MA) (11/15/12) Fred Contrada
The University of Massachusetts (UMass) and Georgia Tech, backed by a $6.24 million U.S. National Science Foundation grant, will continue their partnership to share strategies for attracting students to computer science. More than 21,000 students and 1,200 educators have attended some 350 events sponsored by the alliance since its founding in 2007, according to UMass. The alliance has had significant success in helping community college computer science students transfer to four-year colleges. By combining and building on their work, UMass and Georgia Tech hope to share their knowledge with other states. "Right now there are just not enough people to fill the jobs," says Commonwealth Alliance for Information Technology Education project manager Renee Fall. She says women and other underrepresented sectors of the population can bring new perspective and fresh skills to computing and make the workforce more diverse overall. "Using computers is exciting and fun, but knowing how to program and design computers is a whole other level," Fall says.
The TV Is the New Tablet: How Gesture-Based Computing Is Evolving
Network World (11/14/12) Colin Neagle
The proliferation of smart TVs and gesture-based computing could enable viewers to mount and control everything they need on the living room wall. For example, the gesture-based technology MoveEye enables users to navigate a smart TV with hand gestures. However, for more intuitive navigation to become popular, the technology will have to work well enough to convince users to move away from traditional input devices. "It's interesting to see this generation of super-smart televisions and they all come with a remote that was built in the 1990s, which just seems kind of crazy," says Gartner's Stephen Prentice. He notes touchless, gesture-based computing also holds significant advantages over touchscreen technology in certain use cases. For example, although tablets can be very useful in the healthcare industry, they also can spread germs if several different doctors and nurses all use the same device. Touchless, gesture-based computing can solve this problem, Prentice says. He predicts that gesture-based computing will "come in at the high end, it'll be taken up by the technology progressives and within five or six years everyone will be doing it."
Bug Repellent for Supercomputers Proves Effective
Lawrence Livermore National Laboratory (11/14/12) Anne M. Stark
Lawrence Livermore National Laboratory (LLNL) researchers have developed the Stack Trace Analysis Tool (STAT), a highly scalable, lightweight tool that has been used to debug a program running more than one million MPI processes on the IBM Blue Gene/Q-based Sequoia supercomputer. The debugging tool is part of a multi-year collaboration between LLNL, the University of Wisconsin-Madison, and the University of New Mexico. The researchers say STAT has helped early access users and system integrators quickly isolate a wide range of errors, including complicated issues that only appeared at extremely large scales. "STAT has been indispensable in this capacity, helping the multi-disciplined integration team keep pace with the aggressive system scale-up schedule," says LLNL's Greg Lee. During testing, STAT was able to identify, out of more than one million MPI processes, one particular rank that was consistently stuck in a system call, according to LLNL's Dong Ahn. "It is critical that our development teams have a comprehensive parallel debugging tool set as they iron out the inevitable issues that come up with running on a new system like Sequoia," says LLNL's Kim Cupps.
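STAT scales because it does not show a million individual stack traces: it gathers one trace per MPI process and merges them into a single prefix tree, so a handful of misbehaving processes stand out as tiny equivalence classes against the mass. A toy sketch of that merging idea, assuming traces are given as lists of call frames (the rank numbers and function names below are hypothetical):

```python
from collections import defaultdict

def merge_traces(traces):
    """Group process ranks by call path, STAT-style: processes with
    identical stacks collapse into one class, so outliers are the
    classes with very few members."""
    groups = defaultdict(list)
    for rank, frames in traces.items():
        groups[tuple(frames)].append(rank)
    return groups

# Hypothetical snapshot: most ranks wait in MPI_Barrier, one is stuck in read.
traces = {r: ["main", "solve", "MPI_Barrier"] for r in range(8)}
traces[5] = ["main", "solve", "read"]

groups = merge_traces(traces)
outliers = min(groups.values(), key=len)  # the stuck rank(s)
```

The real tool merges frame-by-frame into a tree (so common prefixes are shared) and does the reduction hierarchically across the machine, but the payoff is the same: the single stuck rank is immediately visible.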
SGI Uses Big Data to Detect Twitter's 'Heartbeat'
CNet (11/14/12) Rachel King
SGI has embarked on a project to see if it can extract a general consensus on how Twitter users feel on most topics. SGI used its UV 2000 Big Brain data-mining computer and worked with University of Illinois researchers to analyze the entire Twitter feed for sentiments and volume in real time. The team combined geotagged tweets with a Twitter-focused sentiment engine, and was able to create a sophisticated streaming map of the "global heartbeat" within Twitter. The Global Twitter Heartbeat project processes 10 percent of tweets daily as they are posted, then analyzes every tweet to assign location and tone values. The conversations are reformatted into a visualization, or a heat map infographic, which integrates tweet location, intensity, and tone into a unified geospatial perspective. The Global Twitter Heartbeat project has already analyzed tweets related to Hurricane Sandy and the U.S. presidential election.
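The pipeline described — assign each tweet a location and a tone, then aggregate into a heat map — reduces to a geospatial binning step. In this sketch the lexicon scorer and sample tweets are hypothetical stand-ins for SGI's sentiment engine and the live Twitter feed:

```python
from collections import defaultdict

POSITIVE = {"great", "love", "good"}
NEGATIVE = {"awful", "hate", "bad"}

def tone(text):
    """Crude lexicon score standing in for a real sentiment engine."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def heat_map(tweets, cell=1.0):
    """Bin geotagged tweets into lat/lon grid cells, accumulating
    [tweet count, summed tone] per cell: the raw data behind the
    intensity and tone layers of a heat map infographic."""
    grid = defaultdict(lambda: [0, 0])
    for lat, lon, text in tweets:
        key = (int(lat // cell), int(lon // cell))
        grid[key][0] += 1
        grid[key][1] += tone(text)
    return grid

tweets = [  # hypothetical geotagged examples
    (40.7, -74.0, "love the recovery effort"),
    (40.6, -74.2, "power outage is awful"),
    (51.5, -0.1, "great turnout today"),
]
grid = heat_map(tweets)
```

Rendering is then a matter of coloring each cell by average tone (sum divided by count) and sizing it by volume; the hard part at Twitter scale is keeping this aggregation running over the live stream.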
The Once-Hyped Semantic Web Still Bubbles Along
The semantic Web has not reached blockbuster status, but it still has champions in the Apache Software Foundation and the World Wide Web Consortium (W3C). W3C will co-chair an industry track at the 11th Annual International Semantic Web Conference, and it recently announced that 11 semantic Web specifications had moved closer to standardization. Apache's Jena project provides tools and libraries to help developers build semantic Web and linked-data applications, tools, and servers. Apache Jena chairman Andy Seaborne says the standard is just reaching the final stages of the W3C process. The semantic Web "is about enabling search engines to understand the metadata embedded in Web pages, which might describe aspects other than what is marked up for formatting of the Web page," says IDC analyst Al Hilwa. He also notes that "this technology is still maturing and has not gained significant traction yet, though the concept of the semantic Web is increasingly embedded in the tagging present in many Web sites."
Keeneland Project Deploys New GPU Supercomputing System for the National Science Foundation
Georgia Tech News (11/13/12) Joshua Preston
The Keeneland Project, a collaborative effort between Georgia Tech, the University of Tennessee-Knoxville, the National Institute for Computational Sciences, and Oak Ridge National Laboratory, recently completed the installation and acceptance of the Keeneland Full Scale System (KFS). KFS is a supercomputing system designed to meet the compute-intensive needs of a wide range of applications through the use of NVIDIA graphics processing unit (GPU) technology. The researchers note that KFS is the most powerful GPU-based supercomputer available for research through the U.S. National Science Foundation's Extreme Science and Engineering Discovery Environment (XSEDE) program. KFS has 264 nodes, and each node contains two Intel Sandy Bridge processors, three NVIDIA M2090 GPU accelerators, 32 GB of host memory, and a Mellanox InfiniBand FDR interconnection network. During KFS' installation and acceptance testing, the Keeneland Initial Delivery System provided production capacity for XSEDE users who sought to run their applications on the system and had received Keeneland allocations through a peer-review process. "Our Keeneland Initial Delivery system has hosted over 130 projects and 200 users over the past two years," notes the Keeneland Project's principal investigator Jeffrey Vetter.
Abstract News © Copyright 2012 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.