Association for Computing Machinery
Welcome to the July 8, 2013 edition of ACM TechNews, providing timely information for IT professionals three times a week.

ACM TechNews mobile apps are available for Android phones and tablets, as well as for iPhones and iPads.

HEADLINES AT A GLANCE


Technology Workers Are Young (Really Young)
New York Times (07/05/13) Quentin Hardy

Technology industry employees are significantly younger, on average, than the U.S. workforce overall, according to PayScale, which recently studied the median age, gender makeup, and years of experience of workers at many of the most successful technology companies. Just six of the 32 companies in the study had a median age greater than 35, and eight had a median employee age of 30 or younger; the overall median age of U.S. workers is 42.3 years, according to the U.S. Bureau of Labor Statistics. The technology companies with the youngest workers are Epic Games, Facebook, Zynga, Google, AOL, Blizzard Entertainment, Infosys, and Monster.com. The technology companies with the oldest workers include Hewlett-Packard, IBM, Oracle, Nokia, Dell, and Sony. "The firms that are growing or innovating around new areas tend to have younger workers," says PayScale's Katie Bardaro. She notes that a key factor is the type of skills an employee offers. "Baby Boomers and Gen Xers tend to know C# and SQL," Bardaro says. "Gen Y knows Python, social media, and Hadoop."


CSIR to Launch Its Fastest Supercomputer in Bangalore
SiliconIndia (07/07/13)

The Council of Scientific and Industrial Research's (CSIR) Fourth Paradigm Institute (4-PI) is planning to launch its fastest supercomputer at India's Center for Mathematical Modelling and Computer Simulation (CMMACS), which focuses on data-intensive scientific discovery. The new supercomputer will have a speed of 360 teraflops, making it the fourth-fastest system in India. CSIR has identified five domain areas for the new supercomputer: earth system sciences, medical informatics, biomedical informatics, chemical sciences, and physical sciences. The supercomputer will form the backbone of the new program and will connect with CSIR laboratories in Delhi, Hyderabad, Pune, Srinagar, Chennai, Chandigarh, and Nagpur. Each of these laboratories will have a computing facility with a capacity of 10 to 50 teraflops, linked to the supercomputer through the National Knowledge Network, which will connect 200 scientists and more than 1,000 students working in the identified domain areas.


Computer Programs Improve Fingerprint Grading
Penn State Live (07/03/13) Hannah Y. Cheng

Pennsylvania State University (PSU) researchers have developed a process that uses three inexpensive computer programs to grade a fingerprint on the availability of ridge detail for subsequent identification. The researchers say computerized grading ensures standardized evaluation to a degree finer than any human can accomplish. The three programs are the U.S. Federal Bureau of Investigation's Universal Latent Workstation, the open source image editor GIMP, and a custom program written in Mathematica to count pixels. The Universal Latent Workstation creates a simplified map of the fingerprint by assigning colors to four area types. The GIMP editor then converts the map file into an image file with red-green-blue (RGB) color values. Finally, the pixel-counting program calculates the percentage of white pixels in the imported RGB picture, yielding a grade on a scale from 0 to 100. The researchers say the ease and relative speed of their grading system could help to standardize fingerprint quality assessment in an inexpensive, efficient manner. "The next step of this kind of research is, is there false detail created by development techniques?" says PSU professor Akhlesh Lakhtakia. "Looking at the thin-film technique that my group has developed, I don't imagine so, but we would obviously have to prove it."
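
The final pixel-counting step lends itself to a very short program. Below is a minimal Python sketch of that step (the researchers used a custom Mathematica program); the input file name and the strict pure-white threshold are illustrative assumptions, not the published procedure.

# Minimal sketch of the pixel-counting step: the grade is the percentage
# of pure-white pixels in the RGB image produced by the GIMP conversion.
# "latent_map.png" is a hypothetical file name used for illustration.
import numpy as np
from PIL import Image

def grade_fingerprint(png_path, white=(255, 255, 255)):
    """Return the percentage of pure-white pixels in an RGB image (0-100)."""
    rgb = np.asarray(Image.open(png_path).convert("RGB"))
    is_white = np.all(rgb == np.array(white), axis=-1)
    return 100.0 * is_white.mean()

if __name__ == "__main__":
    print("grade:", round(grade_fingerprint("latent_map.png"), 1))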


Protecting Data in the Cloud
MIT News (07/02/13) Larry Hardesty

Massachusetts Institute of Technology (MIT) researchers have developed Ascend, a secure hardware component that disguises a server's memory-access patterns, making it impossible for an attacker to infer anything about the data being stored. Ascend also stops timing attacks, which attempt to infer information from the amount of time computations take. "This is the first time that any hardware design has been proposed--it hasn't been built yet--that would give you this level of security while only having about a factor of three or four overhead in performance," says MIT professor Srini Devadas. The system arranges memory addresses in a data structure known as a tree. Every node in the tree lies along some path that starts at the top and passes from node to node, without backtracking, until arriving at a node with no further connections. Whenever an address is accessed, Ascend follows the path to it and then randomly swaps that address with one stored somewhere else in the tree, preventing attackers from inferring anything from sequences of memory accesses. As a result, accessing a single address multiple times will very rarely require traversing the same path.
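
To make the path-and-remap idea concrete, the toy Python sketch below illustrates it in software, in the spirit of oblivious-RAM schemes such as Path ORAM; it is only a conceptual illustration, not MIT's Ascend hardware design, and the tree depth, bucket handling, and omitted eviction step are simplifying assumptions.

# Toy sketch: every access walks one root-to-leaf path, then the block is
# remapped to a fresh random leaf, so repeat accesses follow different paths.
import random

DEPTH = 4                    # a binary tree with 2**4 = 16 leaves
LEAVES = 2 ** DEPTH
position = {}                # block id -> leaf index (kept by the client)
tree = {}                    # node id -> set of block ids stored in that bucket

def path_nodes(leaf):
    """Node ids on the path from the root (id 1) down to the given leaf."""
    node = leaf + LEAVES     # leaves occupy ids LEAVES .. 2*LEAVES - 1
    nodes = []
    while node >= 1:
        nodes.append(node)
        node //= 2
    return nodes

def access(block_id):
    """Touch every bucket on the block's path, then remap it to a random leaf."""
    leaf = position.setdefault(block_id, random.randrange(LEAVES))
    for node in path_nodes(leaf):               # walk the whole root-to-leaf path
        tree.setdefault(node, set()).discard(block_id)
    new_leaf = random.randrange(LEAVES)         # fresh random leaf for next time
    position[block_id] = new_leaf
    tree.setdefault(1, set()).add(block_id)     # park it at the root (eviction omitted)
    return new_leaf

# Repeated accesses to the same block follow different paths almost every time.
print([access("block-7") for _ in range(5)])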


EU and Japan Team Up for 100 Gbps Internet Connections
V3.co.uk (07/03/13) Dan Worth

A partnership between the European Union and Japan could lead to the creation of more efficient networks that can handle vast amounts of data. Researchers will undertake projects in areas that include cybersecurity, network capacity, storage, high-density data traffic, and energy efficiency. The Strauss project will focus on developing network technologies that offer speeds of 100 Gbps, about 5,000 times faster than the average European broadband speed of roughly 20 Mbps. Currently, 1.7 million billion bytes of data are sent per minute, and traffic volumes are expected to grow 12-fold by 2018, according to the European Commission. Researchers will design an advanced optical Ethernet transport architecture that leverages software-defined networking principles, optical network virtualization, and flexible optical circuit- and packet-switching technologies beyond 100 Gbps. "Our future Internet should know no barriers, least of all barriers created because we did not prepare for the data revolution," says the EC's Neelie Kroes.


A Retrospective Report of the Simons Institute Visions on the Theory of Computing Symposium
Computing Community Consortium (06/28/13) Ann Drobnis

University of California, Berkeley professor Christos Papadimitriou co-organized the Simons Institute's Visions on the Theory of Computing symposium, which took place in May with sponsorship from the Computing Community Consortium. Papadimitriou recently provided a retrospective overview of the presentations, including "What Should a Computational Theory of Cortex Explain?," presented by Harvard University's Leslie Valiant. Valiant has developed a theory of the brain that defines a hierarchy of increasingly complex tasks, progressing from communication to computation, learning, and evolution. He then described a research project for developing the theory, which would involve modeling the brain and providing concrete algorithms, based on random graph theory, for basic, important, and quantitatively challenging tasks. In another presentation, Coursera co-founder Daphne Koller discussed the future of online learning. Koller believes universities will stop focusing on providing academic content, which will be easily accessible online, and instead emphasize tasks that are difficult to teach online, such as discussing material, teaching problem-solving, and writing proofs and algorithms. Other presentation topics included market design and computer-assisted markets, evolution and computation, programming nanoscale structures using DNA-based information, and creating intelligent machines by modeling the brain.


App to Trace the Impact of Texts Over Time
CORDIS News (07/02/13)

The European Research Council and the European Union are funding an initiative that is developing a tool to track the impact of digital texts over time. The Web application would enable users to measure and visualize the impact of research and the 'reach' of messages in forms such as blogs, news, and commentary. The app from the IMPACTTRACER project would analyze texts available in the digital world, from ancient scriptures to the latest news and press releases. For example, the software could be used to track how much influence a particular press release has on subsequent blog posts or social media. The tool also could provide ad agencies, non-governmental organizations, and political parties with a cost-effective and efficient means of monitoring their public pronouncements and corporate messages, and analyzing possible trends over time. Professor Christoph Meyer from King's College London is leading the project. "We are very excited about this opportunity to develop a tool with a wide range of applications in academic and commercial research," Meyer says. "We hope to improve techniques used to make large data sets more accessible and relevant to social science studies."


Humanoid Robot That Sees and Maps
University of Bristol News (07/01/13)

University of Bristol researchers have developed computer-vision algorithms that enable Roboray, a humanoid robot created by Samsung, to build real-time three-dimensional (3D) visual maps and move more efficiently. Roboray uses cameras to build a map of its surroundings and locate itself within that map, allowing the robot to remember where it has been. The robot also uses dynamic walking, meaning it is effectively falling at every step and using gravity to carry it forward while conserving energy. However, this way of walking is more challenging for the computer-vision algorithms because objects in the camera images move more quickly. "Robots that close the gap with human behaviors, such as by featuring dynamic walking, will not only allow more energy efficiency but be better accepted by people as they move in a more natural manner," says Bristol researcher Walterio Mayol-Cuevas. He says the rapid 3D visual-mapping technology is a major breakthrough because of its ability to track and recover from rapid motions and occlusions, which is essential when the humanoid moves and turns at normal walking speeds.


Y U No Go Viral: The Emerging Science of Memes
The Atlantic (06/28/13) Christopher Mims

Scientists are beginning to study how memes are created, why some fail, and how certain memes go viral. In 1976, British evolutionary biologist Richard Dawkins first defined the term 'meme' as any idea that excels at spreading itself from one mind to another, comparing memes to genes, which are capable of reproducing, mutating, and spreading through a population. The Internet has significantly accelerated the growth of memes, especially those built around a phrase or visual image. Harvard University's Michele Coscia has built a decision tree from which the likelihood of an Internet meme going viral can be read at any point. To create the decision tree, Coscia culled 178,801 variants of 499 memes from the meme clearinghouse Quickmeme. The number at the top of the tree represents the total percentage of memes deemed successful, based on their score on Memebase, which allows users to vote memes up or down. Coscia found that memes attaining an above-average peak of popularity at a particular stage were less likely to eventually break the success threshold than memes that were shared more evenly over time. In addition, memes whose popularity increases as another meme's popularity decreases are more likely to succeed in the long term, Coscia found.
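
As a rough illustration of the kind of analysis involved, the short Python sketch below fits a small decision tree to synthetic meme features; the feature names, data, and thresholds are invented for illustration and are not Coscia's actual dataset or model.

# Illustrative only: a tiny decision tree over made-up meme features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 1000
peak_share = rng.uniform(0, 1, n)                       # fraction of shares in the peak week
evenness = 1.0 - peak_share + rng.normal(0, 0.05, n)    # how evenly shares spread over time
X = np.column_stack([peak_share, evenness])
# Synthetic "went viral" label that loosely mirrors the reported finding:
# memes shared more evenly over time are likelier to cross the success threshold.
y = (evenness + rng.normal(0, 0.1, n) > 0.6).astype(int)

model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(model, feature_names=["peak_share", "evenness"]))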


Crisis Cleanup Mapping Tool Coordinates Rebuilding Efforts
Government Technology (06/28/13) Hilton Collins

Violent weather last summer led Mormon Helping Hands disaster coordinator Aaron Titus to create Crisis Cleanup, a free, open source mapping tool that enables disaster relief organizations to coordinate cleanup and rebuilding efforts after catastrophes. Titus says the need to dispatch hundreds of volunteers to hundreds of locations called for a program to coordinate the effort. "I realized, if you try to do it as a single individual, you're never going to be able to," he says. Titus developed the first version of the program, which has undergone several modifications since. The data collected by Crisis Cleanup includes the resident's address and the type of incident, and that information generates icons on a dynamic map displayed alongside the assessment form. Each responding organization can update the status of a work order after handling it, so the next organization can pick up where the previous one left off, a feature that lets groups with different specialties handle different relief needs at the same disaster site without overlap. The Crisis Cleanup process allows for a more seamless and collaborative environment for disaster relief organizations, says developer Andy Gimma. The program has connected more than 30,000 volunteers from 100 organizations with about 8,000 families since the summer of 2012.
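
A minimal sketch of that shared work-order idea follows, assuming a simple record with an address, an incident type, and a status that organizations update in turn; the field names, statuses, and organization names are illustrative, not Crisis Cleanup's actual schema.

# Minimal sketch of a shared work order that responding organizations update in turn.
from dataclasses import dataclass, field

@dataclass
class WorkOrder:
    address: str
    incident_type: str              # e.g. "flooded basement", "downed tree"
    status: str = "open"
    history: list = field(default_factory=list)

    def update_status(self, organization: str, new_status: str) -> None:
        """Record which organization handled the order and its new status."""
        self.history.append((organization, self.status, new_status))
        self.status = new_status

order = WorkOrder("123 Main St", "flood damage")        # hypothetical entry
order.update_status("Debris Removal Crew", "muck-out complete")
order.update_status("Rebuild Team", "rebuild in progress")
print(order.status, order.history)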


UComp Research Project Delivers First Results Under Open Source License
MODUL University Vienna (Austria) (06/28/13)

MODUL University Vienna researchers have developed an open source tool that supports text acquisition, language recognition, the detection of phonetic similarities, and the standardized integration and archiving of captured information. The researchers say the project combines state-of-the-art methods to automatically capture information from complex sources with collective human intelligence. "The technologies from the uComp project provide us with better ways to capture opinions--on a global basis, irrespective of language barriers, national borders, and cultural differences," says MODUL professor Arno Scharl. He notes that a key aspect of uComp is the combination of collective human intelligence and automated knowledge extraction by software tools. Scharl says the first step to achieving this goal has been taken successfully with the extensible Web Retrieval Toolkit (eWRT), which promotes a transparent approach to analyzing data from social media platforms. "The uComp project will advance the state of the art by offering all these capabilities in an integrated, reusable framework," he says. Scharl notes the next phase of the project will focus on using collective human intelligence to analyze and validate data gathered by eWRT.


Machine-Learning Project Sifts Through Big Security Data
Dark Reading (06/28/13) Robert Lemos

Information-security consultant Alexandre Pinto has led the development of a machine-learning system designed to help companies monitor their networks for threats as data volumes grow. The system is designed to take logs and identify traffic that originates from suspicious areas of the Internet. The MLSec project uses supervised learning algorithms to identify networks that are home to malicious actors, and it was accurate in 92 percent to 95 percent of cases in tests. The system creates its own representation of the Internet, does not need to be told which networks are malicious, and can be constructed to adapt faster than a human to attacker tactics. The system will be made available to organizations, which will be able to upload their firewall logs and receive a report summarizing the findings. Pinto says the MLSec project's approach should boost data security. "In machine learning, of course, the algorithm is important, but the more data that you throw at it, the better," he says. "The more you are attacked, the better your defenses should get."
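
The general approach can be sketched in a few lines. The Python example below trains a supervised classifier on synthetic per-network features of the sort that might be derived from firewall logs; the features, data, and model are invented stand-ins, not the MLSec project's actual pipeline.

# Illustrative stand-in: flag source networks as suspicious from log-derived features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([
    rng.beta(2, 5, n),       # fraction of connections the firewall blocked
    rng.poisson(20, n),      # distinct destination ports probed
    rng.uniform(0, 1, n),    # reputation score of the surrounding address space
])
# Synthetic "malicious network" label for demonstration purposes only.
y = ((X[:, 0] > 0.4) & (X[:, 2] < 0.3)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))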


Abstract News © Copyright 2013 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe