Association for Computing Machinery
Welcome to the December 19, 2014 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.


German Researchers Discover a Flaw That Could Let Anyone Listen to Your Cell Calls
The Washington Post (12/18/14) Craig Timberg

German researchers have discovered security flaws that could enable hackers, spies, and criminals to listen to private phone calls and intercept text messages. The revelation is the latest indication of widespread insecurity in Signaling System 7 (SS7), the global network that cellular carriers use to route calls and texts between networks. The flaws are actually functions built into SS7 for other purposes, which hackers can repurpose for surveillance because of the network's lax security. Although the researchers did not find evidence that their latest discoveries have been marketed to governments on a widespread basis, vulnerabilities publicly reported by security researchers often turn out to be tools long used by secretive intelligence services, such as the U.S. National Security Agency or Britain's GCHQ, but not revealed to the public. The researchers found two distinct ways to eavesdrop on calls using SS7. In the first, commands sent over SS7 could be used to hijack a cellphone's call-forwarding function. In the second, hackers would use radio antennas to collect all the calls and texts passing through the airwaves in an area. The researchers also discovered new ways to track the locations of cellphone users through SS7, and found it was possible to use SS7 to learn the phone numbers of people whose cellular signals are collected by surveillance devices.

Government to Lead Review Into Computer Science Degrees
(12/17/14) Caroline Baldwin

The British government has launched a £6-billion Science and Innovation strategy that includes funding for an independent review of computer science degree accreditation, aimed at improving quality and graduate employability. The review is part of a wider review of science, technology, engineering, and math (STEM) degree accreditation. "We set out here a range of measures which will develop and support the brightest minds through the pipeline from primary and secondary school, further and vocational education, undergraduate and postgraduate study, and training into the workplace," the government says. Industry has responded to the initiative, especially members of the Tech Partnership, a network of employers collaborating to accelerate the growth of the digital economy. Meanwhile, the Digital Youth Council believes teachers need more training and resources to equip students with important digital skills. The council's research found that students aged nine to 17 said teachers recognized the importance of making STEM education part of their lessons, but unreliable equipment and a lack of training were deterring many from adopting new approaches.

An Innovative Algorithm Automatically Finds the Quickest Way to Calamity-Affected Sites Using Open Source Map Data
A*STAR Research (12/17/14)

Researchers at the A*STAR Institute of High Performance Computing have developed a computer model that analyzes networks of interconnected roads to predict the fastest routes for rescuers to take using real-time data uploaded by aid workers on the ground. The model aims to quantify relationships in complex networks by graphing the connections between individual objects. The researchers say a mathematical analysis of these parameters can reveal important characteristics, such as the size and strength of connections between specific nodes. The researchers developed a procedure that automatically transforms street maps into a network of nodes, representing road intersections, and edges, representing road segments. They also developed algorithms that calculate the minimum time needed and best route for rescuers to take between any two nodes. "This work can be crucial in formulating contingency plans for disaster relief operations," says A*STAR researcher Christopher Monterola. "It shows that a network-science-based tool, driven by actual data, can guide logistics planning in areas hit by calamities."
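The summary does not publish the model itself, but the node-and-edge routing it describes maps onto classic shortest-path search. A minimal sketch in Python (the road network, node names, and travel times here are all illustrative, and Dijkstra's algorithm stands in for whatever the A*STAR team actually implemented):

```python
import heapq

def fastest_route(graph, start, goal):
    """Dijkstra's algorithm over a road network.

    graph: dict mapping node -> list of (neighbor, travel_time_minutes).
    Returns (total_time, [node, ...]) for the quickest path, or (inf, []).
    """
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            # Walk predecessors back to the start to recover the route.
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        for nbr, t in graph.get(node, []):
            nd = d + t
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    return float("inf"), []

# Toy network: intersections A-D; edge weights are travel times in minutes.
roads = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("B", 1), ("D", 8)],
    "D": [],
}
time_taken, route = fastest_route(roads, "A", "D")
# A -> C -> B -> D in 8 minutes
```

Replacing the static travel times with times updated from aid workers' reports would let the same search re-route rescuers as conditions on the ground change.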

Teaching Robots to See
Victoria University of Wellington (New Zealand) (12/15/14)

A computer-vision algorithm developed by a Ph.D. student at the Victoria University of Wellington could enable robots to view static images more like humans do. Some object-detection algorithms focus on patterns, textures, or colors, while others target the outline of a shape. Syed Saud Naqvi's imaging technology instead selects the best algorithm to apply to each image by extracting the information most relevant to making a decision. "The defining feature of an object is not always the same--sometimes it's the shape that defines it, sometimes it's the textures or color," notes Naqvi. He presented his work this year at the Genetic and Evolutionary Computation Conference in Vancouver, where it won a Best Paper Award. Naqvi plans to participate in a project that will have a real-world robot use his algorithm for object-detection tasks. Victoria senior lecturer Will Browne, one of Naqvi's supervisors, says the technology could be used on the Web to automatically caption photos with location and content information, and in the future could help identify cancer cells in a mammogram.

AI Recognizes Cats the Same Way Physicists Calculate the Cosmos
Wired News (12/15/14) Natalie Wolchover

Boston University researcher Pankaj Mehta and Northwestern University researcher David Schwab have demonstrated that a statistical technique called "renormalization," which enables physicists to accurately describe systems without knowing the exact state of all of their component parts, also enables artificial neural networks to categorize data in a video. "Extracting relevant features in the context of statistical physics and extracting relevant features in the context of deep learning are not just similar words, they are one and the same," says Emory University biophysicist Ilya Nemenman. In addition, strong similarities between deep learning and biological learning suggest the brain may also employ a form of renormalization to understand the world. Renormalization is "taking a really complicated system and distilling it down to the fundamental parts, and that's what deep neural networks are trying to do as well. And what brains are trying to do," Schwab says. The researchers' breakthrough came by focusing on a procedure called variational or "block-spin" renormalization, which involves grouping components of a system into larger and larger blocks, each an average of the components within it.

Stanford Team Combines Logic, Memory to Build a 'High-Rise' Chip
Stanford University (12/15/14) Tom Abate

Stanford University researchers say they have developed a method for creating high-rise chips that could outperform conventional single-story logic and memory chips. Their approach involves building layers of logic atop layers of memory to create a tightly interconnected high-rise chip. Many thousands of nanoscale electronic elevators would move data between the layers much faster, using less electricity, than the wires connecting existing single-story logic and memory chips. The researchers' innovation leverages new technology for creating transistors, a new type of computer memory that lends itself to multi-story fabrication, and a technique to build these logic and memory technologies into high-rise structures in a new way. "With further development, this architecture could lead to computing performance that is much, much greater than anything available today," says Stanford professor Subhasish Mitra. The researchers also demonstrated how to put logic and memory together into three-dimensional structures that can be mass produced. "With this new architecture, electronics manufacturers could put the power of a supercomputer in your hand," says Stanford professor Philip Wong.

Google Wants You to Help Design the Internet of Things
IDG News Service (12/12/14) Joab Jackson

Google plans to fund research that will help support the development of the Internet of Things (IoT), with the projects to be carried out over the course of a year. One set of awards will be for larger team projects that would be led by an academic or a graduate student "willing to dedicate a substantial portion of their research time to this expedition," according to the company's request for proposals. Google will provide grants of $500,000 to $800,000 for the projects. A smaller set of awards, ranging from $50,000 to $150,000, will be available for "new and unorthodox solutions" in user interface and application development, privacy and security, and systems and protocols research. The deadline for submitting proposals for new IoT technologies is Jan. 21, 2015, and Google will announce the winners in the following months. IoT is an area that Google continues to research, with the company looking to add IoT capabilities to several of its products. "While the [IoT] conjures a vision of 'anytime, any place' connectivity for all things, the realization is complex given the need to work across interconnected and heterogeneous systems, and the special considerations needed for security, privacy, and safety," says Google chief Internet evangelist Vint Cerf.

Colleges, Labs Develop STEM Core Curriculum
Lawrence Livermore National Laboratory (12/12/14) Kenneth K. Ma

The Lawrence Livermore National Laboratory (LLNL) is developing a core curriculum to prepare junior college students for technical jobs at California’s national labs. The initiative is being undertaken by a consortium of community colleges, national labs, and nonprofit educational institutes, and will emphasize science, technology, engineering, and mathematics (STEM) courses to prepare women, minorities, veterans, and other underserved populations for tech jobs. The consortium recently met at the Livermore Valley Open Campus to structure a common STEM educational standard for use by colleges as well as to form internships and other employment pipelines for LLNL and other local labs, including Lawrence Berkeley National Laboratory, the National Aeronautics and Space Administration's (NASA) Ames Research Center, and NASA’s Jet Propulsion Laboratory. "The national labs need a pipeline of talent to fill hundreds of technical positions in the next five years," says LLNL's Beth McCormick. John Colborn, director of the Aspen Institute's Skills for America's Future, who facilitated the STEM consortium's meeting, says, "We want to know that employers who are hiring those people are finding them as good or better than the folks they are able to get through ordinary means. And that these pools of candidates are more diverse and able to contribute to the diverse workforce that the labs are looking for."

Disney Research Builds Computer Models to Analyze Play in Pro Basketball and Soccer
EurekAlert (12/15/14) Jennifer Liu

Disney researchers used player-tracking data from more than 600 basketball games from the 2012-13 National Basketball Association (NBA) season to develop models that can make accurate in-game predictions of what each player is likely to do next in a game situation. In a separate study, Disney researchers collected more than 400 million data points from a professional soccer league to examine team behavior rather than individual players. The researchers showed they could accurately detect and visualize team formations well enough to identify teams based just on their style of play 70 percent of the time. Disney Research associate research scientist Patrick Lucey says these automated, data-driven methods can serve as tools for educating players in the limited time available for practice, or as tools for scouting opposition teams and planning for specific game situations. In the NBA study, the researchers used a machine-learning approach in which the models were trained based on the tendencies of each player to take shots or pass or receive passes in certain locations. In the soccer study, the researchers developed a "role-based" representation of teams that does not track individuals but instead automatically identifies the players in each position and how they play that position.
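The "role-based" representation reduces to an assignment problem: match each tracked player to the formation role whose average position he best fits. A hedged Python sketch (the role centers, player positions, and brute-force matcher are all illustrative assumptions; at full scale this would typically use the Hungarian algorithm):

```python
from itertools import permutations

def assign_roles(role_means, positions):
    """Assign each player to one formation role by brute force,
    minimizing total squared distance between player positions and
    role centers. Feasible only for small role counts."""
    def cost(perm):
        return sum(
            (px - rx) ** 2 + (py - ry) ** 2
            for (px, py), (rx, ry) in zip(positions,
                                          (role_means[i] for i in perm))
        )
    best = min(permutations(range(len(role_means))), key=cost)
    return list(best)  # best[k] = role index assigned to player k

# Hypothetical 4-role back line and four tracked player positions.
roles = [(10, 10), (10, 30), (10, 50), (10, 70)]
players = [(12, 68), (9, 12), (11, 29), (8, 52)]
print(assign_roles(roles, players))  # [3, 0, 1, 2]
```

Tracking roles rather than individual identities is what lets the formation itself be compared across games and teams.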

Want to Influence the World? Map Reveals the Best Languages to Speak
ScienceMag (12/15/14) Michael Erard

A new technique for mapping the flow of information across the world identifies the best languages for spreading ideas widely. The method grew out of a Massachusetts Institute of Technology (MIT) student's master's thesis on generating global maps of how multilingual people transmit information and ideas. A multi-university team characterized three global language networks based on bilingual tweeters, book translations, and multilingual Wikipedia edits; the book-translation network, for example, plots how many books are translated from each language into others. Although English has the most transmissions to and from other languages in all three networks and forms the most central hub, the maps also show "a halo of intermediate hubs" such as French, German, and Russian. Meanwhile, certain languages with large speaker populations, such as Hindi and Mandarin, are relatively isolated in these networks, implying that fewer communications in those tongues reach speakers of other languages. The researchers note the users they studied, described as elite by virtue of being literate and online, are not representative of all speakers of a language. However, MIT's Cesar Hidalgo notes "the elites of global languages have a disproportionate amount of power and responsibility, because they are tacitly shaping the way in which distant cultures see each other."
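The hub structure described above can be recovered from raw pair counts with a simple weighted-degree calculation over the language graph. A sketch in Python (the translation counts below are invented for illustration and are not the study's data):

```python
from collections import defaultdict

# Hypothetical book-translation counts between language pairs,
# treated as undirected edge weights in a language network.
translations = [
    ("English", "French", 120),
    ("English", "German", 100),
    ("English", "Russian", 80),
    ("French", "German", 40),
    ("Hindi", "English", 5),
    ("Mandarin", "English", 7),
]

# A language's "strength" is the total translation traffic it touches.
strength = defaultdict(int)
for a, b, w in translations:
    strength[a] += w
    strength[b] += w

# Rank languages by how central they are to the translation flow.
for lang, s in sorted(strength.items(), key=lambda kv: -kv[1]):
    print(lang, s)
```

Even in this toy version, English dominates while Hindi and Mandarin sit at the periphery despite their huge speaker populations, mirroring the study's finding.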

NASA Tests Software That May Help Increase Flight Efficiency, Decrease Aircraft Noise
NASA News (12/15/14) J.D. Harrington; Kathy Barnstorff

The U.S. National Aeronautics and Space Administration's (NASA) Airspace Systems Program has developed air traffic management software that could make flights more efficient. Airborne Spacing for Terminal Arrival Routes (ASTAR) assists pilots with precise spacing of planes by delivering specific speed information and guidance. Pilots will be able to take a "follow the leader" approach to their destination airport, which would help minimize deviations in their flight path and limit commercial flight delays. NASA partnered with Boeing to conduct flight tests of the software on the company's ecoDemonstrator 787 Test Airplane. "NASA has tested ASTAR in laboratory simulations, but this flight test on board the ecoDemonstrator 787 gave us the chance to see how well it works in a real-life flight environment," says NASA's Will Johnson. NASA will use the findings of the test flight to improve the software and then develop flight hardware with plans to test it further and eventually certify it for use. "ASTAR represents the first of several inventive technologies NASA's aeronautical innovators are working on that will be tested with the help of the ecoDemonstrator test airplanes," says NASA's Jaiwon Shin.
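The summary does not describe ASTAR's actual control law, but the idea of a spacing-based speed advisory can be sketched with toy kinematics (the closing-rate rule and all numbers below are illustrative assumptions, not NASA's algorithm):

```python
def spacing_speed_kt(own_speed_kt, gap_now_s, gap_target_s, horizon_s):
    """Toy sketch of an interval-management speed advisory: pick a
    ground speed that closes the spacing-time error over a horizon,
    assuming the lead aircraft holds roughly our current speed."""
    error_s = gap_now_s - gap_target_s            # positive: too far behind
    # Extra distance (nautical miles) to make up relative to the lead.
    extra_nm = own_speed_kt * error_s / 3600.0
    # Spread that distance evenly over the horizon (in hours).
    return own_speed_kt + extra_nm / (horizon_s / 3600.0)

# 100 s behind the target spacing, to be closed over 10 minutes at 240 kt:
print(round(spacing_speed_kt(240, 190, 90, 600)))  # 280
```

A real system layers wind forecasts, speed limits, and trajectory prediction on top of this kind of arithmetic, but the advisory a pilot sees is ultimately a single target speed like the one computed here.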

Pioneering Ulster University Research Transforms Stroke Patient Care
University of Ulster (12/15/14)

University of Ulster researchers are using data analysis to help improve patient care for people who have suffered a stroke. A team of computer scientists has examined data on 10,000 patients from facilities in the U.K. in an attempt to assess care strategies, where and how patients should be treated, and the best technologies to improve recovery. "By using unique data-modeling tools, we identified efficiencies and opportunities to improve care and quality of life," says Sally McClean, a professor at Ulster's Computer Science Research Institute. For example, the research found that increasing the use of clot-busting drugs helps lower long-term costs, boosts patient quality of life, and improves recovery. The data analysis also found that using larger facilities to centralize care in specialist centers is the best way to boost survival rates. "This research is already informing stroke health care practice across the U.K. and Europe," McClean says. "Our Computer Science Research Institute researchers are now sharing the research with other health care communities globally to help deliver more cost savings and optimize patient outcomes."

Scientists Put Worm's 'Brain' Into Lego Robot's Body
New York Daily News (12/15/14) Lee Moran

Researchers from the OpenWorm project have encoded a worm's brain in software and placed it in a robot made of Legos, enabling the device to move much as a worm does. The researchers first mapped the neural connections of the roundworm Caenorhabditis elegans, translated that wiring into software, and downloaded the code to a Lego robot. "If we cannot build a computer model of a worm, the most studied organism in all of biology, we don't stand a chance to understand something as complex as the human brain," the researchers say. The work eventually could lead to more effective treatments or possible cures for diseases such as Parkinson's and Alzheimer's. The OpenWorm project chose the roundworm because of its simple structure of about 1,000 cells, of which just 302 are neurons. The roundworm also has only about 7,000 synaptic connections, and its transparent skin enables scientists to track each individual cell by injecting a dye. OpenWorm is a global, collaborative, open source effort that includes researchers from the U.S., U.K., and other nations. The project hopes to completely replicate the entire worm, not only its brain.
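The newsletter does not include OpenWorm's code, but the core idea of driving behavior from a synapse map can be sketched in a few lines. In this toy Python model (the wiring, weights, and threshold rule are invented; OpenWorm uses far richer neuron dynamics), activity propagates along weighted synapses each tick:

```python
def step(activity, synapses, threshold=1.0):
    """One update of a toy connectome: a neuron fires on the next
    tick if the summed weight of its currently firing presynaptic
    neurons reaches the threshold."""
    incoming = {}
    for pre, post, w in synapses:
        if activity.get(pre):
            incoming[post] = incoming.get(post, 0.0) + w
    neurons = {s[1] for s in synapses} | set(activity)
    return {n: incoming.get(n, 0.0) >= threshold for n in neurons}

# Hypothetical mini-circuit: a touch sensor drives an interneuron,
# which drives a motor neuron, loosely like a touch-withdrawal reflex.
wiring = [("sensor", "inter", 1.5), ("inter", "motor", 1.2)]
state = {"sensor": True, "inter": False, "motor": False}
state = step(state, wiring)  # interneuron fires
state = step(state, wiring)  # motor neuron fires
```

Scaling this idea from three invented neurons to the worm's full 302-neuron, 7,000-synapse wiring diagram, with realistic cell models, is essentially the OpenWorm project's goal.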

Abstract News © Copyright 2014 INFORMATION, INC.
Powered by Information, Inc.

To submit feedback about ACM TechNews, contact:
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe