Association for Computing Machinery
Welcome to the June 25, 2014 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, iPhones, and iPads.

HEADLINES AT A GLANCE


Microsoft Makes Bet Quantum Computing Is Next Breakthrough
The New York Times (06/23/14) John Markoff

Microsoft is exploring a new approach to quantum computing based on braiding particles called anyons to form the building blocks of a supercomputer. Anyons are described as quasiparticles that exist in only two dimensions rather than three. Microsoft's "topological quantum computing" involves precisely controlling the motions of pairs of subatomic particles as they braid themselves around one another to control entangled quantum bits. This subatomic particle braiding process would in theory weave together a powerful computing system, with the mathematics of particle motions capable of correcting the most elusive errors facing quantum computer designers. However, scientists believe this topological approach is risky, because the type of anyon particle needed to create qubits has not been conclusively proven to exist. To address this, Microsoft is investing in academic research groups exploring a long-hypothesized class of subatomic particles known as Majorana fermions. If the existence of the Majorana can be proven, the particles could likely be used to generate qubits for topological quantum computing. The most compelling evidence the particles exist was presented in 2012 through Microsoft-backed research at the Delft University of Technology in the Netherlands. While acknowledging they have not yet produced a working prototype of the basic element of their system, Microsoft researchers are considering how to build a prototype if efforts to create the qubits succeed.


Google Glass Snoopers Can Steal Your Passcode With a Glance
Wired News (06/24/14) Andy Greenberg

University of Massachusetts (UMass) Lowell researchers have developed software that uses video from wearable devices such as Google Glass and smartwatches to read four-digit PIN codes typed onto an iPad from almost 10 feet away, and from almost 150 feet with a high-definition camcorder. The software involves a custom-coded video-recognition algorithm that tracks the shadows from finger taps and could recognize the codes even when the video did not capture any images on the target devices' displays. "I think of this as a kind of alert about Google Glass, smartwatches, all these devices," says UMass Lowell professor Xinwen Fu. "If someone can take a video of you typing on the screen, you lose everything." The researchers found that Google Glass identified the four-digit PIN from three meters away with 83 percent accuracy, while webcam video revealed the code 92 percent of the time. The software also can identify passcodes even when the screen is unreadable, based on the iPad's geometry and the position of the user's fingers. The software maps an image of the angled iPad onto a "reference" image of the device, then looks for the abrupt down and up movements of the dark crescents that represent the fingers' shadows. Fu plans to present the findings with his students at the Black Hat security conference in August.
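
The following is a minimal sketch of the general idea described above, not the UMass Lowell code: warp the angled tablet onto a flat "reference" image with a homography, then flag frames in which the region around a key darkens abruptly, suggesting a fingertip shadow. The reference size, key layout, and threshold are hypothetical placeholders.

```python
import cv2
import numpy as np

REF_W, REF_H = 400, 600  # assumed size of the reference iPad image
# Hypothetical centers of the ten PIN-pad keys in reference coordinates.
KEY_CENTERS = {str(d): (100 + (d - 1) % 3 * 100, 250 + (d - 1) // 3 * 100)
               for d in range(1, 10)}
KEY_CENTERS["0"] = (200, 550)

def rectify(frame, corners):
    """Warp the observed (angled) tablet onto the flat reference image."""
    dst = np.float32([[0, 0], [REF_W, 0], [REF_W, REF_H], [0, REF_H]])
    H = cv2.getPerspectiveTransform(np.float32(corners), dst)
    return cv2.warpPerspective(frame, H, (REF_W, REF_H))

def darkness(gray, center, r=25):
    """Mean darkness in a small window around a key center."""
    x, y = center
    patch = gray[max(y - r, 0):y + r, max(x - r, 0):x + r]
    return 255 - patch.mean()

def detect_taps(frames, corners, jump=20.0):
    """Report keys whose region darkens sharply between consecutive frames."""
    taps, prev = [], None
    for frame in frames:
        gray = cv2.cvtColor(rectify(frame, corners), cv2.COLOR_BGR2GRAY)
        cur = {k: darkness(gray, c) for k, c in KEY_CENTERS.items()}
        if prev is not None:
            delta, key = max((cur[k] - prev[k], k) for k in cur)
            if delta > jump:  # abrupt darkening = likely fingertip shadow
                taps.append(key)
        prev = cur
    return taps
```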


Google Will Finance Carnegie Mellon's MOOC Research
The Chronicle of Higher Education (06/24/14) Avi Wolfman-Arent

Google is donating $300,000 in each of the next two years to Carnegie Mellon University (CMU) to study massive open online courses (MOOCs). CMU researchers will use the grant, which is part of the Google Focused Research Award program, to focus on data-driven approaches to research on MOOCs, including techniques for automatically analyzing and providing feedback on student work, with the goal of developing platforms intelligent enough to mimic the traditional classroom experience. "Unless the MOOCs pay attention to how people actually learn, they will not be able to improve effectiveness, and will end up as just a passing fad," says CMU's Justine Cassell, associate vice provost for technology strategy and impact. Google's investment signals an interest in the potential of online education. "We believe this research will make online courses much more engaging and benefit both students and educators around the world," says Google vice president for research and special initiatives Alfred Spector.


U.S. Intelligence Agency Wants Brain-Like Algorithms for Complex Information Processing
Network World (06/23/14) Michael Cooney

The U.S. Office of the Director of National Intelligence (ODNI) wants to develop technology that will enable computers to think like humans. ODNI's Intelligence Advanced Research Projects Activity (IARPA) believes algorithms that utilize the same data representations, transformations, and learning rules as those employed by the brain have the potential to revolutionize machine intelligence. IARPA will explain the project, known as Machine Intelligence From Cortical Networks (MICrONS), at its Proposers Day, scheduled for July 17, in College Park, MD. IARPA says the goal is to create "a new generation of machine-learning algorithms derived from high-fidelity representations of the brain's cortical microcircuits to achieve human-like performance on complex information processing tasks." IARPA wants researchers to propose an algorithmic framework for information processing that is consistent with existing neuroscience data. Researchers will collect and analyze high-resolution data on the structure and function of cortical microcircuits, and generate computational neural models of cortical microcircuits. IARPA also wants researchers to implement novel machine-learning algorithms that use mathematical abstractions of cortical computing as their basis of operation.


Researchers Unveil Experimental 36-Core Chip
MIT News (06/23/14) Larry Hardesty

Massachusetts Institute of Technology (MIT) researchers have developed a 36-core chip featuring a network-on-chip that is able to maintain cache coherence, meaning it ensures the cores' locally stored copies of globally accessible data remain up to date. In a network-on-chip, each core is connected only to those immediately adjacent to it. As a standard chip performs computations, it updates the data in its cache, and every so often it ships the data back to main memory, which is very time-consuming. However, in a network-on-chip, data is moving everywhere, and packets will frequently arrive at different cores in different sequences. The MIT researchers solve this problem by equipping their chips with a second network, which shadows the first. The circuits connected to the second network declare that their associated cores have sent requests for data over the main network. Since those declarations are simple, nodes in the shadow network can combine them and pass them on without producing delays. "One of the challenges in academia is convincing industry that our ideas are practical and useful," says University of Michigan professor Todd Austin. "[The MIT researchers have] really taken the best approach to demonstrating that, in that they've built a working chip."
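
The following is a toy sketch of the idea only, not the MIT design: a cheap second network carries tiny "core X wants cache line Y" declarations, and because every core sees those declarations in the same global order, data packets that arrive out of order over the main mesh can still be applied consistently. Everything below is an illustrative placeholder.

```python
class ShadowNetwork:
    """Collects request declarations and gives every core the same log."""
    def __init__(self):
        self.order = []

    def declare(self, core, line):
        # Declarations are one-word messages, so nodes can merge and
        # forward them without stalling the main data network.
        self.order.append((core, line))

def apply_in_declared_order(shadow, arrived_packets):
    """arrived_packets: {(core, line): data}, filled in whatever order the
    main mesh delivered them.  Each core processes them in the globally
    agreed declaration order instead."""
    return [(core, line, arrived_packets[(core, line)])
            for core, line in shadow.order if (core, line) in arrived_packets]

shadow = ShadowNetwork()
shadow.declare(3, 0x40)  # two cores race for the same cache line
shadow.declare(7, 0x40)
shadow.declare(1, 0x80)

# The main-network packets arrived in a different order than the requests,
# but every core resolves the race for line 0x40 the same way.
packets = {(1, 0x80): "data80", (7, 0x40): "data40", (3, 0x40): "data40"}
print(apply_in_declared_order(shadow, packets))
```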


Growing Call for Anonymity Online, Says Cambridge Researcher
ComputerWeekly.com (06/20/14) Warwick Ashford

Internet users can be easily tracked all the time, but a growing community of users, ranging from military and law enforcement officers to journalists, human rights workers, and political activists, is turning to anonymous Internet communications, says Steven Murdoch, a Royal Society research fellow in the Cambridge University computer lab. He notes strong privacy also is important for applications such as electronic voting and online healthcare. The Tor project, originally developed by the U.S. Navy to protect government communications, is the most widely used open system for anonymity online. "In recent years, there have been dramatic changes in how anonymous communication systems have been built and how they have been used," Murdoch says. "This includes the Web taking over from email as the major means of communications and users of anonymous communication systems prioritizing censorship-resistance over privacy." He also notes commercial and political realities are affecting how projects such as Tor are run and software is designed. Anonymous communication systems will have to adapt to changing circumstances and try to prevent any malicious use, according to Murdoch. He will speak on the topic at AppSec Europe, which takes place this week at Anglia Ruskin University in Cambridge.


The Quest for Ethical Algorithms
CIO UK Magazine (06/23/14) Mark Say

Imperial College Intelligent Systems and Networks Group deputy head Jeremy Pitt is working to endow algorithms with a sense of ethics to address major societal problems. His work is predicated on eight design principles for the self-governance of common resources put forth by Nobel Prize-winning political economist Elinor Ostrom. Shared resources, such as water, can be preserved using rules by which people regulate their own behavior for the common good, Ostrom argued. Ostrom's principles include defined boundaries, accessible conflict resolution mechanisms, effective monitoring, and graduated sanctions. As cognitive computing systems raise concerns about removing human sensibilities from decision-making, Pitt believes algorithms can be developed with ethical capabilities. Artificial-intelligence techniques have aided Pitt's work over the past three years, as he tries to convert Ostrom's principles from natural language into protocols represented in event calculus, a computer language for reasoning about actions and events. Pitt's research team now has a version of event calculus that is compatible computationally with all existing event-recognition systems, and capable of processing tens of thousands of events per second. Eventually, the team hopes to create algorithms capable of making ethical decisions in areas such as resource allocation and dispute mediation.
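
As a rough illustration of what event calculus looks like in code, here is a minimal sketch assuming a much-simplified discrete event calculus, not the research team's protocols: domain rules say which events initiate or terminate which fluents, and holds_at() replays the narrative of events to decide what is true at a given time. The graduated-sanction rule is an illustrative placeholder for one of Ostrom's principles.

```python
def initiates(event, fluent):
    actor, action = event
    # Taking more than one's share of a common resource raises a sanction.
    return action == "appropriate_excess" and fluent == ("sanctioned", actor)

def terminates(event, fluent):
    actor, action = event
    # Paying the penalty lifts the sanction.
    return action == "pay_penalty" and fluent == ("sanctioned", actor)

def holds_at(fluent, time, narrative):
    """narrative: list of (timestamp, event) pairs, in any order."""
    holds = False
    for t, event in sorted(narrative):
        if t >= time:
            break
        if initiates(event, fluent):
            holds = True
        elif terminates(event, fluent):
            holds = False
    return holds

story = [(1, ("alice", "appropriate_excess")),
         (3, ("alice", "pay_penalty"))]
print(holds_at(("sanctioned", "alice"), 2, story))   # True
print(holds_at(("sanctioned", "alice"), 4, story))   # False
```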


Who's Your Daddy? UCF Students Program Computer to Find Out
UCF Today (06/19/14) Barbara Abney

University of Central Florida (UCF) researchers are developing a facial recognition program designed to rapidly match pictures of children with their biological parents and identify photos of missing children as they age. "As this tool is developed I could see it being used to identify long-time missing children as they mature," says UCF professor Ross Wolf. Although facial recognition technology is already heavily used by law enforcement, it has not been developed to the point where it can identify the same characteristics in photos over time. Wolf says the new technology could possess this capability. Graduate student Afshin Dehghan and a team from UCF's Center for Research in Computer Vision launched the project with more than 10,000 online images of well-known people and their children. The program first converts the photos into a checkerboard of patches and extracts tiny snapshots of the most significant facial parts. The program then compares all the photos feature by feature and sorts them by the most probable match. During testing, the researchers found the program did a better job of matching features of parents and their children than random chance, and it also outperformed existing software for identifying relatives through photos by 3 to 10 percent.
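
The following is a minimal sketch of patch-based matching in the spirit of the description above, not the UCF system: cut each face image into a grid of patches, describe each patch with a small feature vector (here just a grey-level histogram, a stand-in for the richer features a real system would use), and rank candidate parents by overall similarity to the child. The grid size and bin count are arbitrary placeholders.

```python
import numpy as np

def patch_features(face, grid=8, bins=16):
    """face: 2-D grayscale array.  Returns concatenated per-patch histograms."""
    h, w = face.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            patch = face[i * h // grid:(i + 1) * h // grid,
                         j * w // grid:(j + 1) * w // grid]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 255), density=True)
            feats.append(hist)
    return np.concatenate(feats)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def rank_parents(child_face, candidate_faces):
    """Return candidate indices sorted from most to least probable match."""
    child = patch_features(child_face)
    scores = [cosine(child, patch_features(f)) for f in candidate_faces]
    return sorted(range(len(scores)), key=lambda k: scores[k], reverse=True)
```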


Code Review: Groundbreaking Work on Data Mining Version Histories
Saarland University (06/23/14)

In 2004, Saarland University professor Andreas Zeller developed a technique that automatically issues suggestions on how to manage changes in software, based on the program's version history. The work led to further research into automated version-history analysis, a field that now involves about 150 researchers around the world. By combining version histories with bug databases, the Saarbrücken researchers were able to anticipate likely sources of errors in Microsoft's Windows Vista, and at the time they traced these problems to insufficient team structures. Microsoft now maintains a research department of its own, in which staff members systematically review bug and version histories and infer suggestions from these archives. The Saarbrücken researchers are applying the same data-mining method to automatically extract information from huge amounts of data. They developed Chabada, software that flagged 81 percent of the spy applications among 22,521 apps examined from the Google Play store without having to know their behavioral patterns. Google was impressed with the software and invited the researchers to visit the Google Security Research center to install the automated suggestion program.
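
The kind of suggestion described at the start of this item can be illustrated with a minimal co-change sketch, a simplification rather than the actual tool: count how often files were changed together in past commits, then, given the file a developer is editing, suggest the files that most often changed along with it. The file names and history below are made up.

```python
from collections import Counter
from itertools import combinations

def build_cochange_counts(commits):
    """commits: iterable of sets of file names changed together."""
    pair_counts = Counter()
    for files in commits:
        for a, b in combinations(sorted(files), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

def suggest(file_being_edited, pair_counts, top=3):
    """Files most frequently co-changed with the file being edited."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == file_being_edited:
            scores[b] += n
        elif b == file_being_edited:
            scores[a] += n
    return [f for f, _ in scores.most_common(top)]

history = [{"parser.c", "parser.h"},
           {"parser.c", "parser.h", "lexer.c"},
           {"lexer.c", "tokens.h"}]
counts = build_cochange_counts(history)
print(suggest("parser.c", counts))   # ['parser.h', 'lexer.c']
```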


Statistical Tricks Extract Sensitive Data From Encrypted Communications
Technology Review (06/19/14) Tom Simonite

Researchers at the University of California, Berkeley, and Intel have discovered that private information can be obtained from encrypted communications using a technique known as traffic analysis to find patterns in the data stream. The team's technique targets the HTTPS encryption that guards websites and relies on machine-learning algorithms to learn traffic patterns associated with different pages. Matching patterns can then be sought in a target's traffic trace. Using this technique, the researchers could identify the pages for specific medical conditions on the Planned Parenthood and Mayo Clinic websites even though both sites use HTTPS encryption. Financial information is similarly identifiable. The method identifies Web pages with approximately 90-percent accuracy on average. Researcher Scott Coull at security company RedJack recently discovered that traffic analysis can be highly effective against Apple's iMessage, which encrypts messages during the entire transmission. Using traffic analysis, Coull could identify with at least 96-percent accuracy information such as when users started or stopped typing, were sending or opening a message, and the language in which a message was written. Traffic analysis can be a legitimate tool for businesses to improve targeted advertising, but it also could be used in government surveillance programs.
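
The following is a minimal sketch of the general traffic-analysis recipe, not the Berkeley/Intel system: summarize each encrypted trace by coarse features of its packet sizes, train a classifier on traces of known pages, then predict which page an unseen trace visited. The feature choices, page labels, and synthetic data are illustrative placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def trace_features(packet_sizes):
    """Summarize one encrypted trace: histogram of packet sizes plus totals.
    packet_sizes: signed ints, positive = outgoing, negative = incoming."""
    sizes = np.abs(packet_sizes)
    hist, _ = np.histogram(sizes, bins=[0, 100, 300, 600, 1000, 1500])
    outgoing = int((np.array(packet_sizes) > 0).sum())
    return np.concatenate([hist, [len(sizes), sizes.sum(), outgoing]])

# Tiny synthetic stand-in for labeled training traces (a real attack would
# record many visits to each page of the target site).
rng = np.random.default_rng(0)
X, y = [], []
for label, scale in (("page_a", 400), ("page_b", 1200)):
    for _ in range(50):
        trace = rng.integers(-scale, scale, size=rng.integers(20, 60))
        X.append(trace_features(trace))
        y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
unseen = rng.integers(-400, 400, size=30)
print(clf.predict([trace_features(unseen)]))   # likely 'page_a'
```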


COSMOS Supercomputing Research Facility Becomes an Intel Parallel Computing Center
University of Cambridge (06/18/14)

Intel has named the University of Cambridge's COSMOS supercomputer one of its Intel Parallel Computing Centers. The company collaborates with top universities, institutions, and research labs on modernizing applications in order to increase parallelism and scalability through optimizations that leverage cores, caches, threads, and vector capabilities of microprocessors and coprocessors. Intel will provide the COSMOS facility with enhanced support from its applications and engineering teams, as well as access to future Intel Xeon Phi and other Intel products for high-performance computing. Located in the Stephen Hawking Centre for Theoretical Cosmology, COSMOS is used for research in cosmology, astrophysics, and particle physics. The supercomputer was switched on in 2012 to help explore the origins of the universe. COSMOS is based on SGI UV2000 systems with 1,856 Intel Xeon processor cores, 14.8 TB of RAM, and 31 Intel Xeon Phi coprocessors. When operating at peak performance, COSMOS can perform 38.6 trillion calculations per second. "Intel Parallel Computing Centers are collaborations to modernize key applications to unlock performance gains that come through parallelism, enabling the way for the next leap in discovery," says Intel's Stephen Gillich.
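
As a back-of-the-envelope illustration of how a peak figure like that is typically computed (peak FLOPS = cores x clock x floating-point operations per core per cycle), the short check below uses an assumed clock speed and per-cycle throughput, which are not figures from the article, and lands in the same ballpark as the quoted number.

```python
# Assumed values for illustration only: 2.6 GHz clock, 8 double-precision
# floating-point operations per core per cycle.
cores, ghz, flops_per_cycle = 1856, 2.6, 8
peak_tflops = cores * ghz * flops_per_cycle / 1000
print(peak_tflops)   # ~38.6 trillion calculations per second from the Xeon cores
```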


USENIX: Unstable Code Can Lead to Security Vulnerabilities
IDG News Service (06/19/14) Joab Jackson

Programs written in a variety of languages, particularly C and other low-level system languages, may contain security vulnerabilities their programmers are unaware of, according to researchers at the Massachusetts Institute of Technology (MIT). Researcher Xi Wang noted in a presentation at the recent USENIX Annual Technical Conference that some compilers can eliminate sections of code when they optimize an entire program. Wang says these compilers eliminate code that is deemed "unstable," including code that can never be reached and code whose behavior falls outside what the language standard defines. For example, a compiler may eliminate a buffer-overflow check if the check itself relies on an operation the standard leaves undefined, such as testing a memory boundary larger than what is allocated for the program, because the compiler is free to assume that case never occurs. Compilers may display warning messages when they drop such code, although such warnings generally go unnoticed by programmers, meaning they may be unaware when their compilers delete code written for security functions. MIT researchers are calling on compiler makers to address this issue by updating their products to include a technique they developed that identifies unstable C and C++ code. Until then, Wang says programmers should be careful about optimizing their code, particularly if they believe portions of the code for a program cannot be safely eliminated.


The Secret Formula to Successful Online Dating
The Conversation (06/18/14)

Computer scientists at the University of Massachusetts Lowell have studied the behavior of 200,000 people on Chinese dating site Baihe.com to gain insight into how successful matches are made. Although people are predictable in many ways, they also break their own stated dating requirements. The researchers found 70 percent of communications from women and 55 percent from men were sent to people who did not meet their stated preferences. Machine learning helped the researchers determine with 75-percent accuracy whether a user would reply to another's initial contact message. Their methods factored in characteristics such as age, height, location, income, and education level as well as activity, popularity on the site, and compatibility in preferences and attractiveness. Unlike a book recommendation, which is a one-way process, a dating site must match users who have a mutual interest. The researchers used collaborative-filtering algorithms to develop a reciprocal recommendation system to match users of mutual interest. Collaborative filtering-style algorithms based on prior communications of the user and those with similar interests and attractiveness proved significantly more successful than content-based algorithms that rely on factors such as age and income. Because users often make dating choices that differ from their reported preferences, the collaborative-filtering algorithm that factors in actual user behavior is an effective matching tool.
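
The following is a minimal sketch of reciprocal, collaborative-filtering-style matching, an illustration rather than the study's model: estimate how likely each side is to reply from the past outcomes of users similar to them, then combine the two directions with a harmonic mean so a high score requires mutual interest. The history and user-vector representations are hypothetical placeholders.

```python
import numpy as np

def reply_prob(sender, receiver, history, user_vectors):
    """Average reply rate that users similar to `sender` got from `receiver`.
    history: dict (sender, receiver) -> 1 if replied else 0.
    user_vectors: dict user -> numpy profile/behavior vector (placeholder)."""
    sims, outcomes = [], []
    for (s, r), replied in history.items():
        if r == receiver and s != sender:
            v1, v2 = user_vectors[sender], user_vectors[s]
            sim = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
            sims.append(max(float(sim), 0.0))
            outcomes.append(replied)
    if not sims or sum(sims) == 0:
        return 0.5                     # no evidence: fall back to a prior
    return float(np.dot(sims, outcomes) / sum(sims))

def reciprocal_score(a, b, history, user_vectors):
    p_ab = reply_prob(a, b, history, user_vectors)   # will b reply to a?
    p_ba = reply_prob(b, a, history, user_vectors)   # will a reply to b?
    return 2 * p_ab * p_ba / (p_ab + p_ba + 1e-9)    # harmonic mean
```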


Abstract News © Copyright 2014 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe