Association for Computing Machinery
Welcome to the January 3, 2014 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.

HEADLINES AT A GLANCE


NSA Seeks to Build Quantum Computer That Could Crack Most Types of Encryption
The Washington Post (01/03/14) Steven Rich; Barton Gellman

The U.S. National Security Agency (NSA) is trying to develop a quantum computer that could be used to crack almost any type of encryption currently in use, according to documents released by former NSA contractor Edward Snowden. The documents say the initiative is part of a $79.7-million research program called "Penetrating Hard Targets," which is developing technology that potentially could be used to infiltrate all current forms of public key encryption. The documents do not discuss the full extent of NSA's research into quantum computing, although they do suggest the agency is no closer to building a quantum computer than the European Union and Switzerland, both of which are carrying out similar research efforts. "It seems improbable that the NSA could be that far ahead of the open world without anybody knowing it," says Massachusetts Institute of Technology professor Scott Aaronson. University of Manchester professor Jeff Forshaw says the NSA is likely at least five years away from building a quantum computer, and possibly much further if no significant breakthroughs are made. However, once completed, the computer could be used to crack almost every type of encryption used to protect state secrets and other sensitive information, including 1,024-bit RSA keys, which would take hundreds of standard computers working together about 2,000 years to break.


Viewing Where the Internet Goes
The New York Times (12/30/13) John Markoff

In an interview, Internet pioneers Vinton Cerf and Robert Kahn discuss the future of Internet regulation. Governments that oppose the free flow of information are increasingly demanding changes in Internet governance in reaction to former contractor Edward Snowden's ongoing leaks about U.S. National Security Agency surveillance. Governments worldwide are divided over the issue of whether Internet governance should fall to the Internet Corporation for Assigned Names and Numbers (ICANN) or the International Telecommunication Union (ITU). Cerf, a former ICANN chairman and current ACM president, is an informal "Internet ambassador" who favors an independent Internet free from state control. Cerf supports the principle of network neutrality, which holds that Internet service providers should enable equal access to all content and applications from all sources. Although Cerf does not view Snowden's revelations as a significant threat to an open, global Internet, he says they might increase interest in end-to-end cryptography. In addition, Cerf believes "that anonymity is an important capacity, that people should have the ability to speak anonymously," but adds that "strong authentication is necessary." Kahn has cooperated with the ITU on the development of new network standards, but says, "no matter what you do, any country in the world is going to have the ability to set its own rules internally."


An AI Chip to Help Computers Understand Images
Technology Review (01/02/14) Tom Simonite

Purdue University researchers are developing a chip designed to enable mobile processors to make use of deep learning, which could lead to mobile devices that can understand the content of images and video. The researchers have demonstrated that a co-processor connected to a conventional smartphone processor could help it run deep learning software, which was able to detect faces or label parts of a street scene. Although the system is not as powerful as other deep learning systems, it shows how new forms of hardware could make the power of deep learning more widely available. "You probably have a collection of several thousand images that you never look at again, and we don't have a good technology to analyze all this content," says Purdue professor Eugenio Culurciello. Deep learning software works by filtering data through a hierarchical, multilayered network of simulated neurons that are individually simple but can exhibit complex behavior when linked together. The Purdue system is designed above all to run multilayered neural networks and to put them to work on streaming imagery. During testing, the researchers say their prototype was about 15 times as efficient as a graphics processor performing the same task.
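The "individually simple" neurons described above can be sketched in a few lines: each neuron computes a weighted sum of its inputs plus a bias and applies a nonlinearity, and stacking layers of such neurons yields the hierarchical filtering deep learning software performs. This is a minimal illustrative sketch, not code from the Purdue system; the network shape and weights are arbitrary examples.

```python
def relu(values):
    # Simple nonlinearity: negative activations are clipped to zero.
    return [max(0.0, v) for v in values]

def layer(inputs, weights, biases):
    # Each row of `weights` holds one neuron's input weights;
    # a neuron outputs relu(weighted sum of inputs + bias).
    return relu([
        sum(w * v for w, v in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ])

def forward(x, layers):
    # Filter the data through the layer hierarchy, one layer at a time.
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# Toy two-layer network: 3 inputs -> 2 hidden neurons -> 1 output.
net = [
    ([[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]], [0.0, 0.1]),
    ([[1.0, 1.0]], [0.0]),
]
print(forward([0.0, 0.0, 0.0], net))  # → [0.1]
```

Real systems differ mainly in scale (millions of neurons, learned rather than hand-set weights), which is why dedicated co-processors like Purdue's matter for running them efficiently on mobile hardware.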


In Exascale, Japan Stands Apart With Firm Delivery Plan
Computerworld (12/30/13) Patrick Thibodeau

Japan announced that it plans to deliver an exascale supercomputer in six years, making it the first country to set a specific date for developing a next-generation exascale system. The Riken Advanced Institute for Computational Science announced that it will lead Japan's exascale program, with the "successful development of the exascale supercomputer scheduled for 2020." Meanwhile, the United States is aiming for an "early 2020s" delivery of an exascale system, according to a Department of Energy official. In December, Congress approved a fiscal 2014 defense budget bill that requires developing an exascale system by 2024. In addition, European researchers are developing an ARM-based exascale system with a tentative goal of 2020, while China is thought to be aiming for a 2018-2020 timeframe for exascale delivery. An exascale system can execute a quintillion floating-point operations per second, topping a one-petaflop system's computing speed 1,000-fold. The fastest systems in use today are well below 50 petaflops. Meanwhile, quantum computing also is turning into a race among developed nations. Britain is investing $444 million in quantum computing over the next five years, while quantum computing work is underway at several U.S. federal research facilities.
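The prefix arithmetic behind these targets is quick to check: peta- is 10^15 and exa- is 10^18, so an exascale machine outpaces a one-petaflop machine 1,000-fold, and even the fastest benchmarked system today (Tianhe-2, at 33.86 petaflops) would need roughly a 30-fold speedup. An illustrative check:

```python
# SI prefixes applied to floating-point operations per second (FLOPS).
PETAFLOP = 10**15  # a quadrillion operations per second
EXAFLOP = 10**18   # a quintillion (a million trillion) operations per second

# An exascale system tops a one-petaflop system 1,000-fold.
print(EXAFLOP // PETAFLOP)  # → 1000

# Tianhe-2's benchmark speed, versus the exascale target.
tianhe2 = 33.86 * PETAFLOP
print(round(EXAFLOP / tianhe2, 1))  # → 29.5
```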


Tech Renegade: From Print-at-Home Guns to Untraceable Currency
The Wall Street Journal (01/02/14) Danny Yadron

Entrepreneur Cody Wilson is readying the launch of Dark Wallet, software designed to make Bitcoin financial transactions untraceable, but which regulators worry could be used for illicit activity. Currently, all transactions made with any software linked to Bitcoin are automatically posted in a public ledger known as the block chain, which enables authorities to monitor a person's transactions if they can ascertain the user's unique Bitcoin address. Wilson wants Dark Wallet to make Bitcoin anonymous: when added to a browser, the software will attempt to scramble the strings of letters and numbers that constitute Bitcoins as the virtual currency is spent. Dark Wallet is slated for release early this year and will be available for free. Sources report that if Dark Wallet is widely adopted by Bitcoin users, law enforcement officials will face greater difficulty in tracking the flow of e-commerce. Bitcoin Foundation executive director Jon Matonis says Wilson's project is aligned with his own effort to promote and develop Bitcoin as a private, government-free currency. Wilson says he has raised about $250,000 for Dark Wallet's development, most of it tendered in Bitcoins.


Supercomputers: New Software Needed
InformationWeek (12/31/13) Patience Wait

Software now presents the greatest challenge in supercomputing, due to the need for code that matches processing capability. The top spot in the most recent ranking of supercomputers, in November, went to China's National University of Defense Technology's Tianhe-2 supercomputer, which reached a benchmark speed of 33.86 petaflops. The next goal is exascale computing, with speeds of a million trillion calculations per second, which Argonne National Laboratory's Mike Papka says is possible by 2020. One hurdle to building faster supercomputers is creating an operating system that can manage that many calculations per second. Argonne and two other national laboratories are addressing this challenge with the Argo project. High-performance computing also increasingly is focused on applicability to other technology developments such as big data and analytics, open systems as opposed to proprietary systems, and energy efficiency as a requirement, says Brocade's Tony Celeste. "What matters is not acquiring the iron, but being able to run code that matters," says DRC's Rajiv Bendale. "Rather than increasing the push to parallelize codes, the effort is on efficient use of codes."


App Inventor Launches Second Iteration
MIT News (12/30/13) Andrew Clark

The MIT App Inventor is the basis for more than 3 million projects, and its second iteration was released on Dec. 6 in conjunction with Computer Science Education Week. The App Inventor is a joint effort of the Massachusetts Institute of Technology's Media Lab and the Computer Science and Artificial Intelligence Laboratory, and it enables anyone to construct an Android phone app using a Web browser and either a connected phone or an emulator. "It's huge for anyone with a smartphone who has wanted to use some app but has not been able to find it," notes the MIT Center for Mobile Learning's Josh Sheldon. App Inventor 2 improves on the previous version by being completely operable from the browser; users no longer need to install and run a Java file. Whereas App Inventor 1 currently has 1.3 million users who have built 3.2 million apps, App Inventor 2 already has 100,000 users who have built 140,000 apps, according to the Center for Mobile Learning's Hal Abelson. He notes the standard approach to teaching computer science has not changed in about 30 years, but it has little relevance to today's kids. "They use their cellphones to communicate and for social networking," Abelson says. "You have to think about how to create a relatable experience for them."


Can Robots Better Spot Terrorists at Airports?
The Wall Street Journal (12/30/13) Jack Nicas

Aviation and government authorities are beginning to use biometric technology to automate immigration and ID checks at airport security checkpoints and boarding gates. Biometric technology could "get rid of the boarding pass completely," says London Gatwick Airport CIO Michael Ibbitson. "We're only just starting to see what biometrics can do." This year Gatwick conducted a pilot in which it processed 3,000 British Airways fliers using iris scans upon check-in, which allowed cameras at security checkpoints and boarding gates to automatically recognize passengers. About 28 percent of airports worldwide currently use biometric technology, compared with 18 percent in 2008, according to a SITA survey. The United States uses biometrics for airport border control, and U.S. Customs and Border Protection trusted-traveler programs allow almost 2 million frequent fliers to scan their fingerprints instead of talking to an immigration officer to re-enter the country. Facial-recognition scanners can correctly approve nearly 98 percent of travelers, allowing an average of one in 1,000 impostors to pass, according to a 2010 test at Amsterdam Airport Schiphol. Gatwick Airport also is using facial-recognition software to estimate wait times at security and immigration checkpoints by capturing passengers' faces as they approach and depart checkpoints.


Reading Your Palm for Security's Sake
The New York Times (12/29/13) Anne Eisenberg

Biometric systems are penetrating the consumer mainstream as identity authentication tools for applications that include library access and laptop, smartphone, and payment security. Biometric devices can recognize vein patterns in the finger, the back of the hand, or the palm, notes Michigan State University professor Anil K. Jain. He points out that the technology makes forgery problematic "because the vascular patterns are inside the body." Jain also says palm scanning has more momentum than any other vein-reading technique; identifying features include vein thickness and the angles and sites at which veins intersect. Another biometric method is voice printing, which uses dozens of vocal properties such as pitch and accent to confirm identity. Nuance Communications' Brett Beranek says the theft of voice prints would do criminals no good, as "we are not storing people's voices, but characteristics of their voice." Jain cautions that biometric technologies are not foolproof, noting that mismatches and identity confusion will undoubtedly lead to errors. Meanwhile, Goode Intelligence founder Alan Goode projects that fingerprint sensing will be the most commonly used biometric identifier for the next several years. However, Frost & Sullivan analyst Ram Ravi says palm-, iris-, and facial-identification biometrics are all experiencing rapid market expansion. Security expert Bruce Schneier says biometric solutions could be appealing for consumer goods provided no processing occurs outside the device.


Tech Companies Work to Combat Computer Science Education Gap
U.S. News & World Report (12/27/13) Allie Bidwell

Concern is mounting among educators and industry professionals that there will be a pronounced shortage of qualified employees to fill a swelling number of computer science jobs, stemming from a lack of educational opportunities for students. Ninety percent of U.S. high schools lack computer science classes, while in 33 states such classes do not count toward high school math or science graduation requirements, according to Code.org. To address this problem, Microsoft and other technology firms have deployed programs to interest more students in computer science at a younger age. Microsoft's Technology Education and Literacy in Schools (TEALS) program matches 70 schools in 12 states with about 300 professional software engineers who volunteer to help initiate computer science programs or build on existing programs. "We've started to recognize that [interest in computer science] starts much earlier [than college]," says Microsoft's Lori Harnick. "If you haven't been exposed to computer science until university, it probably won't be a field you choose to pursue." TEALS volunteer Dan Kasun emphasizes the importance of showing students the actual careers they can pursue in diverse fields such as cybersecurity and data analytics. "The thing that you want to do and the jobs you want to pursue, in whatever industry, software is a big piece," Kasun tells students.


Google: Compute Engine Plays Nice With Open Source
eWeek (12/24/13) Todd R. Weiss

Google is working to educate developers about the compatibility of its Compute Engine platform with a variety of open source tools. "Google provides a general set of client APIs for accessing Compute Engine, as well as other Google services," notes Google's Eric Johnson. "However, you may have code or applications written against another language API that makes updating to Google's client APIs questionable." In these situations, open source languages such as Ruby, Python, and Java can be used with Google Compute Engine, Johnson notes. Developers also can use configuration management tools to automate the setup of their Compute Engine instances. In addition, several open source projects support Compute Engine, such as CoreOS and Docker, which enable the use of Linux containers. In the past few months, Google has sought to help developers find new capabilities and uses for Google Compute Engine. Over the summer, Google announced new developer tools for its key cloud products, and earlier in the year Google opened its Google Maps Engine API to developers and announced a new Mobile Backend Starter that automates the back end of app development.


Highly Integrated Wireless Sensing for Body Area Network Applications
SPIE (12/23/13) Vincent Peiris

The melding of highly integrated radio systems-on-chips, ultra-low-power processing, and smart textiles makes body area networks (BANs) and new human-machine interface (HMI) applications possible. The European Union-funded Wear-a-BAN project targeted the development of implementation concepts that will advance new uses of BAN and HMI technologies. A key goal was combining radio integrated circuits and modules with wearable antennas; the central challenge was ensuring the antennas could transmit signals even in complex propagation conditions while remaining small and unobtrusive enough to keep the wearer comfortable. The solution was a smart textile system using a flexible, conductive textile-based antenna. Also challenging was the miniaturization of the electrical functions of a BAN node, which was achieved via a dedicated system-on-chip offering an optimal deployment of custom features. On the networking side, the project developed and optimized Batmac, an energy-efficient, BAN-oriented communication protocol. Batmac provides self-organizing, adaptive, and flexible media access control features that automatically sense the signal-reducing shadowing effect of the body and rapidly adjust relay scheduling to BAN changes associated with close-to-the-body placement of sensor nodes. The researchers demonstrated the benefits of this dedicated, customized approach in applications such as energy expenditure monitoring, human-to-gaming interfaces, and HMIs.


Abstract News © Copyright 2014 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe