Association for Computing Machinery
Welcome to the June 12, 2013 edition of ACM TechNews, providing timely information for IT professionals three times a week.

ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE


Tech Companies Urge U.S. to Ease Secrecy Rules on National Security Probes
Washington Post (06/11/13) Craig Timberg; Cecilia Kang

Several technology companies are calling on U.S. officials to reduce the secrecy surrounding national security investigations and to lift long-standing gag orders covering the nature and extent of the information collected about Internet users. The Senate Intelligence Committee recently asked the National Security Agency (NSA) to publicly explain programs that use telephone and Internet records. Meanwhile, Google recently published an open letter to U.S. attorney general Eric Holder and Federal Bureau of Investigation director Robert Mueller requesting the right to publicly report the number and scope of the national security data requests it receives. That move, and similar ones by other major technology companies, aims to recast the firms as defenders of user privacy rather than as willing participants in surveillance. "If these companies can't be transparent with users about their participation in surveillance with the U.S. government, they will lose a lot of business," notes the Electronic Frontier Foundation's (EFF) Peter Eckersley. One company not named as a participant in NSA's PRISM surveillance program was Twitter, which EFF ranks as the most protective of user information among major Internet companies. In Europe, where officials are drafting a strict new data privacy law, the reaction to the NSA surveillance program has been harsh.


1st Web Page Proves as Elusive as Mysteries of the Universe
Associated Press (06/11/13)

European Organization for Nuclear Research (CERN) scientists are searching for the first Web page. However, they may never conclusively identify the original page because of the way the data was stored, according to CERN researcher Dan Noyes. "The concept of the earliest Web page is kind of strange," Noyes says. "Data gets overwritten and looped around. To some extent, it is futile." In April, CERN restored a 1992 copy of the first-ever website, which Tim Berners-Lee created to organize CERN-related information. Then University of North Carolina-Chapel Hill professor Paul Jones came forward with a 1991 version of the same site, which he has kept in an archive. However, the 1991 page is locked in a NeXT computer, behind a password that has long been forgotten. Forensic computer specialists are currently trying to extract the information to check time-stamps and preserve the original coding used to generate the page. "No matter how perfectly you can reproduce something, like The Scream or the Mona Lisa, we have a fetish for the original," Jones says.


What China's Supercomputing Push Means for the U.S.
Computerworld (06/10/13) Patrick Thibodeau

In an interview, U.S. Department of Energy's Argonne National Laboratory researcher Peter Beckman recently discussed China's emphasis on supercomputing and the power problem standing between today's systems and exascale computing. China's new Tianhe-2 supercomputer, which has a theoretical peak speed of nearly 55 petaflops and may be officially cited as the world's fastest in the next release of the Top 500 global rankings, is a very clear statement of how serious China is about scientific computing, according to Beckman. Tianhe-2 runs Chinese-made interconnects and software, but it still uses U.S.-made chips; China's next system will probably have Chinese-made central-processing units, Beckman notes. In addition, although there is international cooperation in developing a software stack for an exascale system, he says the Chinese want to build their own components, and they are not sharing any of their research as open source. Tianhe-2 will draw 24 megawatts (MW) of electricity at peak when cooling is included, while the goal for an exascale system is in the 20-30 MW range, according to Beckman. One strategy for lowering power is to integrate memory on the chip. Another is the use of NVRAM, which is not as fast as RAM but uses much less power.
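Those figures imply a steep efficiency gap. A back-of-the-envelope check using only the numbers cited above (one exaflops is 1,000 petaflops; a rough sketch, not Beckman's analysis):

```python
# Power efficiency implied by the article's figures.
tianhe2_pflops = 55.0     # theoretical peak, in petaflops
tianhe2_mw = 24.0         # peak draw including cooling, in megawatts

exa_pflops = 1000.0       # 1 exaflops = 1,000 petaflops
exa_mw = 25.0             # midpoint of the 20-30 MW exascale target

current = tianhe2_pflops / tianhe2_mw   # ~2.3 petaflops per MW
needed = exa_pflops / exa_mw            # 40 petaflops per MW

print(f"required efficiency gain: ~{needed / current:.0f}x")  # roughly 17x
```

That roughly 17-fold gap is why strategies that attack power rather than raw speed, such as on-chip memory and NVRAM, dominate the exascale discussion.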


Securing the Cloud
MIT News (06/10/13) Larry Hardesty

Researchers at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory have created an encryption scheme that could improve security for cloud computing. The researchers say a new type of cryptography, homomorphic encryption, could secure cloud computing by enabling users to send encrypted data to a cloud server, which would process the data without decrypting it and send back a result that is still encrypted. Until now, a major flaw with homomorphic encryption has been that if a user sends a search term to a server for a specific record, the server would have to send back information on every record in the database. The MIT team solved this by creating a functional encryption scheme that combines several existing schemes, starting with homomorphic encryption and embedding the decryption algorithm in a garbled circuit that allows only the holder of a cryptographic key to decrypt the data. "Our result is in some sense the first result showing that you can do this very generally," says MIT professor Shafi Goldwasser, who, together with professor Silvio Micali, is the most recent recipient of ACM's A.M. Turing Award. The researchers recently presented their work at ACM's 45th Symposium on the Theory of Computing.
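The MIT construction is far more elaborate, but the core idea of computing on data without decrypting it can be shown with a classic additively homomorphic scheme such as Paillier. A minimal sketch with toy, deliberately insecure parameters (this is not the MIT scheme):

```python
import math, random

# Toy Paillier cryptosystem: additively homomorphic, meaning
# Enc(a) * Enc(b) mod n^2 decrypts to a + b. Demo-sized primes only.
p, q = 1789, 2003
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)        # Carmichael exponent of n
g = n + 1                           # standard generator choice
mu = pow(lam, -1, n)                # with g = n+1, mu is just lam^-1 mod n

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:      # blinding factor must be invertible
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = (pow(c, lam, n2) - 1) // n  # Paillier's "L" function
    return (L * mu) % n

c1, c2 = encrypt(20), encrypt(22)
product = (c1 * c2) % n2            # the server computes on ciphertexts...
assert decrypt(product) == 42       # ...and the sum emerges on decryption
```

Here the server multiplies two ciphertexts and thereby adds plaintexts it never sees; fully homomorphic schemes extend this trick to arbitrary computations.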


Wearable Computing Pioneer Steve Mann: Who Watches the Watchmen?
TechHive (06/06/13) Armando Rodriguez

Steve Mann has been developing wearable technology for the past 30 years, adapting computers, screens, and optics into wearable devices. Long before the arrival of Google Glass, Mann created a similar prototype with a glass prism over the user's eye, with the entire device attached to a helmet and running on a 9-volt battery. Of Glass, Mann says, "I don't think they got it right. I think it's a generation-one glass, and we're at generation five now. Glass strains your eye and your optic nerve because you're always looking above the eye. The generation-two glass, [in which] the eye itself is the camera, gets rid of those problems." Mann wears the fourth-generation Eye digital eyeglasses he created, which are attached directly to his head and require special tools to remove. The fifth-generation Eye, which Mann is now developing, will incorporate a second camera and support three-dimensional augmented reality. Mann disagrees with characterizations of himself as the "world's first cyborg," a term he considers vague, and prefers the term "augmediated." "We can augment or diminish or modify or otherwise mediate our surroundings, and the computer serves as an intermediary between the real world and ourselves," he says.


Dust Storms Put GPU, CPU Performance to the Test
HPC Wire (06/06/13)

Researchers at the Center for Intelligent Spatial Computing and the University of Denver are trying to harness central processing units (CPUs) and graphics processing units (GPUs) together to accelerate a sample geovisualization workload, using dust storms as the subject. To visualize the storms, the researchers developed a 3D/4D geovisualization framework that includes preprocessing, reprojection, interpolation, and rendering. The researchers also compared the performance of GPUs and CPUs, and found that multicore CPUs and manycore GPUs can both speed up the calculations and rendering through multithreading. In addition, they found that for a fixed amount of data, increasing the GPU block size for the coordinate transformation steadily reduces the execution time of the interpolation and rendering until performance peaks, after which larger blocks no longer help. The best results obtained by the GPU implementations of all three major processes were usually faster than the CPU-based implementations. The researchers note, however, that the GPU's onboard memory limits its ability to process large data volumes, so preprocessing must be done on the CPU, and the project's overall efficiency was held back by the high latency of streaming data between the GPU and the CPU.
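The interpolation stage is a typical data-parallel kernel in such a pipeline: every output sample is independent, which is what lets multicore CPUs and GPU thread blocks attack it in parallel. A rough illustration of that step (invented array names, not the researchers' code), written with NumPy so the per-point loop vectorizes the way a GPU would parallelize it:

```python
import numpy as np

def bilinear_interpolate(grid, xs, ys):
    """Sample a 2D dust-concentration grid at fractional (x, y) points.

    grid: 2D array of values on integer coordinates.
    xs, ys: 1D arrays of query coordinates (same length).
    Every query point is independent, which is why this step maps
    well onto CPU threads or GPU blocks alike.
    """
    x0 = np.clip(np.floor(xs).astype(int), 0, grid.shape[1] - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, grid.shape[0] - 2)
    fx, fy = xs - x0, ys - y0                     # fractional offsets
    top = grid[y0, x0] * (1 - fx) + grid[y0, x0 + 1] * fx
    bot = grid[y0 + 1, x0] * (1 - fx) + grid[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy

grid = np.random.rand(512, 512)                   # synthetic model output
xs = np.random.uniform(0, 511, 100_000)
ys = np.random.uniform(0, 511, 100_000)
samples = bilinear_interpolate(grid, xs, ys)      # one shot, no Python loop
```

On a GPU the same arithmetic would run per thread, and the article's caveat applies: the cost of streaming `grid` and the query arrays across the CPU-GPU bus can dominate the kernel itself.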


The Next Big Thing in Tech: Augmented Reality
CNet (06/07/13) Dan Farber

Augmented reality technologies have been in development in university labs and small companies for almost 50 years, and emerging consumer products, such as Google Glass, are drawing attention to wearable electronics. However, the wearable revolution started with University of Utah computer scientist Ivan Sutherland, who in 1965 first described a head-mounted display that enabled the user to see a virtual world superimposed on the real world. Sutherland's work was advanced by the University of Toronto's Steve Mann and Columbia University's Steven Feiner, and now the technology is finally catching up with their concepts. "You need to have technology that is sufficiently comfortable and usable, and a set of potential adopters who would be comfortable wearing the technology," Feiner says. Required augmented reality components such as cameras, computers, sensors, and connectivity are shrinking in size and price and ramping up in speed, accuracy, and resolution to a point where wearable computers will be seen as a cool accessory, mediating people's engagement with analog and digital environments. Feiner says when such technology is "very small and comfortable, you don't feel weird, but cool."


Laws of Physics Say Quantum Cryptography Is Unhackable. It’s Not
Wired News (06/07/13) Adam Mann

Quantum cryptography can theoretically encrypt a message in a way that makes it impossible for unintended viewers to read, but in practice machine errors and other factors mean that even quantum cryptography systems can fail. "If you build it correctly, no hacker can hack the system," says physicist Renato Renner of ETH Zurich's Institute for Theoretical Physics. "The question is what it means to build it correctly." In quantum cryptography, the key is encoded into a series of photons transmitted between the two parties sharing information. According to the Heisenberg Uncertainty Principle, an interloper cannot look at these photons without changing or destroying them. However, weaknesses exist; for example, hackers can blind a detector with a strong pulse so that it can no longer see the photons. In addition, the laser generating the photons can emit a second photon carrying the same confidential information as the first, opening the possibility that hackers could intercept the duplicate without being detected. Renner is developing cryptographic techniques that would allow a high measure of security regardless of such technological limitations, such as entangling two photons or purposely sending multiple photons to see whether one is stolen.
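The tamper-evidence at the heart of the scheme is easy to see in a toy simulation of BB84, the classic quantum key distribution protocol (a simplified sketch that ignores the hardware-level attacks the article describes):

```python
import random

N = 20_000  # photons Alice sends

def measure(bit, prep_basis, meas_basis):
    # Measuring in the preparation basis reads the bit faithfully;
    # measuring in the other basis yields a random result and
    # destroys the original state -- the tamper-evidence.
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def error_rate(eavesdrop):
    errors = sifted = 0
    for _ in range(N):
        a_bit, a_basis = random.randint(0, 1), random.choice("+x")
        bit, basis = a_bit, a_basis
        if eavesdrop:                     # Eve measures, then re-sends
            e_basis = random.choice("+x")
            bit, basis = measure(bit, basis, e_basis), e_basis
        b_basis = random.choice("+x")
        b_bit = measure(bit, basis, b_basis)
        if a_basis == b_basis:            # keep matching-basis rounds only
            sifted += 1
            errors += b_bit != a_bit
    return errors / sifted

print(f"error rate, quiet channel: {error_rate(False):.1%}")  # ~0%
print(f"error rate, intercepted:   {error_rate(True):.1%}")   # ~25%
```

Comparing a sample of the sifted bits exposes the roughly 25 percent error rate an intercept-and-resend eavesdropper leaves behind; the practical attacks the article describes work precisely by stepping outside this clean model.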


Eradicating Malaria, With the Tools at Hand
Pittsburgh Supercomputing Center (06/06/13)

The Vector Ecology and Control Network (VECNet) aims to eradicate malaria by combining and supporting the ingenuity of researchers, engineers, public health officials, national decision makers, and funding agencies. VECNet is built around a single tool that enables stakeholders to test their ideas in a worldwide simulation of the disease. Researchers at the Pittsburgh Supercomputing Center and the University of Notre Dame will develop the VECNet cyberinfrastructure over the next year. "What we're attempting to do with the VECNet project is to create a way to simplify sifting through the data to allow less technologically sophisticated users to contribute," says VECNet principal investigator Tom Burkot. The system will enable researchers to test, for example, how a newly identified strain of insecticide-resistant malaria-carrying mosquitoes is likely to spread and affect disease prevalence. "The VECNet project will host data archives about the transmission of malaria and computer models that predict the effects of different interventions on the course and spread of the disease," says Notre Dame researcher Gregory Madey.
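The intervention-testing models Madey describes are typically built on compartmental transmission equations. As a flavor of what such a model computes, here is a toy version of the classic Ross-Macdonald malaria model, with invented parameters and a bed-net campaign modeled as a halved mosquito biting rate (this is not VECNet's code):

```python
# Toy Ross-Macdonald malaria transmission model (illustrative parameters;
# not VECNet's software). x = infected fraction of humans, z = infectious
# fraction of mosquitoes.
def infected_fraction(a, years=10, dt=0.1):
    m, b, c = 10.0, 0.3, 0.5    # mosquitoes per human, bite infectivities
    r, g = 0.01, 0.1            # human recovery, mosquito death (per day)
    x, z = 0.01, 0.0            # start with 1% of humans infected
    for _ in range(int(years * 365 / dt)):   # forward-Euler integration
        dx = m * a * b * z * (1 - x) - r * x
        dz = a * c * x * (1 - z) - g * z
        x, z = x + dx * dt, z + dz * dt
    return x

print(f"baseline biting rate:     {infected_fraction(a=0.05):.0%}")   # endemic
print(f"with bed nets (a halved): {infected_fraction(a=0.025):.0%}")  # dies out
```

With these parameters, halving the biting rate pushes the basic reproduction number below one and the simulated epidemic collapses, which is exactly the kind of what-if question such a platform lets non-specialists pose.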


Making Sense of Patterns in the Twitterverse
Pacific Northwest National Laboratory (06/06/13) Tom Rickey

Pacific Northwest National Laboratory (PNNL) data scientist Court Corley has developed SociAL Sensor Analytics (SALSA), a tool that can analyze billions of social media messages in a matter of seconds. "The world is equipped with human sensors--more than 7 billion and counting," Corley notes. "It's by far the most extensive sensor network on the planet. What can we learn by paying attention?" For example, Corley believes effective social media analysis could give emergency responders early information about natural disasters. Using SALSA, he can analyze an enormous data set in less than 10 seconds; the tool draws on PNNL's Institutional Computing resource to access the Olympus computer cluster, which ranks among the Top 500 fastest supercomputers in the world. The team identifies baseline activity, gathers data to find routine patterns, and then flags patterns that indicate atypical activity. Corley's program accurately captures the spirit of a social media comment more than 75 percent of the time, and accurately identifies social media patterns more than 90 percent of the time.
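That baseline-then-deviation workflow is the standard shape of streaming anomaly detection. A minimal sketch of the idea (invented counts and threshold; not SALSA itself):

```python
import statistics

# Learn the routine hourly message volume for a topic, then flag
# hours that deviate sharply from it (toy version of the
# baseline-vs-atypical-activity pattern described above).
def find_anomalies(history, live, threshold=3.0):
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    return [(hour, count) for hour, count in enumerate(live)
            if abs(count - mean) / sd > threshold]

routine = [120, 98, 143, 110, 125, 101, 133, 117, 96, 128]  # normal hours
incoming = [115, 122, 940, 131]   # hour 2 looks like a breaking event

print(find_anomalies(routine, incoming))   # -> [(2, 940)]
```

The production problem is, of course, doing this across billions of messages and thousands of topics at once, which is where the Olympus cluster comes in.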


How Wearable Tech Will Fuel the Internet of Things
InfoWorld (06/05/13) Ted Samson

Wearable technologies are gaining popularity among consumers and garnering the attention of various companies and public-sector organizations, and these tools are expected to play an integral role in the Internet of things. Wearable technologies are in use by 18 percent of the population in the United States and United Kingdom, according to a new Rackspace study. Most wearable technology users, including 82 percent of Americans and 71 percent of Brits, say the devices are improving their lives. "The rich data created by wearable tech will drive the rise of the 'human cloud' of personal data," says Chris Brauer of Goldsmiths, University of London. "With this comes countless opportunities to tap into this data; whether it's connecting with third parties to provide more tailored and personalized services or working closer with health care institutions to get a better understanding of their patients." In addition, Brauer says the public sector will use wearable technology to manage public health and smart city programs. Consumers reported various benefits to wearable technology, with 61 percent feeling more informed, 37 percent reporting improved career advancement, and 61 percent noting personal efficiency gains, according to Rackspace. However, privacy concerns remain an obstacle to adoption, with 51 percent of Americans and Brits citing worries about privacy and two-thirds saying wearable devices should be regulated in some form.


Remembering Objects Lets Computers Learn Like a Child
New Scientist (06/05/13) Douglas Heaven

Imperial College London researchers have added object recognition to a computer-vision technique called simultaneous localization and mapping (SLAM). A SLAM-enabled computer uses a camera to orient itself in new surroundings as it maps them. The new system, called SLAM++, has the computer constantly trying to match the points and lines it sees to objects in its database. As soon as it finds a shape it can identify, that area of the map can be filled in. Although the database is currently prepared by hand, the next version will enable the system to add new objects itself as it encounters them. "It's similar to how a child learns about the world," says Imperial College London researcher Renato Salas-Moreno. The database also lists the properties of the stored objects, so when the computer identifies a chair, it will know what it is used for, its typical weight, and which way up it goes. Such knowledge will help digital avatars interact with the real world in augmented reality applications. Technical University of Munich researcher Stefan Hinterstoisser says the technology could have a big impact on robotics, games, and films. "It's a very significant improvement over the state of the art," he notes.
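The real system matches meshes with far more robust descriptors, but the match-then-fill-in step can be caricatured in a few lines: compare an observed point cluster against stored templates and keep the best-scoring object (a toy sketch, not the SLAM++ pipeline; the shapes here are invented):

```python
import numpy as np

# Toy flavor of the recognition step: compare an observed point cluster
# against a database of object templates and label the map region with
# the best match.
def match_score(observed, template):
    obs = observed - observed.mean(axis=0)   # center both clouds for
    tmp = template - template.mean(axis=0)   # translation invariance
    d = np.linalg.norm(obs[:, None, :] - tmp[None, :, :], axis=2)
    return d.min(axis=1).mean()              # mean nearest-neighbor error

def recognize(observed, database):
    return min(database, key=lambda name: match_score(observed, database[name]))

database = {
    "chair": np.random.rand(200, 3),                    # stand-in templates
    "desk":  np.random.rand(200, 3) * [2.0, 1.0, 0.8],
}
scan = database["desk"] + np.random.normal(0, 0.01, (200, 3))  # noisy view
print(recognize(scan, database))                        # -> desk
```

Once an object is recognized, the system can drop the full template into the map instead of raw points, which is what makes the object-level map so compact.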


NJIT Researcher Shows Data Mining EMRs Can Detect Bad Drug Reactions
New Jersey Institute of Technology (06/05/13) Sheryl Weinstein

Electronic medical records can validate previously reported adverse drug reactions and reveal new ones, according to New Jersey Institute of Technology (NJIT) researchers. The researchers used retrospective medication orders and inpatient laboratory results documented in the medical records to identify adverse reactions, correlating abnormal lab results with specific drug administrations by comparing the outcomes of a drug-exposed group and a matched unexposed group. "Recently, electronic medical records [EMRs] have emerged as a valuable resource for detecting bad drug reactions," says NJIT professor Mei Liu. Her goal is to develop data-mining methodologies that uncover clinical knowledge from EMRs to improve the quality, safety, efficiency, and effectiveness of healthcare. "EMRs have created an unprecedented resource for observational studies since they contain not only detailed patient information, but also large amounts of longitudinal clinical data," Liu says. The NJIT researchers also are interested in other data-mining tasks for clinical informatics, such as drug repurposing and using patient medical records to build predictive models for diseases such as diabetes and cancer.
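The exposed-versus-matched-unexposed comparison boils down to a two-by-two contingency table. A minimal sketch of that arithmetic (invented counts, not the NJIT data):

```python
# Compare abnormal-lab rates in drug-exposed vs. matched unexposed patients
# (toy counts for illustration only; not the NJIT study's data).
exposed_abnormal, exposed_total = 40, 500
control_abnormal, control_total = 12, 500

risk_exposed = exposed_abnormal / exposed_total     # 8.0%
risk_control = control_abnormal / control_total     # 2.4%
relative_risk = risk_exposed / risk_control         # ~3.3

print(f"relative risk of abnormal result after exposure: {relative_risk:.1f}x")
# A relative risk well above 1 flags the drug-lab pair as a
# candidate adverse reaction for manual clinical review.
```

Screening every drug-lab pair this way yields a ranked list of candidate reactions, which is the kind of signal detection the matched-cohort design supports.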


Abstract News © Copyright 2013 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]