Welcome to the November 6, 2013 edition of ACM TechNews, providing timely information for IT professionals three times a week.
Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.
HEADLINES AT A GLANCE
Quantum 'Sealed Envelope' System Enables "Perfectly Secure" Information Storage
University of Cambridge (11/04/13)
Cambridge University researchers say they have achieved a breakthrough in quantum cryptography by demonstrating that information can be encrypted and then decrypted with complete security using a "sealed envelope" system based on quantum theory and relativity. The researchers say they sent encrypted data between pairs of sites in Geneva and Singapore that was kept "perfectly secure" for 15 milliseconds using a "bit commitment" protocol. The system could be the first step toward impregnable information networks controlled by "the combined power of Einstein's relativity and quantum theory," according to the researchers. Bit commitment is a mathematical version of a securely sealed envelope. The researchers say the technique could be used for a variety of applications, including global financial trading, secure voting, and long-distance gambling. "This is the first time perfectly secure bit commitment--relying on the laws of physics and nothing else--has been demonstrated," says Cambridge's Adrian Kent. The researchers note that bit commitment is a building block that can be put together in lots of ways to achieve increasingly complex tasks. "I see this as the first step towards a global network of information controlled by the combined power of relativity and quantum theory," Kent says.
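Bit commitment can be pictured with a simple classical analogue (not the relativistic protocol the Cambridge team demonstrated): the committer publishes a hash of the bit plus a random nonce, and only later reveals both so the receiver can verify nothing was changed. A minimal, purely illustrative Python sketch:

```python
import hashlib, os

def commit(bit: int) -> tuple[bytes, bytes]:
    """Commit to a bit: publish the digest, keep the nonce secret."""
    nonce = os.urandom(32)                       # hides the bit from the receiver
    digest = hashlib.sha256(nonce + bytes([bit])).digest()
    return digest, nonce                         # send digest now, reveal nonce later

def reveal(digest: bytes, nonce: bytes, bit: int) -> bool:
    """Receiver checks the opened commitment against the earlier digest."""
    return hashlib.sha256(nonce + bytes([bit])).digest() == digest

digest, nonce = commit(1)        # the "sealed envelope" is sent
assert reveal(digest, nonce, 1)  # later, the envelope is opened and verified
```

The key difference is that this classical scheme rests on the strength of the hash function, whereas the security of the demonstrated protocol rests only on quantum theory and relativity.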
New Supercomputer Uses SSDs Instead of DRAM and Hard Drives
IDG News Service (11/04/13) Agam Shah
Lawrence Livermore National Laboratory (LLNL) this month is deploying Catalyst, a new supercomputer that uses solid-state drive (SSD) storage as an alternative to dynamic random access memory and hard drives, and delivers a peak performance of 150 teraflops. Catalyst has 281 terabytes of total SSD storage and is configured as a cluster broken into 324 computing units, each of which has two 12-core Xeon E5-2695v2 processors, totaling 7,776 central processing unit cores. Catalyst is built around the Lustre file system, which helps break bottlenecks and improves internal throughput in distributed computing systems. "As processors get faster with every generation, the bottleneck gets more acute," says Intel's Mark Seager. He notes that Catalyst offers a throughput of 512GB per second, which is the same as LLNL's Sequoia, the world's third-fastest supercomputer. Although Catalyst's peak performance is nowhere close to the world's fastest high-performance computers, its use of SSD technology is noteworthy. Experts say SSDs are poised for widespread enterprise adoption as they consume less energy and are becoming more reliable. For example, faster SSDs increasingly are replacing hard drives in servers to improve data access rates, and they also are being used in some servers as cache, where data is temporarily stored for quicker processing.
How to Program Unreliable Chips
MIT News (11/04/13) Larry Hardesty
Massachusetts Institute of Technology (MIT) researchers have created a programming framework called Rely that enables software developers to designate when computing errors are tolerable, in anticipation of an era of imperfect chips. As gains in processing power approach the limits of Moore's Law, some experts are exploring the idea that tolerating occasional hardware errors could allow continued gains in speed and energy efficiency. Rely, developed by Martin Rinard's research group at MIT's Computer Science and Artificial Intelligence Laboratory, lets coders specify where errors are permissible and then calculates the probability that the software will run as intended. Last week the group won a best-paper award at ACM's Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA) conference. Using Rely, developers add a period, or "dot," to program instructions that they think can tolerate some error; the dot tells Rely to evaluate the program's execution using specified hardware failure rates. If permitting the errors yields unacceptable results, developers can move the dots and reevaluate. As a next step, the researchers plan to let programmers specify an acceptable failure rate for whole blocks of code--for example, to designate that the pixels in a frame of video must be decoded with 97 percent reliability.
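The underlying reliability calculation can be illustrated outside of Rely's actual syntax: if each "approximate" operation succeeds with a known probability, the chance that a chain of such operations produces the intended result is (roughly, assuming independent failures) the product of those probabilities. A hypothetical Python sketch of that reasoning -- the operation names and failure rates below are invented for illustration, not taken from the Rely paper:

```python
# Per-operation success probabilities on a hypothetical unreliable chip.
# Exact operations succeed with probability 1.0; "dotted" operations are
# allowed to fail with a small, known probability.
SUCCESS_RATE = {
    "load.": 0.99999,   # approximate memory read
    "add.":  0.999999,  # approximate addition
    "mul":   1.0,       # exact multiplication
}

def chain_reliability(ops: list[str]) -> float:
    """Probability that every operation in the sequence executes correctly,
    assuming independent failures."""
    reliability = 1.0
    for op in ops:
        reliability *= SUCCESS_RATE[op]
    return reliability

# A toy instruction trace for decoding one pixel of a video frame.
trace = ["load.", "load.", "add.", "mul"]
print(chain_reliability(trace))   # compare against a target such as 0.97
```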
NSA's Reported Tampering Could Change How Crypto Standards Are Made
Government Computer News (11/04/13) William Jackson
The U.S. National Institute of Standards and Technology (NIST) is formally reviewing its cryptographic standards development processes to address a loss of public confidence following reports that the U.S. National Security Agency (NSA) weakened NIST standards. According to documents released by former NSA contractor Edward Snowden, a random number generator included in NIST recommendations is vulnerable to attacks that can uncover the cryptographic keys derived from it. "Our mission is to protect the nation's IT infrastructure and information through strong cryptography," says an NIST statement announcing the review. "We cannot carry out that mission without the trust and assistance of the world's cryptographic experts." NIST is cataloging its development processes' goals and objectives, principles of operation, processes for identifying algorithms for standardization, and methods of review. Public comments will be considered, and an outside organization will assess the process. In addition, NIST will review its existing cryptographic work to ensure it was developed in line with that process. NIST develops standards in partnership with government and industry, and the agency does not intend to stop working with the NSA, says NIST's Matthew Scholl. "We have worked with the NSA for a long time on many different projects and will continue to do that," he says.
Google Refining Flu Spread Methodology as Flu Season Approaches
eWeek (11/03/13) Todd R. Weiss
Inaccurately high estimates from Google Flu Trends last year prompted Google to update its flu data analysis methods for the 2013-2014 flu season. Google's models examine the number of Web searches for information about the flu, which the company believes is a good indicator of flu levels. However, Google Flu Trends' estimate of flu activity in January 2013 was much higher than the number of actual healthcare visits for influenza-like illness reported by the U.S. Centers for Disease Control and Prevention (CDC). Google studied the discrepancy and believes the difference arose because "heightened media coverage on the severity of the flu season resulted in an extended period in which users were searching for terms we've identified as correlated with flu levels," says Google's Christian Stefansen. "In early 2013, we saw more flu-related searches in the U.S. than ever before." Google plans to incorporate peak flu-level data from the 2012-2013 season into its estimates for the 2013-2014 season. "A casual observer will see that the new model forecasts a lower flu level than last year's model did at a similar time in the season," Stefansen notes. "We believe the new model more closely approximates CDC data."
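The general approach -- fitting a model that maps flu-related search volume to officially reported illness rates -- can be sketched with a toy least-squares fit. The numbers below are made up for illustration and have nothing to do with Google's actual model:

```python
import numpy as np

# Hypothetical weekly data: flu-related search volume (arbitrary units)
# and the CDC-reported rate of influenza-like-illness (ILI) visits (%).
search_volume = np.array([1.2, 1.8, 2.5, 3.9, 5.1, 4.4, 3.0])
cdc_ili_rate  = np.array([1.0, 1.4, 1.9, 2.8, 3.6, 3.2, 2.3])

# Fit a simple linear model: estimated ILI rate = a * search_volume + b.
a, b = np.polyfit(search_volume, cdc_ili_rate, deg=1)

# Estimate the current week's flu level from this week's search volume.
this_week_searches = 4.7
print(f"Estimated ILI rate: {a * this_week_searches + b:.2f}%")
```

A spike in searches driven by news coverage rather than by illness inflates such an estimate, which is the failure mode Google described for early 2013.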
Bristol Researchers Work to Secure Next Generation Chip-Card Payment Technology
University of Bristol News (11/04/13)
University of Bristol researchers have mathematically proved that EMVCo's proposed protocol design for future EMV chip cards meets its security goals. "This is an important step in validating the technology we will all start to use in the future," says Bristol's Gaven Watson. "When the previous chip technology was designed people did not know how to mathematically prove that a protocol satisfied certain security goals. The science of cryptography has advanced and is now at a stage where this is possible and protocols that will be used in the real world can be fully analyzed." EMVCo is in the process of developing the specifications for the next-generation payment technology. The proposed protocol provides the new specification with a key agreement scheme based on elliptic-curve cryptography. "EMVCo is of the view that the new cryptographic algorithms and protocols that will be used to secure billions of EMV payment transactions should not only offer optimum performance but also receive the best security analysis that modern cryptology can provide," says EMVCo's Christina Hulka. A paper discussing the validation of the proposed protocol design will be presented at ACM's Conference on Computer and Communications Security in Berlin this week.
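The building block the new specification relies on, elliptic-curve key agreement, can be illustrated with a generic ECDH exchange. This sketch uses the Python `cryptography` package and a standard NIST curve purely as an example; it is not the EMV protocol itself, whose message flow and authentication steps are defined in EMVCo's specification:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates an ephemeral elliptic-curve key pair.
card_private = ec.generate_private_key(ec.SECP256R1())
terminal_private = ec.generate_private_key(ec.SECP256R1())

# Each side combines its private key with the other's public key;
# both arrive at the same shared secret without ever transmitting it.
card_secret = card_private.exchange(ec.ECDH(), terminal_private.public_key())
terminal_secret = terminal_private.exchange(ec.ECDH(), card_private.public_key())
assert card_secret == terminal_secret

# Derive a symmetric session key from the shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"example session").derive(card_secret)
print(session_key.hex())
```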
Solving the Tongue-Twisting Problems of Speech Animation
Technology Review (11/04/13)
BioVision Hierarchy (BVH) is the de facto standard for encoding body-motion data for use in motion-capture systems. "BVH has survived the company that created it and is now widely supported, presumably because it is simple and clearly defined, straightforward to implement, and human-readable," says Saarland University researcher Ingmar Steiner. However, although BVH is used to encode data from almost every form of motion capture, until now it has not been used to represent tongue articulation during speech. Steiner and his colleagues have developed a method for converting high-resolution tongue data into BVH format, combining data from several sources at the same time. The researchers demonstrated their approach on a standard database of existing tongue articulation recordings, including real-time magnetic resonance imaging, dental scans, and electromagnetic recordings. "This technique is by no means intended to provide an accurate model of tongue shapes or movements, as previous work using biomechanical models does," the researchers say. "Rather, the advantage here is the lightweight implementation...where realistic animation is more important than matching the true shape of the tongue."
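For readers unfamiliar with the format, a BVH file is a plain-text document with a HIERARCHY section describing a joint skeleton and a MOTION section listing per-frame channel values. The Python snippet below writes a toy two-joint "tongue" chain; the joint names, offsets, and values are invented for illustration and are not taken from the Saarland data:

```python
# Write a minimal, valid BVH file with a toy two-joint chain.
bvh = """HIERARCHY
ROOT TongueBack
{
  OFFSET 0.0 0.0 0.0
  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
  JOINT TongueTip
  {
    OFFSET 0.0 0.0 2.0
    CHANNELS 3 Zrotation Xrotation Yrotation
    End Site
    {
      OFFSET 0.0 0.0 1.0
    }
  }
}
MOTION
Frames: 2
Frame Time: 0.033333
0.0 0.0 0.0 0.0 0.0 0.0 0.0 5.0 0.0
0.0 0.1 0.0 0.0 2.0 0.0 0.0 10.0 0.0
"""

with open("tongue_example.bvh", "w") as f:
    f.write(bvh)
```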
Synaptic Transistor Learns While It Computes
Harvard University (11/01/13) Caroline Perry
Researchers at the Harvard School of Engineering and Applied Sciences (SEAS) have developed a transistor that behaves like a human brain's synapse, which could lead to a new type of artificial intelligence that is embedded in a computer's architecture. The synaptic transistor controls the flow of information in a circuit and physically adapts to changing signals. "The transistor we've demonstrated is really an analog to the synapse in our brains," says SEAS postdoctoral fellow Jian Shi. "Each time a neuron initiates an action and another neuron reacts, the synapse between them increases the strength of its connection." Shi says a system with millions of synaptic transistors and neuron terminals could enable extremely efficient parallel computing. The synaptic transistor effects change using oxygen ions that move in and out of the crystal lattice of a samarium nickelate film. The nickelate's varying concentration of ions controls its ability to transmit information on an electrical current, with the strength of the connection depending on the electrical signal's time delay. The device, embedded in a silicon chip, consists of a nickelate semiconductor between two platinum electrodes, next to a small container of ionic liquid. Shi says the transistor offers non-volatile memory, inherent energy efficiency, and the potential for seamless integration into existing silicon-based systems.
Countering Click Spam
UCSD News (CA) (11/01/13) Doug Ramsey
Researchers at the University of California, San Diego (UCSD), Microsoft Research India, and the University of Texas at Austin have developed ViceROI, an algorithm designed to catch click-spam in search ad networks. "We designed ViceROI based on the intuition that click-spam is a profit-making business that needs to deliver higher return on investment--ROI--for click-spammers than other ethical business models in order to offset the downside risk of getting caught," says UCSD's Vacha Dave. Until now, ad networks have typically responded to click-spam reactively, and the lack of transparency has often allowed click-spam to go undetected for years at a time. During testing, ViceROI flagged several hundred publishers engaging in click-spam of various sorts. "The ViceROI approach flags click-spam through all these mechanisms and ... is resilient against click-spammers using larger botnets over time," the researchers write in their paper, noting that the approach is "ranked among the best existing filters deployed by the ad-network today while being far more general." The main challenge is tracing where fraudulent clicks come from. "Even ad networks are reluctant to talk openly about what's being done to combat fraud in this area, because it will inevitably lead spammers to find new ways around new technologies put in place at the ad-network level," Dave says.
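The paper's core intuition -- that fraudulent publishers must show an unusually high return per user to make the fraud worthwhile -- suggests a simple anomaly check. The sketch below flags publishers whose per-user revenue sits far above the population median; the data and threshold are invented for illustration and are much simpler than the actual ViceROI algorithm:

```python
from statistics import median

# Hypothetical advertising revenue per unique user, by publisher.
revenue_per_user = {
    "news-site":    0.012,
    "recipe-blog":  0.009,
    "game-portal":  0.015,
    "shady-widget": 0.210,   # suspiciously high yield per user
}

def flag_high_roi(per_user: dict[str, float], factor: float = 5.0) -> list[str]:
    """Flag publishers earning far more per user than the typical publisher."""
    typical = median(per_user.values())
    return [pub for pub, value in per_user.items() if value > factor * typical]

print(flag_high_roi(revenue_per_user))   # ['shady-widget']
```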
Gimball: A Crash-Happy Flying Robot
Swiss Federal Institute of Technology in Lausanne (11/01/13)
Researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL) have developed Gimball, a spherical flying robot protected by an elastic cage that lets it absorb impacts and rebound from them. The robot keeps its balance using a gyroscopic stabilization system. "The idea was for the robot's body to stay balanced after a collision, so that it can keep to its trajectory," says EPFL's Adrien Briod. The stabilization system consists of a double carbon-fiber ring that keeps the robot oriented vertically, while the cage absorbs shocks as it rotates. Most robots navigate using a complex network of sensors, which enable them to avoid obstacles by reconstructing the environment around them, but the EPFL researchers say this approach has drawbacks. "The sensors are heavy and fragile," Briod notes. "And they can't operate in certain conditions, for example if the environment is full of smoke." Gimball is designed to handle the most difficult terrain. "Our objective was exactly that--to be able to operate where other robots can't go, such as a building that has collapsed in an earthquake," Briod says. "The on-board camera can provide valuable information to emergency personnel."
New Computing Model Could Lead to Quicker Advancements in Medical Research
Virginia Tech News (10/31/13) Lynn Nystrom
Virginia Tech researchers have developed data management and analysis software for data-intensive scientific applications in the cloud that could help speed up medical research. Virginia Tech professor Wu Feng is leading the research, which began in April 2010 when the U.S. National Science Foundation and Microsoft launched a collaborative cloud-computing agreement that ultimately funded 13 projects to help researchers integrate cloud technology into their work. Feng led a team in developing an on-demand, cloud-computing model. "Our goal was to keep up with the data deluge in the DNA sequencing space," Feng says. "Our result is that we are now analyzing data faster, and we are also analyzing it more intelligently." The model enables researchers worldwide to view the same data sets. "This cooperative cloud computing solution allows life scientists and their institutions easy sharing of public data sets and helps facilitate large-scale collaborative research," Feng says. His team built on this work by creating SeqInCloud and CloudFlow. SeqInCloud offers a portable cloud solution for next-generation sequence analysis that optimizes data management to improve performance and cloud resource use. Feng says CloudFlow enables the management of workflows, such as SeqInCloud, to "allow the construction of pipelines that simultaneously use the client and the cloud resources for running the pipeline and automating data transfers."
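The workflow idea described for CloudFlow -- a pipeline whose stages run partly on the client and partly in the cloud, with data transfers handled between them -- can be sketched generically. The stage names and the `run_local`/`run_remote` helpers below are hypothetical placeholders, not CloudFlow's actual API:

```python
from typing import Callable

# Hypothetical stage runners: in a real system these would execute a tool
# locally or submit it to cloud compute and stage the data accordingly.
def run_local(stage: str, data: str) -> str:
    print(f"[client] {stage}: {data}")
    return f"{data}->{stage}"

def run_remote(stage: str, data: str) -> str:
    print(f"[cloud]  {stage}: {data} (uploading inputs, downloading results)")
    return f"{data}->{stage}"

# A toy sequencing pipeline: each stage declares where it should run.
pipeline: list[tuple[str, Callable[[str, str], str]]] = [
    ("quality_filter",  run_local),    # cheap step stays on the client
    ("read_alignment",  run_remote),   # heavy step goes to cloud nodes
    ("variant_calling", run_remote),
    ("report",          run_local),
]

data = "sample_reads.fastq"
for stage, runner in pipeline:
    data = runner(stage, data)
```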
UTSA Researchers Develop Prototype Football Kicking Simulator
UTSA Today (10/30/13) KC Gonzalez
University of Texas at San Antonio (UTSA) researchers have developed the prototype components for a football kicking simulator designed to be a real-time training tool. The Football Kicking Simulation and Human Performance Assessment is a virtual training system that uses real-time wireless feedback and computer sensing to measure football kicking mechanics data. The researchers say the system gives kickers the ability to practice either on or off the field and receive the same kind of attention to detail experienced at a training camp. In addition, they note the quantitative data collected from the football dynamics and kicker's body motion can be used to predict the accuracy of a kick as well as provide visual feedback to maximize the kicking power while minimizing the risk of injury. "What sets our product apart from other kicking simulations is that we are using computer sensing and mathematical models to predict the football trajectory along with various training tools," says UTSA's Alyssa Schaefbauer. The researchers want to make the simulator available for coaches and football teams to use as a training tool. "The football kicking simulator is a perfect example of how engineering and science can make improvements beyond the scientific arena, such as football, that are of interest to the greater community," says UTSA professor Yusheng Feng.
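The trajectory-prediction component the researchers describe can be illustrated with basic projectile physics. The sketch below ignores drag and spin, which a real kicking model would need, and the launch values are made up for illustration:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def kick_trajectory(speed_mps: float, angle_deg: float):
    """Range and hang time of a drag-free kick launched from ground level."""
    angle = math.radians(angle_deg)
    hang_time = 2 * speed_mps * math.sin(angle) / G
    distance = speed_mps * math.cos(angle) * hang_time
    return distance, hang_time

# Example launch conditions as they might be measured by the sensors.
distance, hang_time = kick_trajectory(speed_mps=25.0, angle_deg=40.0)
print(f"Distance: {distance:.1f} m, hang time: {hang_time:.2f} s")
```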
Addressing the Threat of Silent Data Corruption
HPC Wire (10/31/13) Tiffany Trader
Researchers at the Los Alamos National Laboratory (LANL) are conducting a large-scale field study of incorrect results on high-performance computing platforms to gain a better understanding of soft errors and silent data corruption (SDC). Soft errors can lead to unintended changes in the state of an electronic device that alter stored information without destroying functionality, says LANL's Sarah E. Michalak. SDC is a particularly troubling type of soft error that occurs when a computing system delivers incorrect results without logging an error. In some cases that can lead to incorrect scientific results; in others, the application can hang for a long time or even indefinitely. "Silent data corruption has the potential to threaten the integrity of scientific calculations performed on high-performance computing platforms and other systems," the researchers note in a recent paper. SDC can be caused by many factors; the main culprits include temperature and voltage fluctuations, particle strikes, manufacturing residues, oxide breakdown, and electrostatic discharge. The researchers note that new technologies in which clock frequencies, transistor counts, and noise levels increase while feature sizes and voltages shrink could raise the incidence of SDC, leading to reliability problems.
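Because the system itself reports no error, one common way to expose silent corruption is redundant execution: run the same computation twice (or on two nodes) and compare fingerprints of the results. A minimal illustrative sketch, not drawn from the LANL study:

```python
import hashlib
import numpy as np

def checksum(array: np.ndarray) -> str:
    """Fingerprint a result so two runs can be compared cheaply."""
    return hashlib.sha256(array.tobytes()).hexdigest()

def simulate_step(seed: int) -> np.ndarray:
    """Stand-in for one deterministic step of a scientific computation."""
    rng = np.random.default_rng(seed)
    state = rng.random(1_000_000)
    return np.sqrt(state) * 2.0

# Run the same deterministic step twice; silent corruption in either run
# would make the fingerprints disagree even though no error was logged.
first  = checksum(simulate_step(seed=42))
second = checksum(simulate_step(seed=42))
print("results match" if first == second else "possible silent data corruption")
```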
Abstract News © Copyright 2013 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]