Association for Computing Machinery
Welcome to the July 31, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.

HEADLINES AT A GLANCE


Obama Rolls Out Plan to Boost U.S. Supercomputer Prowess
IDG News Service (07/30/15) John Ribeiro

U.S. President Barack Obama signed an executive order this week to coordinate government agencies, academia, and the private sector to develop high-performance computing (HPC) systems. One goal of the National Strategic Computing Initiative (NSCI) is to accelerate the delivery of "a capable exascale computing system that integrates hardware and software capability to deliver approximately 100 times the performance of current 10-petaflop systems across a range of applications representing government needs." Another goal will be to provide a workable pathway for continued supercomputing development over the next 15 years, even after the limits of current semiconductor technology are reached. "Maximizing the benefits of HPC in the coming decades will require an effective national response to increasing demands for computing power, emerging technological challenges and opportunities, and growing economic dependency on and competition with other nations," Obama wrote in the order. The U.S. Department of Energy, the Department of Defense, and the National Science Foundation will lead the NSCI, with each agency concentrating on a different HPC area. They will coordinate with the Intelligence Advanced Research Projects Activity and the National Institute of Standards and Technology. The former will focus on substitutes for standard semiconductor computing technologies, and the latter will be dedicated to measurement science.
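
For scale, the quoted exascale target is straightforward arithmetic (a back-of-the-envelope restatement, not a figure from the order itself):

    100 \times 10\,\text{petaflops} = 10^{3}\,\text{petaflops} = 1\,\text{exaflops} = 10^{18}\ \text{floating-point operations per second}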


Facebook Taking Open Source Software Ethos to Drones
The New York Times (07/30/15) Quentin Hardy

Facebook is undertaking advanced telecommunications development projects via an open source framework in which the company makes large volumes of the data and insights it generates publicly available. "Getting people to adopt the Internet faster is our end goal," says Facebook's Jay Parikh. One project, Aquila, is building out autonomous drones as part of a collaboration between specialists in areas such as solar power and battery materials, space laser scientists, aviation engineers, and Facebook technologists. Facebook successfully increased space laser data transmission rates from about 2 Gbps to 10 Gbps partly thanks to collaboration with fiber-optic experts. Yael Maguire with Facebook's Connectivity Lab says this was achieved with methods that include detecting information through different wavelengths. "We've mashed up a lot of industry experts," he notes. "There's a lot of learning they get by coming to Facebook." Moreover, the team has used Facebook's acumen in analyzing data such as family snapshots for tasks that include analyzing satellite images of villages to accurately assess populations and economic activity. The lasers' speed may be upgraded to 100 Gbps, and they are expected to be in two-way communication with small ground-based antennas, which will then send and receive wireless data with inexpensive phones.


Microsoft Takes You Through the Steps in HoloLens Video Creation
Phys.org (07/29/15) Nancy Owano

Microsoft researchers have released a video that guides viewers through the system they use for creating high-quality free viewpoint video, which can be compressed to bitrates suitable for consumer applications. The researchers also published a paper detailing the technique used to record live-action holographic video for the HoloLens headset. "We start by capturing performances with 106 synchronized RGB and infrared (IR) cameras on a calibrated green-screen stage," the researchers say. "We subtract the background to compute silhouettes, then schedule the data for processing." The first step generates a three-dimensional point cloud starting with stereo depth maps from RGB and IR pairs. Points from the depth maps are merged and refined locally, after which the cloud is refined globally using a multi-view stereo algorithm. The next step creates a mesh per frame, with surface reconstruction modified to produce meshes constrained by the silhouettes. The researchers apply topological de-noising as part of their cleanup, reaching a stage in which they have 1 million triangles per frame. They then identify which areas contain perceptually important details such as hands or faces, and preserve the geometry and texture in those areas.
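
As a rough illustration of the first stage, back-projecting a depth map into a camera-space point cloud with pinhole intrinsics looks like the sketch below (toy intrinsics and depth values; the actual system fuses stereo depth maps from 106 calibrated RGB/IR cameras):

    import numpy as np

    def depth_to_points(depth, fx, fy, cx, cy):
        """Back-project an HxW depth map (meters) into an Nx3 camera-space point cloud."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]  # keep only pixels with valid (foreground) depth

    depth = np.full((480, 640), 2.0)  # hypothetical depth map: everything 2 m away
    cloud = depth_to_points(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
    print(cloud.shape)  # (307200, 3)

Each camera's cloud would then be transformed into a shared world frame using the stage calibration before the local and global refinement steps described above.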


How to Help Self-Driving Cars Make Ethical Decisions
Technology Review (07/29/15) Will Knight

Stanford University professor Chris Gerdes and California Polytechnic State University professor Patrick Lin are investigating the ethical ramifications of self-driving vehicles. They say this work is important as the technology matures and cars become more adept at interpreting complex scenarios, since automated driving systems may have to make split-second decisions that raise ethical issues. Gerdes offers the example of a self-driving car confronted with a child dashing into the street, forcing it to decide between hitting the child or swerving into an oncoming van. "If it would save the child's life, could we injure the occupant of the vehicle?" he asks. "These are very tough decisions that those that design control algorithms for automated vehicles face every day." Gerdes urged researchers, automotive engineers, and automotive executives at a recent Stanford workshop to be ready to consider the ethical implications of the technology they are developing. University of South Carolina professor Bryant Walker Smith says automotive engineering already incorporates numerous ethical decisions, noting for example that airbags carry the innate assumption "that you're going to save a lot of lives, and only kill a few." He also contends deploying self-driving technology too slowly could itself be considered unethical, given the current number of deadly traffic accidents involving human error.


P2PVALUE Project Drives Peer Production Forward
CORDIS News (07/28/15)

Linux, Wikipedia, and OpenStreetMap were all developed through commons-based peer production (CBPP), and a European Union-funded project is now investigating the factors and tools that can encourage the continued development and use of the CBPP model. The P2PVALUE project is exploring the value of peer-to-peer production projects, mapping the spread of initiatives, and investigating the conditions that favor collaborative creation efforts. The project has developed software--an open source digital platform built on a decentralized architecture--that offers some of the features of Google Drive but without the privacy and transparency issues. P2PVALUE also has developed a directory of information on existing CBPP communities and their main characteristics, which will serve as a key resource for research in the emerging field. Moreover, the project is working on the theory and policy aspects of CBPP. "Our ideas for future work include the automatic collection of data from the platform and other sources, data visualization, and more mechanisms to increase users' contributions," says the University of Surrey's David Rozas.


Shoring Up Tor
MIT News (07/28/15) Larry Hardesty

Researchers at the Massachusetts Institute of Technology and the Qatar Computing Research Institute (QCRI) have demonstrated a vulnerability in Tor's design, which they plan to discuss next month at the Usenix Security Symposium in Washington, D.C. The researchers found an adversary could infer a hidden server's location by analyzing the traffic patterns of encrypted data passing through a single computer in the all-volunteer Tor network. Tor's routing scheme wraps data in successive layers of encryption, a technique known as onion routing, and hosts can enlist Tor routers as "introduction points" without revealing their location, or as "rendezvous points" through which a host identifies another router in the network and builds a second circuit. The researchers devised an attack on this system in which an adversary's computer serves as the guard on a Tor circuit. Because guards are selected at random, an adversary that connected enough computers to the Tor network would have high odds that at least some of them would be well-positioned to snoop. The researchers showed that by looking for patterns in the number of packets passing in each direction through a guard, machine-learning algorithms could determine with 99-percent accuracy whether the circuit was an ordinary Web-browsing circuit, an introduction-point circuit, or a rendezvous-point circuit. QCRI's Mashael AlSabah says such attacks could be thwarted by masking the traffic sequences so that all circuit types appear the same.
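
A hedged sketch of the kind of traffic-fingerprinting step the summary describes, using synthetic per-window packet counts as features and an off-the-shelf classifier (the paper's actual features, model, and data differ):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    def synth_circuit(kind, windows=50):
        # Hypothetical feature vector: packets observed outbound and inbound at the
        # guard in each time window; the made-up rates simply give each circuit
        # type a distinguishable traffic shape.
        rates = {"web": (30, 25), "introduction": (4, 4), "rendezvous": (20, 40)}[kind]
        return np.concatenate([rng.poisson(rates[0], windows),
                               rng.poisson(rates[1], windows)])

    kinds = ["web", "introduction", "rendezvous"]
    X = np.array([synth_circuit(k) for k in kinds for _ in range(300)])
    y = np.array([k for k in kinds for _ in range(300)])

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    print(f"circuit-type accuracy on synthetic data: {model.score(X_test, y_test):.2f}")

The defense AlSabah describes amounts to removing exactly this signal: if every circuit type produces the same packet-count sequence, the classifier has nothing to learn from.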


Spintronic Devices Possible Without Magnetic Material
IEEE Spectrum (07/28/15) Dexter Johnson

Argonne National Laboratory researcher Stephen Wu has found it may be possible to generate spin current from insulators without a magnetic material, a discovery that could lead to the development of more powerful computers. The discovery was made as Wu was examining different materials to produce smaller spintronic devices and provide greater control over the thermal gradients that must be applied to the material to generate a spin current. Wu's discovery involved yttrium iron garnet (YIG) on a substrate of paramagnetic gadolinium gallium garnet (GGG). Because GGG is a paramagnet and not a ferromagnet, Wu did not expect to see any spin current, as a paramagnet does not generate a magnetic field. However, Wu and other researchers observed the spin current was stronger in the GGG than in the YIG. "We think that there may be other new physics working here," says Argonne physicist Anand Bhattacharya. The researchers see an opportunity to push ahead the state of the art in spintronics even though the underlying physics is not fully understood. "In a spintronic device you don't have to use a ferromagnet," Wu says. "You can use either a paramagnetic metal or a paramagnetic insulator to do it now."


Study Uncovers Unsupervised Learning Framework for Image Sentiment Analysis
CIO Australia (07/28/15) Rebecca Merrett

A new study from Arizona State University details the Unsupervised Sentiment Analysis (USEA) framework, a scheme that combines textual and visual data in a single model for learning to analyze sentiment across large numbers of social media images. "In order to utilize the vast amount of unlabeled social media images, an unsupervised approach would be much more desirable," the researchers note. USEA infers sentiment by blending visual data with accompanying textual data. Because textual data is often incomplete, with few tags, or overabundant with irrelevant comments, relying on it alone is problematic for sentiment analysis. The researchers instead used the accompanying textual data to provide semantic information about the images and so facilitate unsupervised learning. They compiled 140,221 images from Flickr users and 131,224 from Instagram users, and constructed a framework to classify images as positive, negative, or neutral by examining image captions and associated comments. USEA extracted visual features from the images via large-scale visual attribute detectors, and formed text-based features using term frequency with stop-word filtering. The researchers note the framework outperformed other sentiment-analysis algorithms such as Senti API; they also observe that although deep-learning strategies are effective, they still primarily operate within a supervised learning framework.
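
To make the fusion idea concrete, here is a minimal sketch of clustering images by concatenated text and visual features (toy data throughout; this stands in for, and is not, the authors' USEA algorithm):

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.cluster import KMeans

    captions = [
        "beautiful sunset at the beach, so happy",
        "terrible day, everything went wrong",
        "a photo of a building downtown",
    ]
    # Term-frequency text features with English stop words removed, as described.
    text_features = CountVectorizer(stop_words="english").fit_transform(captions).toarray()

    # Stand-ins for visual attribute detector outputs (hypothetical scores).
    visual_features = np.array([
        [0.9, 0.1, 0.2],
        [0.1, 0.8, 0.3],
        [0.3, 0.2, 0.9],
    ])

    # Concatenate the two views and cluster without any sentiment labels.
    fused = np.hstack([text_features, visual_features])
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(fused)
    print(labels)  # cluster ids standing in for positive/negative/neutral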


Scientist Working to Make Computers Human
New Indian Express (India) (07/27/15) Papiya Bhattacharya

Partha Pratim Talukdar, a researcher at the Indian Institute of Science in Bangalore's Supercomputer Education and Research Center, is working with neuroscientists to understand how the brain searches its store of knowledge while reading a text document, using pre-acquired background knowledge to enrich a person's understanding of the content. Talukdar believes similar background knowledge would improve the comprehension of computers, and that the brain's cognitive processes for handling language can be borrowed or adapted to design algorithms for them. He says he wants "to make the knowledge present in Web pages, blogs, and emails accessible to these machines and computers automatically, with minimum human input." Background knowledge available to computers would, for example, enable them to perform automated tasks such as translating a document from one language to another. To that end, Talukdar is using brain-imaging techniques to study how the brain retrieves knowledge during reading, with the aim of informing the design of such algorithms.


CHAMELEON: Cloud Computing Testbed Goes Into Full Production
Texas Advanced Computing Center (07/28/15) Faith Singer-Villalobos

The Computation Institute at the University of Chicago, Argonne National Laboratory (ANL), and the Texas Advanced Computing Center (TACC) at the University of Texas at Austin announced Chameleon, an experimental testbed that enables the academic research community to develop and experiment with novel cloud architectures and pursue new applications of cloud computing. The Chameleon hardware will ultimately consist of 650 cloud nodes with 5 petabytes of storage and a 100-Gbps network between the sites. Chameleon will enable users to test new virtualization technologies that enhance the reliability, security, and performance of cloud computing. "Chameleon provides a platform for computer scientists and other researchers to explore techniques and tools to make cloud computing systems and future computing platforms more effective," says TACC executive director Dan Stanzione. Chameleon includes persistent infrastructure clouds and provides "bare metal" provisioning of hardware, in which users can specify and modify the full software stack on which they experiment. Researchers can use Chameleon to mix and match hardware, software, and networking components and test their performance. Chameleon can be used for "everything from big data to big compute, exploring both homogenous and heterogeneous hardware capabilities, and accommodating a wide range of user skills from research to education," says Chameleon principal investigator Kate Keahey, a computer scientist at ANL.


Smart Mirror Monitors Your Face for Telltale Signs of Disease
New Scientist (07/27/15)

A new mirror can assess the health of someone looking into it by analyzing facial expressions, fatty tissue, and how flushed or pale the person is. Wize Mirror uses facial-recognition software that looks for telltale markers of stress or anxiety. The device also uses gas sensors to sample the user's breath and check for certain compounds, three-dimensional scanners to analyze face shape for weight gain or loss, and multispectral cameras to estimate heart rate and hemoglobin levels. The software takes about a minute to analyze the data, then produces a score indicating how healthy the user appears and displays personalized information on how to make health improvements. A consortium of researchers and industry partners from European Union (EU) countries developed the technology with EU funding. The National Research Council of Italy is coordinating the project and views the device as a tool that could help address long-term health issues such as heart disease and diabetes. Clinical trials of the device are scheduled to begin next year.


Cybersecurity Research Institute Receives $1.73B in DOD Funding
FedScoop (07/28/15) Grayson Ullman

The U.S. Department of Defense (DOD) says it has renewed its contract with the Software Engineering Institute (SEI) at Carnegie Mellon University, a federally funded research and development center chartered to study cybersecurity and software engineering. The five-year extendable contract has a ceiling of $1.73 billion and will enable SEI to continue its work developing innovative solutions to cyberthreats, which range from common vulnerability exploits to nation-state attacks. SEI is one of 10 DOD research and development centers nationwide that work closely with other labs. Its areas of interest span the cyber spectrum, from insider threat negation and defensive software architecture to the development of more secure coding practices. "Think about where we've come in terms of software," says SEI deputy director Robert Behler. "Software adds a lot of functionality to anything: the new cars we're driving are software-defined." Behler also notes, "with that software comes a lot of the baggage. The more software, the more complexity; the more complexity, the more vulnerabilities." Behler warns the struggle to develop effective best practices and defenses against cyberthreats will only grow more complex with time.


Turing Award-Winner Stonebraker on the Future of Taming Big Data
Forbes (07/29/15) Gil Press

In a lecture at the 9th Annual Massachusetts Institute of Technology (MIT) Chief Data Officer & Information Quality Symposium, 2014 ACM A.M. Turing Award recipient Michael Stonebraker discussed how the future of big data usage hinges on several factors. One factor, which he and his collaborators are attempting to address, is the need to exploit falling computer memory costs to store data longer and retrieve it at faster speeds. Another key challenge Stonebraker's team is concentrating on solving is what he calls "big analytics on big volumes of data." Stonebraker says the growing need for running complex analytics on the increasing volumes of data leads to an "array database" solution supporting sophisticated statistical procedures that are beyond the capabilities of table-based, traditional databases. Stonebraker also foresees data integration as a future big data trend, which he terms the "Big Variety" problem. He argues the Extract-Transform-Load data integration problem lacks scalability, and he founded the Tamr startup in 2013 to develop a combined solution of automated machine learning and crowdsourcing of domain experts. Stonebraker and colleagues devised a method for masking data silos by superimposing on them a software layer that adjusts to the constantly changing semantic environment of the organization, based on a human-computer collaborative process.
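
A small illustration of why array-shaped analytics strain table-based databases: a routine statistical procedure such as a covariance matrix is a single call in array form, but expressed as SQL over rows it requires on the order of n-squared pairwise aggregations (numpy sketch for illustration only, not any specific array-database product):

    import numpy as np

    rng = np.random.default_rng(0)
    observations = rng.normal(size=(10_000, 50))  # 10,000 rows by 50 variables

    # One call on the array form; the table-based equivalent needs a pairwise
    # aggregate for each of the 50 x 50 variable combinations.
    cov = np.cov(observations, rowvar=False)
    print(cov.shape)  # (50, 50)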


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]