Association for Computing Machinery
Welcome to the August 31, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.

HEADLINES AT A GLANCE


Pursuing Electronics That Bend, Pentagon Advances Partnership With Tech Firms
The Wall Street Journal (08/28/15) Gordon Lubold

U.S. Defense Secretary Ash Carter has announced a Pentagon project to develop flexible electronic devices with Silicon Valley companies and scientists. He says the Pentagon will grant $75 million in seed money to develop a Flexible Hybrid Electronics Manufacturing Innovation Hub in San Jose, CA. "By seamlessly printing lightweight, flexible structural integrity sensors right onto the surfaces of our ships and aircraft, or folding them into cracks and crevices where rigid circuit boards and bulky wiring could never fit, we'll be able to have real-time damage reports--making the stuff of science fiction into reality," Carter says. He also notes soldiers could use sensors and electronic equipment embedded in their attire, and the technology could be used to develop smart prosthetics for wounded troops with "the full flexibility of human skin." Carter says the tech industry has long been hesitant to bring its work to the Pentagon, limiting the sharing of technologies that could benefit both sectors. "We're drilling tunnels through that wall that separates government from scientists and commercial technologists--making it more permeable so more of America's brightest minds can contribute to our mission of national defense, even if only for a time," he says.


This Girls' Summer Camp Could Help Change the World of AI
Wired (08/31/15) Davey Alba

A new summer camp at Stanford University aims to help bring more women into computer science, with an emphasis on artificial intelligence (AI). SAILORS, the Stanford Artificial Intelligence Laboratory's Outreach Summer program, is the first AI summer camp for girls in the U.S. The camp, launched by Stanford graduate Olga Russakovsky, who spent eight years at the school's AI lab, has the support of more than 40 Stanford professors, postdoctoral researchers, and graduate students, as well as the sponsorship of several leading tech firms. Russakovsky modeled the program on her experiences attending, and later serving as a counselor at, Stanford's math camp as a teenager. After pulling together a group of volunteers, Russakovsky began advertising the camp on Stanford's website. The school had to stop advertising the program after it drew 300 applications in just three days, of which it was able to accept only 24. The camp curriculum centers on AI and tasks campers with using algorithms to study DNA, applying natural-language processing to Twitter, and programming simple robots. Campers also get the chance to meet prominent AI researchers at the university, as well as female role models in the industry, with field trips to locations such as Google headquarters and the Computer History Museum in Silicon Valley.


Research Aims for Bitcoin Science to Catch Up With Growth in Usage
The Baltimore Sun (08/28/15) Scott Dance

Bitcoin burst onto the scene in 2009 "sort of out of nowhere," says University of Maryland, College Park professor Jonathan Katz, who notes the technology was quickly adopted by people before any serious analysis was done. "It mushroomed into this thing where thousands of people are using it and still people don't understand the precise security properties of it," Katz says. However, the U.S. National Science Foundation now is funding research at several universities, including the University of Maryland, into Bitcoin and other crypto-currencies to learn more about the technology. After a surge of interest in the crypto-currency during 2013, the use of Bitcoin and its value have largely leveled off, but even the dedicated community built around Bitcoin is struggling to understand it. For example, there currently is a controversy over the size of blocks, the pieces of data that record Bitcoin transactions and are central to the crypto-currency's security. Katz says University of Maryland researchers are examining the security mechanisms that underlie Bitcoin. They also plan to examine how Bitcoin's transaction technology could be incorporated into computer-coded contracts, which could have applications in peer-to-peer micropayments and hedging financial portfolios. "We want to allow people to understand what Bitcoin provides, and it's up to people's own personal preferences whether they choose to use it," Katz says.
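To make the role of blocks concrete, here is a toy sketch in Python (illustrative only; real Bitcoin blocks use a binary header and a Merkle tree of transactions) showing how each block commits to its contents and to the previous block's hash, which is why tampering with an old transaction breaks every later block:

```python
import hashlib
import json

def sha256d(data: bytes) -> str:
    """Bitcoin hashes twice with SHA-256; this mimics that convention."""
    return hashlib.sha256(hashlib.sha256(data).digest()).hexdigest()

class Block:
    """Toy block: just a previous-block hash plus a transaction list."""
    def __init__(self, prev_hash: str, transactions: list):
        self.prev_hash = prev_hash
        self.transactions = transactions
        self.hash = sha256d(
            json.dumps({"prev": prev_hash, "txs": transactions}).encode())

# Each block commits to the one before it, so altering an old
# transaction changes that block's hash and invalidates every link
# after it; this chaining is central to Bitcoin's security.
genesis = Block("0" * 64, ["coinbase -> alice 50 BTC"])
block1 = Block(genesis.hash, ["alice -> bob 10 BTC"])
assert block1.prev_hash == genesis.hash
```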


Defusing Photobombs: Researchers Find Ways to Remove Distractions From Photos
Princeton University (08/27/15) John Sullivan

Princeton University and Adobe scientists have collaborated on a one-click system for automatically eliminating distracting elements from photos, which was presented at the 2015 Computer Vision and Pattern Recognition Conference in Boston. To identify these distractors, the researchers used two datasets of images, employing Amazon's Mechanical Turk to vet the first dataset of 1,073 images and mark distracting elements. "The nice part about this set is that we have multiple entries per image," notes Princeton graduate student Ohad Fried. "The problem is that these are people who don't necessarily care about the images." For the second set of more than 5,000 images, the researchers used the iPhone app Fixel to flag the alterations photographers had made to images they then exported, shared, or saved. Fried says this data was applied toward the creation of specific distractor detectors, such as those designed to remove cars and faces. The team also produced a weighting system that assigned values to distinct configurations of colors and shapes in photos, and then developed programs to teach the software to identify and eliminate elements with the characteristics of distractors.
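The weighting system can be pictured with a schematic sketch like the one below; the hand-picked region features and the plain logistic-regression learner are stand-ins for the paper's actual descriptors and weights, which are more elaborate:

```python
import numpy as np

def region_features(region: dict) -> np.ndarray:
    # Hypothetical features: mean color, size, distance from the image
    # center, and edge density (the kinds of color/shape cues the
    # abstract describes being weighted).
    return np.array([region["mean_r"], region["mean_g"], region["mean_b"],
                     region["area"], region["dist_to_center"],
                     region["edge_density"]], dtype=float)

def train_distractor_scorer(regions, labels, lr=0.1, epochs=200):
    """Fit per-feature weights so high scores mark likely distractors."""
    X = np.array([region_features(r) for r in regions])
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)  # normalize features
    y = np.array(labels, dtype=float)                   # 1 = distractor
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))          # predicted scores
        w -= lr * X.T @ (p - y) / len(y)                # gradient descent
        b -= lr * float(np.mean(p - y))
    return w, b

# Crowd-labeled example regions (values are made up for illustration).
regions = [
    {"mean_r": 210, "mean_g": 40, "mean_b": 35, "area": 900,
     "dist_to_center": 0.9, "edge_density": 0.7},
    {"mean_r": 95, "mean_g": 130, "mean_b": 80, "area": 52000,
     "dist_to_center": 0.1, "edge_density": 0.2},
]
w, b = train_distractor_scorer(regions, labels=[1, 0])
```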


Why the World's Top Computing Experts Are Worrying About Your Data
IDG News Service (08/26/15) Katherine Noyes

Many of the world's top computer science experts met last week at the Heidelberg Laureate Forum to determine how the widespread collection of data about consumers can be prevented from causing harm in the future. Much of today's data collection happens on the websites people visit, and that can spill over into surveillance by governments, according to the Electronic Frontier Foundation's (EFF) Jeremy Gillula. Most of the participants at the forum agreed there is a need for better mechanisms for protecting individuals' privacy, as well as for more transparency on the part of those collecting and using the data. "We need a policy approach" that offers not just privacy by design, but privacy by default, says Carnegie Mellon University professor Alessandro Acquisti. Although public policy and legislation are one approach to the problem, some experts do not see much reason for optimism in that direction. The EFF already has published a "Do Not Track" policy that organizations can adopt, and it is working on Privacy Badger, a browser extension for Firefox and Chrome that blocks spying ads and invisible trackers. The EFF also advocates end-to-end encryption because government agencies cannot conduct mass surveillance if all the data is encrypted, according to Gillula.
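Privacy Badger's approach is heuristic rather than blocklist-based: a third-party domain that appears to track the user across several unrelated sites gets blocked. A simplified sketch of that idea (the bookkeeping is reduced to its bare bones):

```python
from collections import defaultdict

# A third-party domain seen tracking on this many distinct sites is
# treated as a tracker; 3 matches Privacy Badger's documented default.
TRACKING_THRESHOLD = 3
seen_on_sites = defaultdict(set)   # tracker domain -> first-party sites

def observe_request(first_party: str, third_party: str, sets_cookie: bool):
    """Record evidence of cross-site tracking from one page load."""
    if sets_cookie and third_party != first_party:
        seen_on_sites[third_party].add(first_party)

def should_block(third_party: str) -> bool:
    return len(seen_on_sites[third_party]) >= TRACKING_THRESHOLD

observe_request("news.example", "ads.tracker.example", sets_cookie=True)
observe_request("shop.example", "ads.tracker.example", sets_cookie=True)
observe_request("blog.example", "ads.tracker.example", sets_cookie=True)
assert should_block("ads.tracker.example")
```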


Exploring Large Data for Scientific Discovery
HPC Wire (08/27/15) Scott Gibson

In his lecture at the recent XSEDE15 conference, University of Utah researcher Valerio Pascucci said obtaining clean, guaranteed, unprejudiced results in data analyses demands highly interdisciplinary, multi-scale collaboration and methods that unify the math and computer science underlying the applications used in physics, biology, and medicine. He noted that producing techniques that leave little room for heuristics and express a formalized mathematical strategy requires cross-pollination between, and integration of, data management and analysis, which have traditionally been conducted by different communities. Collaboration is thus a necessity for a successful supercomputing center or cyberinfrastructure, according to Pascucci. He stressed the value of an on-the-fly processing platform for managing large datasets, because researchers must be able to reach decisions with incomplete information. Pascucci cited his team's ViSUS framework, which processes large-scale scientific data with high-performance selective queries on multiple terabytes of raw data, as a case in point. ViSUS enables processing on various devices via its integration with progressive streaming methods. Pascucci said that, in the overall vision of coping with massive data volumes, integrated management, analysis, and visualization of large data can trigger a cycle of collaboration, which can help drive the work, provide formal theoretical approaches, and supply feedback to specific disciplines.
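The on-the-fly idea can be illustrated with a coarse-to-fine subsampling sketch; ViSUS's actual hierarchical (HZ-order) data layout is far more sophisticated, but the payoff is the same, namely that usable answers arrive long before the full-resolution data has been read:

```python
import numpy as np

def progressive_query(data: np.ndarray, levels: int = 4):
    """Yield successively finer strided views of a large 2D field."""
    for level in range(levels, -1, -1):
        stride = 2 ** level
        yield stride, data[::stride, ::stride]

field = np.random.rand(1024, 1024)   # stand-in for terabytes on disk
for stride, view in progressive_query(field):
    # A coarse estimate is available immediately; each pass refines it.
    print(f"stride {stride:2d}: mean = {view.mean():.4f}")
```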


Understanding the Google Computer, and Making It Better
CCC Blog (08/26/15) Helen Wright

Google engineers, in collaboration with researchers at Harvard University, recently presented detailed performance characteristics of the warehouse-scale computers that power Internet-based services and the cloud. The paper, presented at this summer's International Symposium on Computer Architecture, was based on a longitudinal study spanning tens of thousands of servers in Google data centers. Google workloads demonstrate significant diversity, which has been increasing over the years, writes Computing Community Consortium (CCC) executive council member Mark D. Hill. He also says they differ in some significant ways from the traditional SPEC benchmarks often seen in common architectural studies. Of note, workloads running on the Google computer complete significantly less useful work per cycle (lower instructions per cycle) on their processors, and also lose a significantly larger fraction of cycles to front-end pipeline stalls while fetching instructions. Moreover, "while there are no significant hot spots at the individual workload level, across all the workloads at the warehouse-scale computer level, a few common low-level functions in the software stack account for nearly one third of the total cycles!" Hill notes. He says the paper is a great opportunity for the CCC community to help shed more light on this class of computer systems.
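That finding is essentially a profile-aggregation result. The sketch below, with invented workload names and sample counts, shows the shape of the analysis: pool sampled cycles across workloads, then measure how much of the total goes to the low-level functions every workload shares:

```python
from collections import Counter

# Invented cycle-sample counts per function, per workload.
profiles = {
    "search":  Counter({"memcpy": 9, "malloc": 7, "rank_docs": 40}),
    "ads":     Counter({"memcpy": 6, "malloc": 5, "score_ads": 30}),
    "storage": Counter({"memcpy": 8, "malloc": 6, "compact": 25}),
}

fleet = Counter()                       # fleet-wide aggregate profile
for samples in profiles.values():
    fleet.update(samples)

# Functions that appear in every workload's profile.
shared = set.intersection(*(set(p) for p in profiles.values()))
tax = sum(fleet[f] for f in shared)
print(f"shared low-level functions: {sorted(shared)}")
print(f"fraction of total cycles:   {tax / sum(fleet.values()):.0%}")
# With these made-up numbers the shared functions (memcpy, malloc)
# consume about 30% of all cycles, mirroring the paper's observation.
```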


Our Number's Up: Machines Will Do Maths We'll Never Understand
New Scientist (08/26/15) Jacob Aron

Some mathematicians think the field of mathematics is approaching a limit beyond which it cannot yield further breakthroughs without computerized assistance. Although innovations such as proof assistants have helped, using a computer to prove math concepts is still so laborious that most mathematicians do not bother. One solution being pursued by Princeton University's Vladimir Voevodsky seeks to redefine math fundamentals to make math easier for computers to understand. He notes, for example, that because computers have multiple ways of defining certain mathematical objects in terms of sets, two computer proofs that use different definitions for the same thing will be incompatible. "The existing foundations of maths don't work very well if you want to get everything in a very precise form," Voevodsky says. His strategy substitutes types for sets, defining mathematical objects under stricter terms in which each concept has only one definition. The approach enables mathematicians to organize their ideas with a proof assistant directly, without having to translate them later. However, Voevodsky notes there are levels of mathematical complexity and abstraction far beyond what human mathematicians can attain, and computers may hold the answer. This leads to speculation that computers could eventually surpass human mathematicians to the point that they become unnecessary.
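For a flavor of what organizing ideas in a proof assistant looks like, here is a minimal sketch in Lean (Voevodsky's own formalizations used Coq, and his univalent foundations go far beyond an example like this): definitions are given as types, and the proof is checked mechanically by the machine.

```lean
-- An inductive type: natural numbers built from zero and successor.
inductive MyNat where
  | zero : MyNat
  | succ : MyNat → MyNat

-- Addition defined by recursion on the second argument.
def add : MyNat → MyNat → MyNat
  | m, MyNat.zero   => m
  | m, MyNat.succ n => MyNat.succ (add m n)

-- A machine-checked proof that n + 0 = n: it holds by definition,
-- so the reflexivity proof term `rfl` suffices.
theorem add_zero (n : MyNat) : add n MyNat.zero = n := rfl
```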


Computers Can Predict Schizophrenia Based on How a Person Talks
The Atlantic (08/26/15) Adrienne LaFrance

Disjointed patterns of speech are recognized as a hallmark of schizophrenia, and several studies have found doctors can use such speech patterns to predict whether a person will develop psychosis with about 79-percent accuracy. However, a new algorithmic analysis technique is even better at using speech patterns to predict psychosis. Researchers at Columbia University, the New York State Psychiatric Institute, and IBM's T.J. Watson Research Center used an automated speech-analysis program to predict, with 100-percent accuracy, whether at-risk young people would develop psychosis over the next two-and-a-half years. The program searched transcripts of their interviews with doctors for incidents of disorganized speech. "In our study, we found that minimal semantic coherence--the flow of meaning from one sentence to the next--was characteristic of those young people at risk who later developed psychosis," says IBM researcher Guillermo Cecchi. The study did have limitations: the sample size was only 34 people, and it examined only written transcripts. The researchers hope larger studies and analyses that also take into account other features, such as a patient's intonation, volume, and cadence, could yield even more revealing results.
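The coherence measure itself is straightforward to sketch: embed each sentence as a vector and track the similarity between consecutive sentences, with the minimum over the transcript as the score. The study used Latent Semantic Analysis embeddings; the crude bag-of-words vectors below only illustrate the shape of the computation:

```python
import numpy as np

def bow_vector(sentence: str, vocab: list) -> np.ndarray:
    """Crude bag-of-words embedding (the study used LSA instead)."""
    words = sentence.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def min_coherence(sentences: list) -> float:
    """Minimum cosine similarity between consecutive sentences."""
    vocab = sorted({w for s in sentences for w in s.lower().split()})
    vecs = [bow_vector(s, vocab) for s in sentences]
    sims = []
    for a, b in zip(vecs, vecs[1:]):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        sims.append(float(a @ b / denom) if denom else 0.0)
    return min(sims)

transcript = ["I walked to the store", "the store was closed",
              "purple engines dream loudly"]   # abrupt topic break
print(f"minimal semantic coherence: {min_coherence(transcript):.2f}")
```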


Uber Project May Improve Autonomous Cars' Vision
Technology Review (08/25/15) Tom Simonite

Uber is collaborating with the University of Arizona on a mapping and safety project for self-driving cars. Uber will work with the College of Optical Sciences, and a recent online post from the department describes technology available for licensing that could enable a smaller, less expensive version of the lidar scanners that sit on top of Google's prototype driverless cars. The technology steers its laser beams with a chip covered in tiny movable mirrors. "The promise of autonomous vehicles will need to utilize less complex and costly systems to become practical," the post says. University of Arizona professor Yuzuru Takashima is leading the effort to develop better technology for creating three-dimensional (3D) maps and to improve the way autonomous cars sense and interpret their surroundings. Uber's state-of-the-art mapping test vehicles will begin operating from the university's campus. Google's vehicles have been able to drive themselves more than 1 million miles on freeways and urban streets because of extremely detailed 3D maps, which include every light pole and curbstone.
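The link between beam steering and map building is direct: each steered-beam range return becomes one point in a 3D cloud. A back-of-the-envelope sketch (frames, units, and conventions vary by sensor):

```python
import math

def lidar_return_to_point(azimuth: float, elevation: float, rng: float):
    """Convert one steered-beam range return (angles in radians,
    range in meters) to an (x, y, z) point in the sensor frame."""
    x = rng * math.cos(elevation) * math.cos(azimuth)
    y = rng * math.cos(elevation) * math.sin(azimuth)
    z = rng * math.sin(elevation)
    return (x, y, z)

# A full sweep of returns accumulates into the point cloud from which
# detailed 3D maps (light poles, curbstones) are assembled.
cloud = [lidar_return_to_point(math.radians(a), math.radians(2.0), 15.0)
         for a in range(0, 360, 10)]
print(len(cloud), "points")
```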


Tapping Into the Emotional Internet
TechCrunch (08/23/15) Gareth Price

The coming years will see the advent of emotion-sensing wearable technology, and the sharing of the data these wearables generate will mark the era of the "emotional Internet." Initiatives are already afoot to measure human emotion in a way machines can interpret. For example, Microsoft Research's Visualization and Interaction for Business and Entertainment division is investigating human-computer interaction, and this effort has yielded a prototype scarf that uses sensors to determine the wearer's mood and, via Bluetooth, the moods of others. A major challenge for the emerging field of affective computing is setting standards for definitive emotional states. That milestone would make it possible to quantify such data, upload it to a computer, and produce devices that engage precisely with emotions. The increasing ubiquity of social media suggests people will inevitably share that emotional data with others, and emotion-sensing wearables will add another, deeper dimension to such sharing. With a better grasp of our psychological underpinnings, it will be possible to learn how to improve our emotional states as well as our relationships with others.


Estimate: Human Brain 30 Times Faster Than Best Supercomputers
IEEE Spectrum (08/26/15) Jeremy Hsu

Two Ph.D. students, from the University of California, Berkeley, and Carnegie Mellon University, are seeking to determine how soon artificial intelligence might exceed the capabilities of the human brain. One of Katja Grace and Paul Christiano's first efforts toward this goal is finding ways to compare the computational performance of the brain to that of supercomputers. They settled on traversed edges per second (TEPS), a measure of how quickly a computer can move information around within itself. Comparing the TEPS benchmarks of supercomputers to a rough estimate of how frequently neurons in the brain fire off electrical signals, the researchers found the human brain is at least as powerful as, and as much as 30 times more powerful than, IBM's Sequoia, the highest-ranked supercomputer. They used this estimate to peg the cost of human-equivalent computer performance at between $4,700 and $170,000 per hour, a level computer hardware might be able to achieve within seven to 14 years. The researchers say they hope to eventually build "a quantitative model of how fast artificial intelligence research should be expected to grow in an economy with increasing quantities of artificial intelligence available to do research."
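The comparison reduces to simple arithmetic. The inputs below are recalled from the study's write-up and Sequoia's published Graph500 result, so treat them as approximate assumptions rather than figures quoted in this abstract:

```python
# Sequoia's Graph500 benchmark result, roughly 2.3e13 TEPS (assumed).
SEQUOIA_TEPS = 2.3e13
# The researchers' estimated range for the brain, in TEPS (assumed).
BRAIN_TEPS_LOW, BRAIN_TEPS_HIGH = 0.18e14, 6.4e14

print(f"brain vs. Sequoia: {BRAIN_TEPS_LOW / SEQUOIA_TEPS:.1f}x "
      f"to {BRAIN_TEPS_HIGH / SEQUOIA_TEPS:.0f}x")
# Prints roughly 0.8x to 28x: at least comparable to Sequoia, and up
# to about 30 times more powerful, matching the estimate above.
```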


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe