Association for Computing Machinery
Welcome to the August 5, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, iPhones, and iPads.

HEADLINES AT A GLANCE


The National Strategic Computing Initiative Turns One
CCC Blog (08/04/16) Helen Wright

To mark the first anniversary of the U.S. National Strategic Computing Initiative (NSCI), the White House has published a blog detailing the progress federal agencies have made so far in setting a foundation for continued U.S. leadership in high-performance computing (HPC). Examples of such progress include the U.S. Department of Energy's investment in a slate of projects to continue developing the research base for "post-Moore's Law" computing. The projects include convenings, such as a workshop on neuromorphic computing and a science roundtable with representatives from national labs and universities on quantum-based sensors. Also notable is a solicitation from the U.S. National Science Foundation for the private sector to support new research to minimize the energy impacts of processing, storing, and transferring data within future computing systems. Meanwhile, the Quantum Science and Engineering Program, initiated by the U.S. Defense Department, is concentrating on accelerating critical technologies for quantum networks and sensors that are beyond the capabilities of classical systems. In addition, the U.S. National Institute of Standards and Technology announced a fall 2016 workshop on HPC security to identify security priorities and principles that should be incorporated into the NSCI.


Supercomputers Offer a Look at Cybersecurity's Automated Future
IDG News Service (08/05/16) Michael Kan

On Thursday, seven teams' supercomputers vied to demonstrate they could pinpoint software vulnerabilities without human assistance at the U.S. Defense Advanced Research Projects Agency's Cyber Grand Challenge, held at the DEF CON hacking conference in Las Vegas. Their ability to consistently detect simulated software bugs raises hope that such systems will be better and faster than humans at spotting and patching flaws. "We've proven that this automation is possible," says Cyber Grand Challenge program manager Mike Walker. The computers scored points over 96 rounds in which they authored streams of new code both to demonstrate that vulnerabilities existed in software and to patch flaws--all in a matter of minutes. The machines are still unproven in real-world environments, where the operating systems they would have to scan are larger and encompass an entire ecosystem of software products, unlike the simpler ones they encountered in the contest. However, ForAllSecure's David Brumley, who designed the supercomputer declared the preliminary winner of the competition, says such systems can make an immediate difference. A follow-up contest at DEF CON will pit the Cyber Grand Challenge's official winner against human players in a similar tournament.


AI Reads Your Tweets and Spots When You're Being Sarcastic
New Scientist (08/04/16) Edd Gent

Researchers at the University of Lisbon have developed a machine-learning system that can identify sarcasm on Twitter by examining a user's past tweets. The system uses those past tweets to build a picture of a person detailed enough to guess when they are being sarcastic. The researchers say the system predicts sarcasm with an accuracy of 87 percent, slightly better than existing methods; because it learns to detect sarcasm without human input, however, it should also be easier to use. In addition, the new approach should work for any language and any online platform where posting history is available. "The key innovation is realizing you can build a model of the user merely based on what they have said in the past," says University of Lisbon researcher Silvio Amir. Monash University researcher Mark Carman notes it should be straightforward to integrate the new approach with other types of social media analysis, such as tracking users' emotions or stock market trends. Analyzing sarcasm also could be a great help to marketers and customer service teams, as well as to virtual assistants such as Apple's Siri.
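
As a rough illustration of the idea (not the Lisbon group's actual model, which the article does not detail), a classifier can combine features drawn from a user's tweet history with features of the tweet being judged. The toy data, feature choices, and classifier below are all assumptions.

```python
# Minimal sketch: combine a user-history representation with the target tweet's
# text and train a binary sarcasm classifier. Illustrative assumptions throughout.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import scipy.sparse as sp

# Toy examples: (user_history, tweet, is_sarcastic)
data = [
    ("i love mondays said no one ever / stuck in traffic again great",
     "wow another meeting, living the dream", 1),
    ("great run this morning / beautiful weather today",
     "really enjoyed the concert last night", 0),
]

history_vec = TfidfVectorizer()
tweet_vec = TfidfVectorizer()

H = history_vec.fit_transform([d[0] for d in data])   # user-history features
T = tweet_vec.fit_transform([d[1] for d in data])     # target-tweet features
X = sp.hstack([H, T])                                 # "model of the user" + tweet content
y = [d[2] for d in data]

clf = LogisticRegression().fit(X, y)

def predict(history, tweet):
    """Score a new tweet given the author's posting history."""
    x = sp.hstack([history_vec.transform([history]), tweet_vec.transform([tweet])])
    return clf.predict(x)[0]

print(predict(data[0][0], "oh great, my flight is delayed again"))
```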


DARPA Funds IoT Malware Detection Project
NextGov.com (08/03/16) Frank Konkel

The U.S. Defense Advanced Research Projects Agency is funding research into wireless monitoring of Internet of Things (IoT) devices for malware by measuring their thermal outputs. The agency has awarded a $9.4-million grant to the Computational Activity Monitoring by Externally Leveraging Involuntary Analog Signals (CAMELIA) project as part of a wider-ranging program to address IoT security. "If an Internet of Things device is attacked, the insertion of malware will affect the program that is running, and we can detect that remotely," says CAMELIA principal investigator and Georgia Institute of Technology (Georgia Tech) professor Alenka Zajic. She notes the project outlines a system that will compile a before-and-after recording of each combination of IoT device and software to produce a database. "If somebody inserts something into the program loop, the peaks in the spectrum will shift and we can detect that," Zajic says. "This is something that we can monitor in real time using advanced pattern-matching technology that uses machine learning to improve its performance." Although this profiling method can pinpoint where in its program code an IoT device is executing with 95-percent accuracy, Georgia Tech professor Milos Prvulovic says malware detection is a much tougher challenge.
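
The "shifted peaks" idea Zajic describes can be illustrated with a simple spectral comparison: record a baseline emission spectrum for a device/program pair, then flag later recordings whose spectra no longer match. The sampling rate, synthetic signals, correlation test, and threshold below are illustrative assumptions, not the CAMELIA design.

```python
# Illustrative sketch only: compare a measured analog-emission spectrum against a
# stored baseline for a device/program pair, flagging a poor match as anomalous.
import numpy as np

def spectrum(signal, fs):
    """Normalized magnitude spectrum of a sampled analog emission."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs, spec / spec.max()

def looks_anomalous(baseline_spec, observed_spec, threshold=0.9):
    """Flag the device if the observed spectrum correlates poorly with the baseline."""
    corr = np.corrcoef(baseline_spec, observed_spec)[0, 1]
    return corr < threshold

fs = 1_000_000                       # assumed 1 MHz sampling of the side-channel signal
t = np.arange(0, 0.01, 1 / fs)

clean = np.sin(2 * np.pi * 120_000 * t)       # stand-in for the baseline program loop
infected = np.sin(2 * np.pi * 135_000 * t)    # loop timing altered by inserted code

_, base = spectrum(clean, fs)
_, obs = spectrum(infected, fs)
print("anomaly detected:", looks_anomalous(base, obs))
```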


NYU, Google Researchers Hack the Business Model of Adware, Scareware, and Other Unwanted Software
New York University (08/04/16)

Researchers at New York University (NYU) and Google have found that some of the affiliates that distribute unwanted advertising and software bundled with legitimate downloads may be complicit in the scheme, which provides layers of deniability about who installed the unwanted software. The research is the first analysis of the link between commercial pay-per-install (PPI) practices and the distribution of unwanted software. Commercial PPI is a monetization scheme in which third-party applications are bundled with legitimate applications in exchange for payment to the real software developer. When users install the package, they get the desired piece of software as well as a range of unwanted programs hidden within the download. One commercial PPI organization reported $460 million in revenue in 2014, according to the researchers. "Buried in the text that nobody reads is information about the bundle of unwanted software programs in the package you're about to download," says NYU professor Damon McCoy. The researchers studied four PPI affiliates by routinely downloading the software packages and analyzing the components. They determined the degree to which such downloads are personalized to maximize the chances their payload will be delivered.


Single-Pixel Camera Reaches Milestone, Mimicking Human Vision
Technology Review (08/03/16)

David Phillips at the University of Glasgow and colleagues have developed a way to use a single pixel to create images in which the central area is recorded in high resolution while the periphery is recorded in low resolution. Phillips says the technique mimics how animals produce a higher resolution in parts of their field of view. In animal vision systems, the retina has a central region of high visual acuity called the fovea surrounded by an area of lower resolution. The researchers have shown how to move the "foveated" region to follow objects within the field of view, which has some important potential applications, the most obvious being for imaging systems in which pixel arrays are not practical. However, Phillips says the team's single-pixel imaging system is more generally applicable because there is a trade-off between resolution and frame rate in all imaging systems. The technique can optimize this trade-off on the fly and enable attention to be focused on the parts of an image that are of greatest interest. Phillips notes machine-vision algorithms such as face recognition could be combined with the technique to make it much more powerful.
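
A minimal sketch of the foveated-sampling idea, under a simple distance-based rule that is an assumption rather than the Glasgow team's actual measurement patterns: partition the field of view into cells that are small near a chosen "fovea" and grow with distance from it, then render each cell at a single value.

```python
# Rough illustration of foveated resolution for a single-pixel-style imager:
# fine cells near the fovea, coarse cells toward the periphery.
import numpy as np

def foveated_cell_map(height, width, fovea_row, fovea_col, base=2, step=16):
    """Assign each pixel a cell label; cells are small near the fovea, large far away."""
    rows, cols = np.mgrid[0:height, 0:width]
    dist = np.hypot(rows - fovea_row, cols - fovea_col)
    # Cell edge length: 'base' pixels at the fovea, doubling every 'step' pixels away.
    cell = base * 2 ** (dist // step).astype(int)
    # Encode cell size in the label so cells of different sizes never collide.
    labels = cell * height * width + (rows // cell) * width + (cols // cell)
    return labels

def render_at_foveated_resolution(image, labels):
    """Replace each cell with its mean value, mimicking the reconstructed image."""
    out = np.zeros_like(image, dtype=float)
    for lab in np.unique(labels):
        mask = labels == lab
        out[mask] = image[mask].mean()
    return out

img = np.random.rand(64, 64)                               # stand-in scene
labels = foveated_cell_map(64, 64, fovea_row=32, fovea_col=32)
preview = render_at_foveated_resolution(img, labels)       # sharp center, coarse edges
```

Moving the fovea is just a matter of recomputing the cell map around a new center, which is the "follow objects within the field of view" behavior described above.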


Dot-Drawing With Drones
McGill Newsroom (08/04/16) Chris Chipello

Researchers at McGill University are developing tiny drones to create dot drawings, an artistic technique known as stippling. The drones are equipped with a miniature arm that holds a bit of ink-soaked sponge. As the drones hover near the canvas, internal sensors and a motion-capture system help position them to dab the ink in just the right places. To date, the flying robots have created stippled portraits of Alan Turing, Grace Kelly, and Che Guevara, among others. Each drawing consists of a few hundred to a few thousand black dots of varying sizes. McGill professor Paul Kry came up with the idea a few years ago as a way to do something about the blank hallways and stairwells in campus buildings. "I thought it would be great to have drones paint portraits of famous computer scientists on them," Kry says. He speculates that, as the research continues, larger drones could eventually be deployed to paint murals on hard-to-reach outdoor surfaces, including curved or irregular ones.
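
One simple way to turn an image into stipple targets, offered purely as an illustration rather than the McGill pipeline: sample dot positions with probability proportional to local darkness and scale each dot's radius by that darkness. The dot count and sizing rule are assumptions.

```python
# Illustrative stippling sketch: darker regions of the image attract more,
# and larger, dots; each (row, col, radius) tuple would be one dab target.
import numpy as np

def stipple(gray, n_dots=1000, max_radius=3.0, rng=None):
    """gray: 2-D array in [0, 1], where 0 is black. Returns (row, col, radius) dots."""
    if rng is None:
        rng = np.random.default_rng(0)
    darkness = 1.0 - gray
    probs = darkness.ravel() / darkness.sum()
    idx = rng.choice(darkness.size, size=n_dots, replace=False, p=probs)
    rows, cols = np.unravel_index(idx, gray.shape)
    radii = max_radius * darkness[rows, cols]
    return list(zip(rows.tolist(), cols.tolist(), radii.tolist()))

portrait = np.random.rand(200, 200)      # stand-in for a grayscale portrait
dots = stipple(portrait, n_dots=1500)    # a few thousand dots, as in the drawings described
```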


Students' App Helps Dementia Patients Find Memories
Cornell Chronicle (08/03/16) Bill Steele

Cornell University researchers have developed Remember Me!, an application that aims to help Alzheimer's patients stay connected to their memories and keep a conversation going. "We thought we should build something that would help bring back memories," says Cornell researcher Karthik Venkataramaiah. He says the app is installed on the patient's phone, as well as on those belonging to friends, family, and caregivers. The app uses global-positioning system tracking and a connection to the cloud to flash an alert to the patient when one of the group members is nearby. The phone shows the patient who is approaching, that person's relationship to the patient, and a slideshow of previously uploaded pictures. If the patient receives a text or phone call from someone registered in the app, a screen pops up with similar information. The app also provides reminders based on stored facts and previous conversations, suggesting questions to ask based on information it has about life events. The researchers plan to apply natural-language processing techniques to the patient's speech patterns to predict what the patient might say next, and caregivers can send reminders to take medicine, make phone calls, or complete particular tasks.
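
A hedged sketch of the proximity-alert behavior described above (not the students' code): when a registered contact's reported GPS position falls within some radius of the patient, surface that contact's name and relationship. The contact records and the 100-meter radius are assumptions.

```python
# Illustrative proximity check using great-circle distance between GPS fixes.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Distance between two GPS fixes, in meters."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical registered contacts (names, relations, and positions are made up).
contacts = [
    {"name": "Maya", "relation": "granddaughter", "lat": 42.4443, "lon": -76.5019},
    {"name": "Sam", "relation": "neighbor", "lat": 42.4500, "lon": -76.4900},
]

def nearby_contacts(patient_lat, patient_lon, radius_m=100):
    """Return contacts close enough to trigger an alert on the patient's phone."""
    return [c for c in contacts
            if haversine_m(patient_lat, patient_lon, c["lat"], c["lon"]) <= radius_m]

for c in nearby_contacts(42.4440, -76.5017):
    print(f"{c['name']} ({c['relation']}) is nearby")
```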


Programmable Ions Set the Stage for General-Purpose Quantum Computers
Joint Quantum Institute (08/02/16) Chris Cesare

Researchers at the Joint Quantum Institute (JQI) and the Joint Center for Quantum Information and Computer Science at the University of Maryland have introduced the first fully programmable and reconfigurable quantum computer module. The team dubbed the new device a module because of its potential to connect with copies of itself, and it takes advantage of the unique properties offered by trapped ions to run any algorithm on five quantum bits (qubits). At the module's heart is a database that stores the best shapes for the laser pulses that drive quantum logic gates, and it uses software to translate an algorithm into the pulses in the database. The researchers tested their module on small instances of three problems that quantum computers are known to solve quickly, and they report two algorithms ran successfully more than 90 percent of the time, while a Quantum Fourier Transform topped out at a 70-percent success rate. "Our experiment brings high-quality quantum bits up to a higher level of functionality by allowing them to be programmed and reconfigured in software," says JQI's Christopher Monroe. The researchers believe eventually more qubits--perhaps as many as 100--could be added to their quantum computer module. They also say it is possible to link separate modules together.
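
For reference, the Quantum Fourier Transform mentioned above is, on n qubits, the unitary with entries exp(2*pi*i*j*k/N)/sqrt(N), where N = 2^n. The sketch below builds that 32 x 32 matrix for five qubits in numpy and applies it to a basis state; it says nothing about how the ion-trap module compiles the transform into the shaped laser pulses stored in its database.

```python
# Dense five-qubit Quantum Fourier Transform, purely as a mathematical reference.
import numpy as np

def qft_matrix(n_qubits):
    """QFT unitary on n_qubits (a 32 x 32 matrix for five qubits)."""
    dim = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(dim), np.arange(dim), indexing="ij")
    return np.exp(2j * np.pi * j * k / dim) / np.sqrt(dim)

F = qft_matrix(5)
assert np.allclose(F.conj().T @ F, np.eye(32))   # unitarity check

state = np.zeros(32, dtype=complex)
state[3] = 1.0                                   # basis state |00011>
amplitudes = F @ state                           # equal magnitudes, varying phases
```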


NIST's Network-of-Things Model Builds Foundation to Help Define the Internet of Things
NIST News (07/28/16) Evelyn Brown

A new publication from the U.S. National Institute of Standards and Technology (NIST) offers a basic model to help researchers better understand the Internet of Things (IoT) and its security challenges. The Network of Things (NoT) model, created by NIST researcher Jeff Voas, is based on distributed computing and built from four constituent "primitives"--sensing, computing, communication, and actuation. The reliability and security of a NoT are defined by six elements: environment, cost, geographic location, owner, snapshot-in-time, and a unique device identity. The model was designed to aid scientists as they simulate simple problems, and to help them understand the requirements of securing larger, more vital networks. "The vocabulary and science of the Network of Things will help researchers understand how the components of IoT interoperate, and compare the security risks and reliability tradeoffs," Voas says. STEMCorp chief researcher George Hurlburt sees the reduction of the IoT to a smaller environment as an important evolutionary step. Meanwhile, Voas is using his model to continue investigating reliability and security issues, and he encourages research into related security, reliability, pedigree, and trust issues. Voas also is exploring a scalability problem, which concerns how big data would accommodate the massive volume of sensor-generated information.
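
A minimal data sketch of that vocabulary, with the four primitives modeled as typed components and the six elements attached as attributes. How the elements are split between device and network here is an illustrative assumption, not NIST's specification.

```python
# Illustrative model of a Network of Things: primitives plus the six elements.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Primitive:
    kind: str                  # "sensing", "computing", "communication", or "actuation"
    device_id: str             # unique device identity
    owner: str
    geographic_location: str

@dataclass
class NetworkOfThings:
    environment: str
    cost: float
    snapshot_time: str         # snapshot-in-time of the network's state
    primitives: List[Primitive] = field(default_factory=list)

door_sensor = Primitive("sensing", "sensor-17", "facilities", "lobby")
controller = Primitive("computing", "ctrl-01", "facilities", "server room")
lock = Primitive("actuation", "lock-03", "facilities", "lobby")

building_not = NetworkOfThings(environment="office building", cost=1200.0,
                               snapshot_time="2016-08-05T09:00Z",
                               primitives=[door_sensor, controller, lock])
```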


$1 Million Gift to Support Diversity in STEM Education
MIT News (08/01/16) Audrey Resutek

The Hopper-Dean Foundation has granted a two-year, $1-million gift to three science, technology, engineering, and math (STEM) education programs at the Massachusetts Institute of Technology. The programs introduce underrepresented and underserved students--including women, students from low socioeconomic backgrounds, and racial and ethnic minorities--to engineering and computer science. The gift aims to eliminate certain obstacles facing some students, such as program fees and weekend transportation. One of the recipients, the Saturday Engineering Enrichment and Discovery (SEED) Academy, is a science and engineering enrichment program for talented middle and high school students from underserved communities. Meanwhile, the CodeIt program's undergraduate mentors teach middle school girls programming principles to inspire an interest in computer science. The Women's Technology Program is a residential and academic summer program for high school girls with an interest in engineering and computer science. "With the growing importance of computing and computer science across many fields of endeavor, we feel very strongly that the world's computer scientist population should reflect the world's population and diversity," say Jeffrey Dean and Heidi Hopper, the grant's sponsors. "This gift is designed to explore ways that we can all do better at bringing traditionally underrepresented groups into this important and exciting field."


Autism Genes Identified Using New Approach
Princeton University (08/01/16) John Cramer

Princeton University and Simons Foundation researchers have developed a machine-learning approach that analyzes the entire human genome to predict which genes may cause autism spectrum disorder (ASD). The approach found the number of genes that could be linked to the disorder is about 2,500, up from only 65 genes identified by other methods. The machine-learning approach uses a functional map of the brain to provide a genome-wide prediction of autism risk genes, including hundreds of candidates for which there is minimal or no prior genetic evidence. The researchers also built a user-friendly, interactive Web portal, where biomedical researchers or clinicians can access and investigate the study's results. "Our work is significant because geneticists can use our predictions to direct future sequencing studies, enabling much faster and cheaper discovery of autism genes," says Princeton researcher Arjun Krishnan. He also notes researchers can use the predictions to prioritize and interpret results of whole-genome sequencing studies of ASD. "The method we developed can, for the first time, identify ASD-associated genes even if they have not been previously linked to autism," says Princeton professor Olga Troyanskaya. "We achieve this by using a functional map of the brain (brain-specific gene network) generated by integrating thousands of genomic datasets."
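
A rough sketch of network-based gene prioritization under simplifying assumptions (a random stand-in network, made-up labels, and an off-the-shelf classifier), not the Princeton/Simons pipeline: use each gene's row of connection weights in a brain-specific network as its feature vector, train on known risk genes versus presumed non-risk genes, and rank the remaining genes.

```python
# Illustrative network-based risk-gene ranking; all data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_genes = 500
network = rng.random((n_genes, n_genes))        # stand-in for a brain-specific gene network
network = (network + network.T) / 2             # symmetric edge weights

known_risk = np.arange(0, 20)                   # stand-in for the previously identified genes
presumed_negative = np.arange(400, 480)         # genes with no reported link (assumed)

X_train = np.vstack([network[known_risk], network[presumed_negative]])
y_train = np.array([1] * len(known_risk) + [0] * len(presumed_negative))

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

scores = clf.predict_proba(network)[:, 1]       # genome-wide risk scores
ranked = np.argsort(scores)[::-1]               # candidate genes, best first
```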


Toyota Teaches Cars to Drive by Studying Human Drivers
TechRepublic (08/04/16) Hope Reese

Autonomous-driving capabilities and home-care robots are among the areas the recently created Toyota Research Institute (TRI) is exploring, and in an interview TRI's Jim Adler discusses how the automaker is teaching cars to drive themselves. He notes machines are trained by example, so it follows logically to study driver behavior as a template for self-driving systems. "If you think about data, there are at least three different areas that are important," Adler says. "One is the technology, which tends to get a lot of the attention. But, there's also this data governance and...the data policies." Adler says Toyota currently is culling on-vehicle data, but he notes off-vehicle data also is an important component. "You want, at this stage, as much on-vehicle data as you can get, and you want to understand your environment as well as possible to do things like understand how you should drive and understand what the road conditions are," he says. Adler also sees similarities in the development of the robots TRI is investigating in terms of the information they need and the behaviors they must learn, including social cues and responses. He says Toyota's rollout of autonomous-driving features will be incremental, with driver-assist technologies coming first and accumulating and evolving until vehicles are fully autonomous.


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe