Association for Computing Machinery
Welcome to the May 11, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.


Computer Science Teachers Need Cybersecurity Education, Says CSTA Industry Group
TechRepublic (05/10/16) Evan Koblentz

ACM's Computer Science Teachers Association (CSTA) is crafting a cybersecurity certification program for computer science teachers to provide tomorrow's workforce with vital knowledge and training. CSTA executive director Mark Nelson says nearly 90 percent of middle school and high school educators who teach computer science lack computer science degrees. This month, the group announced an eight-hour cybersecurity education certificate course, with a curriculum co-developed by CompTIA that covers authentication, best practices, compliance, encryption, governance, penetration testing, risk management, and security architecture. Teachers also must complete online cybersecurity career simulations and lead students in real-life mentoring before receiving the certificate. In addition, Nelson says CSTA will team with instructional video maker LifeJourney on further cybersecurity education. Another goal is promoting gender, geographic, and industry diversity. Similar educational initiatives are underway via the U.S. Department of Homeland Security's National Initiative for Cybersecurity Careers and Studies and the National Institute of Standards and Technology's National Initiative for Cybersecurity Education. However, the CSTA program stands out by being developed directly by K-12 teachers themselves.

How Will People Interact with Technology in the Future?
University of Bristol News (05/09/16)

Researchers from the University of Bristol's Bristol Interaction Group (BIG) will present new research on how people will interact with technology in the future at the ACM CHI 2016 conference this week in San Jose, CA. The researchers will present a study that examines sustainable interaction design, cloud services, and the digital infrastructure, offering an analysis of the ways in which design decisions create environmental impacts through their use of that infrastructure. The BIG researchers also investigated text legibility on non-rectangular displays, and they developed PowerShake, an exploration of power as a shareable commodity between mobile and wearable devices, using wireless power transfer to enable power sharing in real time. The researchers also developed EMPress, a practical hand-gesture classification system with wrist-mounted electromyography and pressure sensing; the EMPress technique senses both finger movements and rotations of the wrist and forearm, covering a wide range of gestures. The BIG researchers also will present a paper exploring the importance of language for the design of smart home technologies for healthcare. "The body of research we are presenting shows that human-computer interfaces have an important role to play in how people will interact and use technology in the future," says BIG's Anne Roudaut.

ACM Awards 2016 Gödel Prize to Inventors of Concurrent Separation Logic
Inside HPC (05/09/16) Rich Brueckner

ACM's Special Interest Group on Algorithms and Computation Theory (SIGACT) and the European Association for Theoretical Computer Science on Monday awarded the creators of Concurrent Separation Logic (CSL) the 2016 Gödel Prize. Carnegie Mellon University professor Stephen Brookes and Facebook engineering manager Peter W. O'Hearn presented the concept of CSL in separate papers, with O'Hearn's work concentrating on reasoning fluently with the logic, while Brookes' focus was a demonstration of the logic's soundness using a model of CSL. CSL has been the basis for nearly all research papers developing theoretical concurrent program logics in the last 10 years. The papers include research on permissions, refinement, and atomicity; on adaptations to assembly languages and weak memory models; on higher-order variants; and on logics for termination of concurrent programs. In terms of practicality, CSL bears a close resemblance to the programming idioms often used by working engineers. Proofs are significantly simplified because the logic matches these common idioms. CSL's simple organization and structure also enable automation, so many tools and methods in the research community use it as a foundation.
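The heart of CSL is its rule for parallel composition, shown here in a standard textbook presentation (the usual side condition, that neither thread modifies variables free in the other's assertions, is omitted for brevity):

$$\frac{\{P_1\}\,C_1\,\{Q_1\}\qquad \{P_2\}\,C_2\,\{Q_2\}}{\{P_1 * P_2\}\;\,C_1 \parallel C_2\;\,\{Q_1 * Q_2\}}$$

Here $*$ is the separating conjunction: $P_1 * P_2$ holds of a heap that splits into two disjoint parts satisfying $P_1$ and $P_2$ respectively. Because each thread's proof speaks only about its own part of the heap, the two proofs cannot interfere, which is what makes proofs of common concurrent idioms so much simpler.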

The Lack of Women in Tech Is More Than a Pipeline Problem
TechCrunch (05/10/16) Swati Mylavarapu

The dearth of women in technology runs much deeper than a simple pipeline problem, writes venture capitalist Swati Mylavarapu. Girls Who Code estimates about 74 percent of young girls express interest in science, technology, engineering, and math (STEM) fields and computer science, yet only 18 percent of undergraduate computer science degrees and 26 percent of computing jobs are held by women. Moreover, women hold only 5 percent of leadership positions in the tech industry. Rather than merely focusing on the pipeline, "A better question might be, how can we collectively work to improve women's participation in the tech industry at each key stage of their careers?" Mylavarapu suggests. Girls Who Code founder Reshma Saujani estimates 1.4 million computer science jobs will open by 2020, yet qualified graduates are on track to fill only 29 percent of them, with women accounting for just 3 percent. To remedy this problem, Saujani says Girls Who Code is working with girls in every age demographic to inspire interest in tech careers. Its efforts include training elementary school teachers in coding, producing coding board books to bring its curriculum into classes, and offering after-school programs in middle school and summer immersion programs in college. Meanwhile, Intuit's Merline Saintil advises encouraging women to pursue STEM at every stage of their lives, with social networks an important tool.

DARPA Director Clear-Eyed and Cautious on AI
Government Computer News (05/10/16) Mark Pomerleau

U.S. Defense Advanced Research Projects Agency director Arati Prabhakar says artificial intelligence (AI) is a very compelling tool, but cautions it should not be viewed as a cure-all. Prabhakar cites image analysis as an example of AI's current limitations, because although AI and machine-learning systems can outclass humans at image identification, "the problem is that when they're wrong, they are wrong in ways that no human would ever be wrong." Nevertheless, many experts and government officials advocate greater utilization of automation and intelligent systems to vastly speed up operations, as well as boosting efficiency as datasets expand exponentially. Scientists have developed cognitive systems to help humans mine large datasets and identify objects of interest, and such breakthroughs will become increasingly relevant as the federal government anticipates significantly boosting unmanned aerial system intelligence, surveillance, and reconnaissance missions in coming years. Prabhakar emphasizes big data and analytics' potential for optimizing human performance, but says, "I'm having trouble imagining a future where machines will tell us what the right thing is to do." Still, she sees AI's current limitations as opportunities to further the technology to a point where machines can "help us build causal models of what's happening in the world...and take what they've learned in one domain and use it in different domains."

IBM's Watson Is Going to Cybersecurity School
IDG News Service (05/10/16) Katherine Noyes

IBM Security has announced a new project to train its Watson artificial intelligence to tackle cybercrime in collaboration with eight universities. IBM Security's Kevin Skapinetz says threat knowledge is often concealed in unstructured sources such as blogs, research reports, and documentation. "Essentially what we're doing is training Watson not just to understand that those documents exist, but to add context and make connections between them," he says. Skapinetz notes for the past year, IBM Security experts have been educating Watson on the "language of cybersecurity" by feeding it annotated documents so it understands the definition, nature, and related indicators of threats. Starting this fall, IBM will work with students at the eight universities to feed as many as 15,000 new documents into Watson each month for a year, including threat-intelligence reports, cybercrime strategies, threat databases, and materials from IBM's X-Force research library. Watson will employ natural-language processing technology to derive meaning from the unstructured data. In addition, data-mining methods will identify outliers and graphics presentation tools will reveal links among related data points in different documents. "What we're aiming to do is take away some of the guesswork and help analysts understand more context with an always-on advisor that can help investigate and answer questions," Skapinetz says.

International Team Launches Vast Atlas of Mathematical Objects
MIT News (05/10/16)

The Massachusetts Institute of Technology (MIT) is leading a team of international researchers in developing the L-functions and Modular Forms Database (LMFDB), an online resource that provides detailed maps of previously uncharted mathematical areas. The LMFDB exposes deep relationships and provides a guide to previously unstudied fields that underlie current research in several branches of computer science, mathematics, and physics. To create the LMFDB, teams of researchers spent nearly 1,000 years of computer time on calculations. "Computations in number theory are often amenable to massive parallelization, and this allows us to scale them to the cloud," says MIT researcher Andrew Sutherland. The LMFDB also provides a Web interface that enables both experts and amateurs to navigate its contents. The LMFDB contains more than 20 million L-functions, each of which has an analogous Riemann hypothesis that is believed to govern the distribution of a wide range of more exotic mathematical objects. The Langlands program serves as a framework for the millions of connections cataloged by the LMFDB, and the exact nature of these connections is the subject of research that will be accelerated by the database.

HoloFlex: A Flexible Smartphone With a Holographic Display
IEEE Spectrum (05/05/16) Evan Ackerman

Queen's University researchers have developed the HoloFlex, a flexible smartphone equipped with a holographic lightfield display that can simultaneously project glasses-free three-dimensional (3D) images to multiple users. HoloFlex runs Android Lollipop, includes a full high-definition (HD) screen, and is powered by a 1.5GHz Qualcomm Snapdragon 810 processor with a dedicated graphical-processing unit and 2GB of random access memory. The display is based on a flexible organic light-emitting diode (FOLED) screen with a resolution of 1920 by 1080 pixels and a touch layer. On top of that is a 3D-printed flexible lens array consisting of 16,640 half-dome-shaped droplets in a 160-by-104 hexagonal matrix. Each lens projects the 12-pixel-wide circular area directly beneath it out into space, and each of those approximately 80-pixel image blocks contains information about the entire scene from a virtual camera position that is unique to the position of the lens. HoloFlex can transform software models into lightfield display-based holograms, resulting in images that have depth, exhibit motion parallax, and can be viewed from multiple perspectives by multiple users. The left side of the device is rigid, while the rest of it acts like a spring, providing passive haptic feedback for intuitive control over the z-dimension. The researchers will present the HoloFlex this week at the ACM CHI 2016 conference in San Jose, CA.
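The lens-array figures in the summary are mutually consistent under a hexagonal-packing assumption: 160 columns of 12-pixel blocks span the 1,920-pixel width exactly, and the √3/2 row pitch of hexagonal packing (about 10.4 pixels) is what lets 104 rows fit in 1,080 pixels. The sketch below checks that arithmetic; the coordinate conventions (odd rows offset half a pitch) are an illustrative assumption, not taken from the article:

```python
import math

SCREEN_W, SCREEN_H = 1920, 1080   # full-HD FOLED panel
BLOCK = 12                        # pixel-wide area beneath each lens
COLS, ROWS = 160, 104             # hexagonal matrix of 16,640 lenses

def lens_center(col, row):
    """Approximate pixel coordinates of lens (col, row), assuming odd rows
    are offset by half a lens pitch and rows are spaced BLOCK * sqrt(3)/2
    apart, as in hexagonal close packing."""
    x = col * BLOCK + BLOCK / 2 + (BLOCK / 2 if row % 2 else 0)
    y = row * BLOCK * math.sqrt(3) / 2 + BLOCK / 2
    return x, y
```

Note that 160 × 104 = 16,640 lenses and 160 × 12 = 1,920 pixels, matching the numbers reported for the display.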

Models Reduce Traffic Mayhem
Swinburne University of Technology (Australia) (05/05/16) Lea Kivivali

Swinburne University of Technology researchers say they have developed a mathematical model that could reduce traffic congestion by combining data from existing infrastructure, remote sensors, mobile devices, and their communication systems. The "Congestion Breaker" project works with intelligent transport systems (ITS), a field of research that combines information and data from a range of sources to control traffic. The new mathematical approach uses limited and incomplete data from existing operational traffic management systems to create a predictive control framework to minimize congestion. The model optimizes traffic flows over a finite period, accounting for short-term demand and traffic dynamics within links of the network. The resulting algorithm considers any spillback due to a traffic jam and the travel time on the road between intersections, and can produce systems that would reduce congestion significantly. The final outcome is a comprehensive traffic management framework with the computational flexibility and accuracy needed to reflect real urban traffic networks. The framework produces a scalable algorithm that can be integrated with current operating traffic management systems to reduce congestion and make better use of the existing road network infrastructure.
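The predictive-control idea described above can be caricatured with a toy store-and-forward queue model: roll a link's queue forward over a short horizon under candidate signal plans, cap the queue at link capacity as a crude stand-in for spillback, and pick the plan that minimizes predicted congestion. Everything here (the saturation flow, the capacity, the brute-force search over green fractions) is illustrative and not the Swinburne model:

```python
from itertools import product

def predict_queues(q0, arrivals, greens, sat_flow=5.0, capacity=40.0):
    """Store-and-forward rollout of one link's queue: each step the green
    fraction discharges up to sat_flow * green vehicles, and the queue is
    capped at link capacity as a crude stand-in for spillback."""
    q, total = q0, 0.0
    for a, g in zip(arrivals, greens):
        q = min(capacity, max(0.0, q + a - sat_flow * g))
        total += q
    return total

def best_plan(q0, arrivals, horizon=3):
    """Finite-horizon predictive control by exhaustive search: choose the
    green-fraction sequence minimizing the predicted total queue."""
    options = (0.0, 0.5, 1.0)
    return min(product(options, repeat=horizon),
               key=lambda plan: predict_queues(q0, arrivals, plan))
```

A real controller would re-solve this optimization each cycle with fresh measurements (receding horizon) and couple many links, but the structure, predict forward and minimize over a finite period, is the same.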

Like a Fingerprint, System Noise Can Be Used to Differentiate Identical Electronic Devices
(05/04/16)

Disney researchers have developed EM-ID, technology that uses the electromagnetic (EM) signals emanating from electronic devices to uniquely identify identical models. Previous research showed the EM noise emitted by most electronic devices is distinctive enough that it can be used to distinguish between general classes of objects, while the new research shows it is possible to use these signals to differentiate between objects of the same make and model. Since the EM signature of a given device is an emergent statistical property and not designed to be a unique identifier, it is possible the EM spectra may overlap, making it difficult to identify some objects, says Disney researcher Alanson P. Sample. During testing, the researchers found they could successfully identify individual devices with 95-percent accuracy. The EM-ID system uses a low-cost software-defined radio as a reader. The EM signals are digitized and sent to a host computer, where the signals are processed to remove low-magnitude EM noise, leaving frequency peaks that typically include between 1,000 and 2,000 elements. The researchers developed a two-stage ranking process for identifying a unique device. In the first stage, the frequency distribution of the unknown device is compared with those of different categories of devices; once the device is classified, it is differentiated from other similar devices.
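The pipeline above (discard low-magnitude noise, keep frequency peaks, then rank matches in two stages) can be sketched in miniature. The noise floor, the similarity measure, and the library layout are all illustrative assumptions, not details of the EM-ID paper:

```python
import math

def spectral_peaks(spectrum, floor=0.1):
    """Discard low-magnitude bins, mimicking EM-ID's removal of
    low-magnitude EM noise before matching (the floor is a made-up threshold)."""
    return {i: m for i, m in enumerate(spectrum) if m > floor}

def cosine_similarity(a, b):
    """Similarity of two sparse peak dictionaries {frequency bin: magnitude}."""
    dot = sum(m * b.get(i, 0.0) for i, m in a.items())
    na = math.sqrt(sum(m * m for m in a.values()))
    nb = math.sqrt(sum(m * m for m in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def identify(spectrum, library):
    """Two-stage ranking: first find the closest device category,
    then the closest individual device within that category."""
    peaks = spectral_peaks(spectrum)
    category = max(library,
                   key=lambda c: cosine_similarity(peaks, library[c]["template"]))
    devices = library[category]["devices"]
    device = max(devices, key=lambda d: cosine_similarity(peaks, devices[d]))
    return category, device
```

With a toy library of per-category templates and per-device signatures, an unknown spectrum is first binned into a category and only then ranked against the handful of devices in that category, which is what makes the search cheap.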

Can Artificial Intelligence Create the Next Wonder Material?
Scientific American (05/04/16) Nicola Nosengo

Researchers are producing libraries of hypothetical new materials via computer modeling and machine-learning methods, in the hope of greatly accelerating the speed and efficiency of materials discovery. Materials science pioneer Gerbrand Ceder was inspired by the Human Genome Project to theorize identifying a "materials genome" encoding the traits of various compounds from the atomic and electronic composition, and crystal structure, of a given material. In 2003, Ceder's research team presented a machine-learning algorithm that could extract patterns from a library of crystal structures for binary alloys and predict the most likely ground state for the new alloy. Later research expanded the algorithm and library into a system that executes calculations on known crystal structures and automatically predicts new ones. The White House-led Materials Genome Initiative gave this research credibility, and helped materials researchers fulfill their vision of an online materials properties database. There are at least three major materials databases currently in existence, sharing about 50,000 known core materials derived from the Inorganic Crystal Structure Database. The AFLOWlib database contains information on more than 1 million materials and about 100 million calculated properties, including many hypothetical materials. Ceder and others are developing machine-learning software to extract rules from established manufacturing processes to direct the synthesis of new compounds.

From Autism to Chinese, a Headset to Help You With Your Language
New Scientist (05/04/16) Anna Nowogrodzki

University of California, Irvine (UC Irvine) researchers have developed SayWAT, a system created for people with autism who want help with social interactions, and which also could be used to help with speech or anxiety problems, as well as language learning. The system gives live feedback via Google Glass to the wearer when they are speaking too loudly or in a flat tone. SayWAT uses Glass' microphone to record speech, and then displays real-time guidance on volume and tone, showing a volume icon if the user's voice is too loud and flashing the word "flat" if the user's voice does not vary in pitch. The researchers tested the system with 14 autistic adults, four of whom used SayWAT to talk with non-autistic volunteers, while 10 used it at an employee training session. The feedback seemed to help the users modulate their volume, but the pitch feedback did not have the same effect. UC Irvine researcher LouAnne Boyd says this could be because pitch is more complicated and might need more specific feedback. Boyd says the technology could be adapted to work on other devices, such as a smartphone or watch that gives haptic feedback to guide speech. Live speech feedback could help other groups as well, notes University of Washington researcher Julie Kientz.
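The two cues described above reduce to simple statistics over the audio stream: average loudness against a threshold, and pitch variation against a minimum spread. This sketch shows that decision logic only; both thresholds are illustrative guesses, not values from the SayWAT study:

```python
import statistics

def feedback(frame_levels_db, pitches_hz, loud_limit_db=70.0, min_pitch_sd_hz=10.0):
    """Return the cues a SayWAT-style display might flash for an utterance:
    'volume' when speech is too loud on average, 'flat' when pitch barely
    varies. Thresholds are hypothetical."""
    cues = []
    if statistics.mean(frame_levels_db) > loud_limit_db:
        cues.append("volume")   # SayWAT shows a volume icon
    if statistics.pstdev(pitches_hz) < min_pitch_sd_hz:
        cues.append("flat")     # SayWAT flashes the word "flat"
    return cues
```

A real implementation would extract the per-frame levels and pitch track from the Glass microphone with a signal-processing library; the point here is only how little logic sits between those measurements and the on-screen cues.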

Melbourne Scientists Develop 'Electronic Skin'
CIO Australia (05/04/16) Bonnie Gardiner

Monash University researchers have developed wearable technology that can form a metallic artificial skin and become a part of the human body. The technology involves the use of biomedical sensors and was developed for the purpose of monitoring patient health. The technology is made from very thin gold thread and could be used to monitor body motion, heartbeat, and blood pressure. Monash University professor Wenlong Cheng says the electronic skin can offer applications that are difficult to achieve with rigid conventional wafer and planar circuit board technologies because of its unique capacity to integrate with soft materials and curvilinear surfaces. "One core challenge [for wearable technology] is that Mother Nature doesn't provide viable materials allowing for the design of soft electronics," Cheng says. The ultra-thin piezoresistive materials can withstand 600-percent strain, resulting in a highly stretchable and durable wearable biomedical sensor for real-time health monitoring. In addition, the researchers say the fabrication of the technology requires very little cost, is environmentally sustainable, and does not require access to a clean room environment. "With its characteristics of high-value, high-knowledge content, low-carbon footprint in its manufacturing process, [e-skin] is well positioned within the evolving Australian med-tech industry to bring significant economic opportunities," Cheng says.

Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.

To submit feedback about ACM TechNews, contact: [email protected]