Association for Computing Machinery
Welcome to the May 2, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE

It's the Year 2020...How's Your Cybersecurity?
ASC16 Student Supercomputer Challenge Results Are In
A Theory Explains Why Gaming on Touchscreens Is Clumsy
NIST Looks to Reengineer Thinking About Cyber
Social Media Interaction Tools Might Make MOOCs Stickier
Maiden Voyage of Stanford's Humanoid Robotic Diver Recovers Treasures From King Louis XIV's Wrecked Flagship
Inspired by Nature
AI Upgrade From MIT, Northeastern Gets NASA Robot Ready for Space
Thinking Outside the Sample
Transforming Teaching With Twitter
SMU Engineering Team to Lead DARPA-Funded Research Into Holographic Imaging of Hidden Objects
AI, MD: How Artificial Intelligence Is Changing the Way Illness Is Diagnosed and Treated


It's the Year 2020...How's Your Cybersecurity?
The Conversation (04/28/16) Steven Weber; Betsy Cooper

With the possibility of wearable networked devices becoming common by 2020 comes the implication that cybercriminals and other malefactors could find new ways to exploit the emotional and mental data those devices collect, forcing a redefinition of cybersecurity, write Steven Weber and Betsy Cooper of the University of California, Berkeley's Center for Long-Term Cybersecurity. This is one of several potential scenarios raised in "Cybersecurity Futures 2020," a new report that lays out five cybersecurity threat scenarios. Other factors cited in the report include the evolution and proliferation of biosensing technologies, and the intersection of virtual reality, sentiment analysis, and other "sensory" technologies with marketing, politics, and the workforce. Weber and Cooper imagine a future in which the advertising-driven business model of major Internet companies has imploded, spurring a race among criminals and companies alike to acquire underpriced but potentially valuable data holdings. Noting that criminals target both the datasets and the people who work on them, experts speculate on the options governments may have to prevent the exploitation of certain datasets, and on the new systems or standards that could emerge to authenticate data's legitimacy or provenance. Weber and Cooper also envision a scenario in which predictive algorithms anticipate individual behavior with ever-greater precision, encouraging new attack methods.


ASC16 Student Supercomputer Challenge Results Are In
HPC Wire (04/28/16) Tiffany Trader

The winner of both the overall championship prize and the e-Prize in the Asia Supercomputer Community's 2016 Student Supercomputer Challenge was the Huazhong University of Science and Technology team. The students earned the e-Prize for optimizing a deep neural network (DNN) program to create an extremely accurate training model for about 600,000 speech data segments in English, Mandarin Chinese, and the Sichuan dialect, improving computing performance by a factor of 108. The optimization was conducted on eight nodes of the Tianhe-2 supercomputer, and the use of Intel Xeon Phi co-processors was a unique aspect of the contest. Some of the student teams had access to Phi hardware at their home institution or a sister organization, others were able to buy a node to experiment with, and still others used the co-processors for the first time at the competition. Students were able to practice the DNN challenge with a sample dataset of 15,000 segments of speech data. Other applications the competition mandated included the High Performance Conjugate Gradients benchmark, the surface-wave numerical model MASNUM, and the materials-modeling software ABINIT. The "mystery app" was ABySS, a de novo, parallel sequence assembler designed for short paired-end reads and large genomes.


A Theory Explains Why Gaming on Touchscreens Is Clumsy
Aalto University (04/28/16)

A new theory of computer input that offers insights into why touchscreen gaming is so awkward has been proposed by Aalto University researchers. A team led by Aalto's Byungjoo Lee performed experiments in which participants were asked to tap a display when a target appeared, and the data uncovered substantial differences between physical keys and touchscreens in how reliably users could time their presses. "We found a systematic pattern in timing performance that we could capture mathematically," Lee says. The researchers' theory cites three sources of error that make timing difficult on touchscreens: people cannot keep a finger at a constant distance above the surface; the neural system has difficulty predicting when the input event will register once the finger touches the surface; and the application must then process the event, which adds latency. The theory suggests users' performance can be enhanced by making touch events more predictable, and that registering the event at the moment the finger's contact area on the surface reaches its maximum can improve timing performance. However, Aalto professor Antti Oulasvirta says the variability of finger travel distance creates unpredictability, implying touchscreens will always be inferior to physical keys for gaming. The research will be presented this month at the ACM CHI 2016 conference in San Jose, CA.
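
The suggested remedy can be made concrete with a minimal sketch (illustrative Python only, not the researchers' code; the TouchSample structure and the sample numbers are invented): register the tap at the moment of peak contact area rather than at first contact.

```python
# Sketch: given the samples from one touch, report the timestamp at which
# the finger's contact area peaks, instead of the first-contact timestamp.
from dataclasses import dataclass

@dataclass
class TouchSample:
    t_ms: float      # sample timestamp in milliseconds
    area_px: float   # finger contact area reported by the touch controller

def registered_event_time(samples: list[TouchSample]) -> float:
    """Return the timestamp of maximum contact area during one touch.

    Per the theory, this moment is more predictable for users than the
    instant of first contact, so timing it should reduce error.
    """
    if not samples:
        raise ValueError("no touch samples for this contact")
    return max(samples, key=lambda s: s.area_px).t_ms

# Example: the finger lands at t=0 ms, but contact area peaks at t=18 ms.
touch = [TouchSample(0, 40.0), TouchSample(9, 85.0),
         TouchSample(18, 120.0), TouchSample(27, 110.0)]
print(registered_event_time(touch))  # -> 18
```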


NIST Looks to Reengineer Thinking About Cyber
Federal Computer Week (04/27/16) Mark Rockwell

The U.S. National Institute of Standards and Technology (NIST) is planning to release an overhauled systems security engineering document designed to change the way software and computer designers think about cybersecurity. The updated draft of NIST's 800-160 document will be released for public comment on May 4. It is important to build systems that can limit cyberattackers' ability to penetrate or move around, and to engineer those features into technology from the beginning, according to NIST researcher Ron Ross. The document has been overhauled from its two-year-old original draft, with the new iteration taking a more holistic approach to cyberdefense by incorporating International Organization for Standardization systems engineering standards, including 30 different processes designed to build security capabilities into products, services, and systems. The new draft, which Ross expects to finalize by the end of the year with input from the private sector and all levels of government, begins by recommending systems be designed with initial input from users. He says such input can bring more information to bear on precisely what kinds of access are needed by which users. Ross says there will be a two-month comment period on the document, and possibly a second draft in the fall, before the final version is completed.


Social Media Interaction Tools Might Make MOOCs Stickier
Penn State News (04/27/16) Matt Swayne

Pennsylvania State University (PSU) researchers conducted a study comparing massive open online course (MOOC) students' use of a course's Facebook groups with their use of the built-in course message boards and forums. The researchers found students were more engaged in the Facebook groups and said they preferred interacting on Facebook rather than through the course tools. "In this study, we are finding that social media tools may be one way to keep students engaged in a MOOC," says former PSU doctoral student and current Microsoft research scientist Saijing Zheng. The researchers, who reported their findings last week at the ACM Learning at Scale (L@S) conference in Edinburgh, Scotland, suggest Facebook's interface has several features most MOOC platforms cannot match. "Current MOOC platforms do not include collaborative features for students to work together, or good conversation channels between students and between students and teachers," Zheng says. Another advantage of Facebook is that users tend to sign up with their real names, whereas students can create fake personas on course message boards and forums. As part of the study, the researchers collected data from three different courses on Coursera and from the corresponding Facebook groups. Although the Facebook groups had fewer members than the actual course sites, fewer than 10 percent of Coursera users posted content, compared with 28 percent of Facebook users.


Maiden Voyage of Stanford's Humanoid Robotic Diver Recovers Treasures From King Louis XIV's Wrecked Flagship
Stanford Report (04/27/16) Bjorn Carey

Stanford University researchers are calling the maiden voyage of the OceanOne humanoid diving robot an astonishing success. The robot swam through the wreck of "La Lune," the flagship of King Louis XIV, which sank 100 meters below the Mediterranean in 1664, about 20 miles off the southern coast of France, and recovered its treasures and artifacts. OceanOne looks something like a robot mermaid. It measures about five feet from end to end, with a torso featuring a head with stereoscopic vision that shows its pilot what it sees, two fully articulated arms, and a "tail" section that houses batteries, computers, and eight multi-directional thrusters. The robot is outfitted with humanlike vision, haptic force feedback, and an artificial brain. It can dive alongside humans, but the intent is to have humans dive virtually, keeping them out of harm's way while the pilot feels exactly what the robot is doing. "OceanOne will be your avatar," says Stanford professor Oussama Khatib. "Having a machine that has human characteristics that can project the human diver's embodiment at depth is going to be amazing."


Inspired by Nature
The Daily of the University of Washington (04/28/16) Aleenah Ansari

University of Washington (UW) professor Luis Ceze is collaborating with Microsoft and other UW researchers to encode, store, and retrieve digital text and images using DNA molecules. The team tested the process by creating algorithms to encode the digital data in DNA, ordering the fabricated DNA molecules, and then sequencing and decoding the information for comparison with the original file. "There's a maximum length of a molecule you can make, and it's about 200 [to] 300 nucleotides, which is big by DNA standards, but small by computer science standards," says UW postdoctoral researcher James Bornholt. "One of the things we had to do was break data down into smaller chunks and put each one in a separate molecule and have a pool of molecules, that combined, contain a file." To address the challenge of random access in information storage and retrieval, the researchers employed polymerase chain reaction methods in which DNA fragments are placed in a liquid medium with unique primers that identify regions of interest. For every DNA-encoded digital file, they positioned identifying sequences on each end to differentiate it from the rest of the data. Ceze believes DNA could be used to archive large datasets partly because it defies obsolescence and its storage density is extremely high.
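
The chunking-and-addressing idea Bornholt describes can be sketched roughly as follows (illustrative Python, not the UW/Microsoft codec; the 2-bits-per-base mapping, header layout, and payload size are assumptions, and a real pipeline would also need error correction and careful primer design):

```python
# Sketch: map bits to nucleotides, split a file into molecule-sized chunks,
# and prepend a file ID plus chunk index so each molecule can be identified
# and the file reassembled from the pool.
BASE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}

def bytes_to_nucleotides(data: bytes) -> str:
    """Encode each byte as four nucleotides (2 bits per base)."""
    out = []
    for b in data:
        for shift in (6, 4, 2, 0):
            out.append(BASE[(b >> shift) & 0b11])
    return "".join(out)

def chunk_file(data: bytes, file_id: int, payload_bytes: int = 50) -> list[str]:
    """Split data into molecules of ~200-300 nt: a small header (file ID +
    chunk index) followed by the payload, mirroring per-molecule addressing."""
    chunks = []
    for i in range(0, len(data), payload_bytes):
        header = file_id.to_bytes(2, "big") + (i // payload_bytes).to_bytes(2, "big")
        chunks.append(bytes_to_nucleotides(header + data[i:i + payload_bytes]))
    return chunks

molecules = chunk_file(b"hello, DNA storage!" * 10, file_id=7)
print(len(molecules), molecules[0][:24])  # 4 molecules; each 216 nt or less
```

The per-molecule identifiers here play the same role as the identifying sequences the UW team places at the ends of each molecule, which is what lets PCR primers select just the molecules belonging to one file.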


AI Upgrade From MIT, Northeastern Gets NASA Robot Ready for Space
The Christian Science Monitor (04/28/16) Corey Fedde

The U.S. National Aeronautics and Space Administration (NASA) has sent its prototype R5 robots to researchers at Northeastern University and the Massachusetts Institute of Technology (MIT) to prepare the humanoid machines for deployment on space missions. The institutions were selected by NASA on the strength of their performances in the U.S. Defense Advanced Research Projects Agency's Robotics Challenge. The intent is for the robots to operate on Mars and elsewhere to build habitats, equipment, and other essentials sent years before manned missions are due to arrive. Humanoid robot technology must be significantly upgraded to be ready for implementation in the 2020s, and the Northeastern and MIT research groups will strive to advance the R5s' maintenance and deployment capabilities. Northeastern will concentrate on improving motion control, performance when grasping unknown objects, and human-robot interactions. Meanwhile, MIT's Computer Science and Artificial Intelligence Laboratory plans to enhance autonomous robotic tasks. "Our work is about vetting the robot and seeing what it is capable of," says MIT's Russ Tedrake. "If we can integrate the autonomy work with our planning and control algorithms, it could result in an unprecedented level of autonomous capabilities for a humanoid robot."


Thinking Outside the Sample
A*STAR Research (04/20/16)

Researchers at Singapore's Agency for Science, Technology, and Research (A*STAR) have developed a framework they say could help computers learn how to process and identify images faster and more accurately. The framework can be used for numerous applications, including image segmentation, motion segmentation, data clustering, hybrid system identification, and image representation, according to A*STAR researcher Peng Xi. Conventional computers process data using representation learning, which involves identifying a feature that enables the program to quickly extract relevant information from the dataset and categorize it. Supervised and unsupervised learning are two of the main methods used in representation learning. Supervised learning relies on costly labeling of data prior to processing, while unsupervised learning involves grouping or "clustering" data in a similar manner to human brains, according to Peng. Subspace clustering is a form of unsupervised learning that aims to fit each data point into a low-dimensional subspace to find an intrinsic simplicity that makes complex, real-world data tractable. "By solving the large-scale data and out-of-sample clustering problems, our method makes big-data clustering and online learning possible," Peng says. The researchers tested their new method on a range of datasets and found the framework outperformed existing algorithms and successfully reduced the computational complexity of the task while still ensuring cluster quality.
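
Subspace clustering can be illustrated with a minimal "k-subspaces" sketch (a classic textbook variant, not the A*STAR framework; the function names and toy data below are invented): alternate between assigning each point to the subspace with the smallest reconstruction residual and refitting each subspace with an SVD.

```python
# Sketch: cluster points by which low-dimensional subspace fits them best.
import numpy as np

def k_subspaces(X, k, dim, iters=20, seed=0):
    """Cluster rows of X into k subspaces of the given dimension."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(X))
    for _ in range(iters):
        bases = []
        for j in range(k):
            pts = X[labels == j]
            if len(pts) == 0:                       # re-seed an empty cluster
                pts = X[rng.integers(0, len(X), size=dim + 1)]
            _, _, vt = np.linalg.svd(pts, full_matrices=False)
            bases.append(vt[:dim].T)                # orthonormal basis, d x dim
        # Distance of each point to each subspace = norm of projection residual.
        resid = np.stack([np.linalg.norm(X - X @ B @ B.T, axis=1) for B in bases],
                         axis=1)
        labels = resid.argmin(axis=1)
    return labels

# Toy data: points near two 1-D subspaces (lines through the origin) in 3-D.
rng = np.random.default_rng(1)
t = rng.normal(size=(100, 1))
line1 = t @ np.array([[1.0, 2.0, 0.0]]) + 0.05 * rng.normal(size=(100, 3))
line2 = t @ np.array([[0.0, 1.0, -1.0]]) + 0.05 * rng.normal(size=(100, 3))
print(k_subspaces(np.vstack([line1, line2]), k=2, dim=1)[:10])
```

The residual-based assignment is what "fit each data point into a low-dimensional subspace" means operationally; Peng's contribution concerns scaling this kind of procedure to large-scale and out-of-sample data.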


Transforming Teaching With Twitter
University of Vermont (04/25/16) Jon Reidel

University of Vermont (UVM) researchers have found Twitter can serve as a powerful 21st-century teaching tool, based on survey results, interviews, and classroom observations of eighth-grade students in science classes. The researchers found 95 percent of the students participating in the study agreed or strongly agreed Twitter enabled them to follow real science in real time as it develops around the world. The students especially liked the ability to interact via Twitter with leading organizations and science-related programs. "[This] work adds a critical lens to the role of open social networking tools such as Twitter in the context of adolescents' learning; considers important implications for educators and school leaders in the 21st century; and raises new questions about the potential for social media as a lever for increasing the personalization of education," says UVM professor Penny Bishop. The study also found 93 percent of students surveyed think Twitter enabled them to interact and share perspectives with a global audience outside the classroom. In addition, 91 percent said Twitter helped them make connections between science and their own lives and interests, and 81 percent said Twitter helped them think creatively about new ways to communicate science.


SMU Engineering Team to Lead DARPA-Funded Research Into Holographic Imaging of Hidden Objects
SMU News (04/27/16)

The U.S. Defense Advanced Research Projects Agency (DARPA) is funding Obtaining Multipath & Non-line-of-sight Information by Sensing Coherence & Intensity with Emerging Novel Techniques, a Southern Methodist University (SMU)-led project to develop a theoretical framework for creating computer-generated images of objects concealed by corners or walls. The central element is an algorithm that resolves the light bouncing off irregular surfaces to produce a three-dimensional holographic image of hidden objects. DARPA officials note the project seeks to overcome conventional optical-imaging systems' limitation to measuring only light intensity. "Light bouncing off the irregular surface of a wall or other non-reflective surface is scattered, which the human eye cannot image into anything intelligible," says Marc Christensen, dean of SMU's Lyle School of Engineering. "So the question becomes whether a computer can manipulate and process the light reflecting off a wall--unscrambling it to form a recognizable image--like light reflecting off a mirror." The proposal is to extend the radiance-propagation-based light transport models currently used by the computer graphics and vision communities so they also account for the finite speed of light and the wave nature of light. Researchers in the disciplines of computational imaging, computer vision, signal processing, information theory, and computer graphics will work on the project.


AI, MD: How Artificial Intelligence Is Changing the Way Illness Is Diagnosed and Treated
ZDNet (04/27/16) Jo Best

Frost & Sullivan's Venkat Rajan says the concept of artificial intelligence (AI) in healthcare has "moved from nascency and pilots and proof of concepts, to more early-stage commercialization, adoption, and utilization." Rajan says rising healthcare costs and growing data volumes are driving industry interest in AI, and he notes "a lot of early [AI] solutions are...able to take large volumes of data, put it through levels of processing that can allow some level of relevancy to crop up to support decision making, and influence the course of care." The goal is for AIs to stay updated on all aspects of every patient's visit to each specialist or hospital, along with each relevant new piece of research, disease outbreak, and public health recommendation. Not only must the AI absorb all of that information, it also must account for patient symptoms and then recommend a diagnosis or course of treatment that factors in all of those elements. The use of pattern recognition to identify patients at risk of developing a condition, or of exacerbating an existing one because of lifestyle or other variables, is another healthcare area in which AI will be applied. Meanwhile, future AI systems' natural-language processing abilities could potentially be applied to advising patients on health management.


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe