Welcome to the July 7, 2017 edition of ACM TechNews, providing timely information for IT professionals three times a week.



What's Challenging in Big Data Now: Integration and Privacy
Alex Woodie
July 5, 2017

For the 50th anniversary of the A.M. Turing Award, ACM gathered computing experts such as Massachusetts Institute of Technology professor and 2014 Turing Award recipient Michael Stonebraker to discuss modern challenges surrounding big data. Noting that the data volume and data velocity challenges have largely been addressed, Stonebraker cited the data variety challenge as still unmet. Meanwhile, Columbia University professor and 2013 ACM-Infosys Foundation Award recipient David Blei said, "Data scientists are looking to answer how we take data that we observe from the world and use it to identify causal connections between two variables." Data science also is riddled with moral and privacy issues, with Stonebraker noting these must be solved if big data is to significantly benefit society. Blei stressed the need to strike the right balance between making data public and respecting people's privacy, and said the challenge ultimately revolves around "how public is too public."

Full Article
Tracking Humans in 3D With Off-the-Shelf Webcams
Saarland University
July 5, 2017

Researchers at the Max Planck Institute for Informatics in Germany have developed VNect, a system for capturing human movements digitally in three dimensions (3D) in real time using a single video camera. The system also can estimate the 3D pose of a person acting in a pre-recorded video, offering new applications in character control, virtual reality, and ubiquitous motion capture with smartphones. VNect is based on a convolutional neural network that can calculate the 3D pose of a person from the two-dimensional information of the video streams. The new system avoids wasting computations on image regions that do not contain a person. The neural network was trained on tens of thousands of annotated images during the machine learning process. Although the accuracy of the pose estimation is slightly lower than that obtained with multi-camera or marker-based pose estimation, the researchers believe the technology will further mature and be able to handle increasingly complex scenes.

Full Article

Sculpting Sound With New 3D Audio Tools
July 5, 2017

The European Union-funded Binaural Tools for the Creative Industries project was created to deliver three-dimensional (3D) audio tools to enhance the user experience via the development of next-generation media content, for platforms such as videogames and virtual reality. The project offers an integrated software and hardware approach to improve the production, post-production, and distribution of audio content. The new technology will enable more extensive control over the movement and position of sounds, melodies, and rhythms, which could potentially expand the listener's musical experiences. For example, 3D audio tools let artists modify the relative position of instruments, giving users a range of acoustic effects and facilitating emphasis of certain instruments. The researchers say their initial goal was to design an intuitive application for producers whose strengths are creative rather than technical. The team says the resulting tool is of high technical quality, improves workflows, remains intuitive to use, and retains overall cost-effectiveness.

Full Article
IBM Develops Low-Level Task Automation
EET India
July 7, 2017

IBM researchers have completed the initial phase of the One Button Machine project, whose goal is to automate the feature engineering step of a data science project by computing aggregate features that can be used as input for machine-learning models. The One Button Machine traverses the graph defined by the entities and relations of a relational database. The aggregation functions can be specified by the user, or selected generically for certain data types. To manage the explosion of related entities, the Machine implements heuristics and sub-sampling strategies, while scalability to big databases is realized via dynamic caching of intermediate results and a parallelizable deployment in Apache Spark. The researchers have applied the Machine in data science competitions, where it outperformed most human teams. The One Button Machine generates results in hours, whereas if the features had to be manually engineered, it would take days or even weeks to attain the same levels of accuracy.
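The core idea described above can be sketched in a few lines: walk from an entity table to its related rows and compute generic aggregates (count, mean, max) that become model features. This is a minimal illustration of the principle, not IBM's implementation; the table names and fields are hypothetical.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical toy data: an "orders" table related to a "customers"
# entity via customer_id (names and values are illustrative only).
orders = [
    {"customer_id": 1, "amount": 10.0},
    {"customer_id": 1, "amount": 30.0},
    {"customer_id": 2, "amount": 5.0},
]

def aggregate_features(rows, key, numeric_field):
    """Compute generic aggregates (count, mean, max) of a numeric
    column over each related entity, as a feature-engineering step."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[key]].append(row[numeric_field])
    return {
        k: {"count": len(v), "mean": mean(v), "max": max(v)}
        for k, v in groups.items()
    }

features = aggregate_features(orders, "customer_id", "amount")
# e.g., features[1] holds the per-customer aggregates for customer 1
```

In the real system such aggregations are applied recursively across the graph of relations, with caching and Spark parallelism handling scale.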

Full Article

EU Developing Robot Badgers for Underground Excavation
IEEE Spectrum
Evan Ackerman
July 5, 2017

The European Union is underwriting the roBot for Autonomous unDerGround trenchless opERations, mapping, and navigation (BADGER) project, which will develop a "robotic system that will be able to drill, maneuver, localize, map, and navigate in the underground space, and which will be equipped with tools for constructing horizontal and vertical networks of stable bores and pipelines." The BADGER device will be capable of autonomously burrowing underground to create channels for pipes, navigating around existing infrastructure in the process. BADGER also will be able to 3D-print conduit as it goes along. BADGER's drilling mechanism will integrate rotary and impact drilling technologies, in addition to "a novel...ultrasonic drill-tool" designed to "foster pulverization of the rock," which the robot will siphon and flush out through its rear to keep the passage unimpeded. The machine will use bio-inspired peristaltic motion for propulsion, and radar antennas, lasers, and navigation systems will keep BADGER on course and enable it to dodge obstacles.

Full Article
Task Force Finds Open Data's 2 Biggest Challenges Are Finding It and Using It When You Do Find It
University of Warwick
June 30, 2017

Key challenges to effectively using open research data include researchers' inability to find data even when it is notionally accessible, or to use it if they do find it due to format variations and other compatibility issues, according to the first report of the Open Research Data Task Force. Other issues raised in the report concern software; data quality; automation; security; and selection, storage, and preservation. The Task Force also has identified opportunities and areas that could help address these issues, including scholarly journals leading adoption and implementation of appropriate data policies, and rectifying skills gaps among researchers in many fields, with suitable incentives. "The Task Force will now proceed to develop a full roadmap, that will aim to tackle the range of issues we have identified in our first report, and how that might be resourced, which will be published in 2018," says Task Force chair and University of Warwick professor Pam Thomas.

Full Article
Practical Parallelism
MIT News
Larry Hardesty
June 30, 2017

Researchers at the Massachusetts Institute of Technology's (MIT) Computer Science and Artificial Intelligence Laboratory have developed a system called Fractal that enables up to 88-fold acceleration in a parallelism strategy known as speculative execution. "What these systems do is execute...different chunks [of tasks] in parallel, and if they detect a conflict, they abort and roll back one of them," says MIT professor Daniel Sanchez. Fractal solves several inefficiencies introduced by large atomic tasks. A programmer adds a line of code to each subroutine within an atomic task that can be performed in parallel, typically increasing the length of the serial version of a program by only a few percent, whereas an implementation that explicitly synchronizes parallel tasks will often increase it by up to 400 percent. Circuits hardwired into the Fractal chip then manage the parallelization. Fractal also assigns a time stamp to each atomic task, with task order preserved by the subroutine order.
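The execute-detect-rollback loop Sanchez describes can be sketched in software. The following is a minimal illustration of optimistic (speculative) execution with conflict detection and rollback, not Fractal's hardware mechanism; all names are illustrative.

```python
# Minimal sketch of speculative execution: each task runs against a
# snapshot of shared state and commits only if nothing it read has
# changed in the meantime; otherwise its writes are discarded (rollback)
# and the task retries.
shared = {"x": 0, "y": 0}

def run_speculatively(task, reads):
    while True:
        snapshot = {k: shared[k] for k in reads}   # state the task depends on
        writes = task(snapshot)                    # compute tentative writes
        if all(shared[k] == snapshot[k] for k in reads):
            shared.update(writes)                  # no conflict: commit
            return
        # conflict detected: tentative writes are abandoned; retry

run_speculatively(lambda s: {"x": s["x"] + 1}, reads=["x"])
run_speculatively(lambda s: {"y": s["x"] * 2}, reads=["x"])
```

Fractal's contribution is making such tasks small and nestable, so a conflict aborts only a tiny subroutine rather than a large atomic block.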

Full Article
Disney Research, Pixar Animation Studios, and UCSB Accelerate Rendering With AI
James Badham
June 30, 2017

Researchers at the University of California, Santa Barbara, Disney Research, and Pixar Animation Studios have developed a new technology based on artificial intelligence and deep learning that eliminates visual inaccuracies and enables production-quality rendering of images at much faster speeds. The team used millions of examples from the movie "Finding Dory" to train a convolutional neural network to transform noisy images into noise-free images that resemble those computed with significantly more light rays. Following the training, the system was able to remove noise on test images from entirely different films even though they had completely different styles and color palettes. The research marks a significant step forward over previous state-of-the-art de-noising methods, which often left artifacts or residual noise. "This new technology allows us to automatically remove the noise while preserving the detail in our scenes," says Pixar's Tony DeRose. The research will be presented later this month at the ACM SIGGRAPH 2017 conference in Los Angeles.
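The noise in question comes from Monte Carlo rendering: a pixel's value is an average over randomly sampled light rays, so images rendered with few rays are grainy. The toy sketch below illustrates that source of noise (not the neural de-noiser itself); the function and parameters are hypothetical.

```python
import random

# Toy illustration of Monte Carlo rendering noise: a pixel's value is
# the average of random light-ray samples, so its variance shrinks as
# the ray count grows. The de-noiser learns to predict the many-ray
# result from a cheap few-ray render.
def render_pixel(true_value, n_rays, rng):
    samples = [true_value + rng.gauss(0, 0.5) for _ in range(n_rays)]
    return sum(samples) / n_rays

rng = random.Random(0)
noisy = render_pixel(0.7, n_rays=4, rng=rng)      # few rays: grainy estimate
clean = render_pixel(0.7, n_rays=4096, rng=rng)   # many rays: close to 0.7
```

Replacing the expensive many-ray render with a learned prediction is what makes the approach so much faster.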

Full Article
ASU Part of 6-University Initiative to Defend Against Cyberattacks
Pete Zrioka
June 30, 2017

Researchers at Arizona State University (ASU), along with colleagues from five other universities, are working on the Realizing Cyber Inception: Towards a Science of Personalized Deception for Cyber Defense project, whose goal is to conduct research on defending against cyberattacks by profiling the attackers. The work is supported by a $6.2-million award granted to the six partnering universities by the U.S. Army Research Office. The researchers will use the Cyber Defense EXercises for Team Awareness Research simulator to recreate cyberattack and defense scenarios. The team says the resulting data will be used to create cognitive models of decision-making by attackers. The cognitive models will be paired with a mathematical framework to develop examples of multilayered environments that can monitor cyberattacks. "Instead of using a generalized honeypot, we specialize the offense against them, creating an environment in which they don't know what's real and what's not," says ASU professor Nancy Cooke.

Full Article

Making Waves
IST Austria
June 29, 2017

Researchers at the Institute of Science and Technology Austria (IST Austria) have introduced a novel representation of waves that improves computational efficiency by at least an order of magnitude. The researchers use wave packet theory to develop realistic and detailed water wave simulations in real time. The team says each wave packet contains a collection of similar wavelengths, and larger wave formations are created by adding individual packets together. The method has never before been applied to computer graphics, and the team has developed a simulation that is more versatile and physically plausible than previous methods. In addition, the new method is largely independent of time-steps and does not rely on a computational grid, enabling users to look very far into the future or the past of the simulation, and to examine the waves very closely. The research will be presented later this month at the ACM SIGGRAPH 2017 conference in Los Angeles.
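The "adding individual packets together" idea is ordinary superposition. As a rough illustration (not the IST Austria model), a packet can be approximated as a sinusoid under a moving Gaussian envelope, and the surface height as the sum of packets; all parameters here are illustrative.

```python
import math

# Rough sketch: a "wave packet" as a narrow band of wavelengths,
# approximated by a sinusoid under a traveling Gaussian envelope.
def wave_packet(x, t, center, wavelength, speed, width=2.0):
    k = 2 * math.pi / wavelength                       # wavenumber
    envelope = math.exp(-((x - center - speed * t) / width) ** 2)
    return envelope * math.sin(k * (x - speed * t))

def surface_height(x, t, packets):
    # Larger wave formations arise by summing individual packets.
    return sum(wave_packet(x, t, *p) for p in packets)

packets = [(0.0, 1.0, 0.5),   # (center, wavelength, speed)
           (3.0, 2.0, 0.3)]
h = surface_height(1.0, 0.0, packets)
```

Because each packet is evaluated analytically at any (x, t), there is no grid and no accumulated time-step error, which is what lets the simulation jump far forward or backward in time.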

Full Article
Graphene and Terahertz Waves Could Lead the Way to Future Communication
Chalmers University of Technology
June 28, 2017

Researchers at the Chalmers University of Technology in Sweden say they have moved one step closer to a possible paradigm shift for the electronics industry. The researchers want to use graphene and terahertz waves in electronics in order to improve future data traffic. Graphene enables electrons to move much faster than in most conventional semiconductors, and this permits developers to access frequencies 100 to 1,000 times higher than gigahertz, constituting the terahertz range. "Data communication then has the potential of becoming up to 10 times faster and can transmit much larger amounts of data than is currently possible," says Chalmers' Andrei Vorobiev. The researchers have shown graphene-based transistor devices could receive and convert terahertz waves. The team is currently working to replace the silicon base on which the graphene is mounted, which limits the performance of the graphene, with other two-dimensional materials that can offset these limitations and enhance the effect.

Full Article

Thousands of Rome's Historical Images Digitized With Help of Stanford Researchers
Stanford News
Alex Shashkevich
June 29, 2017

Researchers at Stanford University have contributed to the creation of a digital visual archive charting Rome's evolution over the centuries. The archive includes nearly 4,000 digitized drawings, prints, photographs, and sketches of historic Rome from the 16th to 20th centuries. The archive was amassed over two years as a joint effort between Stanford's Center for Spatial and Textual Analysis, the Stanford University Libraries, the University of Oregon, Dartmouth College, and the Italian government. The collaborators scanned and generated high-resolution images of each of the thousands of materials collected by Roman archaeologist Rodolfo Lanciani. Every digital object was categorized and linked to a descriptive set of data for proper online storage and searching. The digital images and all associated descriptions are now permanently maintained in the Stanford Digital Repository. The effort is part of a larger project to reconstruct the spatial history of Rome, whose goal is an interactive map connected to the digitized archival materials.

Full Article
On Computer Science: A Turbo in the Algorithm
The Conversation
Serge Abiteboul; Christine Froidevaux
July 4, 2017

In an interview, Claude Berrou, a professor at France's IMT Atlantique, describes computer science as "to the sciences what natural language is to intelligence." Berrou notes his invention of turbo codes was driven by the desire to reduce the effect of noise and handle errors in transmission. "I thought of introducing the principle of negative feedback in the decoding process," Berrou says. He notes the concept involved importing an electronics method into computer science, starting with the Viterbi algorithm that enables the correction of transmission errors via a noisy channel, and can be thought of as a signal-to-noise ratio amplifier. "To protect a message, we add redundancy," Berrou says. "The turbo code performs the coding in two dimensions." Berrou cites the work of American mathematician Claude Shannon as demonstrating that all ideal transmissions should be achieved using two fundamental operations: message compression to eliminate the maximum amount of unnecessary redundancy, and the addition of intelligent redundancy to protect against errors.
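Shannon's second operation, adding intelligent redundancy to protect against errors, can be shown with the simplest possible code. The sketch below uses a 3x repetition code with majority-vote decoding; turbo codes are vastly more sophisticated, and this only illustrates the principle.

```python
# "To protect a message, we add redundancy": a 3x repetition code.
# Each bit is sent three times; the decoder takes a majority vote, so
# any single bit-flip per triple is corrected.
def encode(bits):
    return [b for b in bits for _ in range(3)]    # repeat each bit 3 times

def decode(coded):
    out = []
    for i in range(0, len(coded), 3):
        triple = coded[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)  # majority vote
    return out

msg = [1, 0, 1, 1]
coded = encode(msg)
coded[4] ^= 1                 # a single transmission error in the channel
recovered = decode(coded)     # the redundancy lets the decoder fix it
```

Repetition is wasteful redundancy; the achievement of turbo codes is approaching Shannon's limit with far less overhead, by iteratively exchanging probabilistic estimates between two decoders.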

Full Article
Here's to Adele, Dan and Alan for giving us Smalltalk
ACM Distinguished Speakers Program

Association for Computing Machinery

2 Penn Plaza, Suite 701
New York, NY 10121-0701

ACM Media Sales

If you are interested in advertising in ACM TechNews or other ACM publications, please contact ACM Media Sales at (212) 626-0686, or visit ACM Media for more information.

To submit feedback about ACM TechNews, contact: [email protected]