Welcome to the September 1, 2017 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Please note: In observance of the U.S. Labor Day holiday, TechNews will not be published on Monday, Sept. 4. Publication will resume Wednesday, Sept. 6.

ACM TechNews mobile apps are available for Android phones and tablets, as well as for iPhones and iPads.


Humans, Cover Your Mouths: Lip Reading Bots in the Wild
ZDNet
Robin Harris
August 30, 2017


Researchers at Oxford University in the U.K. and Google have developed an algorithm that has outperformed professional human lip readers, a breakthrough they say could lead to surveillance video systems that reveal the content of speech in addition to the actions of an individual. The researchers developed the algorithm by training Google's DeepMind neural network on thousands of hours of subtitled BBC TV videos showing a wide range of people speaking in a variety of poses, activities, and lighting conditions. The neural network, dubbed Watch, Listen, Attend, and Spell (WLAS), learned to transcribe videos of mouth motion to characters, using more than 100,000 sentences from the videos. By translating mouth movements into individual characters, WLAS was able to spell out words. The Oxford researchers found a professional lip reader could correctly decipher less than 25 percent of the spoken words, while the neural network deciphered 50 percent.
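As a loose illustration of the character-level spelling described above, the sketch below collapses per-frame character probabilities into a word via greedy argmax. This is a deliberate simplification: the real WLAS model uses an attention-based sequence decoder, and the alphabet and toy frames here are invented for illustration.

```python
import numpy as np

CHARS = " abcdefghijklmnopqrstuvwxyz"

def greedy_decode(frame_probs):
    """Collapse per-frame character distributions into a string by
    taking the argmax at each frame and merging repeats (a crude
    stand-in for WLAS's attention-based spelling decoder)."""
    ids = frame_probs.argmax(axis=1)
    out = []
    for i in ids:
        ch = CHARS[i]
        if not out or out[-1] != ch:
            out.append(ch)
    return "".join(out).strip()

# Toy input: three video frames whose mouth shapes spell "hi".
probs = np.zeros((3, len(CHARS)))
probs[0, CHARS.index("h")] = 1.0
probs[1, CHARS.index("h")] = 1.0  # repeated frame, merged away
probs[2, CHARS.index("i")] = 1.0
print(greedy_decode(probs))  # hi
```

The merge-repeats step mirrors why frame-level transcription is hard: many consecutive frames show the same mouth shape, so a decoder must decide which frames begin new characters.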

Full Article
Stroke Patient Improvement With a Brain-Computer Interface
University of Adelaide
August 31, 2017


Researchers at the University of Adelaide in Australia have demonstrated a brain-computer interface (BCI) that helps stroke patients improve motor function, generating a 36-percent improvement in motor function of a stroke-damaged hand. The interface measures electrical signals on the surface of the scalp. Each time a subject envisions performing a specific motor function, the BCI captures those signals and transmits them to a computer. An advanced mathematical algorithm then reads the brain signals and delivers appropriate sensory feedback via a robotic manipulator. "Our theory is that to achieve clinical results with BCIs we need to have the right feedback to the brain at the right time; we need to provide the same feedback that we receive during natural motor learning, when we are seeing and feeling the body's movement," says Adelaide's Sam Darvishi. "We also found there should be a short delay between the brain activation and the activation of target muscles."

Full Article

Fatal AI Mistakes Could Be Prevented by Having Human Teachers
New Scientist
Matt Reynolds
August 30, 2017


Researchers at Oxford University in the U.K. have found that, with proper human oversight, it might be possible to create artificial intelligence (AI) systems that can learn without failing multiple times. The researchers started with the simple, two-dimensional table tennis game "Pong." Normally, an AI system will let the ball go past its paddle a few hundred times before realizing that is not a good way of increasing its score. In the Oxford researchers' new model, a human steps in to avoid all of those repeated failures while a separate AI watches the interventions. After observing the human for 4.5 hours, this second AI was able to mimic the human overseer and prevent the Pong-playing AI from making serious errors. The study demonstrates that, given the right circumstances, it is possible to train an AI to learn a task without experiencing a serious failure.
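A minimal sketch of the two-phase idea described above: first a (simulated) human overseer vetoes dangerous actions, then a "blocker" learned from those recorded decisions takes over. The game state, the safety rule, and the lookup-table learner are all invented stand-ins, not the paper's actual setup.

```python
def human_overseer(paddle_y, ball_y, action):
    """Stand-in for the human teacher: veto any move that widens
    the gap between paddle and ball (a hypothetical safety rule)."""
    next_y = paddle_y + action
    return abs(next_y - ball_y) > abs(paddle_y - ball_y)

# Phase 1: record the human's veto decisions across observed states
# (the imitation dataset the second AI learns from).
blocker = {}
for paddle_y in range(11):
    for ball_y in range(11):
        for action in (-1, 0, 1):
            blocker[(paddle_y, ball_y, action)] = human_overseer(
                paddle_y, ball_y, action)

# Phase 2: the learned blocker (here a lookup table standing in for a
# trained classifier) replaces the human and overrides unsafe actions.
def safe_action(paddle_y, ball_y, proposed):
    if blocker.get((paddle_y, ball_y, proposed), False):
        # Substitute a move toward the ball instead of the vetoed one.
        return 1 if ball_y > paddle_y else -1 if ball_y < paddle_y else 0
    return proposed

print(safe_action(2, 8, -1))  # 1: the unsafe move is redirected toward the ball
```

The point of the construction is that the learning agent never has to experience the catastrophic outcome itself; the blocker absorbs that experience from the human's demonstrations.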

Full Article
Software and IT Top List of Highest-paying Industries, With Average Salaries of $105K
TechRepublic
Alison DeNisco
August 30, 2017


Software and information technology (IT) services top the list of the highest-paying industries in the U.S. with average wages of $104,700, according to a new LinkedIn report. Computer science was ranked the highest-paying field of study, with median salaries totaling $92,300 nationally. LinkedIn's Guy Berger says he sees improvement for employees across skill levels, industries, and regions, with gains in bargaining power for many workers particularly significant. In addition, LinkedIn found salaries on average rose with company headcount, but there also was variation in pay for individual tech positions, according to the size of their organizations. For example, chief technology officers earned the most at companies with 201 to 1,000 workers, while technical consultants were paid the most at companies with 1,001 to 10,000 employees. Geographically, average IT salaries are higher in San Francisco than in any other U.S. city, with median total compensation across positions and industries reaching $112,400.

Full Article

What James Damore Got Wrong About Gender Bias in Computer Science
Wired
John Hennessy; Maria Klawe; David Patterson
September 1, 2017


Several experts rebut former Google employee James Damore's contention that innate biological differences underlie female software engineers' underrepresentation in the tech industry. They say the existence of implicit bias has significant effects on women's observed performance, "and the more implicit gender bias a nation has, the worse its girls perform in science and math." In addition, the experts note established research and common sense dictate that members of underrepresented groups are more easily disenchanted because they face daily prejudices that others do not. "Third, many labor studies predict a dramatic shortage of software engineers over the next five years, which will limit the growth of an industry that plays a vital role in our economy," the experts say. A final point they raise is the need for face-to-face dialogue so participants can see the impact of their biases and identify flaws in their reasoning before circulating their misconceptions widely.

Full Article
NERSC Scales Scientific Deep Learning to 15 Petaflops
HPCwire
Rob Farber
August 28, 2017


Researchers at the U.S. Department of Energy's National Energy Research Scientific Computing Center (NERSC), Intel, and Stanford University have collaborated to develop the first 15-petaflops deep-learning software running on high-performance computing platforms, which they say represents the most scalable deep-learning implementation in existence. The researchers say a Cray XC40 system with 9,600 self-hosted 1.4 GHz Intel Xeon Phi Processor 7250-based nodes realized a peak rate between 11.73 and 15.07 petaflops, single-precision, and an average sustained performance of 11.41 to 13.47 petaflops when training on physics and climate-based datasets from NERSC's Cori Phase-II supercomputer. The team combined Intel Caffe, Intel Math Kernel Library, and Intel Machine Learning Scaling Library software to reach this milestone. "These were not just a set of heroic runs, they have solved real problems at the scale of a top-five supercomputer using new methods," says Intel's Joe Curley.

Full Article

Robot Learns to Follow Orders Like Alexa
MIT News
Adam Conner-Simons
August 30, 2017


Researchers at the Massachusetts Institute of Technology's (MIT) Computer Science and Artificial Intelligence Laboratory have developed ComText, a system that enables robots to understand contextual commands. Most robot learning strategies focus on semantic memory, while ComText is capable of observing a range of visuals and natural language to gather "episodic memory" about an object's size, shape, position, type, and even whether it is owned by someone. The researchers say from this knowledge base ComText can reason, deduce meaning, and respond to commands. "The main contribution is this idea that robots should have different kinds of memory, just like people," says MIT's Andrei Barbu. "We have the first mathematical formulation to address this issue, and we're exploring how these two types of memory play and work off of each other." Tests of the system on the Baxter robot were about 90-percent successful, and future plans include helping robots understand more complex information, such as multi-step instructions.
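The episodic-memory idea above can be sketched as a time-ordered event log that resolves context-dependent references ("the box I put down"). The class and event names below are invented for illustration and are not ComText's actual API.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One observed episode: who did what to which object, and when."""
    time: int
    actor: str
    action: str
    obj: str

class EpisodicMemory:
    def __init__(self):
        self.events = []

    def observe(self, event):
        """Append an observed episode to the time-ordered log."""
        self.events.append(event)

    def resolve(self, actor, action):
        """Resolve a contextual reference to the most recent object
        the actor applied the action to, scanning newest-first."""
        for ev in reversed(self.events):
            if ev.actor == actor and ev.action == action:
                return ev.obj
        return None

mem = EpisodicMemory()
mem.observe(Event(0, "anna", "put_down", "box"))
mem.observe(Event(1, "anna", "put_down", "mug"))
# "Pick up the thing Anna put down" resolves to the most recent match.
print(mem.resolve("anna", "put_down"))  # mug
```

The contrast with semantic memory is that nothing here is a general fact about mugs or boxes; the answer depends entirely on the recorded history of this particular scene.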

Full Article

Machine-Learning Earthquake Prediction in Lab Shows Promise
Los Alamos National Laboratory News
Nancy Ambrosiano
August 30, 2017


Researchers at Los Alamos National Laboratory have found a machine-learning system listening to the acoustic signal emitted by a laboratory-created earthquake can predict the time remaining before the fault fails. "The novelty of our work is the use of machine learning to discover and understand new physics of failure, through examination of the recorded auditory signal from the experimental setup," says Los Alamos researcher Paul Johnson. The team says the new research has potential significance for earthquake forecasting, as well as for other failure scenarios including nondestructive testing of industrial materials. The new machine-learning technique identifies new signals, previously thought to be low-amplitude noise, that provide forecasting information about the earthquake cycle. The researchers analyzed data from a laboratory fault system that contains fault gouge, the ground-up material created by the stone blocks sliding past one another. They note an accelerometer then recorded the acoustic emission emanating from the shearing layers.
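The forecasting idea, regressing time-to-failure from statistics of the acoustic signal, can be caricatured on synthetic data. The study itself used real laboratory signals and a more capable learned model, so the data-generating rule, the nearest-neighbor regressor, and all numbers below are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the lab data: as the fault nears failure, the
# variance of the acoustic emission rises (the previously-dismissed
# "noise" the study found to carry forecasting information).
time_left = rng.uniform(0.0, 10.0, 500)                      # time until fault failure
signal_variance = 1.0 / (time_left + 0.5) + rng.normal(0, 0.02, 500)

def predict_time_left(var):
    """1-nearest-neighbor regression on signal variance, standing in
    for the machine-learning model trained in the study."""
    return time_left[np.argmin(np.abs(signal_variance - var))]

# A high-variance acoustic window should be predicted as close to
# failure; a low-variance window as far from it.
near = predict_time_left(1.0 / 0.6)    # variance typical of ~0.1 time units left
far = predict_time_left(1.0 / 10.0)    # variance typical of ~9.5 time units left
print(near < far)
```

The key qualitative point survives even in this toy: once a statistic of the "noise" is monotonically tied to the failure cycle, a regressor can read remaining time directly from a single signal window.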

Full Article
Professor, Experts Aim to Improve Emergency Response
The Battalion (TX)
Emmy Bost
August 30, 2017


Researchers at Texas A&M University have developed a system to improve the transmission of critical information for emergency responders. The researchers are focused on the development of DistressNet-NG, the second generation of DistressNet, which was originally developed in 2011. The new version will enable responders to access broadband communication and data, where they will be able to send and receive video streams and communicate in situations where there is no communication infrastructure. If the new technology is determined to be viable, then the researchers will involve first responders and the public safety community to test network viability out in the field. "What this research, and research like it, will do is help us identify ways in which we can get that to the point of consumption, whether it's in the fire truck or the police car," notes Jason Moats, associate division director for the Emergency Services Training Institute with the Texas A&M Engineering Extension Service.

Full Article
Earning a Degree to Go to Camp
Inside Higher Ed
Quinn Burke; Louise Ann Lyon; James Bowring
August 29, 2017


Although coding boot camps often are advertised as a substitute for four-year degree programs, new research implies this is frequently not the case and that such programs increasingly are serving as a supplement to college degrees, according to three researchers at the College of Charleston and the non-profit Education, Training, and Research. "Our research suggests that while coding boot camps are hardly learning environments equivalent to an alternative to a four-year degree, neither are they the egalitarian learning environments open to all comers that many purport to be in their advertising," the researchers note. They also say their early research indicates the leading coding boot camps are as competitive as, if not more competitive than, national undergraduate and post-graduate programs in computer science. The boot camps were found to be four times more likely than colleges and universities to focus on student recruitment and admission and almost seven times more likely to focus on the learning profiles of their students.

Full Article
Computer Algorithm Links Facial Masculinity to Autism
University of Western Australia
Jess Reid
August 25, 2017


Researchers at the University of Western Australia (UWA) have found a link between masculine facial features and autism using three-dimensional (3D) photogrammetry. The research was designed to examine whether prepubescent boys and girls with an Autism Spectrum Disorder (ASD) exhibited more masculine features compared to those without the condition. Although genetic factors are known to play a major role in ASD, there is growing evidence that hormonal factors also influence the condition's development. The UWA researchers designed an algorithm to generate a gender score for a sample of 3D facial images, creating a scale ranging from very masculine to very feminine. The researchers found increased facial masculinity correlated with more social communication difficulties as measured on the Autism Diagnostic Observation Schedule. "In the long run, we hope to further explore the possibility for 3D facial images to be used as a complementary diagnostic tool to aid in the early identification of ASD," says UWA's Diana Tan.
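One simple way such a masculine-to-feminine scale could be constructed, purely illustrative and not the UWA team's actual algorithm, is to project each face's measurements onto the axis joining the male and female group means. The two-dimensional "facial" features and group parameters below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for 3D facial measurements (e.g. landmark
# distances); the feature dimensions and group means are invented.
male = rng.normal(loc=[1.0, 0.8], scale=0.1, size=(50, 2))
female = rng.normal(loc=[0.8, 1.0], scale=0.1, size=(50, 2))

# Axis joining the group means defines the masculinity direction;
# the midpoint anchors the zero of the scale.
axis = male.mean(axis=0) - female.mean(axis=0)
midpoint = (male.mean(axis=0) + female.mean(axis=0)) / 2

def gender_score(face):
    """Positive = more masculine, negative = more feminine."""
    return float((face - midpoint) @ axis)

print(gender_score(np.array([1.0, 0.8])) > 0)   # masculine prototype
print(gender_score(np.array([0.8, 1.0])) < 0)   # feminine prototype
```

A continuous score of this kind is what lets the study correlate "degree of facial masculinity" with symptom measures, rather than a binary male/female classification.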

Full Article
ORNL Researchers Turn to Deep Learning to Solve Science's Big Data Problem
Oak Ridge National Laboratory
Scott Jones
August 25, 2017


Researchers at Oak Ridge National Laboratory (ORNL) have received a three-year, $2-million grant from the U.S. Department of Energy (DoE) to study the potential of machine learning in revolutionizing scientific data analysis. The DoE grant will fund the Advances in Machine Learning to Improve Scientific Discovery at Exascale and Beyond project, whose goal is to use deep learning to help researchers understand massive datasets produced at the world's most sophisticated scientific facilities. The ORNL researchers are seeking to revolutionize current analysis paradigms by using deep learning to identify patterns in scientific data and alert scientists to potential new discoveries. The ORNL team plans to develop a deep-learning network that can decipher data from hundreds of thousands of inputs. "We revealed new capabilities not feasible with conventional computing architectures," says ORNL's Thomas Potok. "It potentially allows us to solve very complicated problems unsolvable with current computing technologies."

Full Article
Blossom: A Handmade Approach to Social Robotics From Cornell and Google
IEEE Spectrum
Evan Ackerman
August 23, 2017


Researchers at Cornell University and Google are developing Blossom, a social robot built from natural materials such as cotton and wool. Cornell's Guy Hoffman says Blossom features a flexible internal structure to give the robot a more imperfect organic feel and personality. Hoffman notes the internal softness is made possible by striking a balance between soft actuators and customizable mechanisms. Blossom's aesthetic is a response to the sleek design of current social robots, and it is geared to rekindle user appreciation for more traditional handcrafted artistry amidst an increasingly digital culture. "This also makes the robot more deeply personal," Hoffman notes. The decision to omit any facial features from the robot was made to avoid the impression that it knows all about the user. Google's Miguel de Andres-Clavera says Blossom was conceived "to provide developers with a platform they can use to create smart social companions."

Full Article

Association for Computing Machinery

2 Penn Plaza, Suite 701
New York, NY 10121-0701
1-800-342-6626
(U.S./Canada)



ACM Media Sales

If you are interested in advertising in ACM TechNews or other ACM publications, please contact ACM Media Sales at (212) 626-0686, or visit ACM Media for more information.

To submit feedback about ACM TechNews, contact: [email protected]