Welcome to the February 27, 2017 edition of ACM TechNews, providing timely information for IT professionals three times a week.

ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

Computer Bots Are Like Humans, Having Fights Lasting Years
University of Oxford
February 24, 2017


Software robots designed to fix errors on Wikipedia often engage in online fights lasting years, with bots repeatedly undoing each other's edits. Researchers at the University of Oxford and the Alan Turing Institute in the U.K. studied the interactions between bots on 13 different language editions of Wikipedia over 10 years. The study concluded that bots, like humans, appear to behave differently in culturally distinct environments and can have complex interactions with each other. Bots on English Wikipedia undid another bot's edits 105 times on average, while each German Wikipedia bot reverted another bot's work only 24 times. The number of reverts is smaller for bots than for humans, but conflicts involving bots last longer and are triggered later. The researchers suggest humans can react more quickly to reverts by other editors, whereas bots systematically crawl through articles and are restricted in the number of edits they are permitted to make.
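The persistence of these conflicts is easy to reproduce in miniature: two rule-based bots with inconsistent rules will keep undoing each other indefinitely, because neither re-examines the other's intent. The toy simulation below uses invented rules purely for illustration and is not drawn from the study.

```python
# Toy illustration of a bot-on-bot edit war (hypothetical rules, not the
# Wikipedia bots from the Oxford study).  Each bot "fixes" the article
# according to its own rule, undoing the other's change in the process.
def bot_a(text):
    return text.replace("colour", "color")      # enforce US spelling

def bot_b(text):
    return text.replace("color", "colour")      # enforce UK spelling

article = "The colour of the nebula is red."
history = [article]
for day in range(6):                             # each bot crawls the page periodically
    article = bot_a(article) if day % 2 == 0 else bot_b(article)
    history.append(article)

# The revision history oscillates forever -- neither bot ever "wins".
for rev, text in enumerate(history):
    print(rev, text)
```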

Full Article
HPC Technique Propels Deep Learning at Scale
HPC Wire
Tiffany Trader
February 21, 2017


Baidu's Silicon Valley Artificial Intelligence Lab (SVAIL) has released a modified implementation of OpenMPI's ring all-reduce algorithm for the deep-learning community, which will enable faster training of neural networks across graphics-processing unit (GPU) nodes. Unlike the OpenMPI version, the SVAIL modification avoids making extraneous copies between the central processing unit (CPU) and the GPU. Although commonplace in high-performance computing, the technique has been underused within AI and deep learning, according to Baidu. Compared with using a single GPU, the ring all-reduce algorithm is about 31 times faster at 40 GPUs. The algorithm has enabled the SVAIL team to achieve linear GPU scaling up to 128 GPUs and to parallelize the training of Deep Speech 2, its speech-recognition model. Two years after the approach was initially developed, the researchers have issued two non-proprietary implementations, one for TensorFlow and one for more general applications.
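In ring all-reduce, each of N workers exchanges gradient chunks only with its ring neighbors, so per-worker communication volume stays roughly constant as N grows. The sketch below simulates the algorithm's two phases (scatter-reduce, then all-gather) in plain NumPy within a single process; the function name and chunking scheme are illustrative, not Baidu's implementation.

```python
import numpy as np

def ring_allreduce(node_chunks):
    """Toy single-process simulation of ring all-reduce.
    node_chunks[i][j] is worker i's local copy of gradient chunk j.
    Every worker ends up holding the elementwise sum of every chunk."""
    n = len(node_chunks)
    data = [[c.copy() for c in node] for node in node_chunks]

    # Phase 1: scatter-reduce.  In step s, worker i passes chunk (i - s) mod n
    # to worker (i + 1) mod n, which adds it to its own copy.
    for s in range(n - 1):
        sends = [((i - s) % n, data[i][(i - s) % n].copy()) for i in range(n)]
        for i, (chunk, payload) in enumerate(sends):
            data[(i + 1) % n][chunk] += payload

    # Phase 2: all-gather.  Each worker forwards the fully reduced chunk it
    # holds; after n - 1 steps every worker has every reduced chunk.
    for s in range(n - 1):
        sends = [((i + 1 - s) % n, data[i][(i + 1 - s) % n].copy()) for i in range(n)]
        for i, (chunk, payload) in enumerate(sends):
            data[(i + 1) % n][chunk] = payload
    return data

# 4 simulated workers, each holding a gradient split into 4 chunks of length 2.
workers = [[np.full(2, float(i + j)) for j in range(4)] for i in range(4)]
result = ring_allreduce(workers)
assert all(np.array_equal(result[0][j], result[i][j]) for i in range(4) for j in range(4))
```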

Full Article
U.S. Computing Leadership Under Threat, Says House Science Chair
Computerworld
Patrick Thibodeau
February 27, 2017


U.S. Rep. Lamar Smith (R-TX), chair of the House Science, Space, and Technology Committee, believes the U.S. is being dethroned as the world leader in computing. Smith says the U.S. National Science Foundation (NSF) should redirect its research and development (R&D) funding from efforts he calls "frivolous" or "low risk," such as anthropology, into biology, physics, computer science, and engineering. Smith says U.S. "pre-eminence in several fields is slipping," pointing in particular to supercomputing. However, he is not suggesting the U.S. increase federal R&D spending, because he says the U.S. is already spending more than any other country. Meanwhile, defenders of the NSF's current R&D agenda include American Anthropological Association executive director Ed Liebow, who contends NSF-funded research in anthropology is helping train people who are of growing value at high-tech firms.

Full Article
3D Memory Sparks New Thinking in HPC System Design
The Next Platform
Nicole Hemsoth
February 21, 2017


High-performance computing (HPC) centers will need to begin rethinking how memory is provisioned given the rise of three-dimensional (3D) memory, according to researchers at the Barcelona Supercomputing Center in Spain. Emerging 3D memories support much higher memory bandwidth, lower latency, and higher energy efficiency than traditional dual in-line memory modules (DIMMs). The researchers assessed how disruptive 3D memory will be based on two prominent HPC performance-assessment benchmarks. They say stacked memory is unlikely to meet the requirements of the High Performance Linpack benchmark. However, with low memory footprints and performance proportional to the available memory bandwidth, the High Performance Conjugate Gradients benchmark is a better fit for memory systems based on 3D chiplets. Although replacing conventional DIMMs with 3D-stacked devices may not lead to significant performance improvements, combining the performance boost of 3D memory and the high capacity of DIMMs could be the next breakthrough in memory system design.
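The divergence between the two benchmarks comes down to arithmetic intensity: dense Linpack-style kernels perform many floating-point operations per byte of memory traffic, while sparse conjugate-gradient kernels perform very few, so the latter are capped by memory bandwidth rather than compute. The back-of-the-envelope roofline calculation below makes the point; every number in it is an illustrative assumption, not a figure from the BSC study.

```python
# Roofline-style estimate of attainable performance (illustrative numbers only).
peak_flops = 3.0e12     # assumed per-node peak: 3 TFLOP/s
bandwidth = {"conventional DIMMs": 100e9, "3D-stacked memory": 400e9}  # bytes/s, assumed

# Rough arithmetic intensities (FLOPs per byte of memory traffic):
# dense HPL-like kernels are high, sparse HPCG-like kernels are very low.
intensity = {"HPL-like": 50.0, "HPCG-like": 0.25}

for kernel, ai in intensity.items():
    for mem, bw in bandwidth.items():
        attainable = min(peak_flops, ai * bw)   # roofline: compute- or bandwidth-bound
        print(f"{kernel:9s} | {mem:19s} | {attainable / 1e12:5.2f} TFLOP/s attainable")
```

With these assumed figures the HPL-like kernel is compute-bound either way, while the HPCG-like kernel's attainable rate scales directly with the fourfold bandwidth increase of the stacked memory.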

Full Article

AI Beats Professional Players at Super Smash Bros. Video Game
New Scientist
Timothy Revell
February 24, 2017


Researchers at the Massachusetts Institute of Technology (MIT) trained an artificial intelligence (AI) system called SmashBot to play Nintendo's "Super Smash Bros. Melee" using deep-learning algorithms; the system then challenged and defeated 10 highly ranked players. Super Smash Bros. is different from other games taken on by AI systems because it is multiplayer and moves cannot be planned in advance. The researchers trained SmashBot using reinforcement learning by initially pitting it against the in-game AI. They then entered SmashBot in two tournaments with professional players. SmashBot won more battles than it lost against each of the 10 players, who ranked from 16th to 70th in the world. SmashBot plays with a reaction speed of about 33 milliseconds, compared with more than 200 milliseconds for humans. However, the researchers want to restrict the AI's reaction time to build a system that is strategically superior when playing at human speed.
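Reinforcement learning of this kind boils down to a loop in which the agent observes the game state, picks an action, and updates its value estimates from the reward it receives. The sketch below shows the plain tabular Q-learning version of that loop against a fixed in-game opponent; the environment interface is hypothetical, and the MIT system used deep neural networks rather than a lookup table.

```python
import random
from collections import defaultdict

def train(env, episodes=1000, alpha=0.1, gamma=0.99, eps=0.1):
    """Tabular Q-learning against a scripted opponent.  `env` is assumed to
    expose reset(), step(action) -> (next_state, reward, done), and
    legal_actions(state) -- a hypothetical gym-style interface."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            actions = env.legal_actions(state)
            # epsilon-greedy: mostly exploit the current value estimates
            if random.random() < eps:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            best_next = 0.0 if done else max(
                Q[(next_state, a)] for a in env.legal_actions(next_state))
            # temporal-difference update toward reward + discounted future value
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```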

Full Article

Neural Networks Promise Sharpest Ever Images
Royal Astronomical Society
February 22, 2017


Astronomers at the Swiss Federal Institute of Technology (ETH Zurich) in Switzerland have taught a pair of neural networks to sharpen blurry images of outer space captured by telescopes. They say a telescope's resolution is limited by the size of its mirror or lens, but the new method could offer the sharpest images in optical astronomy. The researchers taught a neural network to recognize galaxies and automatically clean up blurred images by showing it samples of sharp and degraded pictures of the same galaxy. The study used a generative adversarial network, an approach that pits two neural networks against each other. Given degraded images, the pair identified and sharpened features such as star-forming regions and dust lanes. The neural networks also were more accurate than other clean-up systems. The astronomers say the results open up the possibility that new discoveries could be found in old data captured by telescopes.
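A generative adversarial network trains two models in opposition: a generator that restores a degraded image and a discriminator that tries to tell restorations apart from genuinely sharp images. The PyTorch sketch below is a minimal illustration of one adversarial training step, not the ETH Zurich architecture; the layer sizes and the extra L1 term are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Generator: maps a blurred single-channel image to a restored one.
G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))
# Discriminator: scores an image as "genuinely sharp" vs. "restored".
D = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(blurred, sharp):
    """One adversarial update on a batch of (blurred, sharp) image pairs."""
    real, fake = torch.ones(sharp.size(0), 1), torch.zeros(sharp.size(0), 1)
    restored = G(blurred)
    # Discriminator step: real sharp images vs. detached restorations.
    d_loss = bce(D(sharp), real) + bce(D(restored.detach()), fake)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: fool the discriminator while staying close to the sharp target.
    g_loss = bce(D(restored), real) + F.l1_loss(restored, sharp)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example: one step on a random batch of 8 images of size 64x64.
d, g = train_step(torch.rand(8, 1, 64, 64), torch.rand(8, 1, 64, 64))
```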

Full Article
Girls Outperform Boys When Computer Science Is on Curriculum
Silicon Republic
John Kennedy
February 21, 2017


A recent report commissioned by the National Council for Curriculum and Assessment (NCCA) in Ireland found girls on average receive better grades than their male counterparts when computer science is part of the school curriculum. The report aims to provide advice on the best methods for implementing a course for upper-level secondary school students. In addition, the report stresses the importance of teacher professional development in order to ensure the adoption and sustainability of such a curriculum. The NCCA report focused on the experiences of those implementing computer science courses in the U.K., Scotland, New Zealand, Canada, and Israel. If taught well, computer science educates students in problem solving, innovation, and creativity, notes Irish Education Minister Richard Bruton. In addition, the report says computer science boosts career opportunities, as students with an understanding of computer science are in demand across a wide range of fields.

Full Article
New Resource for Optical Chips
MIT News
Larry Hardesty
February 20, 2017


Researchers at the Massachusetts Institute of Technology (MIT) demonstrated that silicon optical devices can reproduce physical phenomena used by high-end telecommunications optoelectronic components. The components exploit second-order nonlinearities, making optical signal processing more efficient and robust. The researchers used the new silicon-photonics method to create prototypes of a modulator and a frequency doubler. Existing silicon modulators are doped, meaning they contain impurities that cause free-carrier electrons to concentrate at the center of the modulator, absorbing light and diminishing the optical signal's strength. The new modulator is undoped; instead, its free carriers help produce an electric field that modulates the optical signal much faster than existing silicon modulators can. Prototypes of the modulator have recorded speeds competitive with those of the nonlinear modulators found in telecom networks. "Applying a simple electric field creates the same basic crystal polarization vector that other researchers have worked hard to create by far more complicated means," says IBM Research's Jason Orcutt.
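The frequency doubling these devices perform follows directly from a second-order nonlinearity: a polarization term proportional to E squared turns an input at frequency f into output at 2f (plus a DC offset), because cos^2(wt) = (1 + cos 2wt)/2. The short NumPy check below demonstrates only that textbook arithmetic; the coefficient is exaggerated for visibility and nothing in it is device-specific.

```python
import numpy as np

fs, f0 = 10_000.0, 100.0                     # sample rate and input frequency (arbitrary units)
t = np.arange(0, 1, 1 / fs)
E = np.cos(2 * np.pi * f0 * t)               # field oscillating at frequency f0

chi2 = 0.5                                   # second-order coefficient (exaggerated, assumed)
P = E + chi2 * E**2                          # polarization with a second-order nonlinearity

spectrum = np.abs(np.fft.rfft(P))
freqs = np.fft.rfftfreq(len(P), 1 / fs)
peaks = freqs[spectrum > 0.05 * spectrum.max()]
print(peaks)                                 # ~[0, 100, 200]: DC, the input, and the doubled frequency
```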

Full Article
Stanford Researchers Create a High-Performance, Low-Energy Artificial Synapse for Neural Network Computing
Stanford News
Taylor Kubota
February 20, 2017


A team of researchers at Stanford University and Sandia National Laboratories has created an artificial synapse that mimics the way real synapses in the brain learn information from the signals they receive. Whereas traditional computing involves separately processing information and then storing it in memory, this device creates the memory by processing. The artificial synapse is based on a battery design. Three terminals are spaced across two flexible films, and connected by an electrolyte of saltwater. The synapse then works as a transistor, with the flow of electricity between two terminals controlled by the remaining terminal. A simulated array of artificial synapses was able to recognize handwritten numbers with 93-percent to 97-percent accuracy. The researchers say this technology eventually could be used to create a computer that can better imitate the way a human brain processes auditory and visual signals.
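Arrays of such analog synapses are typically arranged as crossbars that compute a neural-network layer's matrix-vector product directly in the memory elements, with each synapse's conductance acting as a weight. The NumPy sketch below illustrates that idea on random data with a crude noise model; it is not the Stanford/Sandia simulation, and the sizes and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 784-input, 10-output crossbar: each entry is a synapse conductance (the weight).
weights = rng.normal(0, 0.1, size=(10, 784))

def crossbar_forward(x, conductance_noise=0.02):
    """In-memory matrix-vector multiply: input voltages x drive the rows and the
    output currents are the weighted sums.  Device-to-device variation is modeled
    as multiplicative noise on the conductances (assumed value)."""
    noisy = weights * (1 + rng.normal(0, conductance_noise, size=weights.shape))
    return noisy @ x

x = rng.random(784)                       # e.g., a flattened 28x28 handwritten digit
scores = crossbar_forward(x)
print("predicted class:", int(np.argmax(scores)))
```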

Full Article

Tricky Landing
UC Magazine
Michael Miller
February 20, 2017


University of Cincinnati (UC) researchers are using a U.S. National Science Foundation grant to test how fuzzy logic can help autonomous aerial drones overcome the difficulties of landing on a moving platform. "In linguistic terms, we say large, medium, and small rather than defining exact sets," says UC professor Manish Kumar. "We want to translate this kind of fuzzy reasoning used in humans to control systems." The approach, which the researchers call "genetic-fuzzy" because the system evolves over time and continuously discards lesser solutions, helps the drone make good navigational decisions. The UC researchers used genetic-fuzzy logic in a simulation to show it is well suited to navigating under dynamic conditions. "Compared to other state-of-the-art techniques of adaptive thinking and deep learning, our approach appears to possess several advantages," says UC professor Kelly Cohen. "Genetic fuzzy is scalable, adaptable, and very robust."
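Fuzzy control replaces crisp thresholds with overlapping membership functions ("small," "medium," "large") and blends the rules that partially apply. The snippet below sketches that idea for a single variable, the drone's horizontal distance to the landing platform; the membership ranges and rule outputs are invented for illustration and are not the UC genetic-fuzzy system, which additionally evolves these functions with a genetic algorithm.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def lateral_speed(distance_m):
    """Blend three fuzzy rules: the farther from the platform, the faster the
    drone moves toward it.  Ranges and speeds are illustrative only."""
    memberships = {
        "small":  tri(distance_m, -1.0, 0.0, 2.0),
        "medium": tri(distance_m,  1.0, 3.0, 5.0),
        "large":  tri(distance_m,  4.0, 8.0, 12.0),
    }
    rule_speed = {"small": 0.2, "medium": 1.0, "large": 2.5}   # m/s
    total = sum(memberships.values()) or 1.0
    # Weighted average of rule outputs (centroid-style defuzzification).
    return sum(memberships[k] * rule_speed[k] for k in memberships) / total

for d in (0.5, 2.5, 4.5, 9.0):
    print(f"distance {d:4.1f} m -> lateral speed {lateral_speed(d):.2f} m/s")
```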

Full Article
Smartphones Are Revolutionizing Medicine
Agence France-Presse
Jean-Louis Santini
February 18, 2017


Researchers say smartphone add-ons and applications are turning smartphones into revolutionary medical tools. University of Washington professor Shwetak Patel notes asthma and other pulmonary illnesses can be diagnosed with smartphone microphones, while the camera and flash on a mobile phone could be employed to diagnose blood disorders. "With these enabling technologies you can manage chronic diseases outside of the clinic and with a non-invasive clinical tool," Patel says. He also says a smartphone's motion sensors can detect resonances produced when a user taps on their elbow, which can help diagnose osteoporosis. Patel says these innovations can encourage better self-care by patients, especially in developing countries and in individuals faced with conditions such as diabetes and cancer. "The pervasiveness of the adoption of mobile platforms is quite encouraging for grappling with pervasive socio-economic determinants in terms of healthcare disparities," says Georgia Institute of Technology professor Elizabeth Mynatt.

Full Article

Researchers Design Facial Recognition System for Lemurs
UA News (AZ)
Alexis Blue
February 17, 2017


Researchers at the University of Arizona (UA) worked with collaborators at Michigan State University and Hunter College to develop LemurFaceID, a computer-assisted facial-recognition system designed to identify lemurs in the wild. LemurFaceID will enable anthropologists to build a database of photos of lemurs in Madagascar, which they can then use to identify individual animals and verify field observations. The technology also could facilitate collaboration, information sharing, and more integrated research among scientists. LemurFaceID is based on precepts of human face-recognition systems, identifying features such as hair and skin patterns, and achieves 98.7-percent accuracy when provided two facial images of the same animal. The research team tested the system with a dataset of 462 images of 80 individual red-bellied lemurs and a database of 190 images of other lemurs. UA professor Stacey Tecot says LemurFaceID could help researchers conduct a population census, and it possibly could be extended into an educational smartphone application for tourists.
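Systems like this typically align the face, extract local texture features, and match them against a gallery of known individuals. The OpenCV sketch below shows that pipeline in its simplest form using the library's LBPH (local binary pattern histogram) recognizer; it requires the opencv-contrib-python package, the file paths are placeholders, and it is only a loose analogue of LemurFaceID, not the published method.

```python
import cv2
import numpy as np

# Gallery: grayscale, pre-cropped face images of known individuals (placeholder paths).
gallery = [("lemur_01.png", 0), ("lemur_01b.png", 0), ("lemur_02.png", 1)]
images = [cv2.imread(path, cv2.IMREAD_GRAYSCALE) for path, _ in gallery]
labels = np.array([label for _, label in gallery])

# Local-binary-pattern histogram recognizer from opencv-contrib.
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(images, labels)

# Identify a new sighting: a lower 'distance' means a closer texture match.
query = cv2.imread("unknown_sighting.png", cv2.IMREAD_GRAYSCALE)
label, distance = recognizer.predict(query)
print(f"best match: individual {label} (distance {distance:.1f})")
```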

Full Article
ACM Digital Library
 
ACM Distinguished Speakers Program
 

Association for Computing Machinery

2 Penn Plaza, Suite 701
New York, NY 10121-0701
1-800-342-6626
(U.S./Canada)



ACM Media Sales

If you are interested in advertising in ACM TechNews or other ACM publications, please contact ACM Media Sales at (212) 626-0686, or visit ACM Media for more information.

To submit feedback about ACM TechNews, contact: [email protected]