Welcome to the September 6, 2017 edition of ACM TechNews, providing timely information for IT professionals three times a week.

ACM TechNews mobile apps are available for Android phones and tablets, as well as for iPhones and iPads.

To view "Headlines At A Glance," hit the link labeled "Click here to view this online" found at the top of the page in the html version. The online version now has a button at the top labeled "Show Headlines."

Flip-Flop Qubits: Radical New Quantum Design Invented
UNSW Newsroom
Wilson Da Silva
September 6, 2017


Researchers at the University of New South Wales (UNSW) in Australia have created a new quantum computing architecture based on "flip-flop quantum bits (qubits)," which they say could ease the large-scale manufacture of quantum chips. The new chip design permits a silicon quantum processor to be scaled up without the precise placement of atoms required in other methods, while also enabling qubits to be positioned hundreds of nanometers apart without losing entanglement. "What...the team has invented is a new way to define a 'spin qubit' that uses both the electron and the nucleus of the atom," says UNSW professor Andrea Morello. "Crucially, this new qubit can be controlled using electric signals, instead of magnetic ones. Electric signals are significantly easier to distribute and localize within an electronic chip." Morello says the new design is "easier to fabricate than atomic-scale devices, but still allows us to place a million qubits on a square millimeter."

Full Article

Supercomputers Take Big Green Leap in 2017
TOP500.org
Michael Feldman
September 5, 2017


The most environmentally friendly supercomputers more than doubled their energy efficiency in 2017, and maintaining this trend could make exascale supercomputers running on less than 20 MW of power a possibility in several years' time, according to the latest Green500 ranking. Between June 2016 and June 2017, the average energy efficiency of the top 10 systems soared from 4.8 gigaflops/watt to 11.1 gigaflops/watt, mainly due to the deployment of supercomputers outfitted with NVIDIA's Tesla P100 graphics-processing units. Six of the top 10 supercomputers are petascale systems, and Japan's Tsubame 3.0 system is the current Green500 champion. Because these systems are based on commodity components, their technologies can be implemented across a wide range of high-performance computing applications. The latest Green500 list also names 12 systems powered by Intel's Xeon Phi processors, the most efficient of which achieves about 40 percent of the efficiency of the P100-powered Tsubame 3.0.
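As a back-of-the-envelope check (our arithmetic, not a figure from the article), the 20 MW target implies an efficiency several times better than today's top-10 average:

```python
# What efficiency does "exascale under 20 MW" actually require?
exaflops = 1e18          # 1 exaflops = 10^18 floating-point ops per second
power_budget_w = 20e6    # 20 MW power envelope

required_gflops_per_watt = exaflops / power_budget_w / 1e9
print(f"Required efficiency: {required_gflops_per_watt:.0f} gigaflops/watt")
# -> 50 gigaflops/watt, versus the 11.1 gigaflops/watt
#    top-10 average reported for June 2017.
```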

Full Article
Algorithm Unlocks Smartwatches That Learn Your Every Move
University of Sussex (United Kingdom)
James Hakner
September 4, 2017


Researchers at the University of Sussex in the U.K. have developed an algorithm that enables smartwatches to detect and record their users' movements without being told beforehand what to look for. The new algorithm enables smartwatches to detect activities as they occur, even those not associated with exercise, such as brushing teeth or cutting vegetables. "Here we present a new machine-learning approach that detects new human activities as they happen in real time, and which outperforms competing approaches," says University of Sussex researcher Hristijan Gjoreski. Current activity-recognition systems "cluster" bursts of activity to estimate what a user has been doing and for how long. The new algorithm instead tracks ongoing activity, focusing on the transitions between activities as well as on the activities themselves. "Future smartwatches will be able to better analyze and understand our activities by automatically discovering when we engage in some new type of activity," says University of Sussex researcher Daniel Roggen.
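The article does not publish the algorithm itself. As a rough, hypothetical sketch of the general idea it describes (classifying a stream of sensor windows while flagging windows that match no known activity, instead of forcing a fit), something like the following could serve; the features, templates, and threshold are all illustrative assumptions:

```python
import numpy as np

# Hypothetical illustration: classify sliding windows of accelerometer
# magnitude against known activity "templates" (mean feature vectors),
# and flag windows far from every template as a new, unseen activity.

TEMPLATES = {                        # per-activity template of (mean, std) features
    "walking":        np.array([1.2, 0.8]),
    "brushing_teeth": np.array([0.3, 0.5]),
}
NOVELTY_THRESHOLD = 0.6              # illustrative distance cutoff

def window_features(accel_window: np.ndarray) -> np.ndarray:
    """Reduce a window of accelerometer magnitudes to (mean, std)."""
    return np.array([accel_window.mean(), accel_window.std()])

def classify(accel_window: np.ndarray) -> str:
    feats = window_features(accel_window)
    dists = {name: float(np.linalg.norm(feats - t)) for name, t in TEMPLATES.items()}
    best = min(dists, key=dists.get)
    if dists[best] > NOVELTY_THRESHOLD:
        return "new activity"        # report novelty instead of forcing a fit
    return best

rng = np.random.default_rng(0)
print(classify(rng.normal(1.2, 0.8, size=100)))  # close to the "walking" template
print(classify(rng.normal(5.0, 3.0, size=100)))  # -> "new activity"
```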

Full Article
How to Better Track the Movement of Robots
Phys.org
Yan Ou
September 1, 2017


Researchers at the Nanjing University of Aeronautics and Astronautics (NUAA) in China propose a method to better control the tracking of self-balancing robots. The "sliding mode control" technique pulls information from the nonlinear system, which can behave differently depending on varying factors such as time, and the algorithm then organizes that information into a representation of the robot's normal behavior. "Although different [sliding mode control] schemes have been extensively studied in the practical systems, the [sliding mode control] needs to be further developed for the self-balancing robot," says NUAA professor Mou Chen. For example, Chen says the dynamic information of the unknown disturbance should be fully employed. To better account for this unknown disturbance, which could take the form of skidding or slipping, the researchers introduced a disturbance observer into the sliding mode controller. The observer mathematically estimates the value of the unknown disturbance, enabling the sliding mode control method to compensate and keep the robot behaving normally.
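The article gives no equations, but the pattern it describes (a sliding mode controller augmented with a disturbance observer that estimates the unknown disturbance so the controller can cancel it) can be sketched on a toy double-integrator plant; the model, gains, and disturbance signal below are illustrative assumptions, not the authors' design:

```python
import numpy as np

# Toy plant (illustrative, not the paper's): x_ddot = u + d(t), where
# d(t) is an unknown disturbance such as skidding or slipping. We drive
# x to zero with sliding mode control (SMC) and estimate d with a
# simple nonlinear disturbance observer.
dt, T = 0.001, 5.0
lam, k = 5.0, 2.0         # sliding-surface slope and SMC switching gain
l_obs = 50.0              # disturbance-observer gain (assumed value)
x, v, z = 1.0, 0.0, 0.0   # position, velocity, observer internal state

for step in range(int(T / dt)):
    t = step * dt
    d = 0.5 * np.sin(2 * t)     # "unknown" disturbance; the controller never reads d
    d_hat = z + l_obs * v       # observer's estimate of d
    s = v + lam * x             # sliding surface s = e_dot + lam*e (reference = 0)
    u = -lam * v - d_hat - k * np.tanh(s / 0.05)  # SMC law with smoothed sign()
    # Observer dynamics: z_dot = -l*(z + l*v) - l*u, which makes the
    # estimate converge as d_hat_dot = l*(d - d_hat).
    z += (-l_obs * (z + l_obs * v) - l_obs * u) * dt
    # Integrate the plant (Euler).
    x += v * dt
    v += (u + d) * dt

print(f"final |x| = {abs(x):.4f}, estimate error |d - d_hat| = {abs(d - d_hat):.4f}")
```

Both printed values should be near zero: the observer tracks the sinusoidal disturbance, and the controller cancels it.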

Full Article
ACM US Public Policy Council to Host Panel on Algorithmic Transparency and Accountability
CCC Blog
Helen Wright
August 28, 2017


The U.S. Public Policy Council of the ACM (USACM) will host a panel discussion, "Algorithmic Transparency and Accountability," on Sept. 14 in Washington, D.C. The event is a forum for dialogue between stakeholders and leading computer scientists about the expanding effect of algorithmic decision-making on society and the technical underpinnings of algorithmic models. Among the topics to be discussed are the principles presented in USACM's recent Statement on Algorithmic Transparency and Accountability, which was published jointly with the ACM Europe Council Policy Committee. The panelists also will seek cooperative opportunities for academia, government, and industry concerning these principles. Moderating the panel will be Simson L. Garfinkel, co-chair of USACM's working group on Algorithmic Transparency and Accountability. The panel also will include Northwestern University professor Nicholas Diakopoulos, Clarkson University professor Jeanna Neefe Matthews, Stroz Friedberg vice president Geoff A. Cohen, Legal Robot co-founder Dan Rubins, and Ansgar Koene of the University of Nottingham in the U.K.

Full Article
Most TV Computer Scientists Are Still White Men. Google Wants to Change That
USA Today
Jessica Guynn
September 1, 2017


Google is urging Hollywood to diversify the gender and ethnicity of computer scientists appearing on TV and film to combat a preponderance of geeky white males portraying those roles, as indicated by a new study from the University of Southern California (USC). Google's Daraiha Greene says the reinforcement of this stereotype discourages underrepresented groups from pursuing computer science (CS) careers. Google wants to demonstrate that coding is a skill attainable by all by consulting with content creators on CS-related storylines for TV programs. There are indications of progress: the USC study estimates almost 25 percent of the characters engaged in CS in shows that worked with Google were female, compared with none in a matched sample of content. Google also notes a new study it commissioned found girls who watched the first season of the Google YouTube series "Hyperlinked" were 11 percent more likely to be interested in CS careers than non-viewers.

Full Article

'Simple' Chess Puzzle Holds Key to $1M Prize
University of St. Andrews
Fiona MacLeod
August 31, 2017


Researchers at the University of St. Andrews in the U.K. are launching a competition to find an efficient solution to the famous "Queens Puzzle," offering $1 million to the winning team. The researchers think that if the puzzle can be solved efficiently, it would lead to programs capable of solving tasks currently considered impossible, such as decrypting the toughest security on the Internet. The Queens Puzzle originally challenged a player to place eight queens on a standard chessboard so no two queens could attack each other. Although the problem has been solved by human beings, once the chessboard grows to 1,000 by 1,000 squares, computer programs can no longer handle the vast number of options. "If you could write a computer program that could solve the problem really fast, you could adapt it to solve many of the most important problems that affect us all daily," says St. Andrews professor Ian Gent.
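For context (this is the textbook approach, not code from the researchers), the classic backtracking solver below dispatches the 8-queens case instantly, yet its worst-case running time grows explosively with board size, which is exactly why an efficient general method would be so valuable:

```python
def solve_n_queens(n: int):
    """Return one n-queens placement (column per row) via backtracking, or None.

    Instant for n = 8, but worst-case time grows exponentially with n,
    the blow-up the article alludes to for 1,000 x 1,000 boards.
    """
    cols, diag1, diag2 = set(), set(), set()
    placement = []

    def place(row: int) -> bool:
        if row == n:
            return True
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue  # square is attacked by a queen already placed
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            placement.append(col)
            if place(row + 1):
                return True
            cols.discard(col); diag1.discard(row - col); diag2.discard(row + col)
            placement.pop()
        return False

    return placement if place(0) else None

print(solve_n_queens(8))  # -> [0, 4, 7, 5, 2, 6, 1, 3]
```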

Full Article
Heterogeneous Supercomputing on Japan's Most Powerful System
The Next Platform
Ken Strandberg
August 28, 2017


In an interview, Tokyo Institute of Technology professor Satoshi Matsuoka discusses how Japan's Tsubame 3 supercomputer will execute big data and artificial intelligence (AI) workloads. Matsuoka says the Intel Omni-Path Architecture-based interconnect will provide significant injection bandwidth to enable scalable machine-learning workloads, especially deep-learning training and inference. "We...expect to do both simulation and analytics and AI on the same machine simultaneously," Matsuoka notes. Tsubame 3 is not a large system in terms of cluster size, but Matsuoka says the compact design was deliberate, driven by the need for efficiency; he notes, "we want something like this to eventually make it into the cloud as the norm." Matsuoka says Tsubame 3 functions as a hybrid system in which traditional simulation workloads and big data analytics workloads are co-located. "Co-location offers more tightly coupling of the applications, which leads to more innovative uses of machine learning and analytics, for example, using an approach called data assimilation," Matsuoka says.

Full Article
UNIST Embarks on Developing Next-Generation Artificial Intelligence
EurekAlert
Joo Hyeon Heo
August 30, 2017


The Ulsan National Institute of Science and Technology (UNIST) in South Korea has announced its selection to lead the multi-institutional development of next-generation artificial intelligence (AI), also known as next-generation learning-sequencing, as part of the National AI Strategic Project. The project was organized by the Ministry of Science and Information and Communications Technology with the objective of using AI to understand human decision-making, in a step toward the application of the technology in specialized disciplines requiring greater transparency. "The primary goal of this project is to develop AI systems that explain how they arrive at their decisions that are based on real-world data," says UNIST professor Jaesik Choi. "Through this project, our center will investigate models, algorithms, and systems for explainable AI." By securing the use of the underlying AI technology resulting from the project, the city of Ulsan will be able to set up an industrial base for the related technologies.

Full Article

'Seeing' Robot Learns Tricky Technique for Studying Brain Cells in Mammals
Imperial College London
Caroline Brogan
August 30, 2017


Researchers at Imperial College London in the U.K. have trained robots to perform a challenging brain technique called whole-cell recording (WCR). The team's software directs a robot manipulator with tiny measuring devices called micropipettes to specific neurons in the brains of live mice and records electrical currents without human assistance. "We have taught robots to 'see' the neuron and perform the procedure even better," says Imperial professor Simon Schultz. "This means WCR can now potentially be performed on a much larger scale, which could speed up our learning about the brain and its disorders." WCR remains a very difficult procedure for humans to master, and the researchers note it is an ideal candidate for automation. "We plan to commercialize the program so that research all over the world can benefit," says Imperial's Luca Annecchino. The next step for the team is to study how brain circuits are disrupted by amyloid plaques found in Alzheimer's disease.

Full Article

Neuroscientist Harnesses the Power of Virtual Reality to Unlock the Mysteries of Memory
UCLA Newsroom
Elaine Schmidt
August 30, 2017


Researchers at the University of California, Los Angeles are using virtual reality (VR) to study how a person's brain encodes and retrieves memories while they explore a new virtual environment. The team aims to develop therapeutic tools that could restore lost memories to people suffering from various brain disorders. In one study, the researchers equipped a patient suffering from memory loss with a motion-capture body suit and cap studded with markers to track his movement. The patient also wore a set of goggles that transported him into a virtual environment where yellow lights signaled locations for him to walk toward and remember. The researchers tested his ability to remember the route without the lighting cues, and then downloaded the recording of the patient's brain activity from a neuro-prosthetic device implanted in his brain. The team will use this data to analyze the patient's deep brain waves to measure the strength of his learning and recall.

Full Article
ALCF, NCSA Supercomputers Generate Movies of the Universe
HPCwire
Jared Sagoff; Austin Keating
August 28, 2017


Scientists have connected supercomputers at the Argonne Leadership Computing Facility (ALCF) and the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign (UI) to transfer massive datasets and run two different types of workflows. The researchers executed cosmological modeling on the ALCF's Mira supercomputer, and then relayed vast volumes of data to UI's Blue Waters supercomputer, which performed the required data analysis with its superior processing power and memory balance. The initial challenge was implementing a transfer pipeline able to sustain a bandwidth of one petabyte per day. On completing the first pass of data analysis, Blue Waters reduced the raw data to a more manageable size, after which it was sent to a distributed repository for further analysis. Argonne's Franck Cappello says he wants cloud-like data centers to "allow many more people to access and analyze this data, and develop a better understanding of what they're investigating."
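To put that target in perspective (a quick calculation, not a figure from the article), one petabyte per day corresponds to a sustained rate of roughly 93 gigabits per second:

```python
# Back-of-the-envelope: what does 1 PB/day mean as a sustained rate?
petabyte_bits = 1e15 * 8          # 1 PB = 10^15 bytes = 8 x 10^15 bits
seconds_per_day = 24 * 3600
rate_gbps = petabyte_bits / seconds_per_day / 1e9
print(f"{rate_gbps:.1f} Gbit/s sustained")   # -> 92.6 Gbit/s
```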

Full Article
Artificial Intelligence Cyberattacks Are Coming--but What Does That Mean?
The Conversation
Jeremy Straub
August 27, 2017


Artificial intelligence (AI) could play a key role in the next major cyberattack, boosting the efficiency and potency of existing cyberattack strategies, according to North Dakota State University professor Jeremy Straub. However, he notes AIs launching attacks on their own remains an unlikely scenario, given that AI systems cannot interpret human actions very well and few people trust such systems to make important judgments. Still, Straub foresees an escalation in AI-enhanced attacks, including customized hacks that are easier and faster to execute. "AI systems could even be used to pull information together from multiple sources to identify people who would be particularly vulnerable to attack," Straub says. Another anticipated advantage for AI-enabled hackers is faster adaptation to countermeasures from human responders, which Straub thinks could lead to "a programming and technological arms race." In addition, Straub says autonomous operation also creates the danger of AI systems attacking systems they should not, or causing unexpected harm.

Full Article

Association for Computing Machinery

2 Penn Plaza, Suite 701
New York, NY 10121-0701
1-800-342-6626
(U.S./Canada)



ACM Media Sales

If you are interested in advertising in ACM TechNews or other ACM publications, please contact ACM Media Sales at (212) 626-0686, or visit ACM Media for more information.

To submit feedback about ACM TechNews, contact: [email protected]