Association for Computing Machinery
Welcome to the November 14, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.



CSIRO's New Supercomputer Will Be Something to Bragg About
ZDNet (11/14/16) Asha McLean

Australia's Commonwealth Scientific and Industrial Research Organization (CSIRO) aims to replace its Bragg accelerator cluster with a supercomputer capable of petaflop speeds, according to CSIRO's Angus Macoustra. Among the projects Macoustra says the system is expected to support are materials research, climate and molecular modeling, artificial intelligence, computational fluid dynamics, and data analytics using deep learning. CSIRO wants a heterogeneous machine integrating traditional central-processing units (CPUs) and coprocessors to enhance both performance and energy efficiency. The group expects the cluster to fit within five 48U racks of compute nodes, with each node featuring at least two CPUs of either a Power or Intel x86 64-bit architecture, each with at least four cores. Each node will have a minimum of two NVIDIA Pascal graphics-processing units, and the nodes will link to an existing FDR InfiniBand interconnect. The Bragg system was ranked 156th on the Top500 supercomputer list when it premiered in 2012. Macoustra says Bragg's successor will be housed at CSIRO's Canberra data center, where its other high-performance computing systems reside. "All of these models use a lot of data, so we have quite a significant data storage cloud also in the data center," he notes.

Manchester Researchers a Step Closer to Developing Quantum Computing
University of Manchester (11/14/16)

Researchers from the U.K.'s University of Manchester have revealed proof that large molecules of nickel and chromium could store and process information in the same way bytes do for digital computers, which they say is a step toward atomic-scale computing. They demonstrate that using supramolecular chemistry to link quantum bits (qubits) would produce several varieties of stable qubits that could be assembled into structures called "two-qubit gates." The algorithms designed by Manchester professor Richard Winpenny and colleagues integrate large molecules to generate two qubits and a quantum gate between them, with supramolecular chemistry binding the gates together. An analysis shows the quantum information stored in the individual qubits is retained long enough to manipulate the information and run algorithms. "The real problem seems to be whether we could put these qubits together at all," Winpenny says. "But we showed that connecting these individual qubits doesn't change the coherence times, so that part of the problem is solvable. If it's achievable to create multi-qubit gates we're hoping it inspires more scientists to move in that direction."

New AI-Based Search Engines Are a 'Game Changer' for Science Research
Scientific American (11/12/16) Nicola Jones

Artificial intelligence (AI)-based academic search engines such as Semantic Scholar and Microsoft Academic could transform scientific research and inquiry, according to proponents. Semantic Scholar from the Allen Institute for Artificial Intelligence (AI2) is designed to sort and rank academic papers with more refined content and contextual understanding than keyword-reliant search engines. Stanford University neurobiologist Andrew Huberman calls Semantic Scholar a "game changer," noting "it leads you through what is otherwise a pretty dense jungle of information." The AI2 search engine's creators say they are growing its database to encompass about 10 million research articles, mostly on computer science and neuroscience. Meanwhile, Microsoft Academic was released in May as a replacement for Microsoft Academic Search, and Microsoft Research's Kuansan Wang contrasts the tool with Semantic Scholar in several respects. He notes Semantic Scholar is more deeply invested in natural-language processing to drive searches, while Microsoft Academic, powered by Bing's semantic search capabilities, covers far more publications--160 million. Wang says the tool's recursive algorithm evaluates the most influential scientists in each sub-discipline according to whether their papers are cited by other important papers. Microsoft Research says the development of a personalizable version of Microsoft Academic also is underway.
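Wang's description of a recursive influence measure, in which a paper counts for more when it is cited by other influential papers, resembles eigenvector-centrality methods such as PageRank. The sketch below illustrates only that general idea; the toy citation graph, damping factor, and iteration count are invented, and this is not Microsoft Academic's actual algorithm.

```python
# Hypothetical sketch: rank papers so that a citation from an
# influential paper counts for more than one from an obscure paper.
# Simplified (e.g., dangling papers leak score); for illustration only.

def influence_scores(citations, damping=0.85, iters=50):
    """citations maps each paper to the list of papers it cites."""
    papers = list(citations)
    n = len(papers)
    score = {p: 1.0 / n for p in papers}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in papers}
        for citing, cited_list in citations.items():
            if not cited_list:
                continue
            share = damping * score[citing] / len(cited_list)
            for cited in cited_list:
                new[cited] += share
        score = new
    return score

# Toy graph: paper "a" is cited by both "b" and "c"; "b" by "c" only.
graph = {"a": [], "b": ["a"], "c": ["a", "b"]}
scores = influence_scores(graph)
assert scores["a"] > scores["b"] > scores["c"]
```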

Online Password Guessing Threat Underestimated
Lancaster University (11/07/16)

Researchers from Lancaster University in the U.K., and Peking University and Fujian Normal University in China, have created different guessing frameworks that prioritize the order of guessing based on attackers having access to different types of personal information. The research aims to analyze the vulnerability of online passwords to targeted guessing. The prioritizing models were tested against 10 large real-world datasets from Chinese and English Internet users. The researchers found the attack models that benefited from multiple pieces of personal information were able to successfully guess the passwords of accounts for more than 73 percent of normal users, and about a third of security-savvy users, with a limit of 100 guesses. "Our results suggest that the currently used security mechanisms would be largely ineffective against the targeted online guessing threat, and this threat has already become much more damaging than expected," says Lancaster University researcher Jeff Yan. He says the research indicates targeted password guessing is an underestimated threat, as a large number of passwords can be guessed if personal information is known to the attacker. The research was presented last month at the ACM Conference on Computer and Communications Security (CCS 2016) in Vienna, Austria.
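To convey the basic idea of a targeted guessing strategy, the toy sketch below builds a prioritized candidate list from known personal information. The actual frameworks in the paper are probabilistic models trained on leaked credential datasets; every pattern below is invented purely for illustration.

```python
# Toy sketch of targeted guessing: order password candidates using
# personal information.  Invented patterns; not the paper's models.

def targeted_guesses(name, birth_year, pet=None, limit=100):
    parts = [name.lower(), name.capitalize(), str(birth_year)]
    if pet:
        parts.append(pet.lower())
    candidates = list(parts)                 # bare fragments first
    for a in parts:                          # then pairwise combinations
        for b in parts:
            if a != b:
                candidates.append(a + b)
    for p in parts:                          # then common suffixes
        candidates.append(p + "123")
        candidates.append(p + "!")
    seen, ordered = set(), []                # deduplicate, keep priority order
    for g in candidates:
        if g not in seen:
            seen.add(g)
            ordered.append(g)
    return ordered[:limit]

guesses = targeted_guesses("alice", 1990, pet="rex")
assert "alice1990" in guesses and len(guesses) <= 100
```

Even this crude ordering shows why a 100-guess limit is no defense once an attacker knows a few personal details.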

Artificial-Intelligence System Surfs Web to Improve Its Performance
MIT News (11/10/16) Larry Hardesty

Massachusetts Institute of Technology (MIT) researchers say they have developed a new approach to information extraction that turns conventional machine learning on its head. The researchers trained their system on sparse data because, in the scenario they are investigating, that is usually all that is available. When the system produces a low confidence score for an extraction, it automatically generates a Web search query to locate texts likely to contain the data it is trying to extract. The system then attempts to extract the relevant data from one of the new texts and reconciles the results with those of its initial extraction. "So you have something that's a very weak extractor, and you just find data that fits it automatically from the Web," says MIT graduate student Adam Yala. The researchers compared the system's performance to that of several conventional extractors, and found for each data item extracted, the new system outperformed its predecessors by about 10 percent. The researchers "have this super-clever part of the model that goes out and queries for more information that might result in something that's simpler for it to process," says University of Pennsylvania professor Chris Callison-Burch.
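The loop described above (extract, check confidence, query the Web, reconcile) can be sketched as follows. The `extract` and `web_search` functions here are trivial stand-ins invented for illustration, not components of the MIT system.

```python
# Sketch of confidence-gated extraction: when the extractor is
# unsure, fetch more text and keep the higher-confidence result.

def extract(text, field):
    """Toy extractor: returns (value, confidence)."""
    for token in text.split():
        if token.isdigit():
            return token, 0.9     # "confident" when a number is present
    return None, 0.1

def web_search(query):
    """Stand-in for a real search API: returns candidate documents."""
    return ["the shooting left 4 people injured"]

def extract_with_backoff(text, field, threshold=0.5):
    value, conf = extract(text, field)
    if conf >= threshold:
        return value
    # Low confidence: look for easier-to-parse articles on the Web,
    # then reconcile the new extraction with the original one.
    for doc in web_search(field + " " + text):
        new_value, new_conf = extract(doc, field)
        if new_conf > conf:
            value, conf = new_value, new_conf
    return value

assert extract_with_backoff("several people were injured", "num_injured") == "4"
```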

Google DeepMind's AI Learns to Play With Physical Objects
New Scientist (11/10/16) Timothy Revell

Researchers at Google's DeepMind unit in the U.K. and the University of California, Berkeley used deep reinforcement learning to train an artificial intelligence (AI) system to learn about the physical properties of objects by interacting with them in two different virtual environments. In one experiment, the AI was faced with five blocks that were the same size but had a randomly assigned mass that changed each time the experiment was run. The AI was rewarded if it correctly identified the heaviest block, but given negative feedback if it was wrong. Through several repetitions of the experiment, the AI learned the only way to determine the heaviest block was to interact with all of them before making a choice. The second experiment involved up to five blocks arranged in a tower. Some of the blocks were combined to make one larger block, while others were not. The AI had to determine how many distinct blocks there were, again receiving positive or negative feedback depending on the answer. Over time, the AI learned it had to interact with the tower to determine the correct answer. "Reinforcement learning allows solving tasks without specific instructions, similar to how animals or humans are able to solve problems," says Eleni Vasilaki, a researcher at the University of Sheffield in the U.K.
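The first experiment's reward structure can be sketched minimally: blocks look identical, masses are re-randomized each episode, and the agent earns +1 for choosing the heaviest block and -1 otherwise. The DeepMind agents learn this behavior with deep reinforcement learning; the invented sketch below simply compares a policy that never interacts with one that probes every block, showing why interaction is the only winning strategy.

```python
import random

# Invented toy environment mirroring the reward structure described
# above; not DeepMind's code or training setup.

def episode(policy, n_blocks=5, rng=random):
    masses = [rng.random() for _ in range(n_blocks)]
    choice = policy(masses)
    heaviest = max(range(n_blocks), key=lambda i: masses[i])
    return 1 if choice == heaviest else -1

def blind(masses):
    return 0  # never pokes the blocks, always picks the first one

def probe_all(masses):
    # Interacting with every block reveals all the masses.
    return max(range(len(masses)), key=lambda i: masses[i])

rng = random.Random(0)
blind_reward = sum(episode(blind, rng=rng) for _ in range(1000))
probe_reward = sum(episode(probe_all, rng=rng) for _ in range(1000))
assert probe_reward == 1000 and blind_reward < probe_reward
```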

Carnegie Mellon Researchers Visualize Way to Fend Off DDoS Attacks
Network World (11/08/16) Bob Brown

Carnegie Mellon University's CyLab Security and Privacy Institute is touting research that shows the tools needed to thwart cyberattacks, such as the massive distributed denial-of-service (DDoS) attack that targeted DNS provider Dyn, are being developed. The key is providing visualizations of the massive amounts of network traffic data that IT and security analysts normally examine, according to CyLab researcher Yang Cai, which he says makes it easier to identify patterns in the data. "Visualization is one way to change abstract data into pictures, sound, and videos so you can see patterns in a very intuitive way," Cai says. For example, the CMU researchers have developed a tool that can be used to inspect network traffic during DDoS attacks and help shut down a malware distribution network. However, the visualization of so much data on a computer screen can be quite overwhelming, so the researchers also are working on a way to present the data in a virtual reality form.

Real or Not? USC Study Finds Many Political Tweets Come From Fake Accounts
USC News (11/08/16) Ian Chaffee

University of Southern California (USC) researchers have found bots made up nearly 20 percent of the political conversation on Twitter during the campaign season. "We need to guarantee that this platform is reliable and that it does not compromise the democratic political process by fostering the spread of rumors or misinformation," says USC professor Emilio Ferrara. The researchers analyzed 20 million election-related tweets in three periods between Sept. 16 and Oct. 21, 2016, by querying the Twitter Search application programming interface. The researchers ran "political tweets" through the "Bot or Not" algorithm, and found Twitter bot accounts produced 19 percent of all election tweets during the study's time frame. In addition, social bots accounted for 400,000 of the 2.8 million individual users tweeting about the election, or nearly 15 percent of the population being studied. The researchers also examined the expressions of positivity and negativity in political discourse, generated by both bot and human tweets. The researchers found President-Elect Donald Trump had a significantly higher number of bot supporters, and Ferrara says Twitter could serve as a bellwether for how disinformation and misrepresentation might be generated automatically across other online and social platforms. "Other social networks like Facebook are finding it challenging to validate information sources as well," he notes.

Researchers Want to Use Hardware to Fight Computer Viruses
Binghamton University (11/07/16)

Binghamton University researchers are studying how hardware can help protect computer systems from viruses. "The impact will potentially be felt in all computing domains, from mobile to clouds," says Binghamton professor Dmitry Ponomarev. The research represents a new approach to improve the effectiveness of malware detection and to enable systems to be protected continuously without requiring the large resource investment needed by software monitors. The Binghamton researchers want to modify a computer's central-processing unit (CPU) chip by adding logic to check for anomalies while running a program such as Microsoft Word. If an anomaly is identified, the hardware will alert more robust software programs to examine the problem. The researchers say although the hardware will not be right about suspicious activity 100 percent of the time, it will be an effective first line of defense that will improve the overall efficiency of malware detection. "The modified microprocessor will have the ability to detect malware as programs execute by analyzing the execution statistics over a window of execution," Ponomarev says. He notes the modified CPU will use low-complexity machine learning to classify malware from normal programs.
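As a rough illustration of classifying programs from execution statistics with low-complexity machine learning, the sketch below trains a perceptron on synthetic counter readings. The features, data, and choice of learner are all invented; the Binghamton design operates on real hardware performance counters inside the CPU.

```python
# Invented sketch: a perceptron separating "malware-like" from
# "benign-like" execution statistics.  Synthetic data only.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0] * (len(samples[0]) + 1)          # last weight is the bias
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x + [1.0])) > 0 else -1
            if pred != y:                      # mistake-driven update
                for i, xi in enumerate(x + [1.0]):
                    w[i] += lr * y * xi
    return w

def classify(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x + [1.0])) > 0 else -1

# Synthetic features: [cache-miss rate, branch-miss rate, syscall rate].
benign  = [[0.1, 0.05, 0.1], [0.2, 0.10, 0.2]]
malware = [[0.8, 0.70, 0.9], [0.9, 0.60, 0.8]]
w = train_perceptron(benign + malware, [-1, -1, 1, 1])
assert classify(w, [0.85, 0.65, 0.85]) == 1   # flagged as suspicious
assert classify(w, [0.15, 0.08, 0.15]) == -1  # looks benign
```

The point of such a simple learner is that it is cheap enough to run continuously in hardware, flagging suspicious windows for heavier software analysis, exactly the first-line-of-defense role described above.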

Accelerating Cancer Research With Deep Learning
Oak Ridge National Laboratory (11/08/16) Jonathan Hines

Oak Ridge National Laboratory (ORNL) researchers are applying deep-learning techniques to automate how information is collected from cancer pathology reports documented across a nationwide network of cancer registry programs. Georgia Tourassi, director of the Health Data Sciences Institute at ORNL, led a team focused on software that can identify valuable information in cancer reports faster than manual methods. The machine-learning technique leverages algorithms, big data, and the processing power of the Titan supercomputer at the Oak Ridge Leadership Computing Facility. Using a dataset of nearly 2,000 pathology reports, researchers trained a deep-learning algorithm to simultaneously carry out two closely related tasks. First the algorithm scanned each report to identify the location of the cancer, and then identified the side of the body on which the cancer was located. Another study used more than 900 reports on breast and lung cancer to test the system's ability to match the cancer's origin to its corresponding classification, using a convolutional neural network and text from general, medical, and highly specialized sources. The algorithm created a mathematical model that drew connections between words shared between unrelated texts. The researchers say the continued development of automated data tools will give scientists and policymakers a highly detailed view of the U.S. cancer population.
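The ORNL work trains convolutional neural networks on thousands of real pathology reports. As a much-simplified stand-in for the classification task, the sketch below assigns a report's cancer site with a tiny naive-Bayes text model; the toy reports and two-class setup are invented for illustration.

```python
import math
from collections import Counter

# Invented toy corpus; the real system uses CNNs on ~2,000 reports.
TRAIN = [
    ("adenocarcinoma upper lobe of lung biopsy", "lung"),
    ("lung nodule left lower lobe malignant", "lung"),
    ("invasive ductal carcinoma left breast mass", "breast"),
    ("breast tissue lumpectomy ductal carcinoma", "breast"),
]

def train(examples):
    counts = {}  # label -> Counter of word frequencies
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.split())
    return counts

def classify(counts, text):
    vocab = len({w for ctr in counts.values() for w in ctr})
    def log_prob(label):
        c = counts[label]
        total = sum(c.values())
        # Laplace-smoothed log-likelihood of the report's words.
        return sum(math.log((c[w] + 1) / (total + vocab)) for w in text.split())
    return max(counts, key=log_prob)

model = train(TRAIN)
assert classify(model, "malignant mass in right lung") == "lung"
assert classify(model, "ductal carcinoma of the breast") == "breast"
```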

Now You See It, Now You Don't
UNews (UT) (11/09/16) Vincent Horiuchi

University of Utah researchers say they have developed a cloaking device for microscopic photonic integrated devices that will enable photonic computer chips to be smaller and more efficient. The researchers discovered that a special nanopatterned silicon-based barrier placed between two photonic devices fools each device into thinking there is nothing on the other side. "Any light that comes to one device is redirected back as if to mimic the situation of not having a neighboring device," says Utah professor Rajesh Menon. "It's like a barrier--it pushes the light back into the original device." Menon says billions of these photonic devices can be packed into a chip while still using 10 to 100 times less power than current silicon-based chips. The most immediate application for the technology likely will be for data centers, which make up 1.8 percent of total U.S. electricity consumption. "By going from electronics to photonics we can make computers much more efficient and ultimately make a big impact on carbon emissions and energy usage for all kinds of things," Menon says. "It's a big impact and a lot of people are trying to solve it."

RoboVote Helps Groups Make Decisions Using AI-Driven Methods
CMU News (11/07/16) Byron Spice

Researchers at Carnegie Mellon (CMU) and Harvard universities have launched RoboVote, an online service that enables anyone to use state-of-the-art voting methods to make optimal group decisions. RoboVote is driven by artificial intelligence and draws on social choice research on how opinions, preferences, and interests can best be combined to reach a collective decision. "We have taken what years of research have proven to be the best algorithms for making collective decisions and made them available with an interface that anyone can use," says CMU professor Ariel Procaccia. RoboVote is designed to handle subjective surveys, in which there is no correct outcome, as well as objective polls, in which the process is designed to produce an answer as close to the truth as possible. In computational social choice, "we can build systems like RoboVote and implement the rules we think are best," Procaccia says. RoboVote is similar to Spliddit, a website Procaccia launched two years ago to implement "provably fair" solutions to everyday problems. Procaccia says both sites use processes that are proven and well known to researchers, but not readily accessible to most people.
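One well-studied rule from the social-choice literature such services draw on is the Borda count, where a voter's i-th-ranked alternative earns (n - 1 - i) points. The ballots below are invented, and RoboVote itself selects among several rules depending on the poll type; this is only an illustration of rank aggregation.

```python
# Borda count: aggregate ranked ballots into a single winner.
# Toy ballots; not RoboVote's code or its full set of voting rules.

def borda(ballots):
    """Each ballot ranks all n alternatives, most-preferred first."""
    n = len(ballots[0])
    scores = {}
    for ballot in ballots:
        for rank, alt in enumerate(ballot):
            scores[alt] = scores.get(alt, 0) + (n - 1 - rank)
    return max(scores, key=scores.get)

ballots = [
    ["hiking", "pizza night", "museum"],
    ["pizza night", "hiking", "museum"],
    ["pizza night", "museum", "hiking"],
]
winner = borda(ballots)
assert winner == "pizza night"  # scores: pizza night 5, hiking 3, museum 1
```

Note that "pizza night" wins here even though "hiking" is one voter's top choice; rank-aggregation rules like Borda reward broad support over narrow enthusiasm.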

Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.

To submit feedback about ACM TechNews, contact: [email protected]