Association for Computing Machinery
Welcome to the January 4, 2017 edition of ACM TechNews, providing timely information for IT professionals three times a week.

ACM TechNews mobile apps are available for Android phones and tablets, as well as for iPhones and iPads.

HEADLINES AT A GLANCE


The Next Supercomputing Superpower--Chinese Technology Comes of Age
Asian Scientist (01/03/17) Rebecca Tan

China has led the Top500 list of the world's most powerful supercomputers since June 2013, and its presence on the list has grown faster than that of any other country, according to University of Tennessee professor Jack Dongarra. The ascent of China, which did not even appear on the Top500 list until 2001, has raised fears its supercomputers could be used for nuclear applications, given the growing need for such resources to simulate nuclear tests. Despite a U.S. ban on selling microchips to China, Stony Brook University professor Deng Yuefan says China's supercomputing progress has continued unabated. One result was the rollout of China's Shenwei SW26010 chips, which put the Sunway TaihuLight system at the top of the Top500 list with a Linpack benchmark of 93 petaflops, while tripling its predecessor's efficiency. Deng says China also is investing in software development to put its supercomputers to good use, as evidenced by the fact that three of the six finalists for the 2016 ACM Gordon Bell Prize, including the winning team, ran their work on Sunway TaihuLight; the prize was awarded at the SC16 conference in November. Meanwhile, China is in a race with Japan and the U.S. to build the first exascale supercomputers.


Eco-Driving and Safe Driving Technology to Save Lives, Environment, and Money
Queensland University of Technology (01/03/17) Sandra Hutchinson

Researchers at the Queensland University of Technology (QUT) in Australia are developing in-vehicle technology to improve safety and save money. They have designed an in-car device that aims to persuade drivers to adopt a fuel-efficient, safe driving style. "By using technology we believe we can encourage people to be eco-friendly as well as safe behind the wheel," says QUT researcher Atiyeh Vaezipour. The prototype interface to encourage eco-driving will be tested by Brisbane drivers in the Center for Accident Research & Road Safety-Queensland (CARRS-Q) simulator. The testing phase will involve volunteer licensed drivers getting behind the wheel of the CARRS-Q simulator and completing several simulated driving tasks lasting 10 to 12 minutes each. "During the different driving scenarios, drivers will be asked to use the device which provides real-time individual advice and feedback to improve safety and reduce fuel consumption," Vaezipour says. Participants will then complete a survey to evaluate the effectiveness and driver acceptance of the system. The CARRS-Q prototype, which provides real-time data to the driver, consists of a liquid-crystal display screen fitted within a 3D-printed casing.


NTU and German Scientists Turn Memory Chips Into Processors to Speed Up Computing Tasks
Nanyang Technological University (Singapore) (01/03/17) Lester Kok

Researchers from Nanyang Technological University (NTU) in Singapore and RWTH Aachen University and Forschungszentrum Juelich in Germany have developed a new computing circuit that processes data in the same place it is stored. The technology relies on redox-based resistive switching random-access memory (ReRAM) chips, and the researchers demonstrated that ReRAM can be used to process data instead of just storing it. Conventional devices and computers must transfer data from memory storage to the processor unit for computation; the new NTU circuit saves time and energy by eliminating these data transfers, and it can double the speed of current processors found in laptops and mobile devices. The prototype ReRAM circuit processes data in four states instead of two, and because ReRAM stores information as different levels of electrical resistance, it could store data in an even higher number of states, speeding up computing tasks beyond current limitations. Using this technology "not only for data storage but also for computation could open a completely new route towards an effective use of energy in the information technology," says RWTH professor Rainer Waser.
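
The payoff of multi-state cells can be illustrated with a short Python sketch: a four-state cell holds two bits, so the same data fits in half as many cells. The encoding below is a hypothetical illustration, not the NTU/RWTH circuit design:

    # Hypothetical encoding: two bits per cell when each cell supports
    # four states (e.g., four resistance levels) instead of two.
    def pack_to_quaternary(bits):
        """Group a bit string into 2-bit pairs, one pair per 4-state cell."""
        assert len(bits) % 2 == 0
        return [int(bits[i:i + 2], 2) for i in range(0, len(bits), 2)]

    def unpack_from_quaternary(cells):
        """Recover the original bit string from the 4-state cells."""
        return "".join(format(c, "02b") for c in cells)

    data = "10110100"
    cells = pack_to_quaternary(data)  # [2, 3, 1, 0]: half as many cells
    assert unpack_from_quaternary(cells) == data
    print(len(data), "bits stored in", len(cells), "four-state cells")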


Can Paint Strokes Help Identify Alzheimer's?
Maynooth University (12/29/16)

Researchers at Maynooth University in Ireland and the University of Liverpool in the U.K. found it may be possible to use fractal analysis to detect neurodegenerative disorders in artists before they are diagnosed. The researchers examined 2,092 paintings spanning the careers of seven famous artists, some of whom aged normally and some of whom developed neurodegenerative disorders. Of the seven, Salvador Dali and Norval Morrisseau suffered from Parkinson's disease, James Brooks and Willem de Kooning had Alzheimer's disease, and Marc Chagall, Pablo Picasso, and Claude Monet had no recorded neurodegenerative disorders. The Maynooth researchers analyzed the brush strokes of each painting using non-traditional mathematics to assess their fractal properties. "In much the same way that linguists have been able to determine the changes in the writings of authors and the speeches of politicians, fractal analysis can determine the changes that take place within the pattern of brush strokes of a painting," says Maynooth professor Ronan Reilly. The study showed clear patterns of change in the fractal dimension of paintings by artists who suffered neurological deterioration, compared with those who aged normally. "We hope that our innovation may open up new research directions that will help to diagnose neurological disease in the early stages," says University of Liverpool lecturer Alex Forsythe.
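
Fractal dimension is commonly estimated by box counting: cover the image with boxes of shrinking size, count how many boxes contain any "ink," and fit the slope of log(count) against log(1/size). The Python sketch below shows the general technique on a toy image; it is not the specific pipeline used in the Maynooth study:

    import numpy as np

    def box_counting_dimension(image, sizes=(2, 4, 8, 16, 32)):
        """Estimate the box-counting (fractal) dimension of a binary image."""
        counts = []
        h, w = image.shape
        for s in sizes:
            boxes = 0
            for i in range(0, h, s):
                for j in range(0, w, s):
                    if image[i:i + s, j:j + s].any():
                        boxes += 1
            counts.append(boxes)
        # Slope of log(count) vs. log(1/size) estimates the dimension.
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    # Toy check: a filled square should have dimension close to 2.
    img = np.zeros((64, 64), dtype=bool)
    img[8:56, 8:56] = True
    print(f"estimated dimension: {box_counting_dimension(img):.2f}")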


Study: Carpooling Apps Could Reduce Traffic 3x
MIT News (01/03/17) Adam Conner-Simons

Researchers from the Massachusetts Institute of Technology's (MIT) Computer Science and Artificial Intelligence Laboratory have developed an algorithm showing that 3,000 four-passenger cars could serve 98 percent of taxi demand in New York City, with an average wait time of just 2.7 minutes. The system relies on ride-sharing, and suggests carpooling options could reduce the number of vehicles on the road by 75 percent without significantly lengthening travel times. The algorithm uses data from 3 million taxi rides and works in real time, rerouting cars based on incoming requests; it also can proactively send idle cars to high-demand areas. "The system is particularly suited to autonomous cars, since it can continuously reroute vehicles based on real-time requests," says MIT professor Daniela Rus. The system also can analyze a range of vehicle types to determine which will provide the greatest benefit. The algorithm first builds a graph of all requests and vehicles, then builds a second graph of all feasible trip combinations, and finally uses integer linear programming to compute the best assignment of vehicles to trips. Rus describes it as an "anytime optimal algorithm," meaning its solution improves the longer it runs.
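
The full formulation is an integer linear program over shareable trip combinations; as a simplified stand-in, the final assignment step can be sketched as a minimum-cost matching of vehicles to candidate trips. The cost matrix below is made-up illustrative data, not the MIT system's model:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Cost of serving each candidate trip with each vehicle, e.g. added
    # travel time in minutes (hypothetical values for illustration).
    cost = np.array([
        [4.0, 1.5, 9.0],   # vehicle 0 -> trips A, B, C
        [2.0, 6.0, 3.5],   # vehicle 1
        [5.0, 2.5, 1.0],   # vehicle 2
    ])

    # Hungarian-algorithm matching: the cheapest one-to-one assignment.
    vehicles, trips = linear_sum_assignment(cost)
    for v, t in zip(vehicles, trips):
        print(f"vehicle {v} serves trip {'ABC'[t]} (cost {cost[v, t]})")
    print("total cost:", cost[vehicles, trips].sum())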


Supercomputing Subatomic Particle Research on Titan
Inside HPC (12/26/16)

A joint project between the Thomas Jefferson National Accelerator Facility and NVIDIA is developing improved quantum chromodynamics (QCD) codes for graphics-processing units (GPUs) using the Titan supercomputer at Oak Ridge National Laboratory. The GlueX experiment seeks new insights into the interactions of subatomic particles. "We believe there is a theory that describes how elementary particles interact: the quarks and gluons that make up the matter around us," says the Jefferson Lab's Robert Edwards. "If so, the theory of QCD suggests that there are some exotic forms of matter that exist, and that's what we're looking for." Computing quark-gluon interactions by solving a massive number of Dirac equations is critical to GlueX, so the team is looking for new ways to enhance the performance of the Jefferson researchers' CHROMA code, as detailed at the SC16 conference in November. NVIDIA's Kate Clark says GPUs are the "glue" in GlueX thanks to their memory bandwidth. "If you can describe your problem in such a way that GPUs can get maximum use of their memory bandwidth, QCD calculations will go a lot quicker," she notes. "One aspect of GPUs is that they bring a lot of parallelism to the problem, and so to get maximum performance, you may need to restructure your calculation to exploit more parallelism."
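
Clark's point about restructuring for parallelism can be shown in miniature with Python: instead of solving many small linear systems one at a time in a loop, stack them and hand them to a batched solver in one call. The random systems below are stand-ins for illustration, not actual Dirac-operator solves:

    import numpy as np

    rng = np.random.default_rng(0)
    n_systems, n = 10_000, 4
    # Shift by the identity to keep the random systems well-conditioned.
    A = rng.standard_normal((n_systems, n, n)) + 4 * np.eye(n)
    b = rng.standard_normal((n_systems, n))

    # Serial form: one small solve per loop iteration.
    x_loop = np.stack([np.linalg.solve(A[i], b[i]) for i in range(n_systems)])

    # Restructured form: one batched call exposes all systems at once,
    # the same principle GPU codes use to saturate memory bandwidth.
    x_batch = np.linalg.solve(A, b)

    assert np.allclose(x_loop, x_batch)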


Streamlining the Internet of Things and Other Cyber-Physical Systems
Michigan Tech News (12/28/16) Allison Mills

In an Institute of Electrical and Electronics Engineers (IEEE) keynote paper, researchers at Michigan Technological University (Michigan Tech) and other institutions outline a framework for enhancing research on cyber-physical systems via streamlined design. "The register-transfer-level (RTL) design flow for digital circuits is one of the major success stories in electronic design automation," they note. "Will a durable design methodology, such as the RTL design flow, emerge for cyber-physical systems?" The researchers say the answer depends on how well cross-disciplinary teams can handle large-scale, heterogeneous, and dynamic technologies while keeping human users in the equation. Michigan Tech professor Shiyan Hu says cyber-physical systems typically involve a simple, efficient data transfer between sensors and the physical system, but that exchange is a weak link. The research team says security and privacy are "cross-cutting concerns throughout the design process that must be considered from the very beginning of the design process; they cannot just be bolted on as an afterthought," and improving them demands specialists at each stage of design and fabrication. Hu says big data is the critical element in making safe, reliable, and innovative cyber-physical systems by integrating model-based design and data-based learning.


5 Big Predictions for Artificial Intelligence in 2017
Technology Review (01/04/17) Will Knight

Among the expected advances in artificial intelligence (AI) for 2017 is progress in applying deep reinforcement learning to real-world challenges such as automated driving and industrial robotics. Algorithmic innovation in reinforcement learning will be driven by the recent release of several simulated environments. Meanwhile, generative adversarial networks are expected to advance the ability of computers to learn from unlabeled data and to produce extremely realistic simulations. A third likely trend is an explosion of Chinese innovation in AI and machine learning, with investors funding AI-focused startups and China's government pledging about $15 billion in AI funding by 2018. Also expected this year is further evolution of AI systems' language-recognition and -generation capabilities, built on techniques that have enabled significant progress in voice and image recognition. Finally, some researchers predict an inevitable backlash against the heavy hype of AI technologies in 2016; they say the danger of hype is the unavoidable disappointment when major breakthroughs fail to materialize, which could lead to the failure of overvalued startups.


Random Access Memory on a Low Energy Diet
Helmholtz-Zentrum Dresden-Rossendorf (01/03/17) Simon Schmitt

Researchers at Helmholtz-Zentrum Dresden-Rossendorf (HZDR) in Germany say they have developed the basis for a new memory chip with the potential to use less energy than conventional chips. The HZDR researchers produced a purely antiferromagnetic magnetoelectric memory (AF-MERAM), a prototype based on a thin layer of chromium oxide sandwiched between two nanometer-thin electrodes. When a voltage is applied to the electrodes, the chromium oxide "flips" into a different magnetic state, and the bit is written. The researchers found they could reduce the required voltage by a factor of 50, which enabled them to write a bit without excessive energy consumption and heating. To read the written bit back out, the researchers attached a nanometer-thin platinum layer on top of the chromium oxide, which enables readout through the anomalous Hall effect. "The material is thus far working at room temperature, but only within a narrow window," says HZDR's Tobias Kosub. "We want to considerably expand the range by selectively altering the chromium oxide."


HPE's New Chip Marks a Milestone in Optical Computing
IEEE Spectrum (01/02/17) Rachel Courtland

A research team at Hewlett Packard Labs, now part of Hewlett Packard Enterprise (HPE), says it has built a chip integrating more than 1,000 optical components that could potentially boost computation speed and save energy. The chip, developed through the U.S. Defense Advanced Research Projects Agency's Mesodynamic Architectures program, implements the Ising approach, in which optimization problems are entered into a machine by tuning the interactions between computational elements, called spins, each designed to settle into one of two states. The spins interact with one another until they reach an optimal, low-energy configuration, and the HPE chip does not need electronic feedback. The chip's four nodes support four spins made of infrared light; once light exits each node, it is combined with light from each of the other nodes inside an interferometer. Electric heaters alter the optical path length of each light beam to encode the problem to be solved, and the outputs of the spin interactions are condensed and realigned to one of two phases. The light cycles repeatedly through the interferometer and the nodes until the system settles on a single answer. Ising chips may be able to act as accelerators, but researchers say their potential competitiveness with conventional machines still needs to be explored.
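
The settling process the chip performs optically can be caricatured in software as simulated annealing on a small spin system. The couplings below are made-up values for illustration, not the HPE hardware's encoding:

    import math
    import random

    # Four spins (+1/-1) with pairwise couplings J relax toward a
    # low-energy configuration of the Ising energy E = -sum J_ij s_i s_j.
    J = {(0, 1): 1.0, (0, 2): -0.5, (0, 3): 0.8,
         (1, 2): 0.6, (1, 3): -1.2, (2, 3): 0.4}

    def energy(spins):
        return -sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

    random.seed(1)
    spins = [random.choice([-1, 1]) for _ in range(4)]
    temperature = 2.0
    for step in range(2000):
        k = random.randrange(4)
        old = energy(spins)
        spins[k] = -spins[k]                 # propose flipping one spin
        delta = energy(spins) - old
        if delta > 0 and random.random() >= math.exp(-delta / temperature):
            spins[k] = -spins[k]             # reject the uphill move
        temperature *= 0.999                 # cool gradually

    print("low-energy configuration:", spins, "energy:", energy(spins))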


A Digital Portrayal of James Joyce's 'Portrait'
Irish Times (12/29/16)

Computer scientists and literature scholars at University College Dublin in Ireland have teamed up to create a multimedia version of James Joyce's novel "A Portrait of the Artist as a Young Man." They have charted the social networks of Joyce's characters in the novel, providing new insight that puts social inequality and division center stage. The networks within each chapter are distinct, and friendships do not survive in the Dublin presented in the novel. The multimedia edition integrates the results of this research in an accessible way, via animations showing how each character in the chapter relates to the others. Readers are offered a map to find the places mentioned in the novel and how they correspond to the places where Joyce lived as a young man. The dynamic, multimedia approach was developed as part of the Nation, Genre, and Gender project, which uses computational methods to examine 19th- and early 20th-century novels. The digital version was produced to celebrate the centenary of the novel's publication on Dec. 29, 1916.
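
The underlying technique can be sketched briefly: build a graph whose nodes are characters and whose edges link characters that interact, then rank characters by connectedness. The interactions below are illustrative placeholders, not the UCD project's actual data:

    import networkx as nx

    G = nx.Graph()
    interactions = [
        ("Stephen", "Cranly"), ("Stephen", "Lynch"),
        ("Stephen", "Davin"), ("Cranly", "Lynch"),
    ]
    for a, b in interactions:
        G.add_edge(a, b)

    # Degree centrality highlights the most connected characters.
    for name, score in sorted(nx.degree_centrality(G).items(),
                              key=lambda kv: -kv[1]):
        print(f"{name}: {score:.2f}")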


In the Twinkling of an Eye
Swiss National Science Foundation (12/23/16) Sven Titz

Technology for automatically tracking the direction of a person's gaze is making significant progress. Computer scientist Peter Kiefer and geomatics expert Martin Raubal of the GeoGazeLab at ETH Zurich in Switzerland are using eye tracking to refine smartphone maps so pedestrians can find their way in unfamiliar environments. Meanwhile, Mandana Sarey Khanie, a civil engineer at the Interdisciplinary Laboratory of Performance-Integrated Design at the Swiss Federal Institute of Technology in Lausanne, is using eye tracking to develop software that can help architects make better use of light when designing workspaces. Agnes Scholz, a psychologist at the University of Zurich in Switzerland, is using the technology to better understand specific human viewing behaviors and the role they play in decision-making. Kenneth Funes Mora and Jean-Marc Odobez at the Idiap Research Institute in Switzerland say their new eye-tracking camera can support interaction between people and computers; for example, they note a robot could use their system to advise customers in a shopping mall.


Research Innovation Drives an Industry-Leading Computational Geometry Engine at High Speed
Ohio State University (12/01/16)

Ohio State University (OSU) researchers have developed PixelBox, a fast parallel algorithm for massive polygon overlay operations, implemented in a hybrid system of graphics-processing units (GPUs) and multicore processors. Polygon overlay is a complex and time-consuming process that superimposes multiple geographic layers and their attributes to produce a new polygon layer. The researchers note the process has become increasingly massive in the big data era, with information being collected from geographic information systems, electronic design automation, computer vision, image processing, and motion-planning solutions for robots. The OSU solution provides a fast and efficient system for daily production tasks of spatial data analytics in many areas. "I am very pleased to see another basic research work of ours directly impacts on production systems, which is a high recognition to the value of our research efforts," says OSU professor Xiaodong Zhang. He notes the PixelBox algorithm lays a scientific foundation for massive polygon overlay operations, providing a 25-fold performance increase over similar industry products.
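
The pixel-based flavor of overlay computation can be sketched in Python by rasterizing polygons onto a grid and intersecting the resulting masks; this toy version only echoes the idea behind PixelBox, not its GPU implementation:

    import numpy as np
    from matplotlib.path import Path

    def rasterize(polygon, resolution=100):
        """Rasterize a polygon onto a unit-square grid of cell centers."""
        xs = (np.arange(resolution) + 0.5) / resolution
        grid = np.array([(x, y) for y in xs for x in xs])
        mask = Path(polygon).contains_points(grid)
        return mask.reshape(resolution, resolution)

    # Two overlapping squares serve as toy input polygons.
    a = rasterize([(0.1, 0.1), (0.6, 0.1), (0.6, 0.6), (0.1, 0.6)])
    b = rasterize([(0.4, 0.4), (0.9, 0.4), (0.9, 0.9), (0.4, 0.9)])

    overlap = a & b                    # pixel-wise intersection mask
    cell_area = 1.0 / (100 * 100)
    print(f"approximate overlay area: {overlap.sum() * cell_area:.3f}")
    # Exact intersection of the two squares is 0.2 x 0.2 = 0.040.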


Abstract News © Copyright 2017 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]