Welcome to the May 19, 2017 edition of ACM TechNews, providing timely information for IT professionals three times a week.

ACM TechNews mobile apps are available for Android phones and tablets, iPhones, and iPads.
World's Thinnest Hologram Paves Path to New 3D World
RMIT News
Gosia Kaszubska
May 18, 2017


A team of Australian and Chinese researchers led by the Royal Melbourne Institute of Technology (RMIT) in Australia has cleared a path for three-dimensional holography to be integrated into everyday electronics by inventing a nano-hologram. RMIT professor Min Gu says the hologram can be viewed without special goggles and is manufactured using a simple, rapid laser-writing system. In collaboration with the Beijing Institute of Technology in China, the RMIT team created a 25-nanometer-thick hologram based on a topological insulator material, which has a low refractive index in its surface layer and an ultrahigh refractive index in the bulk. Gu says the material serves as an intrinsic optical resonant cavity that enhances the phase shifts needed for holographic imaging. RMIT's Zengyi Yue says the next step is making a rigid thin film that can be laid onto a liquid-crystal display screen, and from there creating thin, flexible films for use on a wide range of surfaces.

Full Article

Machine Learning Algorithms Re-Create City in 3D Using Only Image Data
Digital Trends
Dyllan Furness
May 18, 2017


Researchers at the Swiss Federal Institute of Technology in Zurich (ETH Zurich) have developed Varcity, a platform that collects massive amounts of image data of a city and uses algorithms to automatically create a three-dimensional (3D) model of it. The researchers used the Varcity platform to reconstruct the city of Zurich in 3D from millions of images and videos. "The more data we have of an area, the more precise our models get," says ETH Zurich's Hayko Riemenschneider. The researchers used machine-learning algorithms that semi-automatically analyzed the images to create a rough sketch of Zurich, and then enhanced the model with their own knowledge of the city. The researchers say the Varcity platform could be used to help design smarter and more livable cities. "There are a number of applications such as urban city planning, architectural design, traffic modeling, autonomous navigation, and tourist guidance, as well as catastrophe response planning," Riemenschneider says.
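
The article does not detail Varcity's algorithms, but the geometric core of any image-based 3D reconstruction is triangulation: recovering a 3D point from its appearance in two calibrated views. Below is a minimal sketch of that step using the standard direct linear transform; the camera matrices and point are illustrative, not from the project.

```python
# Triangulation sketch (illustrative, not Varcity's pipeline): recover a
# 3D point seen by two calibrated cameras via least squares (DLT).
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices; x1, x2: 2D pixel observations."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],          # each observation contributes
        x1[1] * P1[2] - P1[1],          # two linear constraints on X
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)         # null vector of A is the point
    X = vt[-1]
    return X[:3] / X[3]                 # homogeneous -> Euclidean

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])             # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])  # shifted camera
point = np.array([0.5, 0.2, 4.0, 1.0])                    # ground truth
x1 = (P1 @ point)[:2] / (P1 @ point)[2]
x2 = (P2 @ point)[:2] / (P2 @ point)[2]
print(triangulate(P1, P2, x1, x2))      # ~ [0.5, 0.2, 4.0]
```

Repeated across millions of image pairs, with camera poses estimated jointly, this is the kind of geometry that turns raw photos into a city-scale model.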

Full Article
MIT Grad Earns ACM Doctoral Dissertation Award
HPC Wire
May 18, 2017


University of Illinois at Urbana-Champaign professor and Massachusetts Institute of Technology graduate Haitham Hassanieh has received the 2016 ACM Doctoral Dissertation Award for presenting a new method for boosting the efficiency of algorithms that compute the Sparse Fourier Transform (SFT). Hassanieh's dissertation presents the theoretical foundation of the SFT, which is more efficient than the Fast Fourier Transform (FFT) for data whose spectrum contains only a small number of significant frequencies; the FFT cannot keep pace with the massive expansion of datasets stemming from the growth of big data. Hassanieh also demonstrates how the SFT can be used to build practical systems that address key problems in applications including wireless networks, mobile systems, computer graphics, medical imaging, biochemistry, and digital circuits. Hassanieh says the SFT can process data 10 to 100 times faster than was previously possible, significantly increasing the power of networks and devices.
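
To see the structure the SFT exploits, consider a signal whose spectrum is k-sparse: a dense FFT touches all n output bins even though only k of them carry information. A short illustration of that sparsity follows; it is not Hassanieh's algorithm, which avoids computing the dense transform at all.

```python
# Illustration of spectral sparsity (not the SFT algorithm itself): a
# signal built from k pure tones has exactly k nonzero FFT bins, so an
# algorithm that finds only those bins can beat the FFT's O(n log n) cost.
import numpy as np

n, k = 1024, 3                       # signal length, active frequencies
rng = np.random.default_rng(0)
freqs = rng.choice(n, size=k, replace=False)
t = np.arange(n)
x = sum(np.exp(2j * np.pi * f * t / n) for f in freqs)

spectrum = np.fft.fft(x) / n         # dense FFT: computes all n bins
support = np.flatnonzero(np.abs(spectrum) > 0.5)
print(sorted(support) == sorted(freqs))   # True: only k of n bins matter
```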

Full Article
IBM Unveils 17-Qubit Quantum Computer--and a Better Way of Benchmarking It
Ars Technica UK
Chris Lee
May 17, 2017


IBM researchers have introduced a "quantum volume" benchmark that captures what a quantum computer is capable of calculating, rather than how rapidly it performs a calculation. The metric is based on circuit depth, the maximum number of operations that can be performed before the qubit state can no longer be expected to be correct, combined with the number of qubits to give a reasonable measure of a quantum computer's computational capability. The IBM researchers estimated the error rate required to reach a given quantum volume, following the theory that many computations can be decomposed into a series of two-qubit operations. The quantum volume benchmark offers scientists a fast method for estimating technology requirements. By this measure, IBM's just-announced 17-qubit quantum computer will have a quantum volume of 35, a small increase over an earlier five-qubit system, if it achieves the same gate fidelity.
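
A rough sketch of the metric follows; the depth model d ~ 1/(n * eps) and the min(n, d)^2 form are simplifying assumptions used here for illustration, but with a gate error near 1 percent they land close to the figures quoted above.

```python
# Hedged sketch of the quantum-volume idea: achievable circuit depth is
# limited by the per-gate error rate, and volume grows only when qubit
# count and depth grow together. The exact constants are assumptions.
def quantum_volume(n_qubits: int, eps: float) -> float:
    depth = 1.0 / (n_qubits * eps)    # depth achievable before errors dominate
    return min(n_qubits, depth) ** 2  # more qubits help only if depth keeps up

# At roughly 1% effective gate error, adding qubits yields modest gains,
# because the achievable depth shrinks as the machine grows:
print(quantum_volume(5, 0.01))        # 25.0
print(quantum_volume(17, 0.01))       # ~34.6, close to the quoted 35
```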

Full Article

Building a Better 'Bot': Artificial Intelligence Helps Human Groups
Yale University News
Jim Shelton
May 17, 2017


Researchers at Yale University recently conducted a series of experiments using teams of human players and robotic artificial intelligence (AI) players, and found that including the bots boosted the performance of both the groups and individual human players. The study involved an online game that required groups of people to coordinate their actions toward a collective goal; the human players interacted with anonymous bots programmed with three levels of behavioral randomness. The bots aided the overall performance of the human players, proving especially beneficial as tasks became more difficult, and cut the average time for groups to solve problems by 55.6 percent. The experiment also demonstrated a cascade effect of improved performance, as participants whose play improved when working with the bots subsequently influenced other human players to raise their game.
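
The article does not publish the game's code; the following is a conceptual sketch, with assumed mechanics, of a networked color-coordination game in which a few noise-injecting bots can shake greedy human-like players out of deadlocks.

```python
# Conceptual sketch (assumed mechanics, not the study's code): each node
# wants a color unlike its neighbors'. Designated "bots" act randomly a
# fraction of the time, which can unstick purely greedy play.
import random

def play(graph, n_colors=3, bots=(), noise=0.1, rounds=500, seed=0):
    rng = random.Random(seed)
    colors = {v: rng.randrange(n_colors) for v in graph}
    for r in range(rounds):
        v = rng.choice(list(graph))
        if v in bots and rng.random() < noise:
            colors[v] = rng.randrange(n_colors)        # random bot move
        else:
            taken = {colors[u] for u in graph[v]}
            free = [c for c in range(n_colors) if c not in taken]
            if free:
                colors[v] = rng.choice(free)           # greedy move
        if all(colors[a] != colors[b] for a in graph for b in graph[a]):
            return r                                   # solved in r rounds
    return None

ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(play(ring, bots={0, 3}))
```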

Full Article
Google Reveals a Powerful New AI Chip and Supercomputer
Technology Review
Will Knight
May 17, 2017


Google CEO Sundar Pichai on Wednesday unveiled a new processor called the Cloud Tensor Processing Unit (TPU), along with machine-learning supercomputers called Cloud TPU pods, which will be assembled into a massive, Internet-accessible resource for artificial intelligence (AI) researchers. "These TPUs deliver a staggering 128 teraflops, and are built for just the kind of number crunching that drives machine learning today," says Google Cloud chief scientist Fei-Fei Li. A complement of 1,000 Cloud TPU systems will initially be made available to scientists, with Pichai noting, "We are building what we think of as AI-first data centers. Cloud TPUs are optimized for both training and inference. This lays the foundation for significant progress [in AI]." Google says it uses TensorFlow to drive search, speech recognition, translation, and image processing, and notes that one-eighth of a Cloud TPU pod could train its translation system in a single afternoon, versus a full day for 32 of the best graphics-processing units.

Full Article

Cinematography on the Fly
MIT News
Larry Hardesty
May 17, 2017


Researchers from the Massachusetts Institute of Technology (MIT) and the Swiss Federal Institute of Technology (ETH Zurich) in Switzerland are working to improve the ability of autonomous drones to capture video for movies. The team has developed a system that lets filmmakers specify a shot's framing, and then generates on-the-fly control signals for camera-equipped drones that maintain that framing as subjects move, while keeping the drone clear of stationary and moving obstacles. Former MIT researcher Javier Alonso-Mora says the system constantly calculates the velocities of all the moving objects in the drone's surroundings and projects their positions a few seconds into the future, so optimal flight trajectories can be computed with time to spare. Alonso-Mora notes the position projections are updated about 50 times each second to handle sudden velocity shifts.
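
A minimal sketch of the prediction step the article describes, assuming a constant-velocity motion model; the function and parameter names are hypothetical.

```python
# Constant-velocity extrapolation of the kind the article describes:
# estimate an object's velocity from tracking, project its position a few
# seconds ahead, and refresh the forecast ~50 times per second.
import numpy as np

def predict_positions(pos, vel, horizon=2.0, dt=0.02):
    """Forecast positions over `horizon` seconds in dt = 1/50 s steps."""
    steps = int(horizon / dt)
    times = np.arange(1, steps + 1)[:, None] * dt   # (steps, 1) offsets
    return pos[None, :] + times * vel[None, :]      # (steps, dim) positions

subject_pos = np.array([4.0, 0.0])   # meters, from the drone's sensors
subject_vel = np.array([1.5, 0.2])   # meters/second, from recent tracking
future = predict_positions(subject_pos, subject_vel)
print(future[-1])                    # expected location 2 seconds out
```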

Full Article
This Spy App Can See If You've Visited Whistleblowing Sites on the Dark Web
Motherboard
Jordan Pearson
May 18, 2017


Researchers at the Worcester Polytechnic Institute (WPI) have developed spyware that can determine whether online users have visited whistleblowing sites on the Dark Web. The app tracks and analyzes usage patterns on a computer's processor, and the WPI team found this monitoring can be carried out by malware running in the background on a person's machine. The researchers used Linux to access the required data, first tracking processor use with the app while browsing different sites in Chrome's incognito mode, and then in the Tor Browser. The data was parsed by an artificial intelligence program, yielding a baseline for predicting which sites a user accessed. Once trained, the program could analyze new hardware-use patterns and determine with 86.3-percent accuracy whether a user had visited Netflix or Amazon via Chrome in incognito mode. The algorithm inferred whistleblowing-site visits via Tor with 84-percent accuracy.
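
A hedged sketch of the attack's general shape follows; the sampling source and feature choice are assumptions, not taken from the paper. On Linux, /proc/stat exposes aggregate CPU counters to unprivileged processes, which is the kind of side channel such background malware can read.

```python
# Sketch (assumed details): sample aggregate CPU counters over time while
# a page loads, then train a classifier to map usage traces to sites.
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def cpu_trace(seconds=5.0, hz=100):
    """Sample total jiffies from /proc/stat into a fixed-length trace."""
    samples = []
    for _ in range(int(seconds * hz)):
        with open("/proc/stat") as f:
            fields = f.readline().split()[1:]   # overall "cpu" counters
        samples.append(sum(map(int, fields)))
        time.sleep(1.0 / hz)
    return np.diff(samples)                     # activity per interval

# With many labeled traces per site, a standard classifier learns the map:
clf = RandomForestClassifier(n_estimators=100)
# clf.fit(np.stack(traces), site_labels); clf.predict([cpu_trace()])
```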

Full Article

Photo of sheet music Imec Demonstrates Self-Learning Neuromorphic Chip That Composes Music
Imec
May 16, 2017


Researchers at Imec in Belgium say they have created the world's first self-learning neuromorphic chip, based on OxRAM technology, and demonstrated its ability to compose music. The researchers integrated state-of-the-art hardware and software to design chips with brain-like traits such as massive computing power and low power consumption. The team says the ultimate goal is to design the process technology and building blocks for artificial intelligence systems energy-efficient enough to be combined with sensors, a breakthrough that would put machine learning in every sensor and permit learning in the field to further improve performance. As a demonstration, the chip learned rules of music composition on the fly and then composed original music of its own. "Our chip has evolved from co-optimizing logic, memory, algorithms, and system in a holistic way," says Imec's Praveen Raghavan.
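
Imec has not described the chip's learning rule; as a conceptual analogy only, the sketch below shows the flavor of on-the-fly sequence learning involved, via a first-order Markov model that learns note transitions from example melodies and composes new ones.

```python
# Conceptual analogy only (the article gives no algorithmic detail):
# learn note-to-note transition "rules" from examples, then generate.
import random
from collections import defaultdict

def learn(melodies):
    table = defaultdict(list)
    for m in melodies:
        for a, b in zip(m, m[1:]):
            table[a].append(b)            # record observed transitions
    return table

def compose(table, start, length=8, seed=1):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        out.append(rng.choice(table[out[-1]]))   # sample next note
    return out

table = learn([["C", "E", "G", "E", "C"], ["C", "G", "E", "C"]])
print(compose(table, "C"))
```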

Full Article
3D-Printed Soft Four-Legged Robot Can Walk on Sand and Stone
UCSD News (CA)
Ioana Patringenaru
May 16, 2017


Researchers at the University of California, San Diego (UCSD) have developed a 3D-printed, four-legged soft robot that can climb over obstacles and walk on varied and rough terrain, such as sand and pebbles. The team says the robot could be used to capture sensor readings in dangerous environments or for search-and-rescue missions. The researchers used a high-end printer to fabricate soft and rigid materials together within the same components, which enabled them to design more complex shapes for the robot's legs. UCSD professor Michael Tolley says combining soft and rigid materials will help create a new generation of fast, agile robots that are more adaptable than their predecessors and can safely work alongside humans. The researchers successfully tested the robot on large rocks, inclined surfaces, and sand, and note it also was able to transition from walking to crawling in an increasingly confined space.

Full Article

Microrobots Inspired by Nature
Daegu Gyeongbuk Institute of Science and Technology
May 16, 2017


Researchers at the Daegu Gyeongbuk Institute of Science and Technology (DGIST) in South Korea have developed microrobots that mimic the rowing action of the cilia of the single-celled paramecium. The DGIST microrobot is 220 micrometers long and 60 micrometers high, and is equipped with eight 75-micrometer-long cilia on each side of its body. The researchers built the microrobots up from a glass substrate using a three-dimensional laser lithography system, then partially coated them with nickel and titanium deposits. The machines were remotely driven and steered with magnetic fields from eight electromagnetic coils. At microscopic scales, the "scallop theorem" holds that a swimmer whose forward stroke is simply mirrored on the way back ends up exactly where it started. The researchers overcame this constraint by applying a different magnetic field to the cilia during the recovery phase, changing their orientation relative to the power stroke and enabling the microrobot to move forward efficiently.
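
A toy model of why the tilted recovery stroke matters, with made-up drag coefficients: if the recovery stroke mirrors the power stroke, the per-cycle displacement cancels exactly; reorienting the cilia lowers their effective drag on the way back, leaving a net advance.

```python
# Crude low-Reynolds-number caricature (illustrative numbers only): net
# advance per cycle is power-stroke thrust minus recovery-stroke drag-back.
def net_displacement(power_drag, recovery_drag, stroke=1.0):
    return power_drag * stroke - recovery_drag * stroke

print(net_displacement(1.0, 1.0))  # 0.0 -> reciprocal stroke, no progress
print(net_displacement(1.0, 0.4))  # 0.6 -> reoriented recovery stroke advances
```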

Full Article
CMU Researchers Roll Out Machine-Learning App to Manage App Privacy Settings
Campus Technology
Sri Ravipati
May 15, 2017


Privacy Assistant, a mobile application developed by researchers at Carnegie Mellon University (CMU), is designed to help Android users configure the many privacy settings needed to take control of their personal data. The researchers say the project extends CMU's "IoT expedition," a collaboration with Google and several other higher-education institutions that aims to create new technology for the Internet of Things. Privacy Assistant uses machine learning to take control of the information that apps on Android devices routinely collect: it asks users a few questions about their privacy preferences, and then recommends specific permission settings to best match those preferences. Privacy Assistant was developed as part of the Personalized Privacy Assistant Project, whose goal is to enable resource owners to publicize their resources and the privacy practices associated with them. The researchers also are continuing to develop privacy assistants that are more user-friendly and make greater use of machine-learning techniques.
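
A hedged sketch of the recommend-from-a-few-questions flow follows; the profiles, questions, and settings below are invented for illustration, not CMU's actual model.

```python
# Sketch (assumed mechanics): match a new user's questionnaire answers to
# the nearest known privacy profile and suggest that profile's settings.
import numpy as np

# Exemplar profiles: answers (comfort 0-4 sharing location/contacts/mic)
# paired with typical permission settings for that profile (illustrative).
profiles = [
    (np.array([0, 1, 0]), {"location": "deny",  "contacts": "deny"}),
    (np.array([4, 3, 4]), {"location": "allow", "contacts": "ask"}),
]

def recommend(answers):
    dists = [np.linalg.norm(answers - p) for p, _ in profiles]
    return profiles[int(np.argmin(dists))][1]   # nearest profile's defaults

print(recommend(np.array([1, 0, 1])))           # -> conservative defaults
```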

Full Article
Vint Cerf on His 'Love Affair' With Tech and What's Coming Next
Computerworld
Sharon Gaudin
May 19, 2017


In an interview, Google chief Internet evangelist and former ACM president Vint Cerf, who shared the 2004 ACM A.M. Turing Award with Robert E. Kahn, says he envisions the Internet of Things as a significant technology development, although one beset with privacy, safety, and security issues. Artificial intelligence and machine learning also are leading technologies, Cerf says, but he worries "about turning over too much autonomous authority to a piece of software." Cerf also is interested in quantum computers and the nanometer-scale shrinkage of microprocessors, and he cites the emergence of a 150-layer neural network, and its better-than-human performance, as a surprise. Cerf says buggy software is the biggest current issue for information technology, and he sees a need for tools that help coders avoid creating bugs, although he doubts the bug problem will be solved within the next decade. He also is concerned about a "digital dark age" in which people cannot access information because older software was not preserved.

Full Article

Association for Computing Machinery

2 Penn Plaza, Suite 701
New York, NY 10121-0701
1-800-342-6626
(U.S./Canada)



ACM Media Sales

If you are interested in advertising in ACM TechNews or other ACM publications, please contact ACM Media Sales at (212) 626-0686, or visit ACM Media for more information.

To submit feedback about ACM TechNews, contact: [email protected]