Association for Computing Machinery
Welcome to the November 21, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, as well as for iPhones and iPads.

HEADLINES AT A GLANCE


Microsoft Spends Big to Build a Computer Out of Science Fiction
The New York Times (11/20/16) John Markoff

Microsoft is dedicating significant funding and manpower to quantum computing. Its decision to move from pure research to a concerted effort to build a working prototype highlights a global competition among technology companies to develop the world's first quantum computing system. However, Microsoft has chosen a different path from its competitors: its approach is based on "braiding" anyons, particles that exist in just two dimensions, to form the building blocks of a supercomputer based on subatomic particles. Researchers acknowledge barriers still remain to building practical quantum machines, both at the level of basic physics and in developing new kinds of software to exploit the qualities of quantum bits (qubits). Microsoft researcher Todd Holmdahl says the company is close to designing the basic qubit building block and is ready to begin building a complete computer. "Once we get the first qubit figured out, we have a road map that allows us to go to thousands of qubits in a rather straightforward way," Holmdahl says. The Microsoft approach, known as topological quantum computing, follows scientific advances made in the last two years that give its scientists confidence the company will be able to create more stable qubits.


Researchers Found Mathematical Structure That Was Thought Not to Exist
Aalto University (11/14/16)

An international research group has found the largest possible structure described by a theory in which codes are represented one level above the sequences of zeros and ones, as mathematical subspaces known as q-analogs. The researchers say the mathematical breakthrough could lead to more efficient data transmission. A group of mathematicians started developing the theory in the 1970s, but applications for it were not sought until about 10 years ago. There were numerous attempts to find the best possible codes described in the theory, but because none were found, they were widely believed not to exist. Patric Ostergard of Aalto University in Finland says the search was challenging due to the enormous size of the structures, and it required a gigantic computational effort even with very-high-level computing capacity. "Its basic idea is, actually, to try to take advantage of the power of the transmitter as effectively as possible, which in practice means attempting to transmit data using as little energy as possible," Ostergard says. He notes the discovery may gradually become part of the Internet.
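
For background (this explanation is not from the article): a q-analog replaces a combinatorial object built from finite sets with one built from subspaces of a vector space over a finite field with q elements. For example, the number of k-dimensional subspaces of an n-dimensional space over that field is the Gaussian binomial coefficient, the q-analog of the ordinary binomial coefficient:

\[
  \binom{n}{k}_q \;=\; \prod_{i=0}^{k-1} \frac{q^{\,n-i}-1}{q^{\,k-i}-1},
  \qquad
  \lim_{q \to 1} \binom{n}{k}_q = \binom{n}{k}.
\]

In this setting a code is a collection of such subspaces rather than a set of bit strings, which is one reason the exhaustive searches described above grow so large.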


Lighting the Way for Quantum Computing
Tyndall National Institute (Ireland) (11/14/16)

Researchers at the Tyndall National Institute in Ireland are working to create scalable, electrically driven photon sources to drive quantum technologies. Quantum computing will lead to much faster and more powerful computer processing, but the technology needed to support it is difficult to develop at scale. Tyndall researchers have engineered quantum dot light-emitting diodes (LEDs) that produce entangled photons and could be used to encode information in quantum computing. The new method uses nanotechnology to electrically drive arrays of pyramid-shaped quantum dots so they emit entangled photons. "The new development here is that we have engineered a scalable array of electrically driven quantum dots using easily-sourced materials and conventional semiconductor fabrication technologies, and our method allows you to direct the position of these sources of entangled photons," says Tyndall researcher Emanuele Pelucchi. The researchers say the breakthrough accelerates progress toward the realization of integrated quantum photonic circuits designed for quantum information processing tasks.


Big Data Can Reveal Inaccurate Stereotypes on Twitter, According to UPenn Study
TechRepublic (11/16/16) Hope Reese

Researchers from the University of Pennsylvania (UPenn), the Technical University of Darmstadt in Germany, and the University of Melbourne in Australia recently examined how stereotypes influence what people think about someone based on their tweets. In a series of four studies, 3,000 participants guessed the gender, age, education, and political orientation of 6,000 tweeters by examining 20 publicly available tweets, which were stripped of images or any other markers that might indicate demographics. The researchers asked each participant to look at about a dozen tweets and make a judgment about the tweeter using one of the four variables. The researchers used natural language processing to separate out the stereotypes, with the goal of determining how stereotyping affected judgments. Participants were correct in their judgments, on average, 68 percent of the time: 76 percent of gender guesses were correct, as were 69 percent for age, 82 percent for political orientation, and 45.5 percent for education. UPenn researcher Daniel Preotiuc-Pietro notes that although participants were mostly right, their stereotypes were exaggerated.


Liquid Silicon: Multi-Duty Computer Chips Could Bridge the Gap Between Computation and Storage
University of Wisconsin-Madison (11/14/16) Sam Million-Weaver

University of Wisconsin-Madison (UW-Madison) researchers are using funding from the U.S. Defense Advanced Research Projects Agency to develop future computers that are much more efficient and powerful. The researchers want to create fully morphable computer chips, called "Liquid Silicon," that can be configured to perform complex calculations, store massive amounts of information within the same integrated unit, and communicate efficiently across units. "We want to target a lot of very interesting and data-intensive applications, including facial or voice recognition, natural language processing, and graph analytics," says UW-Madison professor Jing Li. The chips will be computationally powerful while also storing significant amounts of data; these two tasks require entirely different types of hardware in modern computers, which makes conventional machines less efficient. "We're building a unified hardware that can bridge the gap between computation and storage," Li says. The Liquid Silicon chips combine memory, computation, and communication in the same device by connecting silicon complementary metal-oxide-semiconductor (CMOS) circuitry on the bottom to solid-state memory arrays on the top through dense metal-to-metal links. The researchers also are developing software that translates popular programming languages into the chip's machine code, enabling programmers to port their applications directly onto the new hardware without changing their coding habits.


Research Chip Modeled After the Brain Aims to Bring Smarts to Computers
IDG News Service (11/17/16) Agam Shah

University of Tennessee, Knoxville researchers have developed a neuromorphic chip designed for intelligent computers that are structured to discover patterns through probabilities and association, helping with decision making. The researchers used off-the-shelf, reprogrammable circuits called field-programmable gate arrays (FPGAs) to simulate the way neurons and synapses in a brain operate. FPGAs can be configured to perform specific tasks efficiently and then easily reprogrammed for other applications. "We believe our architectures are particularly amenable for supercomputing applications because of their programmability," the researchers say. They also are studying how to swap out FPGAs for memristors, which can retain data and are considered a possible replacement for dynamic random-access memory. However, there are disadvantages to using FPGAs for a practical brain model: reprogramming them requires taking them offline, which can disrupt the execution of tasks, and FPGAs cannot serve as the primary chips that boot a system, as they are mostly used as co-processors and can be power hungry. The researchers believe the chip architecture is more important than the type of chip, and more chip prototypes based on the architecture will be made available to other researchers.
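
The article does not describe the neuron model the Tennessee team implements on its FPGAs, so as a rough illustration of the kind of behavior neuromorphic hardware emulates, here is a minimal leaky integrate-and-fire neuron in plain Python (a standard textbook model, not the researchers' architecture):

# Minimal leaky integrate-and-fire neuron: the membrane potential leaks over time,
# integrates weighted synaptic input, and emits a spike when it crosses a threshold.
# Illustrative only; not the University of Tennessee architecture.

def simulate_lif(inputs, weights, leak=0.9, threshold=1.0):
    """inputs: list of time steps, each a list of 0/1 spikes on the input synapses."""
    potential = 0.0
    spikes = []
    for step in inputs:
        # integrate weighted input spikes and apply leak
        potential = leak * potential + sum(w * s for w, s in zip(weights, step))
        if potential >= threshold:   # fire and reset
            spikes.append(1)
            potential = 0.0
        else:
            spikes.append(0)
    return spikes

if __name__ == "__main__":
    # two input synapses; the second carries a stronger weight
    weights = [0.3, 0.6]
    inputs = [[1, 0], [1, 1], [0, 0], [1, 1], [0, 1]]
    print(simulate_lif(inputs, weights))

A neuromorphic chip implements many such units and their connections directly in hardware instead of looping over them in software.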


Google Translate: 'This Landmark Update Is Our Biggest Single Leap in 10 Years'
ZDNet (11/17/16) Liam Tung

Google says it has vastly improved the accuracy of Google Translate through its new Neural Machine Translation (NMT) system. NMT uses neural networks to train machines to produce more natural, grammatically correct translations. The new system improves Google Translate's capacity for contextual translation by processing whole sentences or paragraphs at a time, rather than analyzing individual words. Translation errors have been cut by 55 percent to 85 percent in several languages, but the system still makes mistakes, such as dropping words or misinterpreting a person's name. Google announced its attempts to replicate human translations using neural networks in September, at which time NMT only supported translations between English and Chinese. NMT has since been rolled out for eight language pairs on the Google Translate website, mobile application, and Google Search. Translations to and from English and French, German, Spanish, Portuguese, Chinese, Japanese, Korean, and Turkish are enabled, covering 35 percent of Google Translate queries. Eventually, NMT will support translations for 103 languages. "With this update, Google Translate is improving more in a single leap than we've seen in the past 10 years combined," says Google Translate's Barak Turovsky.
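
Google has not released its NMT system as code, but the sentence-at-a-time style of translation it describes can be tried with openly available neural translation models. Below is a minimal sketch using the Hugging Face transformers library and one of the public Helsinki-NLP Marian models; these are assumptions, not Google's system, and the transformers, sentencepiece, and PyTorch packages must be installed, with the model downloaded on first use.

# Sketch of sentence-level neural machine translation with an open source model.
# Not Google's NMT; assumes `pip install transformers sentencepiece torch` and
# network access to download the Helsinki-NLP/opus-mt-en-de model.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The whole sentence is encoded and translated at once, rather than word by word.
sentence = ["The new system translates whole sentences rather than single words."]
batch = tokenizer(sentence, return_tensors="pt", padding=True)
output = model.generate(**batch)
print(tokenizer.batch_decode(output, skip_special_tokens=True))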


Tiny Electronic Device Can Monitor Heart, Recognize Speech
CU Boulder Today (11/16/16)

Researchers at the University of Colorado Boulder (CU Boulder), Northwestern University, and Eulji University College of Medicine in Korea have developed a wearable sensor that can monitor human heart health as well as recognize spoken words. The device is encapsulated in a sticky, flexible polymer that allows human sweat to evaporate, and it contains a tiny commercial accelerometer that measures acoustic vibrations of the body. The sensor resembles a small adhesive bandage, weighs less than one-hundredth of an ounce, and can continuously collect physiological data. The team used the device to measure cardiac acoustic responses and electrocardiogram activity in a group of elderly volunteers at a private medical clinic in Arizona. The sensor could be used in remote, noisy places--including battlefields--to produce high-quality cardiology or speech signals that can be read in real time at distant medical facilities. Vocal cord vibrations gathered when the device is worn on the throat also can be used to control video games and other machines. "This device has a very low mass density and can be used for cardiovascular monitoring, speech recognition, and human-machine interfaces in daily life," says CU Boulder's Jae-Woong Jeong.


An Efficient Approach for Tracking Physical Activity With Wearable Health-Monitoring Devices
NCSU News (11/16/16) Matt Shipman

A new technique developed by researchers at North Carolina State University (NCSU) could enable wearable health devices to track users' physical activity accurately and efficiently. Wearable devices have limited power, so their programs need to know how much data to process when assessing activity and storing that information. The team set out to find a data signature formula that would enable programs to identify different physical activities. The researchers had graduate students play golf, bike, walk, wave, and sit in a motion-capture lab, and then evaluated the resulting data using time windows, or taus, of zero seconds (i.e., one data point), two seconds, four seconds, and so on, up to 40 seconds. The team then experimented with different parameters for classifying activity data into specific profiles. The researchers say they were able to accurately identify the five relevant activities using a tau of six seconds. "This means we could identify activities and store related data efficiently," says NCSU's Edgar Lobaton. The team is confident its approach will provide the best opportunity to track and record physical activity data in a practical way.
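
The article does not disclose the team's data signature formula, so the sketch below only illustrates the general windowing idea it describes: summarize a tau-second window of accelerometer readings with simple statistics and match it to stored activity profiles. The profile numbers, sampling rate, and nearest-profile rule are all invented for illustration (NumPy assumed).

# Sketch of window-based activity recognition: summarize one tau-second window of
# accelerometer magnitudes with simple statistics and pick the nearest stored
# activity profile. Profile values are made up; this is not NC State's formula.
import numpy as np

SAMPLE_RATE_HZ = 50     # assumed sensor sampling rate
TAU_SECONDS = 6         # window length the article reports working well

# hypothetical (mean, standard deviation) profiles of acceleration magnitude
PROFILES = {
    "sitting": (1.0, 0.02),
    "walking": (1.1, 0.25),
    "biking":  (1.2, 0.40),
}

def classify_window(samples):
    """samples: 1-D array of acceleration magnitudes covering one tau window."""
    features = np.array([samples.mean(), samples.std()])
    best, best_dist = None, float("inf")
    for activity, profile in PROFILES.items():
        dist = np.linalg.norm(features - np.array(profile))
        if dist < best_dist:
            best, best_dist = activity, dist
    return best

if __name__ == "__main__":
    window = np.abs(1.1 + 0.25 * np.random.randn(SAMPLE_RATE_HZ * TAU_SECONDS))
    print(classify_window(window))

A longer tau gives the classifier more data per decision but costs more power and memory, which is why finding the shortest workable window matters for wearables.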


Stanford Researchers Send Messages Using Household Chemicals
Stanford News (11/15/16) Taylor Kubota

When Stanford University researcher Nariman Farsad was completing his master's degree at York University in Canada, he built the first experimental chemical texting system, which used vodka to send text messages. Now, working in Stanford's Wireless Systems Lab, he has developed a faster version of the system that communicates through pulses of glass cleaner and vinegar. This type of chemical communication system still relies on a binary code to relay messages, but instead of zeros and ones, it sends pulses of acid or base. The researchers type a message into a small computer, which sends a signal to a machine that pumps out the corresponding "bits" of chemicals; these travel through plastic tubes to a small container with a pH sensor, and the changes in pH are then transmitted to a computer that deciphers the encoded message. One of the major challenges is separating the signal from the noise at the end of the transmission. Switching from vodka to the acid-base combination was a huge improvement, but the chemicals still leave residue behind as they move through the channel. The researchers currently are studying how chemical communication could advance nanotechnology. "It could enable the emergence of these tiny devices that are working together, talking together, and doing useful things," Farsad says.
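
The encoding is described only at a high level, so the following is a minimal sketch of the general scheme rather than the Stanford system: each bit of a text message becomes an acid or base pulse, and the receiver recovers bits from the direction of the resulting pH change. The pulse size, starting pH, and noiseless channel are simplifications invented for illustration.

# Sketch of binary chemical signaling: 1 -> pulse of base, 0 -> pulse of acid.
# The receiver recovers bits from the direction of pH change after each pulse.
# pH values and step sizes are made up; real channels add noise and residue.

def text_to_bits(message):
    return [int(b) for ch in message for b in format(ord(ch), "08b")]

def bits_to_text(bits):
    chunks = [bits[i:i + 8] for i in range(0, len(bits), 8)]
    return "".join(chr(int("".join(map(str, byte)), 2)) for byte in chunks)

def transmit(bits, start_ph=7.0, step=0.5):
    """Return the simulated pH reading after each pulse (base raises pH, acid lowers it)."""
    readings, ph = [], start_ph
    for bit in bits:
        ph += step if bit == 1 else -step
        readings.append(ph)
    return readings

def receive(readings, start_ph=7.0):
    """Recover bits by checking whether each reading moved pH up or down."""
    bits, prev = [], start_ph
    for ph in readings:
        bits.append(1 if ph > prev else 0)
        prev = ph
    return bits

if __name__ == "__main__":
    sent = text_to_bits("hi")
    print(bits_to_text(receive(transmit(sent))))   # prints "hi"

The residue problem mentioned above corresponds to the channel "remembering" earlier pulses, which is what makes separating signal from noise hard in practice.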


CertiKOS: A Breakthrough Toward Hacker-Resistant Operating Systems
Yale University News (11/14/16) William Weir

Researchers at Yale University have developed CertiKOS, a new operating system (OS) they say could lead to a new generation of reliable and secure systems. The researchers say CertiKOS is the first formally verified OS to run on multi-core processors while protecting against cyberattacks. CertiKOS incorporates formal verification to ensure a program performs precisely as its designers intended, a safeguard that could shield home appliances, Internet of Things devices, self-driving cars, and digital currency from hacking. CertiKOS supports concurrency, which sets it apart from previously verified systems and enables it to run on the latest multi-core machines. The architecture also is designed to take on new functionalities and be used for different application domains. "CertiKOS demonstrates that it is feasible and practical to build verified software that additionally provides evidence--through machine-checkable mathematical proof--that it is functionally correct," says Anindya Banerjee, program director at the U.S. National Science Foundation, which funded the effort. The CertiKOS verified operating system kernel is a critical element in the U.S. Defense Advanced Research Projects Agency's High-Assurance Cyber Military Systems program, which aims to build cyber-physical systems that are demonstrably free from cyber vulnerabilities.
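
Formal verification of this kind is carried out in a proof assistant, and the "machine-checkable mathematical proof" Banerjee mentions is literally code that a proof checker accepts. As a toy illustration only (unrelated to CertiKOS, which is vastly larger), here is a Lean 4 proof that a trivial function meets its specification:

-- Toy example of a machine-checked functional-correctness proof (Lean 4).
-- Not CertiKOS code; it only shows the flavor of pairing code with a proof.
def addOne (n : Nat) : Nat := n + 1

-- Specification: the output is always strictly greater than the input.
theorem addOne_gt (n : Nat) : addOne n > n := by
  unfold addOne
  omega

A kernel-scale verification works the same way in principle, but the specifications cover scheduling, memory management, and concurrent execution rather than a one-line function.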


Researchers Create Tool to See Network Traffic, Stop Cyberattacks
CMU News (11/14/16) Daniel Tkacik

Carnegie Mellon University (CMU) researchers are developing a visual tool to help defend against the kind of massive distributed denial-of-service (DDoS) attack that recently disabled several major websites, including Amazon and Netflix. "Visualization is one way to change abstract data into pictures, sound, and videos so you can see patterns in a very intuitive way," says Yang Cai, director of the CMU CyLab Security and Privacy Institute's Visual Intelligence Studio. The new tool enables users to visualize network traffic to more easily identify key changes and patterns. The researchers used the tool to inspect network traffic during DDoS attacks and to map out the structure of malware distribution networks. "Based on these visualization graphs, analysts can focus on critical areas to help shut down a malware distribution network, or in the case of a DDoS attack, target a critical node to thwart the attack," says CyLab researcher Sebastian Peryt. The researchers next want to examine human factors that could make the tool more usable and efficient, and to integrate it into a virtual reality platform so analysts can more easily explore the graphs with intuitive motions.
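
The article does not give implementation details, but the core idea of rendering traffic as a graph so that heavily targeted hosts stand out can be sketched in a few lines of Python with the networkx and matplotlib libraries. The flow records below are hypothetical, and this is not the CyLab tool.

# Sketch of traffic visualization as a graph: each flow record becomes an edge,
# and node size reflects how many connections a host receives, so a DDoS target
# or distribution hub stands out visually. Hypothetical data; not the CMU tool.
# Assumes `pip install networkx matplotlib`.
import networkx as nx
import matplotlib.pyplot as plt

# (source IP, destination IP) pairs, e.g. parsed from NetFlow or pcap data
flows = [
    ("10.0.0.1", "203.0.113.5"),
    ("10.0.0.2", "203.0.113.5"),
    ("10.0.0.3", "203.0.113.5"),
    ("10.0.0.4", "198.51.100.7"),
    ("10.0.0.1", "198.51.100.7"),
]

graph = nx.DiGraph()
graph.add_edges_from(flows)

# scale node size by in-degree so heavily targeted hosts dominate the picture
sizes = [300 + 600 * graph.in_degree(node) for node in graph.nodes()]
nx.draw_networkx(graph, node_size=sizes, font_size=8)
plt.axis("off")
plt.show()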


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]