Association for Computing Machinery
Welcome to the January 25, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets (click here) and for iPhones (click here) and iPads (click here).

HEADLINES AT A GLANCE


Self-Stacking Nanogrids Could Lead to Tinier Chips
eWeek (01/24/16) Jeffrey Burt

Massachusetts Institute of Technology (MIT) researchers have developed a technique that uses self-assembling block copolymers to build mesh structures, an approach that could lead to new ways of making memory and optical chips, and possibly computer processors. Block copolymers can spontaneously self-assemble into specific shapes, a process that could be used to build future chips. "We use the first layer of block copolymer as a template to self-assemble another layer of block copolymer on top of it," says MIT researcher Amir Tavakkoli. The researchers used two polymers, one carbon-based and the other silicon-based; as the silicon-based polymer tried to move away from the carbon-based polymer, it folded in on itself, creating cylinders with loops of silicon-based polymer on the inside and carbon-based polymer on the outside. The researchers exposed the cylinders to oxygen plasma, causing the carbon-based polymer to burn away and the silicon to oxidize, leaving glass-like cylinders attached to a base. They created a second layer of cylinders by repeating the process with copolymers of slightly different chain lengths, so the new cylinders naturally oriented themselves perpendicular to those in the first layer, creating a mesh structure. The glass-like wires by themselves cannot be used directly for electronic applications, but they could be combined with other molecules that make them electronically active, according to the researchers.


How One Intelligent Machine Learned to Recognize Human Emotions
Technology Review (01/23/16)

Shanghai Jiao Tong University researchers have developed a machine-learning method that can reliably identify the brain's emotional states. The researchers first created a database over a period of weeks by asking 15 students to watch 15 film clips that were each associated with positive, negative, or neutral emotions. During each viewing, the researchers recorded the subject's face as well as the electrical signals from 62 electrodes attached to the subject's head. The subjects then were asked whether the film triggered a positive, negative, or neutral response and to rate their emotional levels on a scale of one to five. The researchers then used a machine-learning algorithm to analyze the dataset, looking for common features in the brain waves of people in the same emotional states. The algorithm found a set of patterns that distinguished positive, negative, and neutral emotions, which worked for different subjects and for the same subjects over time with an accuracy of about 80 percent. "The performance of our emotion-recognition system shows that the neural patterns are relatively stable within and between sessions," the researchers say.
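The article does not specify the learning algorithm, but the pipeline it describes, labeled EEG recordings fed to a classifier that separates positive, negative, and neutral states, can be sketched with off-the-shelf tools. The Python sketch below uses synthetic stand-in features and an assumed linear classifier; the feature layout, dimensions, and model choice are illustrative assumptions, not details from the study.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the study's data: one feature vector per film-clip
# viewing (e.g., band-power features from 62 EEG channels), labeled
# 0 = negative, 1 = neutral, 2 = positive. Real features would come from the
# recorded EEG, not from a random generator.
rng = np.random.default_rng(0)
n_trials, n_features = 225, 62 * 5   # 15 subjects x 15 clips, 5 bands/channel (assumed)
labels = rng.integers(0, 3, size=n_trials)
features = rng.normal(size=(n_trials, n_features)) + labels[:, None] * 0.1

# A linear SVM on standardized features; the study's actual algorithm and
# its ~80 percent accuracy are not reproduced here.
model = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(model, features, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```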


After 2,500 Years, a Chinese Gaming Mystery Is Solved
Motherboard (01/25/16) Leif Johnson

Computer scientist John Tromp computed the total number of legal positions on Go's standard 19x19 board using servers at the Institute for Advanced Study's School of Natural Sciences, the Institute for Defense Analyses' Center for Communications Research, and Hewlett-Packard's Helion Cloud. The software Tromp used was originally developed in 2005, and by 2007 it had been used to compute the number of legal positions on a 17x17 board, which exhausted the hardware resources available at the time. Tromp provides the software on his GitHub repository, but it requires a server with 15 terabytes of fast scratch disk space, eight to 16 cores, and 192 GB of RAM. Although the leap from a 17x17 board to a 19x19 board may seem small, each increase in the board's dimensions demands a fivefold increase in the memory, time, and disk space required, according to Tromp. He plans to continue work on his "Cuckoo Cycle" proof-of-work system and to solve large-scale Connect Four problems, and he is especially interested in improving similar work on chess. "Having the ability to determine the state complexity of the greatest abstract board game, and not doing it, that just doesn't sit right with me," Tromp says.
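Tromp's software relies on far more machinery than a brute-force search (hence the terabyte-scale scratch space), but the definition it counts against is simple: a position is legal if every chain of same-colored stones has at least one liberty, an orthogonally adjacent empty point. As a toy illustration only, and not the method described above, the following Python sketch brute-forces that definition on a tiny board and reproduces the known count of 57 legal positions on a 2x2 board.

```python
from itertools import product

def legal_positions(n):
    """Count legal Go positions on an n x n board by brute force.

    A position is legal if every maximal chain of same-colored stones has at
    least one liberty (an orthogonally adjacent empty point). This is only
    feasible for tiny boards, since the search space has 3**(n*n) colorings.
    """
    points = [(r, c) for r in range(n) for c in range(n)]

    def neighbors(r, c):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= r + dr < n and 0 <= c + dc < n:
                yield r + dr, c + dc

    count = 0
    for colors in product(".BW", repeat=n * n):
        board = dict(zip(points, colors))
        seen, legal = set(), True
        for p in points:
            if board[p] == "." or p in seen:
                continue
            # Flood-fill the chain containing p and check for a liberty.
            chain, stack, has_liberty = {p}, [p], False
            while stack:
                q = stack.pop()
                for nb in neighbors(*q):
                    if board[nb] == ".":
                        has_liberty = True
                    elif board[nb] == board[p] and nb not in chain:
                        chain.add(nb)
                        stack.append(nb)
            seen |= chain
            if not has_liberty:
                legal = False
                break
        count += legal
    return count

print(legal_positions(2))  # prints 57, the number of legal 2x2 positions
```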


A New Quantum Approach to Big Data
MIT News (01/25/16) David L. Chandler

Researchers at the Massachusetts Institute of Technology (MIT), the University of Waterloo, and the University of Southern California have developed an approach to using quantum computers to handle massive digital datasets, which could accelerate the solving of astronomically complex problems. Algebraic topology is core to the new technique because it can reduce the impact of the unavoidable distortions that arise every time someone collects data about the real world, says MIT professor Seth Lloyd. Topological analysis "represents a crucial way of getting at the significant features of the data, but it's computationally very expensive," Lloyd notes. "This is where quantum mechanics kicks in." He cites as an example a dataset with 300 points, saying a conventional analysis of all the topological features in that system would require "a computer the size of the universe," making the problem effectively unsolvable. "That's where our algorithm kicks in," he says. Lloyd says solving the same problem with the new approach on a quantum computer would require only 300 quantum bits, and he believes a device this size could be realized in the next few years. "Our algorithm shows that you don't need a big quantum computer to kick some serious topological butt," Lloyd says.
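The scale behind the "computer the size of the universe" remark is easy to check: the number of possible combinations of topological features among 300 points grows like 2^300, while the observable universe is commonly estimated to contain only about 10^80 atoms. A quick back-of-the-envelope comparison (the atom count is a rough, commonly cited estimate):

```python
# Exhaustively examining the feature combinations of 300 data points scales
# like 2**300 units of work, versus roughly 10**80 atoms in the observable
# universe (a commonly cited rough estimate).
units_needed = 2 ** 300
atoms_in_universe = 10 ** 80  # rough estimate

print(f"2^300            ~ {units_needed:.2e}")                        # about 2.0e+90
print(f"shortfall factor ~ {units_needed / atoms_in_universe:.2e}")    # about 2.0e+10
# Even a machine with one processing unit per atom would fall short by a
# factor of roughly twenty billion.
```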


$28M Challenge to Figure Out Why Brains Are So Good at Learning
Harvard Gazette (01/21/16) Leah Burrows

The U.S. Intelligence Advanced Research Projects Activity (IARPA) has granted more than $28 million to Harvard University's John A. Paulson School of Engineering and Applied Sciences, Center for Brain Science, and Department of Molecular and Cellular Biology to develop advanced machine-learning algorithms. The goal is to fulfill IARPA's challenge to discover why brains are so good at learning, and to use that information to design computers that can interpret, analyze, and learn information on a human level. Researchers will record activity in the brain's visual cortex, map its connections at an unprecedented scale, and reverse-engineer the data to develop better learning algorithms. The project initially will train rats to visually recognize objects on a computer screen, recording their brain activity as they learn. Afterward, a slice of a trained rat's brain will be imaged with an advanced scanning electron microscope, generating more than a petabyte of data. The data will then be processed by algorithms that reconstruct cell boundaries, synapses, and connections, and visualize them in three dimensions. The next task will be to determine how the brain uses those connections to rapidly process information and infer patterns from new stimuli; from there, the team will build learning and pattern-recognition algorithms inspired and constrained by the connectomics data.


Quantum Computing Is Coming--Are You Prepared for It?
University of Bristol News (01/20/16)

University of Bristol Professor Jeremy O'Brien is part of a European Research Council Ideas Lab delegation that was invited to talk at the World Economic Forum last week in Davos, Switzerland. The discussion examined the future of computing and how new fields of computer science are paving the way for the next digital revolution. O'Brien outlined the current status of quantum computing and its potential applications, and also discussed his architectural blueprint for a manufacturable photonic quantum computer. He says quantum technologies offer ultra-secure communications, sensors of unprecedented precision, and computers that are exponentially more powerful than any supercomputer for a given task. O'Brien's research has focused on using light in its quantum state--the photon--as the key component. "In less than 10 years, quantum computers will begin to outperform everyday computers, leading to breakthroughs in artificial intelligence, the discovery of new pharmaceuticals, and beyond," O'Brien predicts. "The very fast computing power given by quantum computers has the potential to disrupt traditional businesses and challenge our cybersecurity. Businesses need to be ready for a quantum future because it's coming."


Penn Profs Work to Build Bug-Free Computer Programs
The Daily Pennsylvanian (01/21/16) Jinah Kim

University of Pennsylvania professors Benjamin Pierce, Stephanie Weirich, and Steve Zdancewic are working with researchers at the Massachusetts Institute of Technology and at Yale and Princeton universities on the Science of Deep Specification (DeepSpec) project, which aims to enable developers to build bug-free programs. The project will create precise descriptions of software behavior based on formal logic, which will also enable developers to verify their programs behave exactly as intended. Weirich says deep specifications previously have not been a priority for the software industry despite their obvious benefits. "It's hard to create specifications, and for the most part, the software industry has found it not cost-effective at all," Weirich says. "They've kind of abandoned it as an academic enterprise, something that professors do but that is never going to have an impact at all. But we think this is wrong." The DeepSpec project's board of technology industry advisors includes representatives from Microsoft, Google, and Facebook, and one of its goals is to change standards in academia as well as in the tech industry. "A big part of the pitch of our 'expedition' was a proposal to revamp certain aspects of the undergraduate computer science curriculum," Pierce says.
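DeepSpec itself targets machine-checked mathematical proofs that code satisfies a formal specification, which is far stronger than anything shown here. As a loose, hypothetical analogue in plain Python, one can at least write a specification as an executable property and check an implementation against it on randomized inputs; unlike a proof, such testing can only find counterexamples, never rule them out for all inputs.

```python
from collections import Counter
import random

def meets_spec(inp, out):
    """Executable specification for a sorting routine: the output must be
    nondecreasing and must be a permutation (same multiset) of the input."""
    nondecreasing = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    same_elements = Counter(out) == Counter(inp)
    return nondecreasing and same_elements

def my_sort(xs):
    """Implementation under test; a stand-in for real production code."""
    return sorted(xs)

# Randomized checking is far weaker than a machine-checked proof, but it
# makes the specification itself precise and executable.
random.seed(0)
for _ in range(1000):
    xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    assert meets_spec(xs, my_sort(xs)), f"specification violated on {xs}"
print("1,000 random cases satisfied the specification")
```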


Bluetooth and Wi-Fi Sensing From Mobile Devices May Help Improve Bus Service
University of Washington News and Information (01/20/16) Jennifer Langston

University of Washington (UW) researchers say they have developed a low-cost way to use Wi-Fi and Bluetooth signals from bus passengers' mobile phones and devices to gather useful ridership data. The researchers built sensors costing about $60 per bus, which detected a unique identifier called a Media Access Control (MAC) address. The system collected only MAC addresses and the time and location at which they were detected from Bluetooth or Wi-Fi signals, and each address was anonymized for privacy protection. The sensors mounted inside the buses initially picked up more than 20,000 unique addresses from mobile devices, so a key challenge was developing processing algorithms to filter out signals from people who were near the bus but not actually riding it. "That's probably the hardest part of the whole thing," says UW researcher Kristian Henrickson. The researchers eliminated signals that were unreasonably long or short, or that appeared or vanished far from a bus stop, resulting in 2,800 reliable "trips." "Let's say you have a Husky game or Seahawks game and you want to know how much demand changes so you can offer the right level of bus service for this special event," says study senior author Yinhai Wang. "If you can gather enough data from these real-time sensing systems, that's going to offer very valuable information."
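The filtering step Henrickson describes, discarding device sightings whose durations are implausible or whose first and last detections fall far from a bus stop, can be sketched as a simple rule-based pass over detection records. The record format, thresholds, and distances in the Python sketch below are illustrative assumptions, not details of the UW system.

```python
from dataclasses import dataclass

@dataclass
class Sighting:
    """One anonymized device seen on board: a hashed identifier plus when
    and where it first and last appeared in the on-bus sensor's logs."""
    device_hash: str
    duration_s: float      # seconds between first and last detection
    board_dist_m: float    # distance from a bus stop at first detection
    alight_dist_m: float   # distance from a bus stop at last detection

def looks_like_a_trip(s: Sighting,
                      min_s: float = 120.0,
                      max_s: float = 3600.0,
                      stop_radius_m: float = 50.0) -> bool:
    """Keep a sighting only if its duration is plausible for a bus ride and
    it both appeared and disappeared near a stop. Thresholds are invented
    for illustration."""
    plausible_duration = min_s <= s.duration_s <= max_s
    near_stops = s.board_dist_m <= stop_radius_m and s.alight_dist_m <= stop_radius_m
    return plausible_duration and near_stops

sightings = [
    Sighting("a1f3...", duration_s=900, board_dist_m=12, alight_dist_m=8),     # likely rider
    Sighting("77c0...", duration_s=25, board_dist_m=10, alight_dist_m=15),     # passer-by
    Sighting("5be9...", duration_s=1400, board_dist_m=300, alight_dist_m=20),  # nearby vehicle?
]
trips = [s for s in sightings if looks_like_a_trip(s)]
print(f"{len(trips)} of {len(sightings)} sightings kept as reliable trips")
```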


Biodegradable Bodies for More Eco-Friendly Robots
Reuters (01/19/16) Matthew Stock

Scientists in Italy are developing "smart materials" that will enable robots to biodegrade like a human body once they've reached the end of their life span. Robots currently are made from components largely consisting of plastic and metal, which are difficult to dispose of and contribute to climate change. The researchers at the Italian Institute of Technology's (IIT) Smart Materials Group are working with substances at the nano level to create new materials that preserve the properties of the individual components, but exhibit characteristics that would not be possible separately. For example, the researchers say their smart materials eventually could replace conventional plastic with a new form of bioplastic made from food waste. IIT's Athanassia Athanassiou notes robotics would be an important application of the technology. "These biodegradable materials, natural materials, they are very flexible so they can be used for robotic skins," Athanassiou says. "But they can be also very hard so they can be used for internal parts of a robot." IIT robot researcher Nikos Tsagarakis says materials other than metal will be needed to build the next generation of humanoid robots. "It will help us to make lighter robots, more efficient and, finally, also recyclable," Tsagarakis says.


CMU, Aquion Energy Partner With Government on Power Grid Research
Pittsburgh Post-Gazette (01/20/16) Daniel Moore

Researchers at Carnegie Mellon University (CMU) and battery-maker Aquion Energy will help the U.S. Department of Energy explore the use of large-scale energy storage to strengthen the power grid's ability to accommodate more solar installations. The $2-million project will focus on the development of smart inverters that convert the direct current produced by photovoltaic cells into the alternating current consumed in homes and businesses. Researchers will write the algorithms for software that digitally connects these systems with automation and control capabilities. CMU professor Soummya Kar, the principal scientist on the project, says smart inverters will essentially act as a remote power switch from the solar panel to a home or to a battery storage unit. He says they will be able to quickly send and receive messages and share data with the consumer and utility, as well as help technicians diagnose maintenance issues. Aquion Energy will supply the batteries for the experiments. The research is planned to run over the next three years, including a demonstration planned for late 2017 with the National Rural Electric Cooperative Association. It will be tested at a yet-to-be-decided rural cooperative in Pennsylvania, Maryland, or Delaware.
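Kar's description of the inverter as a remotely controllable switch that routes solar output to the home, to battery storage, or back to the grid while exchanging data with the utility suggests a simple dispatch rule. The Python sketch below is a hypothetical toy with invented thresholds and signal names; it is not the CMU/Aquion algorithm.

```python
def route_solar_power(solar_kw: float,
                      home_load_kw: float,
                      battery_soc: float,
                      utility_wants_export: bool) -> dict:
    """Toy dispatch rule for a smart inverter: serve the home first, then
    charge the battery (unless it is at 100% state of charge), then export
    to the grid only if the utility has signaled that it wants the energy."""
    to_home = min(solar_kw, home_load_kw)
    surplus = solar_kw - to_home
    to_battery = surplus if battery_soc < 1.0 else 0.0
    to_grid = surplus - to_battery if utility_wants_export else 0.0
    curtailed = surplus - to_battery - to_grid
    return {"home": to_home, "battery": to_battery,
            "grid": to_grid, "curtailed": curtailed}

# Midday example: 5 kW of sun, 2 kW of house load, a full battery, and the
# utility asking for exports.
print(route_solar_power(5.0, 2.0, battery_soc=1.0, utility_wants_export=True))
# -> {'home': 2.0, 'battery': 0.0, 'grid': 3.0, 'curtailed': 0.0}
```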


Mining Social Media Can Help Improve Disaster Response Efforts
Penn State News (01/20/16) Liam Jackson

Social media posts can be used to quickly identify areas in need of assistance following a disaster, according to a team led by Pennsylvania State University (PSU) researchers. The team analyzed the September 2013 Colorado floods, accessing more than 150,000 tweets from affected people in Boulder. They used a tool called the CarbonScanner to identify clusters of posts suggesting possible locations of damage, and then analyzed more than 22,000 photos from the area obtained through satellites, Twitter, Flickr, the Civil Air Patrol, unmanned aerial vehicles, and other sources. The researchers were able to collect and analyze images in near real-time using a machine-learning algorithm, enabling them to quickly identify individual pixels of images that contained water. The algorithm "allowed the computer to 'learn' what was and wasn't water," says PSU graduate student Elena Sava. The researchers' findings confirmed Twitter data could help identify hotspots for which satellite imagery should be acquired.
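The pixel-level water detection described above, a classifier trained on labeled examples of what is and is not water, can be sketched with standard tools. The synthetic training pixels and the random-forest choice in the Python sketch below are illustrative assumptions; in the actual study the labels came from annotated imagery, and the article does not specify which learning algorithm was used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for labeled training pixels: RGB values in [0, 1],
# label 1 = water, 0 = not water. Making the water pixels bluer on average
# is a toy assumption, not a remote-sensing fact.
water = rng.normal([0.2, 0.3, 0.7], 0.1, size=(500, 3))
land = rng.normal([0.5, 0.5, 0.3], 0.1, size=(500, 3))
X = np.clip(np.vstack([water, land]), 0, 1)
y = np.array([1] * 500 + [0] * 500)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classify every pixel of a (toy) 100x100 image, producing a water mask.
image = rng.uniform(0, 1, size=(100, 100, 3))
mask = clf.predict(image.reshape(-1, 3)).reshape(100, 100)
print(f"flagged {int(mask.sum())} of {mask.size} pixels as water")
```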


Big Thinking About Big Data
Berkeley Research (01/19/16) Wallace Ravven

University of California, Berkeley professor Michael Jordan believes the key to extracting and analyzing meaning from big data is to focus only on a small subset of relevant data. Jordan is developing concepts that combine computer science with statistics, and he says "bringing the two perspectives together is a win-win." One example of this principle in practice is Jordan's work with colleagues on the "Bag of Little Bootstraps," in which the goal is not only to answer a query over a large-scale database, but also to place an "error bar" around the answer, an estimate of that answer's accuracy, which requires fast computation. "The only way to know if the output from a data analysis is solid enough to provide a basis for a decision is to accompany that output with an error bar," Jordan notes. He says the Bag of Little Bootstraps exemplifies the "divide and conquer" precept of computer science. "We divide a large dataset into small subsets, compute error bars on each subset, and combine the error bars to obtain an overall error bar," Jordan notes. Each step requires statistical thinking to guarantee the overall error bar is actually right, and Jordan says "in practice this approach can be hundreds of times faster than classical approaches to computing error bars."
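Jordan's description maps onto the published Bag of Little Bootstraps procedure: draw small subsets, resample each subset up to the full data size via multinomial weights, compute an error bar per subset, and average the error bars. The Python sketch below is a simplified illustration with arbitrary parameter choices; for clarity it expands each weighted resample into an explicit array, whereas the real method keeps the per-subset cost small by passing the weights directly to the estimator.

```python
import numpy as np

def blb_standard_error(data, estimator, n_subsets=10, subset_exp=0.6,
                       n_resamples=100, seed=0):
    """Bag of Little Bootstraps sketch: estimate the standard error of
    `estimator` (a function of a 1-D sample) over `data`."""
    rng = np.random.default_rng(seed)
    n = len(data)
    b = int(n ** subset_exp)              # small subset size, b << n
    per_subset_errors = []
    for _ in range(n_subsets):
        subset = rng.choice(data, size=b, replace=False)
        estimates = []
        for _ in range(n_resamples):
            # Resample the subset up to the full size n via multinomial counts.
            counts = rng.multinomial(n, np.full(b, 1.0 / b))
            estimates.append(estimator(np.repeat(subset, counts)))
        per_subset_errors.append(np.std(estimates))   # error bar for this subset
    return float(np.mean(per_subset_errors))          # combined error bar

# Example: error bar on the mean of a skewed sample of 100,000 points.
data = np.random.default_rng(1).exponential(scale=2.0, size=100_000)
print(blb_standard_error(data, np.mean))
# Should be close to the classical data.std() / sqrt(len(data)), about 0.0063.
```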


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe