Association for Computing Machinery
Welcome to the April 1, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets (click here) and for iPhones (click here) and iPads (click here).

HEADLINES AT A GLANCE


U.S. Textile Industry Turns to Tech as Gateway to Revival
The New York Times (04/01/16) Steve Lohr

A $320-million project announced on Friday involves a collaboration between textile manufacturers, technology firms, academic institutions, and the U.S. Defense Department to bring the textile sector into the digital era by embedding various sensors and semiconductors into fabrics. "This is about reimagining what a fabric is, and rebirthing textiles into a high-tech industry," says Massachusetts Institute of Technology professor Yoel Fink. The Advanced Functional Fabrics of America project is designed to dovetail with the Internet of Things by equipping fabrics with electronics to perform tasks that include monitoring the wearer's health and storing energy. The Defense Department wants the project to yield new combat uniforms capable of changing color to identify friendly or unfriendly soldiers, or of hiding a soldier from foes wearing night-vision goggles, to name two examples. Project organizers also anticipate establishing about 24 startup incubators to commercialize smart-fabric products that can be manufactured at existing U.S. mills. Marty Lawrence of apparel maker VF Corporation says the company once relied on research from universities and its suppliers rather than supporting a research-and-development operation of its own. It has since set up four innovation centers and hired researchers to explore new fabric applications.


Fooling the Machine
Popular Science (03/30/16) Dave Gershgorn

A growing field of research suggests artificial intelligence (AI) can easily be fooled, as it can provide correct answers without truly understanding information. Attackers could theoretically exploit this vulnerability by feeding deceptive or "adversarial" data to machine-learning systems so they reach potentially disastrous conclusions, but some researchers say accounting for this danger early in the AI development process could help address it. In one experiment, researchers altered images entered into a deep neural network by only 4 percent, and successfully fooled it into misclassifying the images 97 percent of the time. Another research team fed false inputs to an image classifier and observed the decisions the machine made in order to reverse-engineer the algorithm, then used that knowledge to deceive an image-recognition system of the kind that could be used in driverless vehicles. Meanwhile, a team at Georgetown University and another at the University of California, Berkeley developed algorithms that can issue speech commands to digital personal assistants that are unintelligible to human ears, so the assistants perform actions not intended by their owners. These research groups have determined how to retrain their classifier networks so the recognition systems can protect against such attacks; the method involves feeding the networks both legitimate and adversarial input.
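
The kind of "adversarial" perturbation described above can be sketched in a few lines of Python. The toy linear classifier, the loss gradient, and the step size below are invented for illustration; they are not the systems or data used by the researchers, but they show the core trick of nudging an input in the direction that most increases the model's error.

    import numpy as np

    # Toy linear classifier: score = w.x + b; predict class 1 if score > 0.
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=16), 0.1

    def predict(x):
        return int(w @ x + b > 0)

    def loss_gradient(x, label):
        # Gradient of the logistic loss with respect to the *input* x.
        p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
        return (p - label) * w

    x = rng.normal(size=16)          # an ordinary, unmodified input
    label = predict(x)

    # Fast-gradient-sign-style attack: move every feature a small, fixed
    # amount in the direction that increases the loss for the current label.
    epsilon = 0.5                    # deliberately large step for illustration
    x_adv = x + epsilon * np.sign(loss_gradient(x, label))

    print("original prediction:   ", predict(x))
    print("adversarial prediction:", predict(x_adv))
    print("max per-feature change:", float(np.max(np.abs(x_adv - x))))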


Could AlphaGo Bluff Its Way Through Poker?
Technology Review (03/30/16) Will Knight

University College London (UCL) lecturer David Silver suggests software similar to Google DeepMind's AlphaGo, which recently defeated one of the world's top Go players in a five-game match, could be crafted to play poker competently. In collaboration with UCL student Johannes Heinrich, Silver utilized deep reinforcement learning to generate an effective playing strategy in both Leduc hold'em, a simplified research variant of poker, and Texas hold'em. In Leduc, the algorithm achieved a Nash equilibrium, the optimal strategy as defined by game theory. In Texas hold'em, the algorithm reached the performance of an expert human player. AlphaGo's triumph was rooted in its use of deep reinforcement learning and tree search to arrive at successful Go maneuvers. The former technique entails training a large neural network with positive and negative rewards, and the latter is a mathematical approach for looking ahead in a game. Meanwhile, Google DeepMind and the University of Oxford are training a neural network to play the fantasy card games Magic: The Gathering and Hearthstone. The effort involves giving the network the ability to interpret the information displayed on each card, which may be either structured or unstructured.
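
The "positive and negative rewards" idea behind reinforcement learning can be illustrated with a deliberately tiny Python example. The sketch below uses a one-step betting game and a tabular value estimate instead of a large neural network; the game, its payoffs, and the learning parameters are invented for illustration and are not DeepMind's or the UCL researchers' actual method.

    import random

    # Reward-driven learning in miniature: estimate the value of two actions
    # from repeated play, standing in for the reward signal that trains the
    # much larger networks described above.
    actions = ["fold", "bet"]
    q = {a: 0.0 for a in actions}        # current value estimate per action
    alpha, epsilon = 0.1, 0.2            # learning rate, exploration rate

    def play(action):
        # Hypothetical one-step game: folding is worth nothing; betting wins
        # one chip 60 percent of the time and loses one chip otherwise.
        if action == "fold":
            return 0.0
        return 1.0 if random.random() < 0.6 else -1.0

    random.seed(1)
    for _ in range(5000):
        # Epsilon-greedy: mostly pick the best-looking action, sometimes explore.
        a = random.choice(actions) if random.random() < epsilon else max(q, key=q.get)
        reward = play(a)                         # positive or negative reward
        q[a] += alpha * (reward - q[a])          # move estimate toward the reward

    print(q)   # the "bet" estimate should settle near its true value of +0.2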


To Make Computers Better, Let Them Get Sloppy
New Scientist (03/30/16) Paul Marks

Rice University's Krishna Palem says he wants to address the trade-off between computing performance and energy efficiency with a proposal "to alter the computer itself to give you cheaper but slightly less accurate answers." Palem designed a probabilistic and deliberately unstable version of complementary metal-oxide-semiconductor (CMOS) technology that saves power at the cost of precision. In a test of the concept, Palem's team built a digital video decoder that interpreted the least significant bits imprecisely when rendering pixel data as screen colors; human viewers noticed little loss in image quality. The team is now experimenting with hearing aids, and initial tests found imprecise digital processing can cut power consumption in half while producing only a minor reduction in intelligibility. University of Oxford researcher Tim Palmer thinks Palem's energy-efficient chips could be integral to the creation of computers that can make more accurate climate change models. "If we can reduce the number of bits that you need to do calculations, that would have an enormous impact on energy consumption," he notes. To meet the challenge of selecting which bits to downgrade, researchers are developing methods to code specific accuracy thresholds so programmers can specify when and where errors are acceptable. Other uses that may benefit from inexact computing include accident-investigation simulation.
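
A software-only way to get a feel for this trade-off is to throw away the least significant bits of a value and see how large the error can get, which is roughly what the video-decoder experiment tolerated in its pixel data. The Python sketch below uses invented array sizes and models only the loss of precision, not the hardware or the power savings.

    import numpy as np

    # Drop the low-order bits of 8-bit pixel values and measure the worst-case
    # error, mimicking in software the imprecision an inexact circuit allows.
    rng = np.random.default_rng(42)
    pixels = rng.integers(0, 256, size=100_000)

    for dropped_bits in (1, 2, 3, 4):
        approx = (pixels >> dropped_bits) << dropped_bits   # low bits discarded
        worst = int(np.max(pixels - approx))                # per-pixel error bound
        print(f"drop {dropped_bits} low bits -> worst-case error {worst} out of 255")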


Wireless Tech Means Safer Drones, Smarter Homes, and Password-Free Wi-Fi
MIT News (03/31/16) Adam Conner-Simons

Researchers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory, led by professor Dina Katabi, have developed a system called Chronos, which enables a single Wi-Fi access point to locate users to within tens of centimeters, without any external sensors. Chronos locates users by calculating the "time-of-flight" it takes for data to travel from the user to an access point. The researchers say the system is 20 times more accurate than existing systems, and can compute time-of-flight with an average error of 0.47 nanoseconds, or less than half of a billionth of a second. The researchers exploit the fact that Wi-Fi uses an encoding method that transmits the bits of a packet over several narrow frequency bands. In addition, if a person is indoors, the Wi-Fi signals can bounce off walls and furniture, meaning the receiver gets several copies of the signal, each of which experiences a different time-of-flight. To identify the actual direct path, the researchers developed an algorithm to determine the delays experienced by all of the copies; the path with the smallest time-of-flight is taken as the direct one. The researchers note Chronos could be used in other situations in which sensors are limited or inaccessible, such as locating lost devices or controlling large fleets of drones.
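
The arithmetic behind those numbers is simple enough to sketch: a radio signal covers roughly 30 centimeters per nanosecond, so sub-nanosecond timing is what makes tens-of-centimeters accuracy possible, and among multipath copies the direct path is the one with the smallest time-of-flight. The Python sketch below uses invented times-of-flight and illustrates only the geometry, not MIT's actual Chronos algorithm.

    SPEED_OF_LIGHT = 3.0e8   # meters per second

    def distance_m(time_of_flight_s):
        return time_of_flight_s * SPEED_OF_LIGHT

    # A 0.47-nanosecond timing error corresponds to roughly 14 cm of distance.
    print(f"0.47 ns of timing error ~ {distance_m(0.47e-9) * 100:.0f} cm")

    # Hypothetical times-of-flight for copies of one signal that bounced off
    # walls and furniture; the smallest value is taken as the direct path.
    multipath_tofs_s = [21.3e-9, 24.9e-9, 33.0e-9]
    direct = min(multipath_tofs_s)
    print(f"estimated direct-path distance: {distance_m(direct):.2f} m")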


'Mixed Reality' Technology Brings Mars to Earth
NASA News (03/30/16) Elizabeth Landau

Researchers at the U.S. National Aeronautics and Space Administration's (NASA) Jet Propulsion Laboratory (JPL) and Microsoft are developing "Destination: Mars," a "mixed reality" interactive exhibit opening this summer that offers a guided tour of an area of Mars with Apollo 11 astronaut Buzz Aldrin. The exhibit uses the Microsoft HoloLens mixed-reality headset to merge virtual elements with the user's actual environment, creating a world in which real and virtual objects interact. Guests will "visit" several sites on Mars, reconstructed using imagery from NASA's Curiosity Mars rover. Aldrin will serve as "holographic tour guide" on the journey, and Curiosity rover driver Erisa Hines will lead participants to places on Mars where scientists have made important discoveries. "We're excited to give the public a chance to see Mars using cutting-edge technologies that help scientists plan Curiosity's activities on Mars today," says Destination: Mars project manager Jeff Norris. JPL researchers also are developing mixed-reality applications to support astronauts on the International Space Station and engineers responsible for the design and assembly of spacecraft. "By connecting astronauts to experts on the ground, mixed reality could be transformational for scientific and engineering efforts in space," Norris says.


Human Brain Project's Research Platforms Released
Swiss Federal Institute of Technology in Lausanne (03/30/16)

The Human Brain Project (HBP) announced the release of initial versions of its six information and communications technology (ICT) platforms to users outside the initiative. The platforms, covering neuroinformatics, brain simulation, high-performance computing, medical informatics, neuromorphic computing, and neurorobotics, are designed to help the scientific community accelerate progress in neuroscience, medicine, and computing. The newly released platforms consist of prototype hardware, software tools, databases, and programming interfaces, which will be refined and expanded in collaboration with users. The public release of the platforms marks the end of the HBP's Ramp-Up Phase and the beginning of its Operational Phase. The HBP platforms are designed to help researchers share data and results, as well as leverage advanced ICT capabilities. The platforms should, for example, enable closer collaboration between scientists to create more detailed models and simulations of the brain. A first step in opening up the platforms to the wider scientific community has already been taken via the funding of the first HBP Partnering Projects through the European Union's FLAG-ERA 2015 Joint Transnational Call. The platforms can be accessed via the HBP Collaboratory, a Web portal where users also can find guidelines, tutorials, and information on training seminars.


Open Source Microprocessor
ETH Zurich (03/30/16) Fabio Bergamin

Researchers at ETH Zurich and the University of Bologna have developed PULPino, an open-source microprocessor they say will make it easier and less expensive for developers to build wearable microelectronic devices and chips for the Internet of Things (IoT). PULP stands for "parallel ultra-low power," and the arithmetic functions the microprocessor can perform also are open source. The researchers made the processor compatible with RISC-V, an open-source instruction set developed at the University of California, Berkeley. PULPino is designed for battery-powered devices with extremely low energy consumption, such as chips for small devices like smartwatches, sensors for monitoring physiological functions, or sensors for the IoT. "Using the PULPino processor, we are developing a smartwatch equipped with electronics and a micro camera," says ETH Zurich professor Luca Benini. "It can analyze visual information and use it to determine the user's whereabouts." The researchers want to work with other project partners to jointly develop academically interesting extensions to PULPino, which also would be open source, so the number of functional components can grow.


Wireless-Powered Network Gets Its Fair Share
A*STAR Research (03/30/16)

Many sensors currently rely on battery power, which can limit their use in smart device networks. One alternative is to build a wireless-powered communication network (WPCN) containing sensors that harvest energy from radio waves transmitted by the central hub. Supercapacitors offer a promising way to store this energy because they are smaller and charge more quickly than rechargeable batteries. However, supercapacitors cannot store energy for long periods because they tend to self-discharge. Chin Keong Ho of Singapore's Agency for Science, Technology and Research (A*STAR) and colleagues say they have developed a strategy to solve this problem. They calculated the best ways to schedule transmissions around a network of sensors fitted with supercapacitors, so each sensor was sure to have the energy it needed to send its data back to the hub. The researchers developed an algorithm that describes the optimal solution and a second algorithm that minimizes the total charging and transmission time needed to communicate once with every sensor in the network. The algorithm also accounts for differences in the quality of the communication link between different sensors. In the future, the algorithms should help in designing more efficient WPCNs, and the researchers are now testing them on wireless-power prototypes in the lab.
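
For intuition about why scheduling matters here, the Python sketch below totals up a naive sequential round in which each sensor first harvests the energy it needs and then transmits. The sensor names, energy budgets, harvest rates, and transmission times are all invented, and the sequential ordering is only a baseline for illustration; the A*STAR algorithms compute an optimized schedule, which this sketch does not.

    # Naive back-of-the-envelope round for a wireless-powered sensor network:
    # each sensor must harvest enough energy from the hub before it can send.
    sensors = [
        # (name, energy needed per transmission in mJ, harvest rate in mW,
        #  transmission time in ms -- worse links need longer, costlier sends)
        ("door",   2.0, 0.5, 10.0),
        ("window", 4.0, 0.4, 25.0),
        ("garden", 9.0, 0.2, 60.0),
    ]

    total_ms = 0.0
    for name, energy_mj, harvest_mw, tx_ms in sensors:
        charge_ms = energy_mj / harvest_mw * 1000.0   # mJ / mW = seconds
        total_ms += charge_ms + tx_ms
        print(f"{name}: charge {charge_ms:.0f} ms, transmit {tx_ms:.0f} ms")

    print(f"naive sequential round: {total_ms:.0f} ms")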


Seventh-Graders Learn Astrophysics Through Mixed-Reality Computer Simulation
University of Illinois News Bureau (03/30/16) Sharita Forrest

Researchers at the University of Illinois (UI) have developed MEteor, a mixed-reality computer simulation that teaches young students physics concepts such as planetary motion and gravitational acceleration. MEteor merges virtual reality with the physical world so participants interact physically with digital objects. For example, students physically acting the part of an asteroid traveling through space select their speed and angle using a virtual spring launcher, laser-scanning technology tracks their position as they move across the platform, and real-time data is projected onto an adjoining wall along with basic instructions. The researchers tested the system on students and found their scores were significantly higher than those of students using a standard desktop computer. The use of familiar, everyday physical motions may make physics concepts more accessible and relevant for young people than conventional classroom physics instruction does, says UI professor Robb Lindgren, the principal investigator on the project. "That's really exciting, and I think it suggests that more research and experimentation with these kinds of immersive, interactive environments is going to be important if we want to increase the science, technology, engineering, and mathematics workforce," he notes.
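
The physics the exhibit dramatizes, launching a body at a chosen speed and angle and letting gravity bend its path, fits in a few lines of Python. The planet, launch parameters, and units below are simplified, invented values for illustration; this is not MEteor's simulation code.

    import math

    G_M = 1.0                        # gravitational parameter of the planet
    speed, angle = 0.7, math.radians(60)

    x, y = 3.0, 0.0                                  # launch position
    vx, vy = speed * math.cos(angle), speed * math.sin(angle)
    dt = 0.01

    for step in range(2001):
        r = math.hypot(x, y)                         # distance to the planet
        ax, ay = -G_M * x / r**3, -G_M * y / r**3    # Newtonian gravity
        vx, vy = vx + ax * dt, vy + ay * dt          # simple Euler step
        x, y = x + vx * dt, y + vy * dt
        if step % 500 == 0:
            print(f"t = {step * dt:5.2f}   position = ({x:6.2f}, {y:6.2f})")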


Internet on a Chip: Researchers Step Towards Energy-Efficient Multicore Chips
Carnegie Mellon University (03/29/16)

Researchers at Carnegie Mellon and Washington State universities say they have identified a new approach to enabling energy-efficient multicore systems. The researchers used wireless on-chip communication between individually controllable clusters to provide an efficient communication backbone, which can be tailored for large-scale multicore systems. As the number of cores packed into a single chip rises, scalable power-management strategies are needed to keep power under prescribed limits. Voltage frequency islands (VFIs) usually are used to enable such strategies because they divide a system into islands with individually adjustable voltages and frequencies to reduce power within allowable performance penalties. However, the main challenge of VFI-based designs is that on-chip communication costs negatively impact application performance. The researchers say they have developed two solutions to this problem, the first of which is a VFI-clustering methodology. A hybrid VFI clustering that combines per-VFI utilization and inter-VFI communication enables minimal inter-VFI communication without greatly increasing the inter-cluster utilization variation. The researchers also used a small-world wireless network-on-chip to enable fast and energy-efficient on-chip communication, which can mitigate most of the performance penalties associated with VFIs.
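
The trade-off such a clustering methodology balances can be made concrete with a toy cost function. One common motivation is that cores grouped into the same island also share one voltage and frequency setting, so the sketch below penalizes utilization spread within an island alongside traffic that crosses island boundaries; the utilizations, traffic counts, weights, and this particular reading of the trade-off are invented for illustration and are not the researchers' actual clustering algorithm.

    import statistics

    util = [0.9, 0.8, 0.3, 0.2]                      # per-core utilization
    traffic = [[0, 5, 1, 0],                         # messages/cycle between cores
               [5, 0, 1, 0],
               [1, 1, 0, 4],
               [0, 0, 4, 0]]

    def cost(clusters, w_comm=1.0, w_var=10.0):
        # Traffic that crosses island boundaries (counted once per pair).
        comm = sum(traffic[i][j]
                   for a in clusters for b in clusters if a is not b
                   for i in a for j in b) / 2
        # Utilization spread inside each island.
        var = sum(statistics.pvariance([util[i] for i in c]) for c in clusters)
        return w_comm * comm + w_var * var

    print(cost([[0, 1], [2, 3]]))   # chatty pairs and similar loads kept together
    print(cost([[0, 2], [1, 3]]))   # splits the chatty pairs across islands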


Bringing Big Neural Networks to Self-Driving Cars, Smartphones, and Drones
IEEE Spectrum (03/28/16)

At the IEEE International Solid-State Circuits Conference in February, teams from the Massachusetts Institute of Technology (MIT), Nvidia, and the Korea Advanced Institute of Science and Technology (KAIST) demonstrated prototypes of low-power chips designed to run artificial neural networks that could help enable advanced perceptual and predictive abilities. MIT professor Vivienne Sze says increasing the scale of neural networks also increases their power consumption, with the main power drain being the transfer of data between processor and memory. In collaboration with fellow MIT professor and Nvidia researcher Joel Emer, Sze has developed Eyeriss, a custom chip designed to run cutting-edge convolutional neural networks. The chip can run the AlexNet network using 0.3 watts of power, instead of the 5 to 10 watts consumed by a typical mobile graphics-processing unit. The chip saves energy by placing a dedicated memory bank close to each of its 168 processing engines and by retrieving data from a larger primary memory bank as infrequently as possible. Meanwhile, KAIST professor Lee-Sup Kim says circuits designed for neural network-driven image analysis also will help with airport face-recognition systems and robot navigation. His lab demonstrated a chip designed to be a general visual processor for the Internet of Things, which minimizes data movement by bringing memory and processing closer together. It uses only 45 milliwatts of power and lightens the computational load as well.
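
A rough way to see why data movement dominates is to count memory fetches with and without a small local buffer. The Python sketch below compares a naive convolution dataflow that re-reads its filter weights from main memory at every position against one that loads the weights once into a nearby buffer; the layer sizes are invented, and the sketch only counts fetches rather than modeling the Eyeriss design itself.

    # Count main-memory weight fetches for one small convolutional filter
    # swept over an image, with and without local weight reuse.
    image_h = image_w = 224
    k = 3                                    # 3x3 convolution filter
    positions = (image_h - k + 1) * (image_w - k + 1)

    # Naive dataflow: re-fetch all 9 weights from main memory at every position.
    naive_weight_fetches = positions * k * k

    # Locally buffered dataflow: fetch the 9 weights once and reuse them.
    buffered_weight_fetches = k * k

    print(f"naive:    {naive_weight_fetches:>9,} weight fetches")
    print(f"buffered: {buffered_weight_fetches:>9,} weight fetches")
    print(f"reduction: {naive_weight_fetches // buffered_weight_fetches:,}x")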


Savage: Comprehensive Software Security for Cars Will Take Years
Network World (03/31/16) Tim Greene

University of California, San Diego professor Stefan Savage, who this week was named to receive the 2015 ACM-Infosys Foundation Award in the Computing Sciences, says a comprehensive security architecture for automobiles will take another three to four years to be realized. He says the auto industry has yet to recognize the seriousness of the issue, and the missing ingredient is the capability to update software routinely to patch newly discovered vulnerabilities. Moreover, Savage says a single team must be delegated to oversee all the software running on individual components in cars to ensure their security and that of their interfaces with other devices. "Almost all the vulnerabilities we found were at the interfaces of code written by different parties," he notes. Savage also cites the failure to strip software written with off-the-shelf code of potential vulnerabilities, and he warns that unless an overarching vetting of software is in place, weaknesses can sneak in with aftermarket components. Savage notes self-driving cars will introduce entirely different security challenges from non-automated automobiles, and keeping them secure via traditional strategies will be a problem, given their heavy reliance on software. He also hopes his ACM-Infosys Foundation Award will help raise awareness of an ongoing study to empirically measure the efficacy of various means of combating cyberattacks.


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe