Association for Computing Machinery
Welcome to the September 25, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, as well as for iPhones and iPads.

HEADLINES AT A GLANCE


Obama Administration Explored Ways to Bypass Smartphone Encryption
The Washington Post (09/24/15) Andrea Peterson; Ellen Nakashima

An Obama administration task force has examined four possible approaches by which technology companies could permit law enforcement to unlock encrypted messages; all are technically feasible, but each has pitfalls.  The first would entail providers adding a physical, encrypted port to their devices.  Companies would keep a separate set of keys to unlock devices via that port, but only if law enforcement had physical access to a device and secured a court order compelling the company's cooperation.  The second proposal would leverage automatic software updates, with a court order mandating that the firm insert spyware onto targeted customers' phones or tablets.  The third would call for splitting up encryption keys, requiring companies to establish a way to unlock encrypted content but to divide the key into several pieces that could be reassembled only under court order.  The last, a "forced backup," would have court orders compel providers to upload data stored on an encrypted device to an unencrypted location, requiring companies to design new backup channels or "substantially" alter existing systems.  The Center for Democracy & Technology's Joseph Lorenzo Hall warns all the approaches are tantamount to installing a "backdoor" because each requires developers to modify their systems to include a clandestine instrument for accessing encrypted content.  Critics say these strategies undermine encryption's security by adding complexity, and some federal officials would prefer that tech companies themselves design solutions based on their own systems.
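The key-splitting option amounts to a simple secret-sharing scheme.  Below is a minimal Python sketch of one such scheme (an XOR-based, all-shares-required split); it illustrates the general idea only and is not a description of any vendor's or the task force's actual design.

    import os
    from functools import reduce

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def split_key(key: bytes, n_shares: int) -> list:
        """Split a key into n shares; all n must be combined to recover it."""
        shares = [os.urandom(len(key)) for _ in range(n_shares - 1)]
        # The last share is the key XOR-ed with every random share, so XOR-ing
        # all shares together cancels the randomness and yields the key again.
        shares.append(reduce(xor_bytes, shares, key))
        return shares

    def combine_shares(shares: list) -> bytes:
        """Reassemble the key; any missing or altered share yields garbage."""
        return reduce(xor_bytes, shares)

    if __name__ == "__main__":
        device_key = os.urandom(32)                # hypothetical 256-bit device key
        pieces = split_key(device_key, 3)          # e.g., held by vendor, court, agency (illustrative)
        assert combine_shares(pieces) == device_key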


Researchers Trade Algorithms for Photons in Complex Decision-Making Tasks
Motherboard (09/24/15) Michael Byrne

Researchers at the Néel Institute in Grenoble and the National Institute of Information and Communications Technology in Tokyo have developed an alternative to using formal decision-making algorithms to solve problems involving the exploration-exploitation dilemma. The dilemma is an optimization problem that attempts to balance the costs of acquiring new information with those of exploiting existing information. A classic illustration is the multi-armed bandit problem, in which a gambler facing a row of slot machines must decide which ones to play. Algorithms can provide good answers to exploration-exploitation problems, but they are time-consuming and do not yield certainties. The researchers' alternative uses the polarization properties of photons to strike an ideal balance between exploration and exploitation. The researchers start out with a polarizing beam splitter set at 45 degrees, which results in even odds of registering a horizontally or vertically polarized photon, the two outcomes standing in for exploration and exploitation. Every time a horizontal or vertical photon is registered, the orientation is shifted slightly in that direction, letting the balance between exploration and exploitation adapt over time. The researchers say their solution could have applications ranging from wireless communications to online advertising.
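As a rough illustration of the idea, the following sketch simulates a two-armed bandit in which a "polarization angle" plays the role of the photon's 45-degree starting state: the probability of choosing each arm follows the cos²/sin² split, and the angle is nudged toward whichever choice just paid off. The payout probabilities, step size, and update rule are hypothetical stand-ins, not the researchers' exact physical scheme.

    import math
    import random

    # Hypothetical two-armed bandit: true payout probabilities, unknown to the agent.
    P_ARM = {"H": 0.3, "V": 0.7}

    angle = math.pi / 4      # start at 45 degrees: even odds of "H" and "V"
    STEP = 0.02              # how far the "polarization" shifts after each outcome
    wins = 0

    for trial in range(5000):
        p_h = math.cos(angle) ** 2                   # Malus-style cos^2 law for an "H" detection
        choice = "H" if random.random() < p_h else "V"
        reward = random.random() < P_ARM[choice]
        wins += reward
        # Nudge the angle toward the arm that just paid off (and away when it did not),
        # so the split between exploration and exploitation adapts over time.
        direction = -1 if choice == "H" else 1       # smaller angle favors H, larger favors V
        angle += STEP * direction * (1 if reward else -1)
        angle = min(max(angle, 0.0), math.pi / 2)    # keep the angle between 0 and 90 degrees

    print(f"win rate: {wins / 5000:.3f}, final P(H): {math.cos(angle) ** 2:.2f}")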


Brain-Computer Link Enables Paralyzed California Man to Walk
Reuters (09/24/15) Steve Gorman

An experimental brain-computer interface developed at the University of California, Irvine (UCI) has restored a paraplegic man's ability to walk without robotic assistance.  The system enables brain signals to bypass the spinal column and be rerouted to electrodes around the patient's knees in order to trigger controlled leg muscle movements.  An electroencephalogram (EEG) cap worn by the patient, Adam Fritz, read the signals and sent them to a computer to be processed by an algorithm that translated them into muscle stimulation commands.  The commands then were beamed directly to the leg electrodes. The experiment shows it is possible "to restore intuitive, brain-controlled walking after a complete spinal cord injury," says UCI professor Zoran Nenadic.  UCI researcher An Do notes the study's outcomes must be reproduced and significantly refined in other subjects if the technology is to be readied for clinical use.  The hoped-for refinement would involve shrinking the EEG element so it can be implanted within the patient's skull or brain, enabling clearer reception of neural signals and possibly the delivery of pressure sensation from sensors in the foot back to the brain.  Do says the study was based on earlier experiments in which brain signals were beamed to a robotic appendage attached to the patient's legs to generate movement.
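The processing chain described above, EEG signals in and stimulation commands out, can be pictured as a simple loop. The sketch below is a deliberately simplified, hypothetical stand-in (synthetic signal, a single band-power threshold), not the UCI team's decoding algorithm.

    import numpy as np

    FS = 256            # hypothetical EEG sampling rate (Hz)
    WINDOW = FS         # one-second analysis window
    MU_BAND = (8, 12)   # mu-rhythm band commonly associated with motor imagery
    THRESHOLD = 0.5     # hypothetical decision threshold (arbitrary units)

    def band_power(window: np.ndarray, band: tuple) -> float:
        """Estimate signal power within a frequency band via a simple FFT."""
        freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
        spectrum = np.abs(np.fft.rfft(window)) ** 2
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return float(spectrum[mask].mean())

    def decode(window: np.ndarray) -> str:
        """Map one second of EEG to a command (toy rule: mu-power drop means 'step')."""
        return "STEP" if band_power(window, MU_BAND) < THRESHOLD else "STAND"

    fake_eeg = np.random.randn(WINDOW) * 0.1         # synthetic stand-in for scalp EEG
    print(decode(fake_eeg))                          # a real system would drive the leg electrodes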


Learning Language by Playing Games
MIT News (09/23/15) Larry Hardesty

Massachusetts Institute of Technology (MIT) researchers have developed a computer system that learns how to play a text-based computer game, called Evennia, with no prior assumptions about how language works.  Although the system cannot complete the entire game, it can complete sections of it, suggesting it discovers the meanings of words during its training.  The researchers evaluated the system by comparing its performance to that of two others, which use variants of a technique standard in the field of natural-language processing.  The basic technique, called "bag of words," has a machine-learning algorithm base its outputs on the co-occurrence of words, while the variation, called the "bag of bigrams," looks for the co-occurrence of two-word units.  When playing the Evennia game, the MIT system outperformed systems based on both techniques.  The new system used deep learning and relied on two performance criteria: completion of a task in the Evennia game and maximization of a score that factored in several player attributes tracked by the game.  On both measures, the deep-learning system outperformed the bag-of-words and bag-of-bigrams systems. "I think...the general area of mapping natural language to actions is an interesting and important area," says Stanford University professor Percy Liang.
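For context, the "bag of words" and "bag of bigrams" baselines reduce a game's text description to counts of single words or of adjacent word pairs, discarding any further word order. A minimal sketch of both featurizers (not MIT's implementation) follows.

    from collections import Counter

    def bag_of_words(text: str) -> Counter:
        """Count each word, ignoring order entirely."""
        tokens = text.lower().split()
        return Counter(tokens)

    def bag_of_bigrams(text: str) -> Counter:
        """Count each adjacent two-word unit, capturing a little local order."""
        tokens = text.lower().split()
        return Counter(zip(tokens, tokens[1:]))

    state = "You are standing in a damp dungeon. A heavy door blocks the exit."
    print(bag_of_words(state).most_common(3))
    print(bag_of_bigrams(state).most_common(3))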


Virtual Human Built From More Than 5000 Slices of a Real Woman
New Scientist (09/23/15) Jessica Hamzelou

Researchers at the Worcester Polytechnic Institute (WPI) have used software to combine more than 5,000 slices of a female cadaver, imaged for the Visible Human Project, into the most detailed digitized reconstruction of a complete human body to date.  The Foundation for Research on Information Technologies in Society's Silvia Farcito says unlike virtual phantoms created from black-and-white magnetic resonance imaging and computed tomography scans, "sectioned color images allow you to distinguish virtually all the anatomical structures we are made of."  The model's high resolution makes it ideal for virtual experiments that would be too dangerous, lengthy, or costly for living subjects, according to Harvard Medical School surgeon Ara Nazarian. The model has been made freely available, and it can be tweaked using software already employed in labs worldwide. "Creating the phantom took a lot of work, but now anyone can run an experiment on their laptop," Nazarian says.  WPI's Sergey Makarov, who led the research, says the model is being used to test how safe transcranial direct current stimulation (tDCS) is for the brain.  Early outcomes suggest tDCS might generate larger electrical currents deeper in the brain and in white matter, which could have ramifications for its application and safety.


The Algorithm That's Hunting Ebola
IEEE Spectrum (09/24/15) Barbara Han

The Ebola outbreak that has ravaged West Africa for nearly two years is the latest example of the dangers posed by zoonotic diseases, illnesses that can be transmitted to humans from animals. One of the most difficult aspects of the fight against zoonotic diseases is identifying the animal species that can act as reservoirs for the diseases and pass them on to humans. Disease ecologist Barbara Han has been using computer modeling and machine learning to help predict which species may be reservoirs of zoonotic disease by sorting them based on a variety of characteristics. In a recent study conducted with colleagues at the University of Georgia, Han fed vast amounts of unstructured data on 80 percent of all known rat species into a machine-learning algorithm, which generated classification and regression trees based on the various traits in the data, such as diet, lifespan, and social structure. Tasked with sorting the rat species into reservoirs and non-reservoirs, the algorithm achieved 90-percent accuracy, and even identified some species that were later confirmed to be disease reservoirs. The algorithm found reservoir species tend to share traits such as short life cycles and large geographic ranges. Han and her colleagues currently are applying the same technique to bats, which also frequently act as disease reservoirs.
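A stripped-down version of that workflow, fitting a classification tree to species traits and reading off which traits separate reservoirs from non-reservoirs, might look like the sketch below. The trait table and labels are invented placeholders, not the study's data.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Columns: lifespan (years), litters per year, geographic range (millions of km^2).
    # All values are hypothetical, chosen only to illustrate the method.
    traits = np.array([
        [1.5, 4.0, 3.2],   # fast-living, wide-ranging profile
        [1.0, 5.0, 4.1],
        [6.0, 1.0, 0.4],   # long-lived, narrow-range profile
        [7.5, 0.5, 0.2],
        [2.0, 3.5, 2.8],
        [5.5, 1.5, 0.6],
    ])
    is_reservoir = np.array([1, 1, 0, 0, 1, 0])

    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(traits, is_reservoir)
    print(tree.feature_importances_)           # which traits drive the split
    print(tree.predict([[1.2, 4.5, 3.5]]))     # classify a new, hypothetical species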


IST Professor Part of Effort to Map Aurora Borealis Using Twitter
Penn State News (09/22/15) Stephanie Koons

Pennsylvania State University (PSU) researchers, in collaboration with U.S. National Aeronautics and Space Administration scientists, are working on the Aurorasaurus project, which is exploring how Twitter can be leveraged to help people track aurora sightings.  The researchers describe the project as a blend of space weather science, citizen science, and computer science.  The Aurorasaurus website includes a real-time map tracking Earth observations of auroras from numerous sources, including social media.  The project uses both satellite data and real-time reporting through Twitter as well as Web and mobile app submissions to develop a "now cast" model of when and where the aurora will be visible.  The study collates tweets and investigates Twitter's potential for both real-time analysis and mapping of an aurora.  The tweets used in the study were collected from September 2012 to April 2013, and the results suggest Twitter can provide specific details about an individual aurora, as well as accurate real-time indications of where and when it is visible.  The researchers found peaks in the number of aurora-related tweets frequently coincide with geomagnetic disturbances, and the number of daily aurora-related tweets strongly correlates with the strength of the aurora.
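The reported relationship between tweet volume and geomagnetic activity is, at bottom, a correlation between two daily time series. A toy version of that comparison, using fabricated example numbers rather than Aurorasaurus data, is shown below.

    import numpy as np

    # Hypothetical daily counts of aurora-related tweets and a daily geomagnetic
    # index (e.g., the maximum planetary Kp value) over the same ten days.
    daily_tweets = np.array([12, 8, 15, 110, 240, 95, 20, 14, 180, 60])
    daily_kp_max = np.array([2.0, 1.7, 2.3, 5.0, 6.7, 4.7, 2.7, 2.0, 5.7, 4.0])

    r = np.corrcoef(daily_tweets, daily_kp_max)[0, 1]
    print(f"Pearson correlation between tweet volume and Kp: {r:.2f}")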


Babies Time Their Smiles to Make Their Moms Smile in Return
UC San Diego News Center (09/23/15) Ioana Patringenaru

Researchers at the University of California, San Diego (UCSD), Olin College, and the University of Miami have used robotics to confirm that babies smile at people with a purpose. The project is part of a U.S. National Science Foundation effort to use robots to better understand human development. The researchers first collected data from a study of the behavior of 13 pairs of mothers and infants under the age of four months, including when and how often they both smiled at each other. They then ran this data through an optimal control data analysis, a tool often used in robotics, and found the babies appeared to be smiling intentionally as a way of getting the mothers to smile as much as possible. To test this finding, they used the analysis to program an infant-like robot, called Diego San, which has been used by UCSD's Machine Perception Laboratory to study infant behavior. The researchers then tested their program, as well as three other behavior patterns, on 32 UCSD undergraduates who interacted with Diego San. The researchers found that when Diego San acted like the babies in the study, the undergraduates acted like the mothers, smiling frequently while Diego San did not have to smile as much.


Light-Based Memory Chip Is the First Ever to Store Data Permanently
University of Oxford (09/21/15)

Researchers at Oxford University, the University of Münster, the Karlsruhe Institute of Technology, and the University of Exeter say they have developed the world's first entirely light-based memory chip. The researchers say the chip can store data permanently, and could help dramatically improve the speed of existing computers.  The chip uses the phase-change material Ge2Sb2Te5 (GST) to store data, and it can be made to assume an amorphous state or a crystalline state by using either electrical or optical pulses.  The researchers say the device uses a small section of GST on top of a silicon nitride ridge, or waveguide, to carry light.  They also showed how intense light pulses sent through the waveguide can carefully change the state of the GST.  When light with much lower intensity is sent, the difference in the state of the GST affects how much light is transmitted.  The researchers measured that difference to identify its state, and read off the presence of information in the device as either a 1 or 0.  "This is the first ever truly non-volatile integrated optical memory device to be created, and we've achieved it using established materials that are known for their long-term data retention--GST remains in the state that it's placed in for decades," says University of Oxford researcher Carlos Ríos.
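Functionally, each GST element behaves like a one-bit optical memory: a strong write pulse switches the material between its amorphous and crystalline phases, and a weak read pulse infers the stored bit from how much light gets through. The toy model below captures only that read/write logic; the transmission values are illustrative, not measured figures.

    class GSTCell:
        """Toy model of a phase-change optical memory cell (illustrative numbers)."""

        # Assumed fraction of a weak probe pulse transmitted in each phase;
        # crystalline GST absorbs more light than the amorphous phase.
        TRANSMISSION = {"amorphous": 0.8, "crystalline": 0.5}

        def __init__(self) -> None:
            self.phase = "amorphous"

        def write(self, bit: int) -> None:
            # A high-intensity pulse switches the GST patch's phase; the state
            # then persists with no power applied (non-volatile storage).
            self.phase = "crystalline" if bit else "amorphous"

        def read(self) -> int:
            # A low-intensity pulse probes the cell; the transmitted fraction
            # reveals the phase, which is decoded back into a bit.
            return 1 if self.TRANSMISSION[self.phase] < 0.65 else 0

    cell = GSTCell()
    cell.write(1)
    print(cell.read())   # -> 1, and the value survives with the light switched off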


4D Technology Allows Self-folding of Complex Objects
Georgia Tech News Center (09/21/15) John Toon

Georgia Institute of Technology (Georgia Tech) researchers, in collaboration with colleagues at the Singapore University of Technology and Design (SUTD), have demonstrated a four-dimensional printing technology that can create complex self-folding structures.  The technology could be used to create three-dimensional (3D) structures that sequentially fold themselves from components that had been flat or rolled into a tube for shipment.  These components could respond to stimuli such as temperature, moisture, or light in a way that is timed to create space structures, deployable medical devices, robots, toys, and other objects.  The technique involves the use of smart shape memory polymers (SMPs), which can remember one shape and change to another programmed shape when uniform heat is applied.  "We...used a spatially uniform temperature, which is easier to apply and then exploited the ability of different materials to internally control their rate of shape change through their molecular design," says Georgia Tech professor Jerry Qi.  The researchers demonstrated the approach in several examples, all of which required the precise control of the folding sequence of different parts of the structure to avoid collisions of the components during folding.  "We are now extending this concept of digital SMPs to enable printing of SMPs with dynamic mechanical properties that vary continuously in 3D space," says SUTD professor Martin L. Dunn.


First Anti-Fraud System to Use Existing Credit Card Readers
Lehigh University (09/21/15) Lori Friedman

A new technique developed by a Lehigh University researcher and colleagues promises to reduce credit card fraud.  Existing magnetic-stripe cards store confidential information in plain text, leaving it vulnerable to untrusted card readers and skimming devices.  The SafePay technology, by contrast, is designed to transform disposable credit card information into an electrical current that drives a magnetic card chip to simulate the behavior of a physical magnetic card.  The user would download a mobile banking application that communicates with a bank server.  During transactions, the mobile application acquires disposable credit card numbers from the bank server, creates a wave file, plays the file to generate electrical current, and then drives the magnetic card chip via an audio jack or Bluetooth.  The researchers say the magnetic card chip makes SafePay compatible with existing readers, and the mobile banking application automates the process, making it user-friendly. The approach is inexpensive and "will greatly relieve the burden of merchants in replacing card readers," says Lehigh University professor Yinzhi Cao.
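The core trick, turning a one-time card number into an audio waveform that can drive a magnetic emulator, can be illustrated with a simple two-frequency (F2F-style) bit encoding written out as a WAV file. This is a generic sketch of the technique, not the SafePay implementation or its data format.

    import struct
    import wave

    SAMPLE_RATE = 44100
    SAMPLES_PER_BIT = 128     # duration of one bit cell
    AMPLITUDE = 20000         # 16-bit PCM amplitude

    def f2f_waveform(bits: str) -> bytes:
        """Aiken biphase (F2F): every bit cell starts with a level flip, and a '1'
        adds an extra flip mid-cell, so ones carry twice the frequency of zeros."""
        level, samples = 1, []
        for bit in bits:
            level = -level                                # transition at the cell boundary
            half = SAMPLES_PER_BIT // 2
            samples += [level * AMPLITUDE] * half
            if bit == "1":
                level = -level                            # extra mid-cell transition
            samples += [level * AMPLITUDE] * (SAMPLES_PER_BIT - half)
        return struct.pack("<%dh" % len(samples), *samples)

    def write_wav(path: str, pcm: bytes) -> None:
        with wave.open(path, "wb") as wav:
            wav.setnchannels(1)
            wav.setsampwidth(2)                           # 16-bit samples
            wav.setframerate(SAMPLE_RATE)
            wav.writeframes(pcm)

    # Hypothetical disposable card number, encoded digit by digit as 4-bit values.
    disposable_number = "4111222233334444"
    bits = "".join(format(int(d), "04b") for d in disposable_number)
    write_wav("safepay_demo.wav", f2f_waveform(bits))     # play through the audio jack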


A UC3M Study Analyzes the 'Virality' of Twitter in Electoral Processes
Carlos III University of Madrid (Spain) (09/21/15)

Twitter reached a critical mass of Spanish users during the Catalan elections of 2010, prompting researchers at the Carlos III University of Madrid to study tweets during the most recent European elections in 2014.  The team detected two levels of conversation with few points of connection, according to Luz Congosto of the Telematic Engineering Department.  One conversation involved political marketing in tweets about the campaigns or the candidates, using structured messages with hashtags and mentions of the candidates' Twitter accounts. "The other [dialogue] is a citizens' conversation, which is freer, and mentions users by their names with a more natural and fresher language," Congosto says.  In addition to the differences in structure, there also was variation in themes: political marketing focused more frequently on the campaign, the electoral system, and corruption, while the citizens' conversation used more dismissive insults and was more influenced by the controversies that arose during the campaign. Citizens commented on news reports involving the various candidates and gave their opinions on how they performed during televised debates.  Although Twitter users tend to be young people who do not accurately represent society as a whole, Congosto says political parties and candidates should still listen to their conversation on the social network.


Machine Unlearning: How Can Information Be 'Forgotten' in the Age of Viral Data Spread?
The Stack (UK) (09/22/15) Martin Anderson

Building mechanisms into computer systems that enable them to forget data when users request it is the focus of a recent paper by Columbia University researchers Yinzhi Cao and Junfeng Yang. The paper proposes basic alterations to how user data enters and engages with analytical systems and data streams. Taken together, the raw data, computations, and derived data form a data propagation network the authors call the data's lineage. The system they describe prevents cloned data, both source and "calculated," from becoming orphaned from its affiliated permissions by interposing a "summation model" between it and the systems that access it. The learning systems that use the data do so via these proxies, and if the proxies are revised or deleted, the data itself becomes unavailable, and its replicated iterations in other systems are both unidentifiable and non-viable. Moreover, the summation methodology does not require a bottom-up rethinking of existing systems: the researchers applied it to real-world data-analysis systems by retrofitting 20 to 300 lines of code, less than 1 percent of each existing system.
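The summation idea can be made concrete with a toy example: a model trained only from running per-class sums can "unlearn" one record by subtracting that record's contribution from the sums and recomputing, with no retraining pass over the raw data. The sketch below illustrates the principle in miniature and is not the authors' system.

    import numpy as np

    class SummationModel:
        """Toy per-class mean classifier trained purely from running summations."""

        def __init__(self, n_features: int, n_classes: int) -> None:
            self.sums = np.zeros((n_classes, n_features))
            self.counts = np.zeros(n_classes)

        def learn(self, x: np.ndarray, label: int) -> None:
            self.sums[label] += x
            self.counts[label] += 1

        def unlearn(self, x: np.ndarray, label: int) -> None:
            # Forgetting a record only requires removing its contribution from
            # the summations; the raw record itself is never consulted again.
            self.sums[label] -= x
            self.counts[label] -= 1

        def predict(self, x: np.ndarray) -> int:
            means = self.sums / self.counts[:, None]      # derived entirely from the sums
            return int(np.argmin(np.linalg.norm(means - x, axis=1)))

    model = SummationModel(n_features=2, n_classes=2)
    model.learn(np.array([0.0, 0.1]), label=0)
    model.learn(np.array([1.0, 0.9]), label=1)
    model.learn(np.array([5.0, 5.0]), label=0)            # a record the user later deletes
    model.unlearn(np.array([5.0, 5.0]), label=0)          # model reverts as if it never saw it
    print(model.predict(np.array([0.1, 0.0])))            # -> 0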


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.

