Welcome to the October 3, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.
Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.
HEADLINES AT A GLANCE
How to Steal the Mind of an AI: Machine-Learning Models Vulnerable to Reverse Engineering
The Register (UK) (10/01/16) Thomas Claburn
Machine-learning (ML) models can be reverse engineered, and basic safeguards do little to mitigate such attacks, according to a paper presented in August at the 25th Annual Usenix Security Symposium in Austin, TX, by researchers from the Swiss Federal Institute of Technology in Lausanne, Cornell University, and the University of North Carolina at Chapel Hill. The investigators exploited the fact that such models accept arbitrary queries and return predictions accompanied by percentages indicating the confidence of correctness. The researchers demonstrated "simple, efficient attacks that extract target ML models with near-perfect fidelity for popular model classes, including logistic regression, neural networks, and decision trees." The team successfully tested the attack against BigML and Amazon Machine Learning. Cornell Tech professor Ari Juels says attack mitigation may be possible, but he suspects "solutions that don't degrade functionality will need to involve greater model complexity." Although many ML models have been open sourced to encourage users to improve their code and implement the models on the developers' cloud infrastructure, other models rely on confidentiality. The researchers note ML reverse engineering can violate privacy, such as by making it easier to identify images of people used to train a facial-recognition system.
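The equation-solving flavor of the attack is easy to illustrate for logistic regression: because the logit of a returned confidence score is linear in the model's weights, d + 1 queries suffice to recover a d-feature model exactly. The sketch below is illustrative only; the victim_predict API and the secret weights are hypothetical stand-ins, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "victim": a logistic regression whose weights are secret.
SECRET_W = np.array([1.5, -2.0, 0.7])
SECRET_B = 0.3

def victim_predict(x):
    """Prediction API: returns a confidence score, as confidence-revealing
    ML services do."""
    return 1.0 / (1.0 + np.exp(-(SECRET_W @ x + SECRET_B)))

def extract_logistic_regression(predict, dim):
    """Recover weights and bias from dim + 1 confidence-revealing queries.

    logit(p) = w . x + b is linear in (w, b), so dim + 1 probe queries
    give a solvable linear system -- the equation-solving attack the
    paper describes for models that expose confidence values.
    """
    X = rng.normal(size=(dim + 1, dim))          # random probe inputs
    logits = np.array([np.log(p / (1 - p)) for p in (predict(x) for x in X)])
    A = np.hstack([X, np.ones((dim + 1, 1))])    # last column multiplies b
    solution = np.linalg.solve(A, logits)
    return solution[:-1], solution[-1]           # (w_hat, b_hat)

w_hat, b_hat = extract_logistic_regression(victim_predict, dim=3)
```

Four queries recover the three weights and the bias to machine precision, which is what "near-perfect fidelity" means for this model class.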
Drone Learns 'to See' in Zero-Gravity
Delft University of Technology (09/27/16)
A small drone has learned how to determine distances using only one eye during an experiment aboard the International Space Station (ISS). The Synchronized Position Hold Engage and Reorient Experimental Satellite drone started navigating inside the ISS while recording stereo-vision information on its surroundings from its two camera "eyes," and then learned about the distances to walls and nearby obstacles. When the stereo-vision camera was switched off, the drone could start autonomous exploration using only one camera. Humans effortlessly estimate distances with one eye, but it is not clear how they learn this capability or how robots should learn to do the same. Machine learning is not considered a reliable approach to autonomy in space applications, but this method, based on the self-supervised learning paradigm, proved highly reliable and improves drone autonomy. The self-supervised learning algorithm used in the experiment was tested on quadrotors at Delft University of Technology's CyberZoo in the Netherlands. Participants in the experiment say the finding is a further step in the quest for truly autonomous space systems. The experiment was designed in collaboration with the Advanced Concepts Team of the European Space Agency, the Massachusetts Institute of Technology, and Delft University's Micro Air Vehicles lab.
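The self-supervised workflow can be caricatured in a few lines: while the stereo camera is on, its distance estimates serve as training labels for a monocular predictor, which then operates alone once stereo is switched off. The toy sketch below uses synthetic "appearance features" and a linear model, both invented for illustration; the drone's actual learning algorithm is far richer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical monocular appearance features per frame (e.g., texture
# density, apparent obstacle size) and the distance the stereo camera
# reports for each frame while both "eyes" are active.
N_FRAMES, N_FEATURES = 200, 4
features = rng.normal(size=(N_FRAMES, N_FEATURES))
stereo_coef = np.array([0.8, -0.3, 1.2, 0.5])      # unknown to the learner
stereo_dist = features @ stereo_coef + rng.normal(scale=0.01, size=N_FRAMES)

# Self-supervised phase: fit a monocular predictor against the
# stereo-provided labels -- no human annotation involved.
coef, *_ = np.linalg.lstsq(features, stereo_dist, rcond=None)

# Stereo switched off: estimate distance from one camera's features alone.
new_frame = rng.normal(size=N_FEATURES)
mono_estimate = new_frame @ coef
```

The point of the design is that the supervisory signal comes from the robot's own richer sensor, so the monocular model can keep being retrained in flight without external labels.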
A Combination of Machine Learning and Game Theory Is Being Used to Fight Elephant Poaching in Uganda
Quartz (10/03/16) Ananya Bhattacharya
Technology that integrates machine learning and game theory, called Protection Assistant for Wildlife Security (PAWS), is being tested in Uganda as a way to fight elephant poachers. PAWS is designed to enable researchers to predict poacher attacks so they can advise rangers on what areas to patrol. To generate the predictions, researchers analyzed 12 years' worth of ranger-collected data supplied by the Wildlife Conservation Society. University of Southern California professor Milind Tambe says the data is sufficient to enable a machine-learning algorithm to make intelligent guesses about future poaching strikes. "We want to randomize our patrols because we ourselves don't want to become predictable to the poachers," he says. Game theory is thus tapped to suggest routes that will not be easily predictable, Tambe notes. Ugandan rangers have used PAWS to locate 10 antelope traps and elephant snares in the past month, which Reuters says is "a far better score card than they could usually expect." Tambe notes his artificial intelligence-game theory solution has been used by the U.S. Coast Guard, the Transportation Security Administration, the Federal Air Marshals Service, the Los Angeles Sheriff's Department, and other organizations to randomize their patrols since the early 2000s.
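The game-theoretic randomization idea can be sketched with a toy security game: the defender chooses patrol frequencies so that every zone looks equally unattractive to a poacher who best-responds to the patrol schedule. The model and numbers below are invented for illustration and are far simpler than PAWS itself.

```python
import numpy as np

# Hypothetical animal-value estimates for three patrol zones; PAWS derives
# such quantities from ranger data, these numbers are made up.
values = np.array([5.0, 3.0, 2.0])

def randomized_patrol(values):
    """Maximin patrol frequencies for a toy security game.

    Model: a poacher attacking zone j expects values[j] * (1 - x[j]),
    where x[j] is the probability zone j is patrolled and sum(x) = 1.
    The defender's optimal mixed strategy equalizes that payoff across
    zones, so no zone is a predictable soft target -- the core idea
    behind randomizing patrols. (Assumes the equalized payoff c does
    not exceed the smallest value, which holds for these numbers.)
    """
    n = len(values)
    c = (n - 1) / np.sum(1.0 / values)   # equalized poacher payoff
    return 1.0 - c / values

x = randomized_patrol(values)
```

Here the high-value zone gets patrolled most often, but every zone is visited with some probability, so a poacher watching the rangers cannot find a guaranteed-safe target.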
Contract Expiration to End U.S. Authority Over Internet IP Addresses
The Washington Post (09/30/16) Craig Timberg
Saturday, Oct. 1, 2016, officially marked the end of the U.S. government's control over the Internet's most basic functions with the expiration of its contract with the nonprofit Internet Corporation for Assigned Names and Numbers (ICANN). ICANN's executives and board of directors will now defer to what the organization calls the Internet's "stakeholder community"--a loosely defined combination of corporate interests, government officials, activists, and experts diffused across four international entities. The withdrawal of U.S. governance over Internet Protocol addresses has provoked accusations the Obama administration has ceded the last remnants of a critical oversight mechanism. However, ICANN vice president Christopher Mondini says the U.S. government's oversight "was more symbolic than practical," noting the intent has always been to allow the contract to expire. However, ICANN advisory board member Garth Bruen has reservations, noting "there's no checks and balances anymore." Supporters of ending the U.S. government's role over ICANN, which include most major technology and telecommunications companies, say the many interests will work together to keep the Internet stable and free. "There is absolutely no way that this is going to imperil freedoms," says the Center for Democracy and Technology's Matthew Shears. "There is absolutely no way that this is going to allow Russia or Iran or anybody to take control of the Internet."
White House to Bolster STEM Education, Close Skills Gap
CIO (09/29/16) Kenneth Corbin
The Obama administration has launched a new advanced placement (AP) computer science (CS) course as part of an interdisciplinary approach to improve science, technology, engineering, and math (STEM) education for underrepresented students and address a shortage of skilled workers. White House Office of Science and Technology Policy adviser Ruthe Farmer says the administration has rolled out efforts such as TechHire and the Computer Science for All initiative as part of its push to inject an additional 100,000 STEM teachers into schools over the next 10 years. Despite that, Farmer acknowledges "we need to really, really level up our focus on underrepresented students--rural students, students with disabilities, students of color, and women's participation." The administration notes CS classes are lacking in K-12 curricula, as an estimated 75 percent of schools last year did not offer a CS class with a programming unit. The new AP course, developed in conjunction with the U.S. National Science Foundation, seeks to close this gap by combining creative aspects of programming, cybersecurity, and ways computing can help meet practical challenges. IBM's Stanley Litow says across the U.S., the key factor in the lack of STEM education diversity is income. "This isn't just about K to 12, this is about making sure that people have the skills to take the jobs that are available in the United States," he says.
Algorithm Could Enable Visible-Light-Based Imaging for Medical Devices, Autonomous Vehicles
MIT News (09/29/16) Larry Hardesty
Researchers in the Massachusetts Institute of Technology's Media Lab have created a method for retrieving visual information from scattered light, which they say could lead to medical-imaging systems that use visible light or computer-vision systems that function in poor visibility. The researchers beamed a laser through a sheet of plastic with slits cut through it in a certain configuration, and then through a 1.5-centimeter "tissue phantom" that imitated the optical properties of human tissue. Light scattered by the phantom was captured by a high-speed camera that measured the light's arrival time. An algorithm used this information to rebuild a precise image of the configuration cut into the plastic "mask." The system's laser discharges ultrashort bursts of light, and the high-speed camera can differentiate between arrival times of different groups of photons. Earlier methods have tried image reconstruction using only unscattered photons, but the new technique, known as all-photons imaging, uses the complete optical signal. By determining how the image intensity changes in time, the algorithm estimates the extent of light scattering, and then considers each pixel of each successive frame and calculates the likelihood that it corresponds to any given point in the visual field. On a frame-by-frame basis, the algorithm predicts the image's appearance.
Researchers Make Progress Toward Computer Video Recognition
IDG News Service (09/29/16) Agam Shah
Google researchers are working to advance video-recognition technology, and Rajat Monga of the Google Brain team attributes its recent gains to progress in deep-learning models. "With the sequence of frames in each video that are related to each other, it provides a much richer perspective of the real world, allowing the models to create a [three-dimensional] view of the world, without necessarily needing stereo vision," Monga says. He concedes true human-like vision using video recognition is "still far away" because computers can only recognize some, but not all, objects in images. Computers need to be educated to recognize images in deep-learning models, and big datasets can be used to cross-reference items in pictures. Monga says Google researchers currently are studying how deep learning could help robots with hand-eye coordination and learning via predictive video. But he notes although deep learning is improving thanks to faster computing, algorithms, and datasets, more progress is required. Monga says the emergence of faster hardware and custom chips such as Google's machine-learning Tensor Processing Unit has helped advance deep learning. Low-level calculations on graphics-processing units are fueling most deep-learning models today, but faster hardware is expected to accelerate learning and deduction.
Wireless, Freely Behaving Rodent Cage Helps Scientists Collect More Reliable Data
Georgia Tech News Center (09/28/16) Jason Maderer
Georgia Institute of Technology (Georgia Tech) researchers have developed a wirelessly energized cage for performing experiments on rodents. The system does not use interconnect wires or batteries to power electronic devices and sensors, which are typically used for scientific experiments. Called Energized Cage (EnerCage), the system is wrapped with carefully oriented strips of copper foil that inductively power the cage and the electronics implanted in, or attached to, one or more animal subjects. The researchers say the system can run indefinitely and collect data without human intervention. EnerCage makes use of Kinect video game technology, a high-definition camera, an infrared depth camera, and four microphones to track and record animal behavior. The team's algorithms determine whether the animal is standing, sitting, sleeping, grooming, eating, drinking, or doing nothing. "We're hoping to reduce the expensive costs of new drug and medical device development by allowing machines to do mundane, repetitive tasks now assigned to humans," says Georgia Tech professor Maysam Ghovanloo. The team also is working with Emory University to improve the clinical effectiveness of deep brain stimulation.
Quantum Computing Advances with Researchers' Control of Entanglement
Researchers from the University of Tokyo in Japan say they have used laser light to develop a precise, continuous control technology that sustains the lifetime of quantum bits (qubits) 60 times longer than previous methods. The researchers say their new technique allows for the continuous creation of quantum behavior, entangling more than 1 million different physical streams, and they note the count was limited only by available data-storage space. The researchers say entangled quantum particles are a resource for quantum information processing, and harnessing them could produce a new era of information technology. "The most difficult aspect of this achievement was continuous phase locking between squeezed light beams, but we have solved the problem," says University of Tokyo researcher Akira Furusawa. Going forward, the researchers want to develop two- and three-dimensional lattices of the entangled state. "This will enable us to make topological quantum computing, which is very robust quantum computing," Furusawa says.
Connecting Data Scientists With Regional Challenges
National Science Foundation (09/28/16) Aaron Dubrow
The U.S. National Science Foundation (NSF) on Wednesday announced $10 million in awards to 10 "Big Data Spokes" projects to investigate subjects identified by the four Big Data Regional Innovation Hubs (BD Hubs), which represent consortia from the Midwest, Northeast, Southern, and Western U.S. The agency also is allocating another $1 million for planning efforts and Early-Concept Grants for Exploratory Research awards supporting the U.S.'s big data innovation ecosystem. "The BD Spokes advance the goals and regional priorities of each BD Hub, fusing the strengths of a range of institutions and investigators and applying them to problems that affect the communities and populations within their regions," says NSF's Jim Kurose. The BD Spokes will convene and coordinate regional research efforts, with a focus on organizing stakeholders, engaging end users and solution providers, and forming multidisciplinary teams. One of the BD Spokes projects is a multi-institutional initiative to develop a data-licensing approach and automated platform so individuals and organizations can share data while complying with licensing strictures. Another project will explore using data from diverse sources, including fitness trackers and environmental monitors, to enhance patient care.
Optimization Technique Identifies Cost-Effective Biodiversity Corridors
Georgia Tech News Center (09/27/16) John Toon
Georgia Institute of Technology (Georgia Tech) researchers used a new computer-based method to identify cost-effective ways to simultaneously connect isolated populations of wolverines and grizzly bears. They say the approach could revolutionize the process of designing animal habitat corridors for rare, threatened, and endangered species living in protected areas. Georgia Tech professor Bistra Dilkina and colleagues used an optimization program based on mixed-integer programming that can consider many more options than a human could alone. The researchers say the approach produced optimized corridor planning that was within 14 percent and 11 percent of the best level of connectivity for wolverines and grizzly bears, respectively, while saving 75 percent of the cost. The researchers note the optimization program uses data on "resistance to movement" for each species, the cost of acquiring tracts of land, and the location of protected areas. They say the technique could have broad applicability for providing connections between protected areas at multiple scales, from evaluating local easement options to developing national strategies. The team wants to produce a computer-based tool that can be made available to conservation organizations and biologists.
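The underlying optimization problem can be illustrated with a toy version: choose the cheapest set of land parcels whose purchase connects two reserves. The brute-force search below stands in for the mixed-integer solver, which explores the same space far more intelligently; all parcel names, costs, and adjacencies are hypothetical.

```python
from itertools import combinations

# Toy landscape: reserves "A" and "B" plus four purchasable parcels.
# Edges list adjacent tracts; costs are invented for illustration.
edges = [("A", "p1"), ("p1", "p2"), ("p2", "B"),
         ("A", "p3"), ("p3", "p4"), ("p4", "B"), ("p1", "p4")]
cost = {"p1": 4, "p2": 5, "p3": 2, "p4": 3}

def connected(owned):
    """Breadth-first search from A over reserves plus purchased parcels."""
    nodes = {"A", "B"} | owned
    frontier, seen = ["A"], {"A"}
    while frontier:
        u = frontier.pop()
        for a, b in edges:
            for x, y in ((a, b), (b, a)):
                if x == u and y in nodes and y not in seen:
                    seen.add(y)
                    frontier.append(y)
    return "B" in seen

def cheapest_corridor():
    """Try every parcel subset; a MIP solver prunes this search smartly."""
    best = None
    parcels = sorted(cost)
    for r in range(len(parcels) + 1):
        for combo in combinations(parcels, r):
            if connected(set(combo)):
                total = sum(cost[p] for p in combo)
                if best is None or total < best[0]:
                    best = (total, set(combo))
    return best

best_cost, best_parcels = cheapest_corridor()
```

In this toy landscape the route through p3 and p4 (total cost 5) beats the p1-p2 route (cost 9), mirroring how the real tool trades connectivity against land-acquisition cost. A species-specific "resistance to movement" term would enter as an extra per-parcel penalty in the objective.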
Chameleon: A Bridge to XSEDE's 'Bridges'
Texas Advanced Computing Center (09/26/16) Faith Singer-Villalobos
The Texas Advanced Computing Center (TACC) and the University of Chicago, with funding from the U.S. National Science Foundation (NSF), last year developed Chameleon, TACC's first system dedicated to cloud computing for computer science research. The $10-million system is an experimental testbed for cloud architecture and applications, specifically for the computer science domain. "Cloud computing infrastructure provides great flexibility in being able to dynamically reconfigure all or parts of a computing system so that it can best suit the needs of the applications and users," says Pittsburgh Supercomputing Center (PSC) researcher Derek Simmel. However, he notes this flexibility comes with many options for monitoring and managing the resources, which is where having an experimental facility such as Chameleon is helpful. Simmel works on PSC's Bridges platform, an NSF-funded Extreme Science and Engineering Discovery Environment resource for empowering new research communities and bringing together high-performance computing and big data. "Keeping up with new developments and changes in the way one operates all the component cloud services is a considerable burden to cloud system operators--the learning curve remains fairly steep, and all the expertise required for a traditional computing facility needs to be available for cloud-provisioned systems as well," Simmel says.
Abstract News © Copyright 2016 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: firstname.lastname@example.org