Welcome to the April 29, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.
Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.
HEADLINES AT A GLANCE
ITIF Report Aims to Sway Congress, Promote National HPC Agenda
HPC Wire (04/28/16) John Russell
A new report from the Information Technology and Innovation Foundation (ITIF) makes policy recommendations for a U.S.-led high-performance computing (HPC) push that also supports the National Strategic Computing Initiative (NSCI). "As competitor nations rapidly scale up their investments in and applications of high-performance computing, America will need concerted public and private collaboration and investment to maintain its leading position in both HPC production and application," the report advises. The ITIF report offers examples of HPC's impact on U.S. industry while also summarizing major global HPC leadership efforts, such as China's plan to put two 100-petaflop computers into operation this year. Besides infusing some much-needed energy into NSCI, the report calls on Congress to reform export control regulations to match the circumstances of current HPC systems. In addition, ITIF urges the Obama administration to continue making technology transfer and commercialization initiatives a priority for U.S. national laboratories, to stress HPC in federal worker training and retraining programs, and to emphasize HPC in relevant Manufacturing Extension Partnership engagements. IBM's David Turek suggests tweaking some of the report's recommendations. He says U.S. industrial leaders should participate to "establish the list of industry grand challenges so there is a direct linkage between the activities of NSCI and its impact on competitiveness."
Nvidia GPU-Powered Autonomous Car Teaches Itself to See and Steer
Network World (04/28/16) Steven Max Patterson
An Nvidia engineering team built an autonomous car that combines a camera, a Drive-PX embedded computer, and 72 hours of training data. The researchers trained a convolutional neural network (CNN) to map raw pixels from the camera directly to steering commands. The training system used three cameras and two computers to capture three-dimensional video images, along with the corresponding steering angles, from a vehicle driven by a human. The human driver's steering angles served as the training signal: the researchers watched for changes in the steering angle as the CNN mapped the human driving patterns onto the bitmap images recorded by the cameras, learning internal representations of the processing steps of driving. The open source machine-learning framework Torch 7 was used to train the network, which then autonomously recognized the road, other vehicles, and obstacles in order to steer the test vehicles. During training, the steering directions the CNN produced in simulated response to the 10-frames-per-second images captured from the human-driven car were compared to the human steering angles, teaching the system to see and steer. On-road testing demonstrated CNNs can learn the tasks of lane detection and road following without manually and explicitly deconstructing and classifying road or lane markings, semantic abstractions, path planning, and control.
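Nvidia's system uses a deep CNN, but the end-to-end idea (learn a direct mapping from camera pixels to a steering angle, with the human driver's recorded angles as the training signal) can be sketched with a toy model. Everything below is illustrative, not Nvidia's code: the "frames" are random arrays standing in for camera images, and a linear model replaces the CNN so the training loop fits in a few lines.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the camera data: 200 tiny 8x8 grayscale "frames" whose
# true steering angle depends on a hidden pixel pattern. The human driver's
# angles are the supervision signal, exactly as in the end-to-end setup.
n, h, w = 200, 8, 8
frames = rng.normal(size=(n, h * w))
true_weights = rng.normal(size=h * w)
human_angles = frames @ true_weights  # "training signal" from the human driver

# End-to-end training: adjust the model so its predicted angles match the
# human angles, with no hand-coded lane detection in between.
weights = np.zeros(h * w)
lr = 0.1
for _ in range(1000):
    pred = frames @ weights
    grad = frames.T @ (pred - human_angles) / n  # mean-squared-error gradient
    weights -= lr * grad

# After training, predictions on unseen frames track the human angles.
held_out = rng.normal(size=(10, h * w))
err = np.max(np.abs(held_out @ weights - held_out @ true_weights))
print(err)
```

The real system differs in scale, not in shape: the CNN's convolutional layers replace the weight vector, and the loop runs over recorded driving video instead of random arrays.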
NIST Kicks Off Effort to Defend Encrypted Data From Quantum Computer Threat
NIST News (04/28/16) Chad Boutin
A new report from the U.S. National Institute of Standards and Technology (NIST) lays out a long-term plan for protecting encrypted data from decryption by quantum computers. "If and when someone does build a large-scale quantum computer, we want to have algorithms in place that it can't crack," says NIST mathematician Dustin Moody. He notes a key near-term recommendation is for organizations to concentrate on "crypto agility," the ability to rapidly replace whatever algorithms they are using with newer and safer ones. Moody says the creation of these new algorithms is the longer-term objective. He says the initiative will involve open public collaboration, to be formally rolled out in the next several months, and will resemble past cryptographic competitions. "It will be a long process involving public vetting of quantum-resistant algorithms," Moody says. "And we're not expecting to have just one winner. There are several systems in use that could be broken by a quantum computer--public-key encryption and digital signatures, to take two examples--and we will need different solutions for each of those systems."
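"Crypto agility" is a software-design property, and a minimal sketch makes it concrete: route every sign/verify call through a named algorithm looked up in a registry, so migrating to a future quantum-resistant scheme means registering a new entry and changing one configuration string rather than rewriting callers. The registry names and HMAC stand-ins below are made up for the example; the real migration concerns public-key encryption and signatures.

```python
import hashlib
import hmac
import os

# Agile design: callers never name a primitive directly; they name an
# algorithm identifier that is resolved through this registry.
REGISTRY = {
    "hmac-sha256": lambda key, msg: hmac.new(key, msg, hashlib.sha256).digest(),
    "hmac-sha3-256": lambda key, msg: hmac.new(key, msg, hashlib.sha3_256).digest(),
}

def sign(alg: str, key: bytes, msg: bytes) -> bytes:
    return REGISTRY[alg](key, msg)

def verify(alg: str, key: bytes, msg: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids leaking tag bytes via timing.
    return hmac.compare_digest(REGISTRY[alg](key, msg), tag)

key = os.urandom(32)
tag = sign("hmac-sha256", key, b"hello")
assert verify("hmac-sha256", key, b"hello", tag)

# Swapping algorithms is a one-line configuration change, not a rewrite:
tag2 = sign("hmac-sha3-256", key, b"hello")
assert verify("hmac-sha3-256", key, b"hello", tag2)
```

An organization built this way can drop in a vetted post-quantum algorithm the day NIST standardizes one, which is exactly the near-term posture the report recommends.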
Elon Musk's Artificial Intelligence Group Opens a 'Gym' to Train AI
Popular Science (04/27/16) Dave Gershgorn
The OpenAI Gym platform is a collaborative effort between entrepreneur Elon Musk, Y Combinator's Sam Altman, and former Google research scientist Ilya Sutskever to perform ambitious artificial intelligence (AI) research while publishing and open-sourcing almost all of their output. The founders say the goal is to establish a de facto standard for benchmarking certain types of AI algorithms. The main thrust of the effort is to promote algorithms that excel at generalization, and thus are highly versatile. "It's not just about maximizing score; it's about finding solutions which will generalize well," says OpenAI Gym's submission documentation. "Solutions which involve task-specific hardcoding or otherwise don't reveal interesting characteristics of learning algorithms are unlikely to pass review." The platform concentrates on reinforcement learning, in which a well-performing algorithm is rewarded, while one that fails is incentivized to try a different approach to the task at hand. The concept is that researchers build their algorithms and then put them in various settings. They can then view how their algorithm performs in an objective test, make tweaks, and publish their benchmarks for the rest of the community to see.
Twitter's Artificial Intelligence Knows What's Happening in Live Video Clips
Technology Review (04/28/16) Will Knight
A team of artificial intelligence (AI) researchers at Twitter has developed an algorithm that can instantly recognize what is happening in live video. The algorithm can tell, for example, whether the star of a clip is a person playing guitar, someone demonstrating a power tool, or a cat performing for viewers. The AI team, known as Cortex, effectively built a custom supercomputer using only graphics processing units (GPUs) to perform the video classification and serve up the results. Cortex uses deep learning to recognize the activity in clips. The team wants to develop a sophisticated recommendation system to help filter and curate all sorts of content shared through the service, based on a user's previous activity. The video-recognition technology has not yet made its way into any of Twitter's products, but the team is testing it on Periscope, an app owned by Twitter that lets users transmit live video from their smartphones. The technology could help Twitter tailor ads to content more efficiently and filter out copyrighted material and undesirable content such as violence.
Aspen Elementary, Los Alamos Middle School Students Take Top Award in 26th New Mexico Supercomputing Challenge
Los Alamos National Laboratory News (04/27/16) Steve Sandoval
A team of elementary and middle school students won first place in the New Mexico Supercomputing Challenge for their project, "Solving the Rubik's Cube 2.0." The students created a three-dimensional simulation of a Rubik's cube, as well as an implementation of a cube-solving algorithm. "The goal of this year-long event is to teach student teams how to use powerful computers to analyze, model, and solve real-world problems," says Supercomputing Challenge executive director David Kratzer. The second-place project examined techniques to enable efficient computer play of Yavanchlan, a variation based on the board game Yavalanchor. The third-place project aggregated data from thousands of weather stations around the world, which were then processed and analyzed with a Python program designed to find climate trends. The Supercomputing Challenge helps New Mexico high school graduates pursue college degrees in science, technology, engineering, and mathematics. In addition, the challenge strengthens the state's information-based economy by promoting computational thinking, helps middle and high school students meet rigorous academic standards, and fosters collegiality and professional development among the community of educators.
Security Breach in Israeli-Made Waze Lets Hackers Stalk Users
Times of Israel (04/27/16) Stuart Winer
University of California, Santa Barbara professor Ben Zhao and colleagues have demonstrated a way to breach the popular Waze road navigation application. The researchers were able to reverse-engineer the code Waze uses to communicate with users' cellphones by diverting the traffic of a phone running the app so that it communicated directly with their own computers. The team used the code to write software that could send instructions to the Waze servers, filling the system with virtual "ghost cars," which could be used to create fake traffic jams or monitor real drivers located near the virtual vehicles. Waze issued an update after receiving a warning from the team in 2014, but the researchers say they were still able to track users. "Anyone could be doing this [tracking Waze users] right now," Zhao says. "It's really hard to detect." Waze has an estimated 50 million users worldwide, and Zhao warns the method could be used on other social networking apps that rely on users sharing information. "It's a massive privacy problem," Zhao says. Waze users who select the option of going "invisible" are not vulnerable to the attack.
Paralyzed By Indecision? Forget Therapy. You Need an Algorithm
Berkeley News (04/26/16) Yasmin Anwar
University of California, Berkeley cognitive scientist Tom Griffiths and computer scientist Brian Christian recently published "Algorithms to Live By: The Computer Science of Human Decisions," a book that combines computer science and human intuition to argue a successful algorithm is one that focuses on what matters, minimizes regret, and does not waste time. Although algorithms are typically associated with computers, humans have used them for thousands of years to lay out a series of steps to solve a problem or create something. "We wanted to reclaim the notion that humans use algorithms," Griffiths says. The book cites examples of decision-making quandaries faced by a range of famous individuals throughout human history. One helpful and mathematically proven strategy is the "37-percent rule," a solution to a kind of "optimal stopping" problem. For example, if an individual plans to search for an apartment for a month, the 37-percent rule dictates they should spend 11 days exploring what is out there without making a decision; the searcher then should put down a deposit on the first place that beats everything previously seen. In their research for the book, Griffiths and Christian interviewed nearly 100 experts in various fields, and drew on the research of hundreds more.
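The 37-percent rule is easy to verify by simulation. The classical optimal-stopping result (often called the secretary problem) says that if options arrive in random order and rejections are final, observing the first ~37 percent without committing and then taking the first option better than everything seen so far picks the single best option about 37 percent of the time, which no other strategy beats. The sketch below is a generic simulation of that result, not code from the book.

```python
import random

def search(ranks, explore_frac=0.37):
    """Explore the first 37% without committing, then take the first
    option that beats everything seen during exploration."""
    cutoff = int(len(ranks) * explore_frac)
    best_seen = max(ranks[:cutoff]) if cutoff else float("-inf")
    for r in ranks[cutoff:]:
        if r > best_seen:
            return r
    return ranks[-1]  # ran out of options: stuck with the last one

random.seed(1)
n, trials, hits = 100, 20000, 0
for _ in range(trials):
    ranks = list(range(n))
    random.shuffle(ranks)       # options arrive in random order
    if search(ranks) == n - 1:  # did we pick the single best option?
        hits += 1
print(hits / trials)  # theory predicts roughly 1/e, about 0.37
```

The 11 days in the apartment example is just this cutoff applied to a 30-day search: 30 x 0.37 is about 11.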
How Minecraft Is Helping Children With Autism Make New Friends
New Scientist (04/27/16) Aviva Rutkin
University of California, Irvine Ph.D. student Kate Ringland has studied a version of the online building game Minecraft run by Stuart Duncan, a Web developer in Canada, which was developed to help children with autism learn social skills and make friends. Duncan set up a server in 2013 to run "Autcraft," thinking he would attract 10 to 20 children. However, within the first few days he received hundreds of requests to join, and three years later, the community boasts nearly 7,000 members. Duncan, who ran a popular blog about his own experiences with autism, got the idea after parents with autistic children started telling him their kids were excited about Minecraft, but were being bullied by other players. To join Autcraft, players must fill out an application, and once approved they can roam the landscape and build their own structures, participate in group games, or build things as a team. Some rules must be adhered to, such as not harassing other players or destroying their property. Ringland has spent 60 hours inside the virtual world, watching how the kids play and chat with one another. "This is a great way for them to play a game they love, but also have a social experience," she says. Ringland will present her work in May at the ACM Conference on Human Factors in Computing Systems (CHI 2016) in San Jose, CA.
BU Researchers Investigate World's Oldest Human Footprints With Software Designed to Decode Crime Scenes
Bournemouth University (United Kingdom) (04/26/16)
Bournemouth University (BU) researchers have developed a software technique to uncover "lost" tracks at the world's oldest human footprint site in Laetoli, Tanzania. The software revealed new information about the shape of the tracks and found hints of a previously undiscovered fourth track-maker at the site. The software was originally developed by BU's Matthew Bennett and Marcin Budka in 2015 for forensic footprint analysis. Their research is focused on developing techniques to enable modern footwear evidence to be captured in three dimensions and digitally analyzed to improve crime scene investigations. The researchers repurposed the software to uncover ancient footprints at Laetoli, which reveal much about the individuals who made them, such as their body mass, height, and walking speed. "The techniques we have been developing for use at modern crime scenes can also reveal something new about these ancient track sites," Bennett says. BU researchers also are developing digital methods for the analysis of modern footprint evidence. "As well as making new discoveries about our early ancestors, we can apply this science to help modern society combat crime," the researchers say.
Internet Video Portals Do Not Control Views Well
Charles III University of Madrid (Spain) (04/25/16)
Researchers at Charles III University of Madrid (UC3M), Imdea Networks, NEC Labs Europe, and the Polytechnic University of Turin (Polito) have found the majority of video-streaming portals on the Internet, with the exception of YouTube, have unsophisticated systems for controlling fraud in the number of views, and some appear to lack such systems entirely. "YouTube has a unique system for detecting fraud that is relatively efficient, but it has some inconsistencies," says Ruben Cuevas, a professor in UC3M's Department of Telematic Engineering. The researchers' method enabled them to play the role of every agent involved in the fraud: the attacker, the poster of the video, and the advertiser who pays to place ads in the videos under attack. "That way, we could have a complete vision of the view count and of how those views were charged to the advertiser," Cuevas says. The researchers used this method to send bots to view two videos 150 times, and found YouTube's public view counter only identified 25 views as real. However, Google's main service for advertisers, AdWords, charged the researchers for 91 of the views carried out by the bots. The researchers hope to create an auditing system that enables the detection of this type of fraud.
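The discrepancy the researchers measured (150 bot views, 25 counted publicly, 91 billed) comes from different filters applied to the same view log. One crude signal such a filter could use: bot views tend to repeat the same client fingerprint, so capping the views credited to any single fingerprint discounts heavy repeaters. The sketch below is purely illustrative (the cap, the fingerprints, and the log format are all made up); it is not YouTube's actual fraud-detection algorithm.

```python
from collections import Counter

def credited_views(view_log, per_client_cap=3):
    """Credit at most `per_client_cap` views to any single client
    fingerprint; a log entry is a (fingerprint, video_id) pair."""
    per_client = Counter(fp for fp, _ in view_log)
    return sum(min(count, per_client_cap) for count in per_client.values())

# 150 bot views generated by 10 fingerprints, plus 40 organic one-off
# viewers (all identifiers are invented for the example):
bots = [(f"bot-{i % 10}", "video-A") for i in range(150)]
organic = [(f"user-{i}", "video-A") for i in range(40)]

print(credited_views(bots + organic))  # 10 capped clients * 3 + 40 = 70
```

An auditing system of the kind the researchers propose would compare such an independently computed credited count against what the platform bills advertisers, flagging gaps like the 25-versus-91 inconsistency.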
Aerial 'Fire Drone' Passes Homestead Test
UNL Today (04/22/16) Craig Chandler
University of Nebraska-Lincoln (UNL) researchers Carrick Detweiler and Sebastian Elbaum have designed an aerial drone to ignite prescribed fires in grasslands and forests, burning 26 acres of restored tallgrass prairie at Homestead National Monument of America. "A tool like this might be one of the answers to making these fires safer," says Homestead National Monument superintendent Mark Engler. "This is an important test." Detweiler and Elbaum are the co-founders of the Nebraska Intelligent Mobile Unmanned Systems Laboratory, and have been working for nearly two years to develop aerial robots small enough to fit in a firefighter's backpack, yet smart enough to navigate a dangerous environment. "UNL is pioneering this merging of two very risky, highly regulated technology fields: fire and unmanned aviation," Elbaum says. The system, tested last week, was the fourth prototype developed by Elbaum and Detweiler. The device works by injecting a liquid into balls containing a chemical; the reaction between the two ignites a flame after several minutes. The drone also flew over fire lines and at different heights to gather data about fire conditions.
Slate (04/22/16) Jacob Brogan
In an interview, University of California, Berkeley professor Stuart Russell emphasizes the need to ensure artificial intelligence (AI) understands fundamental human values, a task he says is fraught with uncertainty. "What we want is that the machine learns the values it's supposed to be optimizing as it goes along, and explicitly acknowledges its own uncertainty about what those values are," says Russell, recipient in 2005 of the ACM Karl V. Karlstrom Outstanding Educator Award. He notes the addition of uncertainty actually makes the AI safer because the machine permits itself to be corrected instead of being single-minded in the pursuit of its goals. "We've tended to assume that when we're dealing with objectives, the human just knows and they put it into the machine and that's it," Russell says. "But I think the important point here is that just isn't true. What the human says is usually related to the true objectives, but is often wrong." Russell says the AI should only act in instances in which it is quite sure it has understood human values well enough to take the right action. "It needs to have enough evidence that it knows that one action is clearly better than some other action," he says. "Before then, its main activity should just be to find out."
Abstract News © Copyright 2016 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.