Welcome to the March 23, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.
Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.
HEADLINES AT A GLANCE
Andrew S. Grove, Intel Chief Who Spurred Semiconductor Revolution, Dies at 79
The New York Times (03/22/16) Jonathan Kandell; John Markoff; Steve Lohr
Former Intel CEO and chairman Andrew S. Grove, who is credited by many with bringing about the semiconductor revolution, died Monday at 79. Time Magazine chose Grove as Man of the Year in 1997 for being "the person most responsible for the amazing growth in the power and the innovative potential of microchips." Harvard Business School professor David B. Yoffie says Grove is considered in some ways to be the father of Silicon Valley, where his managerial model of "creative confrontation" became the template for generations of Silicon Valley executives and entrepreneurs. As a part of Fairchild Semiconductor in the early 1960s, Grove sought to embed transistors on silicon wafers, and the resulting chips vastly reduced the cost and broadened the scope of computers. His team also cut the instability of transistors via removal of sodium impurities from the chips. Grove's leadership of Intel saw its transition from memory chip production to microprocessor manufacturing, and by the early 1980s Intel microprocessors powered more than 80 percent of personal computers.
Human Eyes Assist Drones, Teach Machines to See
Ecole Polytechnique Federale de Lausanne (03/21/16) Jan Overney
Researchers have developed a new strategy combining crowdsourcing and machine learning for rapidly interpreting aerial images captured by camera drones. As a test of their approach, the researchers surveyed Namibia's Kuzikus wildlife reserve to count the resident animal population, and they used the www.micromappers.org crowdsourcing platform to upload the gathered drone images for manual analysis by volunteers. The volunteers were tasked with clicking through a stack of images, identifying all animals, and outlining them on their screens. "Within two days, they had evaluated 98 percent of the 26,000 images that had been uploaded," says Swiss Federal Institute of Technology researcher Stephane Joost. Fifty percent of these annotated images were then utilized to train an automatic object-recognition algorithm, which was then tested on the remaining pictures. "The 500 digital volunteers did generate a number of false positives, tracing features that in actual fact were not animals," Joost observes. "Despite that, their analysis was certainly good enough to serve as training data for the computer algorithm." Joost says the new approach could expedite image data analysis to aid disaster response operations.
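The train-on-half, test-on-the-rest workflow described above can be sketched in a few lines. This is a minimal, hypothetical illustration (toy feature vectors and a nearest-centroid detector, not the researchers' actual object-recognition algorithm): half of the crowd-annotated samples train the detector, and the held-out half measures how well it generalizes.

```python
import random

random.seed(0)

# Toy stand-in for crowd-annotated image patches: each sample is a
# (feature_vector, is_animal) pair. Real features would come from pixels.
def make_sample(is_animal):
    base = 1.0 if is_animal else -1.0
    return ([base + random.gauss(0, 0.4) for _ in range(4)], is_animal)

samples = [make_sample(i % 2 == 0) for i in range(200)]
random.shuffle(samples)

# Half of the annotations train the detector; the rest test it.
train, test = samples[:100], samples[100:]

def centroid(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

pos = centroid([v for v, label in train if label])
neg = centroid([v for v, label in train if not label])

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(v):
    # Classify by whichever class centroid is closer.
    return dist2(v, pos) < dist2(v, neg)

accuracy = sum(predict(v) == label for v, label in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Even this crude detector shows the point Joost makes: noisy volunteer labels can still be good enough training data when the held-out evaluation confirms the learned model generalizes.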
Stanford's Flying, Perching SCAMP Robot Can Climb Straight Up Walls
IEEE Spectrum (03/16/16) Morgan Pope
A robot developed at Stanford University is the first to combine flying, perching with passive attachment technology, and climbing. The team built on the Biomimetics and Dexterous Manipulation Lab's research in perching and climbing to develop the Stanford Climbing and Aerial Maneuvering Platform (SCAMP), a robot that is capable of multi-modal operation in unstructured outdoor environments. SCAMP also can recover from climbing failures and take off when it is ready to fly again. SCAMP does all this outdoors using only onboard sensing and computation. The robot's climbing mechanism uses one high torque-density servo to drive long steps up a wall, and an even smaller servo to actuate motion toward and away from a wall. The team placed the climbing mechanism on top of a quadrotor, and combined this with a long tail that acts as a pivot point to create a system that is able to push itself onto a wall using aerodynamic forces.
Envisioning Supercomputers of the Future
National Science Foundation (03/17/16)
A key goal of President Obama's National Strategic Computing Initiative is to expedite research and development into future exascale computing systems, and the Argo Project funded by the U.S. Department of Energy has enlisted 40 researchers to devise a new approach for extreme-scale system software. The National Science Foundation (NSF)-supported Chameleon environment for large-scale cloud computing research serves as the testbed for Argo Project concepts. The reconfigurable platform enables the research community to experiment with unique cloud computing architectures and explore new, architecturally facilitated cloud computing applications. "To design new and innovative compute clouds and the applications they will run, academic researchers need much greater control, diversity, and visibility into the hardware and software infrastructure than is available with commercial cloud systems today," notes NSF's Jack Brassil. Among the core aspects of future exascale systems undergoing testing with Chameleon are the Global Operating System, which manages machine configuration, resource allocation, and application launching; the Linux-based Node Operating System, which provides interfaces for improved control of future exascale architectures; Argobots, a concurrency runtime infrastructure that efficiently distributes work among computing resources; and the Backplane for Event and Control Notification, a framework that collects system performance data and sends it to controllers so they can take appropriate action. Chameleon enables top-to-bottom system modification and control to support a wide range of cloud research and architectures.
Machines That Will Think and Feel
The Wall Street Journal (03/18/16) David Gelernter
Yale University professor David Gelernter argues the nascent stage of artificial intelligence (AI) should serve as a warning, because rapid AI evolution without cautious monitoring and consideration of what can potentially happen if machines acquire superhuman intelligence and capabilities could ultimately herald the end of humanity. Among the flaws Gelernter sees in the current approach to AI development is a lack of focus on how emotion relates to rational thought and the human mind at large. He writes although true feeling and consciousness cannot be replicated in machines, this does not necessarily represent a barrier to AI. Gelernter envisions a continuous spectrum that the mind moves down each day, from instances of pure thinking to instances of pure being, with emotions gaining power on the downward end. He says this spectrum needs to be thoroughly comprehended if researchers are to understand the mind, and AI has to reproduce it in its entirety to attain human-like intelligence. "Once AI has decided to notice and accept this spectrum--this basic fact about the mind--we will be able to reproduce it in software," Gelernter says. He also emphasizes "the more we learn, the more carefully, critically, and intelligently we can observe the dangerous doings of AI."
What's the Year, Make, and Model of Your Vehicular Cloud?
IEEE Spectrum (03/17/16) Willie D. Jones
Old Dominion University (ODU) engineers want to use Internet-connected cars as a cloud computing resource. Cars' powerful on-board computers, ample storage, and reliable wireless Internet connectivity would work as part of ad-hoc data centers that tackle computing jobs. The vehicular cloud would be set up in a parking lot that can accommodate thousands of cars, each operating as a virtual machine. A mechanism would automatically identify an available car the instant it enters the parking lot and alert the other car or cars with which it is sharing a task when it is about to exit. The ODU team describes schemes for assigning computing jobs to cars and also characterizes how likely it is that a pending number-crunching job would have to be restarted from scratch because drivers have returned and need their cars for transportation. The researchers report that in simulations, the mean time to failure for two-car work groups never exceeded 250 hours. Short jobs that are not too memory- or network-intensive would be good for two cars, according to former ODU postdoctoral student Puya Ghazizadeh.
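The restart risk the ODU team characterizes can be estimated with a quick Monte Carlo sketch. The model below is hypothetical (the dwell-time distribution and all parameter values are assumptions, not ODU's actual simulation): each parked car stays for an exponentially distributed time, and a two-car job fails the moment either driver returns mid-job.

```python
import random

random.seed(42)

# Assumed parameters, for illustration only.
MEAN_STAY_H = 4.0   # mean parking duration, hours
JOB_LEN_H = 1.0     # job length, hours
TRIALS = 100_000

failures = 0
for _ in range(TRIALS):
    stay_a = random.expovariate(1 / MEAN_STAY_H)
    stay_b = random.expovariate(1 / MEAN_STAY_H)
    if min(stay_a, stay_b) < JOB_LEN_H:
        failures += 1  # one driver left mid-job: restart from scratch

restart_prob = failures / TRIALS
print(f"estimated restart probability: {restart_prob:.3f}")
```

Under these assumptions the estimate lands near the closed-form answer, 1 - exp(-2 * JOB_LEN_H / MEAN_STAY_H) ≈ 0.39, which illustrates why Ghazizadeh recommends short jobs for small car groups: restart probability grows with job length and with the number of cars that must all stay parked.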
Tool Chain for Real-Time Programming
Karlsruhe Institute of Technology (03/21/16) Monika Landgraf
Research and industry partners are developing a tool chain for efficient, standardized, and real-time programming under the European Union's ARGO consortium coordinated by the Karlsruhe Institute of Technology (KIT). The tool chain's development is based on the open source Scilab software. "Two of the most important requirements of future applications are an increased performance in real time and further reduction of costs without adversely affecting functional safety," notes KIT professor Jurgen Becker. "For this, multi-core processors have to make available the required performance spectrum at minimum energy consumption in an automated and efficiently programmed manner." The ARGO project seeks to enable programming via automatic parallelization of model-based applications and code generation. "Even without precise knowledge of the complex parallel processor hardware, the programmers can control the process of automatic parallelization in accordance with the requirements," Becker says. "This results in a significant improvement of performance and a reduction of costs." The ARGO tool chain's future uses include managing the complexity of parallelization and adaptation to the target hardware in a mainly automated manner at reduced expense.
Teaching Computers to Be More Creative Than Humans
NYU Polytechnic School of Engineering (03/17/16)
The research of New York University Tandon School of Engineering professor Julian Togelius is focused on the convergence of games and artificial intelligence (AI), in support of his conviction that an artificially intelligent operating system could exhibit greater originality than a human game designer. "I'm teaching computers to be more creative than humans," he says. Togelius concentrates on procedural content generation (PCG), in which game content is produced via algorithms instead of direct user input. He uses evolutionary algorithms for such tasks, and these programs can work in parallel with other algorithms that identify the player's skill and preferences to change the game on the fly. Togelius also has demonstrated PCG can generate wholly new games from scratch, and he envisions such innovation making game development several orders of magnitude more affordable and more creative. Togelius is convinced games offer fair and reliable benchmarks for AI systems under development, and he thinks games also provide a more beneficial AI testbed than robotics. "Togelius' work is now leading to algorithms capable of better-than-human decision making--a result with implications for games and beyond," notes fellow Tandon professor Andy Nealen.
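The evolutionary-algorithm flavor of PCG can be illustrated with a tiny example. This is a made-up sketch, not Togelius' system: each genome is a 16-tile level row, mutation flips one tile, and fitness rewards levels whose obstacle density is near an assumed "playable" target.

```python
import random

random.seed(1)

TARGET_DENSITY = 0.3  # assumed playability target, for illustration

def fitness(level):
    # Higher (closer to 0) is better: penalize distance from the target.
    density = sum(level) / len(level)
    return -abs(density - TARGET_DENSITY)

def mutate(level):
    child = level[:]
    i = random.randrange(len(child))
    child[i] ^= 1  # flip one tile: floor <-> obstacle
    return child

# Population of random 16-tile level rows (1 = obstacle, 0 = floor).
population = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # elitism: keep the fittest half
    population = parents + [mutate(random.choice(parents)) for _ in range(10)]

best = max(population, key=fitness)
print("best level:", best, "density:", sum(best) / len(best))
```

A real PCG system evolves far richer genomes (rules, maps, item placements) and, as the article notes, can pair this loop with player-modeling algorithms so the fitness function adapts to a particular player's skill and preferences.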
Scientists on Verge of Developing Emotional Computer
Sputnik (03/18/16) Yulia Osipova
In an interview, George Mason University professor Alexei Samsonovich details how researchers from the National Research Nuclear University Moscow Engineering Physics Institute are planning to develop a computer agent called Virtual Actor imbued with both narrative and emotional intellect over the next 18 months. "It will understand the context of what is going on, as well as the unfolding scenarios" so it can make plans and establish targets, he says. Samsonovich says the agent can function as a virtual robot that fulfills the role of a specific person. "Our principal goal is to formulate the basic principles that natural intelligence in the human brain is built upon," he notes. Samsonovich describes the approach he supports as being simultaneously bottom-up and top-down, requiring combined functional, neural, symbolic, and logical strategies. He says a human-thought equivalent for artificial intelligence is necessary if computers are to work in an assistive capacity with humans. "It is a kind of singularity, where all functionalities converge at one point, and this gives you a complete set of options," Samsonovich reports. He says another focus of his team's work relates to the registration of human brain activity to understand what a person thinks, his visual perception, and what kind of emotions he feels.
Google's Go Victory Shows AI Thinking Can Be Unpredictable, and That's a Concern
The Conversation (03/17/16) Jonathan Tapson
The recent triumph of Google's AlphaGo artificial intelligence (AI) in a tournament against a Go grandmaster revives concerns about the unpredictable thinking of AI and deep learning systems, which poses a threat for several reasons, according to Western Sydney University's Jonathan Tapson. He notes AI is frequently trained via a mix of logic and heuristics, and reinforcement learning. The latter involves the AI performing a task repeatedly, modifying its approach each time to learn the best course of action to follow. "The problem is the AI will explore the entire space of possible moves and strategies in a way humans never would, and we have no insight into the methods it will derive from that exploration," Tapson notes. He says our inability to imagine AIs' probable behavior makes it impossible to anticipate or manage their worst-case behavior. Imbuing AIs with ethics and morality also is difficult because such concepts cannot be reduced to heuristics or rules. "We need to understand and internalize that no matter how well they imitate or outperform humans, they will never have...intrinsic empathy or morality," Tapson argues. The combination of AIs' lack of empathetic or moral constraints and their capability for unforeseeable actions creates a serious quandary about the capacities in which they can and should be used.
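The reinforcement-learning loop Tapson describes — repeat the task, nudge the strategy toward whatever pays off — can be shown with a toy two-action example (this is a generic epsilon-greedy sketch with made-up reward probabilities, not AlphaGo's method):

```python
import random

random.seed(7)

true_reward = {"a": 0.2, "b": 0.8}   # hidden payoff rates, unknown to the agent
q = {"a": 0.0, "b": 0.0}             # the agent's learned value estimates
EPSILON, ALPHA = 0.1, 0.1            # exploration rate, learning rate

for _ in range(2000):
    if random.random() < EPSILON:
        action = random.choice(list(q))      # explore a random action
    else:
        action = max(q, key=q.get)           # exploit the current best guess
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    q[action] += ALPHA * (reward - q[action])  # incremental value update

print("learned values:", q)
```

The agent discovers that "b" pays off without anyone specifying a strategy, which is exactly the property Tapson flags: in large action spaces, the methods such a learner derives are not written down anywhere a human can inspect.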
A Critical Time in High Performance Computing
In an interview, Indiana University professor Thomas Sterling, associate director of the Center for Research in Extreme Scale Technologies (CREST), discusses the facility's focus. He describes CREST as a research center concentrating on high-performance computing and not a supercomputing facility. "CREST is a context and an environment that allows a full-featured set of skills to work together towards common goals," Sterling says. "The purpose of the center is to facilitate academic research." Sterling reports the open source nature of many CREST projects is very important, and he cites the HPX-5 runtime system as vital for serving multiple funded projects, either directly to advance the state of runtime systems, or indirectly in support of different types of applications. Sterling also points to CREST's ParalleX execution model as an important component, which "reflects some prior art...reflects some unique contributions in synergy, and it provides a foundation of the development of our HPX-5 runtime systems as well as guiding possible parallel architecture advances and may inform possible [application programming interfaces]." Sterling says ParalleX enables experimentation with abstraction to introduce alternative execution methods. He also notes CREST's desire to participate in OpenHPC is born of its focus "on determining how to make dynamic adaptive execution work for architectures and operating systems and runtime systems and programming interfaces."
The Most Fascinating Work Facebook Is Doing in Machine Learning
The Huffington Post (03/15/16)
Joaquin Quinonero Candela, Facebook's director of applied machine learning (ML), reports his group focuses on core ML, computer vision, and language technologies. The core ML team is committed to researching and shipping large-scale and real-time ML/artificial intelligence algorithms for some of the largest ML applications in the world, Candela notes. "Whenever a user logs into Facebook, these models are used to rank news feed stories (1B users every day, 1.5K stories per user per day on average), ads, search results (1B+ queries a day), trending news, friend recommendations and even rank notifications that a user receives, or rank the comments on a post," he says. Candela also says the Core ML researchers build state-of-the-art text understanding algorithms via deep learning. Meanwhile, he says the computer-vision team uses a system to process all images and videos uploaded to Facebook, which exceed 1 billion items daily. Using deep convolutional networks with billions of parameters, "we predict the content of an image for example in order to generate captions for the blind, or to automatically detect and take down offensive content, improve media search results, [or] automate visual captcha among many other use cases," according to Candela. Facebook's language technology effort seeks to remove language barriers by translating more than 2 billion daily posts in more than 40 distinct languages, and Candela says deep learning is being explored to enhance translation.
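At their core, the ranking tasks Candela lists (stories, ads, search results, notifications, comments) reduce to scoring each candidate item with a learned model and sorting. The sketch below is a generic illustration with invented feature names and weights, not Facebook's actual system; in production the scores would come from large learned models rather than a hand-set linear formula.

```python
# Hypothetical feed items with made-up engagement features.
stories = [
    {"id": 1, "affinity": 0.9, "recency": 0.2},
    {"id": 2, "affinity": 0.4, "recency": 0.9},
    {"id": 3, "affinity": 0.7, "recency": 0.7},
]
WEIGHTS = {"affinity": 0.6, "recency": 0.4}  # assumed, for illustration

def score(story):
    # A linear model standing in for a learned ranking model.
    return sum(WEIGHTS[f] * story[f] for f in WEIGHTS)

ranked = sorted(stories, key=score, reverse=True)
print([s["id"] for s in ranked])
```

The engineering challenge Candela's team faces is doing this at the scale he cites — a billion users and more than a thousand candidate stories per user per day — within the latency budget of a page load.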
Paying Attention to Words, Not Just Images, Leads to Better Captions
University of Rochester NewsCenter (03/15/16) Leonor Sierra
A team of researchers from the University of Rochester and Adobe is leading the Microsoft COCO Image Captioning Challenge, an international image-captioning competition run by Microsoft. The team's "Attention" system has been leading the year-long competition since last November. The key to the team's approach is thinking about words--what they mean and how they fit in a sentence structure--just as much as thinking about the image itself. The Attention system focuses on "semantic attention," which Rochester professor Jiebo Luo and colleagues define as "the ability to provide a detailed, coherent description of semantically important objects that are needed exactly when they are needed." Luo says in order to describe an image, you must decide what to pay more attention to, including the use of specific words. Computer image captioning brings together computer vision and natural language processing. The researchers trained their system on a massive dataset of images, so it learned to identify objects in images. For the algorithm that Luo and his team used in their system, they also trained their system on many texts. Their goal was to understand not only sentence structure but also the meanings of individual words, which words are often used together, and which words might be semantically more important.
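The "decide what to pay more attention to" idea can be shown with a minimal sketch. This is a hypothetical illustration, not the Rochester/Adobe model: concepts detected in an image get made-up relevance scores, and a softmax turns them into normalized attention weights the caption generator would lean on.

```python
import math

# Made-up relevance scores for concepts detected in an image.
detected = {"dog": 2.0, "frisbee": 1.2, "grass": 0.4}

def softmax(scores):
    # Subtract the max for numerical stability, then normalize.
    m = max(scores.values())
    exps = {w: math.exp(s - m) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

attention = softmax(detected)
for word, weight in sorted(attention.items(), key=lambda kv: -kv[1]):
    print(f"{word}: {weight:.2f}")
```

In the real system these weights shift at every word of the caption being generated, so "semantically important objects" receive attention "exactly when they are needed" in the sentence.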
Abstract News © Copyright 2016 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: email@example.com
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.