Welcome to the June 24, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.
HEADLINES AT A GLANCE
At Stanford, Experts Explore Artificial Intelligence's Social Benefits
Stanford News (06/23/16) Tom Abate
This week's Global Entrepreneurship Summit at Stanford University featured a panel discussion on Thursday concerning the social benefits of artificial intelligence (AI). Megan Smith, the U.S. chief technology officer in the White House Office of Science and Technology Policy, notes the government is employing AI and machine learning for various applications, but says the bigger challenge is applying "humanity's greatest talent" to the development and direction of AI by opening up the discussion. Smith says the White House next week will formally announce a way for anyone to register an opinion or view on AI. Meanwhile, panel co-chair and Stanford professor Fei-Fei Li says the future of AI will be shaped by who takes up computer science. "The future of AI is in the hands of those who make AI," she notes, and the direction the technology takes will partly depend on making computer science more gender-diverse. The panel was co-hosted by the Stanford 100-Year Study on the Future of Artificial Intelligence, whose members discussed their long-term objective of producing detailed reports on subtopics within the broader field of AI. Harvard University professor Barbara Grosz says the executive summary of the group's first report will concentrate on everyday deployments of AI in urban environments, in sectors such as transportation and public safety and in low-income neighborhoods.
Should Your Driverless Car Hit a Pedestrian to Save Your Life?
The New York Times (06/24/16) John Markoff
Most people believe self-driving vehicles should ultimately put their passengers' lives first, according to a new study, a finding that poses an ethical dilemma for developers of autonomous cars, who must encode moral decisions in a machine. The study involved polling U.S. residents last year concerning how they thought autonomous cars should behave. Although respondents generally felt such cars should make decisions for the greatest good, when presented with scenarios in which they had to choose between saving themselves or saving pedestrians, the respondents chose the former. "One missing component has been the empirical component: what do people actually want?" says Massachusetts Institute of Technology researcher Iyad Rahwan. One of the six polls conducted found respondents generally hesitant to accept government regulation of artificial intelligence algorithms, even if it could address the driver-versus-pedestrian dilemma. The researchers say the study could present a legal and philosophical morass for autonomous-vehicle makers, especially concerning the issue of accountability. "If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm's decisions?" the researchers ask.
National Week of Making!
CCC Blog (06/22/16) Khari Douglas
June 17-23 was the National Week of Making, according to a proclamation by President Barack Obama that called on U.S. citizens to observe the week with programs, ceremonies, and activities that encourage a new generation of makers and manufacturers to share their talents, solutions, and skills. "We must prepare young people for the jobs of the future by equipping them with the analytical skills needed to solve problems and the computer science and hardware development skills required to power our innovation economy," according to a press release announcing the initiative. Obama highlighted the importance of supporting science, technology, engineering, and math education, especially computer science, because it will enable students to unlock their potential and become the inventors of the future. The Computing Community Consortium has been involved in the maker movement since 2014, when it hosted a series of workshops around the Visions 2025 program designed to inspire the computing community to envision future trends and opportunities in computing research. Meanwhile, the West Big Data Hub, one of the four U.S. National Science Foundation Big Data Regional Innovation Hubs, is developing a Data Scientist Map to visualize the growing community of data innovators across the country. In addition, the West Hub is highlighting projects that leverage sensors, citizen science, and the Maker Movement to advance its thematic areas.
Europe Will Spend 1 Billion Euros to Turn Quantum Physics Into Quantum Technology
IEEE Spectrum (06/22/16) Alexander Hellemans
The European Commission in May formally announced a 1-billion-euro commitment to a decade-long megaproject to coordinate and support the research and development of quantum technology. "Europe had two choices: either band together and compete, or forget the whole thing and let others capitalize on research done in Europe," says University of Vienna researcher Anton Zeilinger. The Quantum Technology Flagship is scheduled to begin in 2018, and QuTech scientist Anouschka Verseijen says although the flagship still lacks a clear structure and shape, "the ball has been set rolling." Technologies the megaproject will cover include quantum simulation, in which quantum computers would execute quantum mechanics-level materials modeling. Quantum sensors and quantum imaging would hasten medical advancements, while quantum clocks could find use in accurately measuring local gravity potential and the precise timing of financial transactions, says Helen Margolis with Britain's National Physical Laboratory. Meanwhile, new quantum algorithms could enable higher data-processing speeds for quantum computers. Loughborough University's Mark Everitt says communication between scientists and engineers is essential to the project's success, as many areas of quantum mechanics have evolved from physical challenges to engineering challenges. "For these areas, we will see great progress that will lead to new products," he says.
How Well Do Facial Recognition Algorithms Cope With a Million Strangers?
UW Today (06/23/16) Jennifer Langston
Assessing the performance of face-recognition algorithms at the million-person scale is the purpose of the MegaFace Challenge hosted by University of Washington (UW) researchers. "We need to test facial recognition on a planetary scale to enable practical applications--testing on a larger scale lets you discover the flaws and successes of recognition algorithms," says UW professor Ira Kemelmacher-Shlizerman. "We can't just test it on a very small scale and say it works perfectly." The researchers initially compiled a dataset of 1 million publicly available Flickr images, representing 690,572 unique individuals. Facial-recognition teams then were challenged to download the database and see how their algorithms fared in differentiating between 1 million possible matches. The algorithms were evaluated on verifying whether two photos were of the same person, and on matching a photo of an individual to a different picture of the same person hidden among 1 million "distractors." Google's FaceNet exhibited the strongest performance on one test, slipping from near-perfect accuracy when dealing with a smaller number of images to 75-percent accuracy on the million-person test; other algorithms that did well at a small scale declined by much larger margins on the tougher task, to as low as 33-percent accuracy.
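The identification test described above amounts to a nearest-neighbor search over face embeddings: a probe photo is matched correctly only if its closest candidate, among the gallery plus the distractors, belongs to the same person. The following toy sketch illustrates rank-1 identification with cosine similarity; the function name, synthetic data, and embedding size are invented for illustration and are not MegaFace's actual protocol.

```python
import numpy as np

def rank1_accuracy(probe_embs, probe_ids, gallery_embs, gallery_ids, distractor_embs):
    """Fraction of probes whose nearest neighbor, among the gallery plus
    distractors, is an image of the same person (rank-1 identification)."""
    all_embs = np.vstack([gallery_embs, distractor_embs])
    # Distractors get a label (-1) no probe carries, so matching one counts as an error.
    all_ids = np.concatenate([gallery_ids, -np.ones(len(distractor_embs), dtype=int)])
    probe_embs = probe_embs / np.linalg.norm(probe_embs, axis=1, keepdims=True)
    all_embs = all_embs / np.linalg.norm(all_embs, axis=1, keepdims=True)
    sims = probe_embs @ all_embs.T            # cosine similarity, probes x candidates
    best = all_ids[np.argmax(sims, axis=1)]   # identity of each probe's nearest neighbor
    return float(np.mean(best == probe_ids))

# Tiny synthetic check: 5 identities, a second noisy photo of each, 100 distractors.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 8))
probes = gallery + 0.01 * rng.normal(size=(5, 8))
acc = rank1_accuracy(probes, np.arange(5), gallery, np.arange(5),
                     rng.normal(size=(100, 8)))
```

Adding more distractors makes the task strictly harder, which is why accuracy that looks near-perfect at small scale can collapse at a million candidates.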
Google Researchers Explore Ways to Ensure Safety of Future AI Systems
eWeek (06/22/16) Jaikumar Vijayan
Researchers at Stanford University, the University of California, Berkeley, OpenAI, and Google have released a technical study devoted to addressing artificial intelligence (AI) safety risks. The study outlines five basic problems that are relatively minor today but could become much more important as machines get smarter in the future. The study examines practical approaches to solving these problems and ensuring AI systems are designed to operate in a reliable and safe manner. "We believe it's essential to ground concerns in real machine-learning research and to start developing practical approaches for engineering AI systems," says Google researcher Chris Olah. He notes advancing AI means making AI systems smarter and safer, which involves ensuring they do what people actually want them to do. One of the five problems focuses on ensuring an AI system does not negatively affect its environment while performing its functions; the others involve preventing systems from gaming their reward functions, supervising them efficiently, letting them explore safely, and keeping them robust when conditions differ from those seen in training. "Many of the problems are not new, but the paper explores them in the context of cutting-edge systems," say OpenAI researchers Paul Christiano and Greg Brockman.
Augmented Eternity: Scientists Aim to Let Us Speak From Beyond the Grave
The Guardian (06/23/16) Dan Tynan
Augmented eternity, the posthumous preservation of a person's knowledge, beliefs, and personality, could be feasible within 15 to 25 years, according to researchers from the Massachusetts Institute of Technology's Media Lab and Ryerson University. Ryerson's Hossein Rahnama says the same machine-learning systems used by Google and Netflix to make predictions based on patterns could be used to create algorithms that would come up with an approximation of how a deceased individual might respond to a question or statement. "My ultimate goal is to bridge the gap between life and death by eternalizing our digital identity," Rahnama says. "Your physical being may die, but your digital being will continue to evolve with the purpose of helping people and maintaining your legacy as an evolving being." The artificial intelligence (AI) system would require vast amounts of highly personal data curated from an individual's digital footprint, and Rahnama says privacy would be a major concern. Other AI experts are skeptical of augmented eternity and its possible uses. "The way I understand AI, machine learning, and big data is that it works well at distilling large amounts of data into the most common, repeating patterns," says Catalyst researcher Jeremy Pickens. "And I don't see the human experience as particularly reducible. Are we really just a sum of repeating patterns?"
Computer Watches Human Camera Operators to Improve Automated Sports Broadcasts
EurekAlert (06/21/16) Jennifer Liu
Researchers at the California Institute of Technology, the University of British Columbia, and Disney Research have developed an automated camera system that was able to learn how to better film basketball and soccer games by watching human camera operators. Disney researcher Peter Carr says the system has produced footage without much of the jerkiness that plagues other automated camera systems. "Having smooth camera work is critical for creating an enjoyable sports broadcast," Carr says. "The framing doesn't have to be perfect, but the motion has to be smooth and purposeful." The researchers developed new machine-learning algorithms to ensure automated cameras could strike the right balance between smoothness and closely following the action. The new approach iterates multiple times, learning at each iteration by analyzing where its camera motion deviates from the human operator's. "This research demonstrates a significant advance in the use of imitation learning to improve camera planning and control during game conditions," says Disney Research's Jessica Hodgins. The system was more successful at basketball games than at soccer games because soccer players tend to hold their formation, so their movements provide less information about where the ball is and where the camera should look.
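The iterate-and-correct loop described above resembles interactive imitation learning (in the spirit of methods such as DAgger): roll out the learner's policy, collect the expert's corrections on the states it actually visits, and refit. A minimal sketch follows, with a stand-in oracle function playing the human operator; every name, number, and the one-parameter model are invented for illustration and are not Disney's actual algorithm.

```python
import numpy as np

def oracle_pan(ball_x):
    # Hypothetical stand-in for the human operator's smooth framing choice.
    return 0.8 * ball_x

def fit(xs, ys):
    # One-parameter least squares: pan = w * ball_x.
    xs, ys = np.asarray(xs), np.asarray(ys)
    return float(xs @ ys / (xs @ xs))

rng = np.random.default_rng(1)
xs, ys, w = [], [], 0.0
for _ in range(5):                       # each pass: roll out, query operator, refit
    ball = rng.normal(size=10)           # ball positions the learner encounters
    xs += list(ball)                     # aggregate all visited states...
    ys += [oracle_pan(b) for b in ball]  # ...with the operator's corrections
    w = fit(xs, ys)                      # refit on the growing dataset
```

Aggregating corrections on the learner's own trajectories, rather than only on the expert's, is what lets the iterations shrink the deviation over time.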
Making Computers Reason and Learn by Analogy
Northwestern University Newscenter (06/21/16) Amanda Morris
Northwestern University researchers have developed the structure-mapping engine (SME), a model that could give computers the ability to reason more like humans and even make moral decisions. SME is capable of analogical problem-solving, including mimicking the way humans spontaneously use analogies between situations to solve moral dilemmas. The new model is based on psychologist Dedre Gentner's structure-mapping theory of analogy and similarity, which has been used to explain and predict many psychological phenomena. Structure mapping argues analogy and similarity involve comparisons between relational representations, which connect entities and ideas. "Given a new situation, the machine will try to retrieve one of its prior stories, looking for analogous sacred values, and decide accordingly," says Northwestern professor Ken Forbus. To encourage more research in this field, the Northwestern team is releasing the SME source code and a 5,000-example corpus, which includes comparisons taken from visual problem-solving, textbook problem-solving, and moral decision-making. "SME is already being used in educational software, providing feedback to students by comparing their work with a teacher's solution," Forbus says. He notes there is vast untapped potential for building software tutors that use analogy to help students learn.
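Structure mapping aligns the relational skeletons of two situations rather than their surface features. The toy alignment below is a drastic simplification of SME, using the classic solar-system/atom analogy expressed as predicate tuples (the facts and function are made up for illustration; real SME also enforces one-to-one, structurally consistent mappings).

```python
def align(base, target):
    """Map base entities to target entities wherever facts share a predicate
    of the same arity, keeping the first correspondence found for each entity."""
    mapping = {}
    for bpred, *bargs in base:
        for tpred, *targs in target:
            if bpred == tpred and len(bargs) == len(targs):
                for b, t in zip(bargs, targs):
                    mapping.setdefault(b, t)
    return mapping

# Relational facts for the two situations being compared.
solar = [("revolves-around", "planet", "sun"), ("more-massive", "sun", "planet")]
atom = [("revolves-around", "electron", "nucleus"), ("more-massive", "nucleus", "electron")]
correspondences = align(solar, atom)
```

Because both relations align the same way, the mapping planet→electron, sun→nucleus falls out of the shared relational structure, even though a planet and an electron share no surface attributes.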
No Place for the Old? Is Software Development a Young Person's Game?
TechRepublic (06/22/16) Nick Heath
Statistics and anecdotal evidence suggest software developers tend to be under 30, with University of California, Davis professor Norman Matloff noting coders face mounting rejections between the ages of 35 and 40, ostensibly because they are over- or under-qualified, although the more likely reason is that seasoned developers are unaffordable for employers. Meanwhile, a Stack Overflow survey this week estimated London developers have been coding for an average of less than eight years, and average coding experience globally is even lower. However, there is no universal agreement that older coders are abandoning the profession. For example, RedMonk analyst Fintan Ryan notes the surveys are biased toward the younger coders who use the Stack Overflow site. "There are significant groups of older developers in areas such as finance that are not inclined to respond to surveys like this," he says. Meanwhile, Stack Overflow's Natalia Radcliffe-Brine contends the numbers signify a record influx of younger people entering the profession. "The proportion [of younger to older] is changing, so instead of having lots of older people in the industry, you have so many more young people coming into it now," she says. "That's why the age looks a lot younger. I definitely don't think it's that the older developers aren't there."
RedEye Could Let Your Phone See 24-7
Rice News (06/20/16) Jake Boyd
Researchers from Rice University's Efficient Computing Group on Monday unveiled RedEye, an application that could provide computers with continuous vision, at the ACM/IEEE International Symposium on Computer Architecture (ISCA 2016) conference in Seoul, South Korea. RedEye is viewed as a first step toward enabling devices to see what their owners see and keep track of what they need to remember. "The concept is to allow our computers to assist us by showing them what we see throughout the day," says Rice professor Lin Zhong. "It would be like having a personal assistant who can remember someone you met, where you met them, what they told you and other specific information like prices, dates, and times." RedEye uses convolutional neural networks to perform object recognition, drawing on techniques from machine learning, system architecture, and circuit design. The researchers note combining these techniques with analog-domain processing enables RedEye to recognize objects before an image is ever digitized. "This increases energy efficiency because we can choose to digitize only the images that are worth expending energy to create," says former Rice graduate student Robert LiKamWa. The researchers say the approach also helps protect privacy, because rules can be defined to automatically discard the raw image once processing finishes, leaving it unrecoverable.
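The energy-saving idea above is to run a cheap early stage on every frame and spend digitization energy only on frames that look worth processing. The rough sketch below uses mean gradient magnitude as a stand-in for RedEye's analog convolutional front end; the metric, threshold, and function name are invented for illustration.

```python
import numpy as np

def should_digitize(frame, threshold=0.5):
    """Cheap gate: mean gradient magnitude as a proxy for an early analog
    feature stage; only frames clearing the threshold get full processing."""
    gx = np.abs(np.diff(frame, axis=1)).mean()  # horizontal edge energy
    gy = np.abs(np.diff(frame, axis=0)).mean()  # vertical edge energy
    return bool((gx + gy) > threshold)

flat = np.zeros((8, 8))                        # featureless frame: skip it
checker = np.indices((8, 8)).sum(axis=0) % 2   # high-contrast frame: keep it
```

Frames rejected by the gate are never converted to digital form, which is also where the privacy argument comes from: what is never digitized cannot be recovered later.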
Robotic Motion Planning in Real Time
Duke University News (06/20/16) Ken Kingery
Duke University researchers have designed a new computer processor specifically for robotic motion planning. The researchers say the computer processor can plan up to 10,000 times faster than existing approaches while consuming a fraction of the power. Designed to perform collision detection, the most time-consuming aspect of motion planning, the processor performs thousands of collision checks in parallel. The technology works by breaking down the operating space of a robot arm into thousands of three-dimensional volumes called voxels, determining whether an object is present in one of the voxels contained within pre-programmed motion paths, and stitching together the shortest motion path possible. The processor can find a plan in less than a millisecond while consuming less than 10 watts of electricity. The team says it is fast enough to plan and operate in real time, and power-efficient enough to be used in large-scale manufacturing environments with thousands of robots. The processor "could be used as a component of a more complex planning algorithm, perhaps one that sequences several simpler motions or plans ahead to reason about the movement of several objects," says Duke professor George Konidaris.
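The precomputed-voxel scheme above can be sketched as a graph whose edges (pre-programmed motions) each carry the set of voxels they sweep through: occupied voxels knock out edges, and a search stitches together the shortest remaining path. The graph, voxel coordinates, and helper below are invented for illustration; the Duke processor performs the per-edge voxel checks in parallel hardware rather than in a loop.

```python
from collections import deque

# Each pre-programmed motion (graph edge) is tagged with the voxels it sweeps.
edges = {
    ("A", "B"): {(0, 0, 0), (0, 1, 0)},
    ("B", "C"): {(1, 1, 0)},
    ("A", "C"): {(2, 0, 0)},
}

def plan(start, goal, occupied):
    # Collision detection: an edge survives only if it sweeps no occupied voxel.
    free = [e for e, vox in edges.items() if not (vox & occupied)]
    adj = {}
    for a, b in free:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    # Breadth-first search finds the fewest-edge path among collision-free edges.
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adj.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

With an empty workspace the planner takes the direct A-C motion; occupying the voxel that motion sweeps forces the detour through B.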
How Google DeepMind's Ant Soccer Skills Can Help Improve Your Search Results
ZDNet (06/20/16) Liam Tung
Google's DeepMind artificial intelligence (AI) is learning to navigate three-dimensional (3D) environments and puzzle-solving games, including a soccer game in which the AI plays as a virtual ant, according to DeepMind's David Silver. He says the AI controls the ant's four-legged movement to chase down the virtual soccer ball, dribble, and score a goal; previously, the AI mastered two-dimensional Atari games. The ant soccer challenge was solved via reinforcement learning and DeepMind's Deep-Q Network algorithm, which stores a bot's experiences and estimates the rewards of possible actions. DeepMind also developed an asynchronous actor-critic algorithm, called A3C, to help the AI learn how to play soccer without prior experiences. Silver says A3C uses standard multicore central-processing units instead of graphics processing unit-based algorithms to solve motor-control challenges and 3D navigation using visual input in a fraction of the training time. He also notes DeepMind is testing its AI with Labyrinth, a 3D maze and puzzle game using only visual cues. DeepMind also has created Gorila, a reinforcement learning system with quick training times, which has been applied to Google recommender systems, according to Silver.
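At the core of Deep-Q learning is a simple one-step temporal-difference update toward observed reward plus discounted future value. The tabular sketch below runs it on a tiny five-state corridor; the environment and all constants are invented for illustration, and a real DQN replaces the table with a deep network trained on experiences sampled from a replay buffer.

```python
import random

# Tabular Q-learning on a five-state corridor: actions move left/right,
# and reaching the rightmost state pays reward 1.
N_STATES, ACTIONS = 5, (-1, +1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.5
rng = random.Random(0)

for _ in range(300):                                  # episodes
    s = 0
    for _ in range(10_000):                           # step cap per episode
        if s == N_STATES - 1:
            break                                     # reached the goal
        # Epsilon-greedy: explore randomly, otherwise take the best-known action.
        a = rng.choice(ACTIONS) if rng.random() < eps \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # One-step TD update toward reward + discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
```

After training, the greedy policy from every non-terminal state is "move right," which is the optimal behavior here; scaling this idea to pixels and joint torques is what the deep-network variants add.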
Abstract News © Copyright 2016 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.