Association for Computing Machinery
Welcome to the January 27, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE


Marvin Minsky, Pioneer in Artificial Intelligence, Dies at 88
The New York Times (01/25/16) Glenn Rifkin; John Markoff

Artificial intelligence pioneer Marvin Minsky, whose work helped inspire the development of the personal computer and the Internet, died on Sunday at the age of 88. As a professor at the Massachusetts Institute of Technology (MIT), Minsky demonstrated the potential of imparting common-sense reasoning to computers. He co-founded what eventually became MIT's Computer Science and Artificial Intelligence Laboratory, which would profoundly impact the modern computing industry. Among the concepts owing their genesis to the lab and Minsky's work was the idea that digital information should be shared freely, which would influence the open source software movement as well as the Internet's predecessors. Among Minsky's scientific milestones was his design and construction of some of the first visual scanners and mechanical hands with tactile sensors, which helped shape modern robotics. Minsky, who received the ACM A.M. Turing Award in 1969, also built the first randomly wired neural network learning machine in 1951. "Marvin was one of the very few people in computing whose visions and perspectives liberated the computer from being a glorified adding machine to start to realize its destiny as one of the most powerful amplifiers for human endeavors in history," says computer scientist Alan Kay. Minsky said his 1985 book, "The Society of Mind," proposed "that intelligence is not the product of any singular mechanism but comes from the managed interaction of a diverse variety of resourceful agents."


If Killer Robots Arrive, the Terminator Will Be the Least of Our Problems
The Washington Post (01/25/16) Matt McFarland

Experts warned of the threat of autonomous weaponry at last week's World Economic Forum in Davos, Switzerland. "Being attacked by an army of Terminators is a piece of cake when compared to being attacked by this kind of weapon," said University of California, Berkeley professor Stuart Russell. "We're [talking] about systems that weigh less than an ounce, that can fly faster than a person can run, can blow holes in their heads with one gram of shape-charge explosive, and can be launched in the millions." Russell and the Vienna Center for Disarmament and Non-Proliferation's Angela Kane said time is of the essence in preventing a global arms race toward autonomous weapons, estimating that no more than two years remain to address the issue. The concern is such weapons could become affordable and easily obtained, and Russell and Kane urged scientists, governments, and the artificial intelligence industry to convene quickly to determine what preventive steps can be taken. The Future of Life Institute, with which Russell is affiliated, hopes a ban on autonomous weapons will stop the arms race, but Kane warned of a "glacial pace of international negotiations." Meanwhile, BAE Systems chairman Roger Carr noted many politicians do not have a full understanding of how close autonomous weaponry is to becoming a reality.


Next Big Test for AI: Making Sense of the World
Technology Review (01/26/16) Will Knight

A new image database called Visual Genome could push computers toward the goal of learning to make sense of what is happening in photographs, and could help gauge the progress of computers attempting to better understand the real world. Fei-Fei Li, who directs Stanford University's Artificial Intelligence Lab, developed Visual Genome with several colleagues. The images in Visual Genome are tagged more richly than in ImageNet, a database previously developed by Li and colleagues that contains more than 1 million images tagged according to their content. The work of teaching computers to parse visual scenes is fundamentally important for artificial intelligence, according to Li. Algorithms trained using examples in Visual Genome could potentially enable robots or self-driving cars to understand a scene properly, and Li says they also could be used to teach computers to communicate more effectively or to have more common sense. "We are focusing very much on some of the hardest questions in computer vision, which is really bridging perception to cognition," she says. "Not just taking pixel data in and trying to make sense of its color, shading, those sorts of things, but really turn that into a fuller understanding of the [three-dimensional] as well as the semantic visual world."
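
To make the richer tagging concrete, here is a toy scene-graph record in the spirit of Visual Genome's annotations, which pair objects and their attributes with relationships between them. The field names and the small helper function are invented for illustration and do not reflect Visual Genome's actual schema.

# Hedged sketch: a toy scene-graph record loosely modeled on the kind of
# rich tagging the article describes. Field names are illustrative only.
scene = {
    "image_id": 1,
    "objects": [
        {"id": 0, "name": "man", "attributes": ["standing"]},
        {"id": 1, "name": "dog", "attributes": ["brown"]},
        {"id": 2, "name": "frisbee", "attributes": ["red"]},
    ],
    "relationships": [
        {"subject": 0, "predicate": "throws", "object": 2},
        {"subject": 1, "predicate": "chases", "object": 2},
    ],
}

def describe(scene):
    """Render each relationship triple as a short sentence fragment."""
    names = {obj["id"]: obj["name"] for obj in scene["objects"]}
    for rel in scene["relationships"]:
        print(names[rel["subject"]], rel["predicate"], names[rel["object"]])

describe(scene)  # man throws frisbee / dog chases frisbee

Such triples go beyond whole-image content labels: they encode who is doing what to whom, which is the "bridging perception to cognition" step Li describes.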


Scenic Sat-Nav Will Take You on the Prettiest Route
New Scientist (01/25/16) Anna Nowogrodzki

Researchers at the University of Bremen and Hasselt University have developed Autobahn, an artificial intelligence (AI) system that uses Google Street View images to optimize a driving route for a particular type of scenery. The researchers divided the landscape into squares with 1-kilometer sides and downloaded the Google Street View panorama from a spot on the largest road within each square, and a neural network then classified each panorama's scenery as forest, mountain, fields, water, a sight, or non-scenic. To plan a route, the user provides a start and end point, a maximum travel time, and a scenery preference; if the user chooses mountains, for example, the program picks the route with the most mountain views. The researchers tested Autobahn on the Spanish island of Majorca, the Rhône-Alpes region of France, and Santa Barbara, California, and conducted online surveys using images from journeys in these three settings. When 24 volunteers were shown the Autobahn-generated route alongside the fastest route and asked which they would prefer to take, most said they preferred the Autobahn route. The system is fast, effective, works with smartphones, and is scalable to large areas, according to University of Florida researcher Hartwig Hochmair.
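
The routing idea lends itself to a short sketch: classify each road segment's scenery, then make segments matching the user's preference cheaper so the planner favors them. The toy network, the discount factor, and the omission of the maximum-travel-time constraint below are all simplifications for illustration, not the researchers' actual implementation.

# Hedged sketch of scenery-biased routing in the spirit of Autobahn.
# Each road segment carries a travel time in minutes and a scenery label
# (as the article describes, one label per 1-km grid cell); segments
# matching the preferred scenery are discounted so Dijkstra favors them.
import heapq

def scenic_route(graph, start, goal, preference, discount=0.5):
    """Dijkstra over discounted edge costs; preferred scenery costs less."""
    dist, prev = {start: 0.0}, {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, minutes, scenery in graph[node]:
            cost = minutes * (discount if scenery == preference else 1.0)
            if d + cost < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = d + cost, node
                heapq.heappush(queue, (d + cost, nxt))
    path, node = [goal], goal
    while node != start:          # walk the predecessor chain backward
        node = prev[node]
        path.append(node)
    return path[::-1]

# Invented toy network: (neighbor, minutes, scenery classification).
roads = {
    "A": [("B", 10, "motorway"), ("C", 14, "mountain")],
    "B": [("D", 10, "motorway")],
    "C": [("D", 12, "mountain")],
    "D": [],
}
print(scenic_route(roads, "A", "D", "mountain"))  # ['A', 'C', 'D']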


What Should We Be Teaching the Next Generation of Computer Scientists?
Times Higher Education (01/21/16) John Gilbey

The key to educating next-generation computer scientists may lie at institutions such as Stanford University, where professor Alex Aiken cites a pressing need for computer scientists to be cognizant of the ramifications of their decisions. He says although more and more activities are being automated by technology, the demand for designers and builders of such systems is limitless. One area where computer science expertise is needed is data management/mining, and the accompanying infrastructure to support it. The incorporation of computer science into other fields suggests a need for computer scientists to cultivate interpersonal, teamworking, and management skills, and there is pressure on university courses to provide them. The Institute for the Future's Mike Leibhold is urging universities to adopt a "futures perspective" emphasizing human factors to ensure graduates stay "above the application programming interface," and are not outmoded by rapidly advancing technology. He says graduates need to be trained to think five to 10 years out so they are ahead of the technological curve, and to be educated to think non-linearly. Graduates also will need to refresh their knowledge on an ongoing basis, and evaluate the direction of both the industry and society. Moreover, they must be aware of the increasing intimacy between the end user and technology.


Wanted: Agile Robots
Federal Computer Week (01/25/16) Sean Lyngaas

The U.S. National Institute of Standards and Technology (NIST) is planning the Agile Robotics for Industrial Automation Competition, a national contest to make robots more agile, with the goal of removing "a major obstacle" to manufacturers adopting robotics. Programming a robot for integration into a manufacturing operation can account for 45 percent to 60 percent of total deployment cost, which holds back the development of the technology as a whole. Making robots more adaptable to changing manufacturing demands will require advances in detecting failure in a manufacturing process, automated planning to cut down on programming time, working on manufactured parts whose location is not predefined, and "plug-and-play" interoperability. "We want to make sure that the challenges in this competition are truly representative of those facing industry," says Craig Schlenoff, head of NIST's Cognition and Collaboration Systems Group. Participants will test their solutions in a computer model of a real manufacturing operation, helping to develop metrics for measuring robotic agility. The U.S. has 152 robots per 10,000 manufacturing employees, compared with 437 in South Korea, 323 in Japan, 282 in Germany, and 30 in China, according to International Federation of Robotics data.


MIT Research Project Aims to Corral, Enrich Live Streaming Video
Government Technology (01/22/16) Colin Wood

The Deepstream project at the Massachusetts Institute of Technology's Center for Civic Media at the Media Lab seeks to enhance streaming video by imparting contextual information while also functioning as a central platform for finding live streams around specific events and topics. With Deepstream, broadcasters can cluster multiple live streams together and overlay data, communicating context so users can locate related content. "I think there's great potential [for Deepstream in city government], because you can have people essentially tuning into that live stream from home or wherever they are and not just jumping into it blind and having to listen to what's going on, but having this extra layer of information that says, 'Here's the agenda for tonight, we're going to be talking about parking, here's a link to the map with the current parking plan for the city,'" says Deepstream project lead Gordon Mangum. Deepstream aims to address the lack of a central site to reliably search for live streams surrounding a particular topic or event by serving as a search aggregator for other live-streaming services. Mangum notes in some instances, Deepstream can find relevant content better than the original service's search function can.
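
As a rough illustration of the data model such a platform implies, the sketch below groups live feeds with timed context cards like the agenda and parking-map examples Mangum gives. The class and field names are hypothetical, not Deepstream's actual design.

# Hedged sketch: a curated "deepstream" bundles live feeds with timed
# context overlays. All names, fields, and URLs here are invented.
from dataclasses import dataclass, field

@dataclass
class ContextCard:
    start_seconds: int   # when the card becomes relevant in the stream
    title: str
    url: str

@dataclass
class Deepstream:
    topic: str
    feeds: list = field(default_factory=list)   # clustered live streams
    cards: list = field(default_factory=list)   # contextual overlays

    def cards_at(self, t):
        """Overlays that should be visible t seconds into the stream."""
        return [c for c in self.cards if c.start_seconds <= t]

council = Deepstream(
    topic="city council meeting",
    feeds=["https://example.org/live/council"],
    cards=[ContextCard(0, "Tonight's agenda", "https://example.org/agenda"),
           ContextCard(600, "Current parking plan", "https://example.org/map")],
)
print([c.title for c in council.cards_at(700)])  # both cards are showing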


Who Needs a Computer Science Degree These Days?
CIO (01/22/16) Paul Rubens

An increasing number of software developers are entering the job market without degree-level training, and these self-learners tend to gravitate toward newer, in-demand languages such as HTML5, JavaScript, and Apple's Swift, according to a VisionMobile survey. In contrast, coders of older languages such as C# and Java are more likely to have had formal instruction. The poll also found Massive Open Online Courses (MOOCs) play a key role in helping aspiring developers build skills in Swift and other languages such as Python and Ruby. In addition, MOOCs offer courses in iOS and Android app development, Web development, and data science. Many developers who have studied a language via a MOOC already possess a bachelor's degree, and many were already software programmers. "The typical Coursera learner taking a programming or other technology course has a bachelor's degree, is currently employed, and is between 22 and 35 years of age," notes Coursera's Kevin Mills. "Among these learners, it is about an even split between those looking to begin a new career in programming versus those seeking to advance their existing programming skills." The use of MOOCs and other alternative forms of developer training is growing in popularity because many people cannot afford either the time or the money it takes to obtain a computer science degree, especially if they already have a bachelor's degree.


Computer-Based Test Aims to Predict Dementia Risk
U.S. News & World Report (01/21/16) Dennis Thompson

University College London (UCL) researchers have developed a computer-based test that might be able to predict a person's risk for dementia by analyzing the information family doctors gather during routine visits. The algorithm assesses factors such as age, sex, social interaction, smoking, body-mass index, alcohol use, high blood pressure, diabetes, stroke, irregular heartbeat, aspirin use, and depression, according to the researchers. "We chose the particular factors as other research has shown in some people that they [risk factors] can be linked to an increased risk of dementia," says UCL's Kate Walters. The algorithm, which generates a Dementia Risk Score, proved accurate when researchers used it to assess the records of more than 226,000 patients ages 60 to 79. However, it was not accurate for judging dementia risk at age 80 or beyond, because by that age the risk of dementia is elevated across the board. The researchers say the test could provide advance warning to people whose lifestyle choices are increasing their risk. The program would need to be tested on an American population before it could be adopted in the U.S. "We are making all our computer algorithms freely available to other researchers to do this, and the U.S. does have some similar large healthcare databases where it could be tested," Walters says.
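
The article names the model's inputs but not its functional form. A logistic-style score is a common choice for this kind of risk prediction, so the sketch below uses that form, with invented coefficients and a subset of the factors, purely to illustrate how such a score could combine them; it is not the UCL algorithm.

# Hedged sketch of a logistic risk score over some of the listed factors.
# The weights and intercept are made up for illustration only.
import math

WEIGHTS = {
    "age": 0.08, "smoker": 0.4, "bmi": 0.02, "diabetes": 0.5,
    "stroke": 0.9, "depression": 0.3, "intercept": -9.0,
}

def dementia_risk(age, smoker, bmi, diabetes, stroke, depression):
    """Weighted sum of risk factors squashed to a probability in [0, 1]."""
    z = (WEIGHTS["intercept"] + WEIGHTS["age"] * age
         + WEIGHTS["smoker"] * smoker + WEIGHTS["bmi"] * bmi
         + WEIGHTS["diabetes"] * diabetes + WEIGHTS["stroke"] * stroke
         + WEIGHTS["depression"] * depression)
    return 1.0 / (1.0 + math.exp(-z))

print(round(dementia_risk(72, smoker=1, bmi=27, diabetes=0,
                          stroke=0, depression=1), 3))  # ~0.119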


Tor Project Raises Over $200,000 in Crowdfunded Support
ZDNet (01/25/16) Charlie Osborne

The Tor Project recently announced its first crowdfunding campaign has raised more than $205,000 from 5,265 supporters over the last few months. The campaign encouraged people to donate to keep the network strong and stable, and to let the project invest in educational efforts concerning surveillance and privacy. The network, designed to make browsing more secure and free of surveillance, is run by volunteers who supply computer nodes and relays that direct traffic in a way that disguises original Internet Protocol addresses and locations. The project currently supports about 20 contractors and staff members, as well as thousands of volunteers, who keep the network running and ensure Tor is as secure as possible. The funding will be used to develop new privacy tools, pay for support help desks, and improve the stability and security of existing services. "Even though we're a privacy organization, we found out what a Tor supporter looks like," says the Electronic Frontier Foundation. "It's someone who takes action to support their right to privacy."
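
The relay design described above is often pictured as an "onion": the client wraps a message once per relay, and each relay peels only its own layer, learning just where to send the packet next. The sketch below shows the layering idea only; the XOR "cipher" is a stand-in and bears no resemblance to Tor's real cryptography.

# Hedged illustration of onion layering. The XOR cipher is a toy; real
# onion routing uses authenticated public-key and symmetric cryptography.
def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

relays = [("guard", b"k1"), ("middle", b"k2"), ("exit", b"k3")]

# Client wraps innermost-first, so the first relay peels the outer layer.
packet = b"GET example.org"
for _name, key in reversed(relays):
    packet = xor(packet, key)

# Each relay strips one layer; only the exit node sees the plaintext.
for name, key in relays:
    packet = xor(packet, key)
    print(name, "->", packet if name == "exit" else "(still wrapped)")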


Wee ARCHIE: Digital Dinos Put Bite-Sized Supercomputer Through Its Paces
University of Edinburgh (01/19/16) Tracy Peet

University of Edinburgh researchers have developed Wee ARCHIE, a mini supercomputer that powers virtual dinosaur races. Wee ARCHIE replicates, in miniature, the high-performance computing techniques used to simulate races between on-screen Argentinosaurus. Wee ARCHIE and its larger namesake, the ARCHER supercomputer, use parallel-computing systems that enable many calculations to be completed simultaneously on different microprocessors. The portable Wee ARCHIE displays the types of hardware found inside the world's most powerful supercomputers. "We will use Wee ARCHIE to engage schoolchildren with supercomputing, and show them the huge benefits that the technology can bring to scientific research," says University of Edinburgh researcher Lorna Smith. The miniature supercomputer contains 18 credit-card-sized processors housed inside a custom-made case. Light-emitting diode displays on each of the processors light up when they are in use, showing how multiple parts of a parallel-computing system work together to perform complex tasks. The program lets users change the structure of the dinosaurs' joints and muscles, modifying their ability to run. Wee ARCHIE analyzes each configuration quickly and presents the results as an on-screen score.
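
As a small illustration of the parallel-computing idea Wee ARCHIE demonstrates, the sketch below splits a stand-in workload across worker processes and combines the partial results. The gait-scoring function and its parameters are invented; the actual dinosaur simulation is not shown.

# Hedged sketch: evaluate many (joint, muscle) configurations at once on
# separate cores, the way a parallel machine divides up a simulation.
from multiprocessing import Pool

def simulate_stride(params):
    """Stand-in for one gait simulation: score a (joint, muscle) config."""
    joint_stiffness, muscle_power = params
    return muscle_power / (1.0 + joint_stiffness)

if __name__ == "__main__":
    configs = [(j * 0.1, m) for j in range(4) for m in (50, 100)]
    with Pool(processes=4) as pool:        # one worker per core
        scores = pool.map(simulate_stride, configs)
    print("best score and config:", max(zip(scores, configs)))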


Mechanical Quanta See the Light
University of Vienna (Austria) (01/19/16) Alexandra Frey

Researchers from the University of Vienna and TU Delft say they have taken a first step toward a universal quantum connection based on a nanomechanical device's quantum-mechanical vibrations. A tiny silicon beam patterned with holes vibrates back and forth in conjunction with a laser to convert quantum particles of light into quantum vibrations, and later back into light, enabling the linkage of quantum devices to a future quantum Internet. Whenever the device first transformed a photon to a phonon (quantum-mechanical vibration), it generated a "signaling" photon. By first seeking this signaling photon, the researchers knew precisely when their device had made a successful conversion. They then had their device convert its phonon back into light via a laser, emitting a photon. A careful tally of the signaling photons and the emitted photons revealed the conversion process happened at the quantum level, one particle at a time. "Our measurements are clear evidence that mechanical vibrations also behave like particles," says lead study author Ralf Riedinger. "They are genuine quantum particles of motion. It's wave-particle duality, but with a nano-sized tuning fork."
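
The heralded counting described above can be mimicked with a toy simulation: keep only the trials in which a signaling photon announced a successful conversion, then tally read-out photons within those trials. The probabilities below are invented, and real analyses rest on photon-correlation statistics not modeled here.

# Hedged toy model of heralded counting; detector records are simulated.
import random

def run_trial(p_convert=0.3, p_readout=0.8):
    """One shot: (signaling photon seen?, read-out photon seen?)."""
    converted = random.random() < p_convert
    herald = converted                   # signaling photon marks success
    readout = converted and random.random() < p_readout
    return herald, readout

trials = [run_trial() for _ in range(10_000)]
heralded = [readout for herald, readout in trials if herald]
print("read-out rate given a herald:", sum(heralded) / len(heralded))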


There May Be a Way to Allow Mass Surveillance and Preserve Our Privacy at the Same Time
Quartz (01/16/16) Akshat Rathi

Mass surveillance without infringing on the privacy of law-abiding citizens could be feasible thanks to an algorithm developed by the University of Pennsylvania's Michael Kearns. For scenarios in which only a few connections exist between people or organizations under suspicion or investigation, the algorithm alerts investigators that taking action could lead to a violation of privacy for selected people. But in the event many people could serve as connections, the algorithm would permit the government to go forward and target and expose the suspect organization, because in this situation the chances of compromising select individuals' privacy are lower. Kearns' algorithm also would inject noise into social networks to protect the privacy of people innocently linked to targets, without obstructing investigators' attempts to identify potential malfeasance. Kearns says this would involve "mild, but important, departures from commonly used approaches." Kearns and his colleagues tested the algorithm's performance on large datasets, with the task of identifying random subgroups of actors listed in the Internet Movie Database and academic authors in a database of scientific papers, beginning with random members of these groups and working through observable connections. The algorithm designed to protect the privacy of individuals linked to target networks only by coincidence performed nearly as well as the one that took no account of who was identified and exposed in the search for suspects.
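
Injecting noise to shield bystanders is the hallmark of differential privacy, and a standard example in that style is adding Laplace noise to a sensitive count, as sketched below. This illustrates the general technique only and is not a reconstruction of Kearns' algorithm.

# Hedged sketch: a differentially private count of one person's links
# into a target group. Adding or removing a single link changes the true
# count by at most 1, so Laplace noise of scale 1/epsilon suffices.
import math
import random

def laplace(scale):
    """Sample Laplace(0, scale) by inverting its CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_link_count(links, targets, epsilon=0.5):
    true_count = sum(1 for person in links if person in targets)
    return true_count + laplace(1.0 / epsilon)

links = {"alice", "bob", "carol"}        # invented example data
targets = {"bob", "dave"}
print(round(private_link_count(links, targets), 2))  # noisy value near 1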


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]