Association for Computing Machinery
Welcome to the December 28, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE


The AI Anxiety
The Washington Post (12/27/15) Joel Achenbach

Some of the world's most prestigious scientists are concerned artificial intelligence (AI) and other technologies may be advancing toward a convergence that could exceed people's ability to keep them under control, giving rise to serious threats to humanity. Swedish philosopher Nick Bostrom is worried about a lack of sufficient safeguards to prevent scenarios such as the emergence of a superintelligent machine with the potential to harm people due to an absence of instilled human values. Massachusetts Institute of Technology (MIT) professor Max Tegmark also is pursuing safer AI, and he co-founded the Future of Life Institute with this goal in mind. Yet despite an attitude of fear toward machine intelligence--a fear chiefly promulgated by the media--most AI researchers think such anxieties are premature. "The progress has not been as steady as people say, and the machine skills are really far from being ready to match our skills," says professor Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory at MIT. Meanwhile, MIT's Boris Katz says a more worrying issue than malevolent, superintelligent machines is the empowerment of machines that are not very intelligent. He says when the rules governing their function "are not fully thought through...then sometimes the machine will act in the wrong way."


Australian Computer Scientist Makes an Algorithm That Predicts Elections
Australian Financial Review (12/23/15) Anne Hyland

University of Queensland (UQ) professor Xue Li says he has developed an opinion search engine that can predict elections with a high degree of accuracy by sifting through social media in many different languages for viewpoints on numerous topics. Li will employ the search engine to forecast the outcomes of Australia's 2016 federal election on a seat-by-seat basis. UQ uses Li's tool to help manage its brand: researchers mine and analyze the information it collects on why students and faculty prefer the school to rival universities, and on priority issues that can be fed back to UQ's executive. Li envisions many other uses for the opinion search engine, including a collaboration with the University of Technology Sydney to curb cyberbullying among students. He says big data applications will have a tremendous impact on our futures. "In my view, this is game-changing technology," Li notes. "This is the first time we can consider this kind of technology and deal with big human problems. What big data can do is to help us overcome the problem of not knowing what you don't know."
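Li's engine itself is not public; as a rough illustration of the seat-by-seat idea only, the sketch below tallies sentiment per electorate from social media posts that an upstream classifier has already scored. The electorates, parties, and scores are invented.

```python
# Minimal illustration (not Li's system): predict each seat by tallying the
# net sentiment of posts that mention a party in that electorate.
from collections import Counter, defaultdict

# Hypothetical input: (electorate, party, sentiment) triples produced by an
# upstream multilingual sentiment classifier.
posts = [
    ("Brisbane", "PartyA", +1),
    ("Brisbane", "PartyB", -1),
    ("Brisbane", "PartyA", +1),
    ("Sydney",   "PartyB", +1),
]

def predict_seats(posts):
    scores = defaultdict(Counter)
    for seat, party, sentiment in posts:
        scores[seat][party] += sentiment
    # The party with the highest net sentiment is the predicted winner per seat.
    return {seat: tally.most_common(1)[0][0] for seat, tally in scores.items()}

print(predict_seats(posts))  # {'Brisbane': 'PartyA', 'Sydney': 'PartyB'}
```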


Co-Robots: Taiwanese Style
EE Times (12/23/15) R. Colin Johnson

All robots should be co-robots, or robots that work safely alongside humans without posing any danger to people, according to the International Center of Excellence on Intelligent Robotics and Automation Research (iCeiRA Lab) in Taiwan. Moreover, researchers at the iCeiRA Lab believe co-robots can be made stronger than humans and still work safely beside them; they note that in the U.S., co-robots are intentionally made weak, or "conformable." The iCeiRA Lab and its collaborators say the concept is not new, but argue it should be an essential part of every robot made. The lab has built dozens of models, many of which have won first-place prizes in international competitions. The approach to safety in Taiwan combines a three-dimensional (3D) vision system, sensors, and seven-degree-of-freedom modular robotic platforms. 3D vision enables robots to keep constant track of the precise proximity of humans, while modularization of components enables them to be quickly reconfigured to perform tasks alongside humans without getting in their way. "We believe this concept of a co-robot is essential for every robot--no robot should have to be surrounded by a cage to keep it from hurting people," says iCeiRA Lab director Ren Luo.
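As a loose illustration of how 3D vision can replace a safety cage (this is not iCeiRA's software, and the distance thresholds are assumptions), the sketch below scales a robot's speed by how close the nearest detected person appears in a depth image.

```python
# Illustrative sketch only: slow the robot as a detected person gets closer,
# instead of caging the robot or making it weak.
import numpy as np

def safe_speed(depth_m, person_mask, max_speed=1.0, stop_dist=0.3, slow_dist=1.5):
    """depth_m: HxW array of distances in meters; person_mask: HxW boolean."""
    if not person_mask.any():
        return max_speed                      # no person in view: full speed
    nearest = float(depth_m[person_mask].min())
    if nearest <= stop_dist:
        return 0.0                            # person too close: stop
    if nearest >= slow_dist:
        return max_speed
    # Linear ramp between stop_dist and slow_dist.
    return max_speed * (nearest - stop_dist) / (slow_dist - stop_dist)

depth = np.full((4, 4), 2.0)
mask = np.zeros((4, 4), dtype=bool)
mask[1, 1] = True
depth[1, 1] = 0.9                             # person detected at 0.9 m
print(round(safe_speed(depth, mask), 2))      # 0.5
```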


Funding to Help Southampton Students to Protect the U.K. Against Cyberattacks
University of Southampton (United Kingdom) (12/22/15)

The U.K. is providing about $750,000 of funding for cybersecurity education. Eight institutions will share the money and work with the government to develop projects to help improve the cybersecurity skills of graduates. The projects are designed to aid Britain in its efforts to address a shortage of cybersecurity skills and future-proof its information technology sector by making it more resilient to cyberattacks. The University of Southampton will receive more than $75,000 for a project entitled Enhancing Campus Cyber Security Through Constructivist Student Learning. The school will investigate how universities can benefit from collaboration between outside cybersecurity experts and their own multi-disciplinary staff and students. "We will analyze how industrial cybersecurity best practices can be translated to more open campus environments, where, for example, lecturers commonly use their own preferred devices and services, to produce learning materials and improved institutional practices," says project leader Federica Paci. "Another perspective will explore how the student learning experience and the university's security posture can be enhanced through activities including supervised penetration tests of university systems and establishing an appropriate responsible disclosure policy." The recently launched project will be based at Southampton's new Cyber Security Academy.


What Happens When Facial Recognition Tools Are Available to Everyone
The Washington Post (12/23/15) Dominic Basulto

Facial-recognition software is currently limited to matching simple photos due to its computationally intensive nature, but artificial intelligence researchers at Carnegie Mellon University's (CMU) Human Sensing Laboratory will start releasing their advanced facial image analysis software to fellow scientists beginning in February. The researchers say IntraFace software has sufficient speed and efficiency to be installed on a smartphone, and this could lead to significant new opportunities for automated facial expression analysis. Free demos of the IntraFace smartphone apps show how the software can identify facial features and emotions. For example, Duke University medical researchers are using IntraFace to study facial expressions for signs of autism. Another potential use for the software is for automobiles that are able to recognize when the driver is distracted. A commercialized version also could enable companies to assess people's perception of their products by reading their faces. CMU professor Fernando De la Torre and colleagues trained IntraFace to identify and track facial features using machine-learning methods, and then produced an algorithm that can personalize this generalized comprehension of the face for individuals, facilitating expression analysis.


Algorithm Helps Turn Smartphones Into 3D Scanners
News from Brown (12/23/15) Kevin Stacey

Brown University researchers have developed an algorithm that helps convert smartphones and commercially available digital cameras into structured light 3D scanners, which they say could help make high-quality 3D scanning more affordable and available. With funding from the U.S. National Science Foundation, students led by professor Gabriel Taubin devised the algorithm to execute a structured light method without synchronization between projector and camera, enabling an off-the-shelf camera to be used with an untethered structured light flash. The camera only has to capture uncompressed images in burst mode, which many digital cameras and smartphones can do. "The main problem we're dealing with [is]...we can't use an image that has a mixture of patterns," notes Brown University graduate student Daniel Moreno. "So with the algorithm, we can synthesize images--one for every pattern projected--as if we had a system in which the pattern and image capture were synchronized." Once the camera captures a burst of images, the algorithm calibrates the timing of the image sequence using the binary information within the projected pattern. It then goes through the images to construct a new sequence that captures each pattern in its entirety. After the complete pattern images are assembled, a standard structured light 3D reconstruction algorithm can be employed to generate a single 3D image of the object or space.
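The Brown algorithm recovers timing from the binary structure of the projected patterns themselves; as a rough sketch of the general idea under a simpler assumption (the projector holds each of P patterns for exactly a fixed number of camera frames, and only the phase offset is unknown), one can search for the offset that makes frames assigned to the same pattern look most alike.

```python
# Rough sketch, not the published method. Assumes the burst contains at least
# P*hold frames and each pattern is held for exactly `hold` camera frames.
import numpy as np

def unmix_burst(frames, P, hold):
    """frames: list of HxW float arrays captured in burst mode."""
    frames = np.stack(frames)
    n = len(frames)
    best_offset, best_cost = 0, np.inf
    for offset in range(P * hold):
        labels = ((np.arange(n) + offset) // hold) % P
        # Frames showing the same pattern should look alike, so score each
        # candidate offset by the total within-pattern variance.
        cost = sum(frames[labels == p].var(axis=0).sum() for p in range(P))
        if cost < best_cost:
            best_offset, best_cost = offset, cost
    labels = ((np.arange(n) + best_offset) // hold) % P
    # The median over frames assigned to the same pattern suppresses frames
    # that straddle a pattern transition.
    return [np.median(frames[labels == p], axis=0) for p in range(P)]
```

A standard structured light decoder can then be run on the returned per-pattern images, as the article describes.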


A Master Algorithm Lets Robots Teach Themselves to Perform Complex Tasks
Technology Review (12/21/15) Will Knight

University of California, Berkeley postdoctoral fellow Igor Mordatch has developed a learning algorithm that enables a robot to determine how to meet an end goal by itself, using software that simulates robots. Mordatch says this virtual model has some knowledge of how to make contact with objects or with the ground, and the algorithm uses these guidelines to look for the most efficient way to achieve a goal. "The only thing we say is 'This is the goal, and the way to achieve the goal is to try to minimize effort,'" he notes. "[The motion] then comes out of these two principles." This year Mordatch developed a method for robots to perform repetitive behaviors that include walking, running, swimming, and flying. A simulated neural network is taught to govern the robot using data about its body, the physical environment, and the goal of moving in a specific direction. The neural network generates natural-seeming locomotion in virtual humanoid machines and flapping motions in winged robots. When an operator tells the robot where to go, its neural network makes the proper adjustments to its means of locomotion.
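As a toy version of the "reach the goal while minimizing effort" principle (not Mordatch's optimizer, and with made-up weights and dimensions), the sketch below searches for a one-dimensional acceleration sequence that moves a point mass to a goal with the least total control effort.

```python
# Toy illustration: pick T accelerations that drive a point mass from rest at
# x=0 to x=1 while minimizing total squared effort.
import numpy as np
from scipy.optimize import minimize

T, dt, goal = 20, 0.1, 1.0

def rollout(u):
    x, v = 0.0, 0.0
    for a in u:
        v += a * dt
        x += v * dt
    return x, v

def cost(u):
    x, v = rollout(u)
    effort = np.sum(np.square(u))
    # Penalize missing the goal and arriving with leftover velocity.
    return effort + 100.0 * ((x - goal) ** 2 + v ** 2)

res = minimize(cost, np.zeros(T), method="L-BFGS-B")
print("final position:", round(rollout(res.x)[0], 3))   # close to 1.0
```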


Can the U.S. Push the World to Accept Cyber Norms?
Federal Computer Week (12/21/15) Zach Noble

Global consensus on cyber norms--standards of responsible online behavior that countries should aspire to--is highly desirable but a long way off, according to advocates. Georgetown University professor Catherine Lotrionte speculates it will be years, perhaps decades, before a more holistic worldwide cyber agreement is reached. "We're not going to have a universal treaty signed on all activities in cyberspace," she says. Meanwhile, New America Cybersecurity Initiative co-director Ian Wallace says the U.S. may not necessarily be the best country to lead the push for cyber norms within negotiations. He and other experts think small nations, such as the Netherlands or Estonia, might be better suited to leading discussions in the United Nations. Lotrionte notes ultimately, "international law develops based on state practice." She also says with that goal in mind, countries have to clarify their positions on malware development, the use of force, and thwarting criminal hackers within their own borders. Lotrionte warns the failure of "good" nations to establish explicit cyber norms through practice will only encourage bad actors to spearhead their formation via their own practices.


HTTP Error Code 451 Will Signal Online Censorship
Help Net Security (12/22/15) Zeljka Zorz

A new, official HTTP error code is set to be introduced to denote instances in which governments restrict access to specific websites. The Internet Engineering Steering Group has approved the publication of a draft of a future standard that designates the 451 HTTP status code as an indication the server is denying access to a resource as the result of a legal demand. The code number, 451, is a reference to Ray Bradbury's dystopian novel "Fahrenheit 451," which deals with the subject of government censorship. The new code is an effort to make government censorship more transparent to those being subjected to it. "By its nature, you can't guarantee that all attempts to censor content will be conveniently labeled by the censor," says Mark Nottingham, chair of the Internet Engineering Task Force's HTTP Working Group. The group had not initially intended to issue a specific code for government censorship, but decided to go forward after several websites began adopting the 451 code on their own and members of the community voiced their support for the move. Nottingham says many jurisdictions likely will disallow use of the new code, but decisions to do so will "send a strong message to you as a citizen about what their intent is."
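For a sense of what serving the new code looks like in practice, here is a minimal sketch using a Flask-style handler; the notice URL is made up, and the Link header with the "blocked-by" relation follows the draft specification.

```python
# Minimal sketch: return HTTP 451 with a pointer to the blocking authority.
from flask import Flask

app = Flask(__name__)

@app.route("/restricted-page")
def restricted_page():
    body = "Unavailable for legal reasons."
    # Hypothetical notice URL; the "blocked-by" link relation comes from the draft.
    headers = {"Link": '<https://authority.example/notice>; rel="blocked-by"'}
    return body, 451, headers

if __name__ == "__main__":
    app.run()
```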


Robots Learn by Watching How-to Videos
Cornell Chronicle (12/18/15) Bill Steele

Cornell University researchers are teaching robots to watch instructional videos so they can learn how to perform tasks via a series of step-by-step instructions. The "RoboWatch" project uses the wealth of videos posted on YouTube as its source material: a computer scans multiple videos on the same task to derive their common structures and elements, which are reinterpreted as step-by-step, natural-language instructions. RoboWatch's video-parsing method is unsupervised, with no need for a person to explain to the robot what it is observing, according to Cornell graduate student Ozan Sener. When confronted with an unfamiliar task, the robot's computer brain sends a query to YouTube to find a collection of how-to videos on the subject. The algorithm includes routines to exclude "outliers," videos that fit the keywords but are not instructional. Scanning the videos frame by frame, the computer notes objects that appear often, and it also looks for frequently repeated words in each video's subtitled narration. The computer uses these markers to match similar segments across the various videos and organizes them into a single sequence. The researchers note the system can generate written instructions from the sequence's subtitles.
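As a highly simplified illustration of one piece of this idea (not the RoboWatch pipeline itself), the sketch below surfaces candidate step keywords as words that recur across most of a set of how-to video subtitles; the subtitles and stopword list are invented.

```python
# Simplified illustration: words repeated across several videos on the same
# task are likely tied to the task's common steps.
from collections import Counter

subtitles = [
    "pour the water into the kettle then boil the water and add the coffee",
    "first boil water in a kettle and pour it over the ground coffee",
    "boil some water pour it slowly and stir the coffee",
]
stopwords = {"the", "a", "and", "then", "it", "in", "into", "some", "over", "first"}

def candidate_steps(subtitles, min_videos=2):
    counts = Counter()
    for text in subtitles:
        # Count each word at most once per video.
        counts.update({w for w in text.split() if w not in stopwords})
    return [w for w, c in counts.most_common() if c >= min_videos]

print(candidate_steps(subtitles))  # e.g. ['boil', 'water', 'pour', 'coffee', 'kettle']
```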


Wi-Fi Signals Can Be Exploited to Detect Attackers
Lancaster University (12/18/15)

Researchers at Lancaster University, the University of Oxford, and the Technical University of Darmstadt have developed a method of detecting physical attacks on wireless networks by monitoring Wi-Fi signals at multiple receivers. Their algorithm looks for changes in the pattern of the wireless signals, known as Channel State Information, that can indicate efforts to tamper with the network or with devices connected to it. The researchers say the algorithm can differentiate interference caused by potential attacks from that caused by natural changes in the environment, such as people walking through the path of the Wi-Fi signals. "A large number of Internet of Things systems are using Wi-Fi and many of these require a high level of security," says Lancaster professor Utz Roedig. "This technique gives us a new way to introduce an additional layer of defense into our communication systems. Given that we use these systems around critically important infrastructure, this additional protection is vital." The research was presented at the Annual Computer Security Applications Conference (ACSAC) in Los Angeles earlier this month.
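The published algorithm is more sophisticated, but the following sketch conveys the basic notion of comparing Channel State Information (CSI) against a calibration baseline and flagging deviations well beyond normal fluctuation; the data and threshold are invented for illustration.

```python
# Illustrative sketch only: flag CSI snapshots that drift much further from the
# calibration baseline than the variation seen during calibration.
import numpy as np

def build_baseline(calib_csi):
    """calib_csi: N x S array of CSI magnitudes (N snapshots, S subcarriers)."""
    mean = calib_csi.mean(axis=0)
    dists = np.linalg.norm(calib_csi - mean, axis=1)
    return mean, dists.mean(), dists.std()

def is_suspicious(snapshot, mean, d_mean, d_std, k=4.0):
    # Ordinary environmental changes (e.g., people walking through the signal
    # path) stay close to calibration behavior; large deviations are flagged.
    return np.linalg.norm(snapshot - mean) > d_mean + k * d_std

rng = np.random.default_rng(0)
calibration = rng.normal(1.0, 0.05, size=(200, 30))    # invented CSI magnitudes
mean, d_mean, d_std = build_baseline(calibration)
print(is_suspicious(rng.normal(1.0, 0.05, size=30), mean, d_mean, d_std))  # expected: False
print(is_suspicious(np.full(30, 2.0), mean, d_mean, d_std))                # True
```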


How Do Robots 'See' the World?
The Conversation (12/22/15) Jonathan Roberts

The mass proliferation of robots into all walks of life is partly constrained by the limitations of their ability to actually see the world, writes Queensland University of Technology professor Jonathan Roberts. He says robot vision starts with a video camera to capture a constant stream of images, which is then fed to a computer. Algorithms then look for and track interesting features, and software is designed to recognize patterns to help the robot comprehend its surroundings. "In essence, the robots are being programmed by a human to see things that a human thinks the robot is going to need to see," Roberts notes. "There have been many successful examples of this type of robot-vision system, but practically no robot that you find today is capable of navigating in the world using vision alone." Roberts says in recent years a new robot vision research community has emerged, developing systems that learn to see using designs modeled after animals' vision systems. Complementing the machine-learning process is the growing ability of robots to engage in a distributed "hive mind" mentality so they can share their knowledge and experience. Roberts believes the first likely applications of seeing robots will be in industries with labor shortfalls, or tasks that are too dangerous or unappealing for people.
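The "detect and track interesting features" step Roberts describes can be demonstrated with a few lines of off-the-shelf computer vision code; the sketch below assumes OpenCV and a camera at index 0, and is not tied to any particular robot.

```python
# Minimal example of the classic pipeline: grab frames from a camera, detect
# "interesting" corner features, and track them frame to frame with optical flow.
import cv2

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
if not ok:
    raise SystemExit("no camera found at index 0")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok or points is None or len(points) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Lucas-Kanade optical flow follows each feature into the new frame.
    points, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    points = points[status.flatten() == 1].reshape(-1, 1, 2)
    for x, y in points.reshape(-1, 2):
        cv2.circle(frame, (int(x), int(y)), 3, (0, 255, 0), -1)
    cv2.imshow("tracked features", frame)
    if cv2.waitKey(1) == 27:          # press Esc to quit
        break
    prev_gray = gray

cap.release()
cv2.destroyAllWindows()
```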


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe