Association for Computing Machinery
Welcome to the February 17, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.

HEADLINES AT A GLANCE


Creating a Computer Voice That People Like
The New York Times (02/14/16) John Markoff

Creating a computerized voice that is indistinguishable from a human one for anything longer than short phrases remains an elusive goal, given the challenges of successfully simulating prosody and pronunciation, to name two areas. The best techniques for natural-sounding synthesized speech start with a human voice that is used to generate a database of components and sub-components of speech spoken in many different ways. "The problem is we don't have good controls over how we say to these synthesizers, 'Say this with feeling,'" notes Carnegie Mellon University professor Alan Black. Designers of programs intended to collaborate with people or serve as companions often say they do not want to trick users into believing they are talking to a human, but they still want to support a human-like relationship between user and machine. Improving speech technology will lead to new, powerful, and possibly alarming uses. Israeli software company Imperson, which develops conversational characters for entertainment, is now weighing a move into politics. Imperson thinks that during a campaign, a politician could use an avatar on a social media platform to engage voters. "People will understand, and there will be no uncanny-valley problem," says Imperson co-founder Eyal Pfeifel.
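The "database of components of speech" approach described above is known as concatenative synthesis: for each target sound, the system picks the recorded unit that best matches the desired delivery. The sketch below is a toy illustration, not any production system; the unit database and the single "pitch" score are invented stand-ins (real synthesizers weigh spectral match, join costs, duration, and much more).

```python
# Toy unit-selection sketch: for each (phone, desired_pitch) target, choose
# the stored recording of that phone whose pitch is closest to what is wanted.
def select_units(targets, database):
    """Return the ids of the best-matching stored units, one per target."""
    plan = []
    for phone, desired_pitch in targets:
        candidates = [u for u in database if u["phone"] == phone]
        best = min(candidates, key=lambda u: abs(u["pitch"] - desired_pitch))
        plan.append(best["id"])
    return plan

# Hypothetical database: several recordings of each phone at different pitches.
database = [
    {"id": "ah_low",  "phone": "ah", "pitch": 100},
    {"id": "ah_high", "phone": "ah", "pitch": 180},
    {"id": "oh_mid",  "phone": "oh", "pitch": 140},
]
print(select_units([("ah", 170), ("oh", 150)], database))  # → ['ah_high', 'oh_mid']
```

The difficulty Black points to lives in the gap this toy glosses over: there is no reliable way to map an instruction like "say this with feeling" onto the selection scores.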


The Best AI Still Flunks 8th Grade Science
Wired (02/16/16) Cade Metz

University of Washington professor Oren Etzioni and the Allen Institute for Artificial Intelligence recently hosted a contest to see if artificial-intelligence (AI) systems built by about 800 research teams could pass an eighth-grade science test, only to find the best-performing programs answered just 60 percent of the questions correctly. The contest sought to assess natural-language processing, an area gaining interest with the emergence of deep neural nets. The contest's outcomes reflect AI technology's general inability to think like humans. Despite the machines' use of cutting-edge methods, they were unable to pass the test, which presented the questions in a multiple-choice format. "Natural-language processing, reasoning, picking up a science textbook and understanding--this presents a host of more difficult challenges," Etzioni says. "To get these questions right requires a lot more reasoning." Etzioni is unsure whether technology giants such as Google, with top researchers working on the challenge, would fare any better in the contest. "In most competitions, I think the winning models are very specific to the test dataset, so even companies that work in the same domain don't necessarily have a significant advantage," says Israeli researcher Chaim Linhart.


Robot Art Raises Questions About Human Creativity
Technology Review (02/15/16) Martin Gayford

Art created by machines raises unanswered questions about its potential and whether it can truly be defined as creative or imaginative, and one example of this technology is The Painting Fool computer program. Its creator, Goldsmiths College professor Simon Colton, suggests programs must pass something different from the Turing test to be designated creative, by exhibiting behavior that expresses skill, appreciation, and imagination. The Painting Fool can execute pictures of subjects in different moods, responding to emotional newspaper articles and reflecting the cumulative mood it tallies up. Meanwhile, artificial-intelligence systems developed by Google's Brain AI researchers employ a neural net to take abstract images and modify them so they manifest a resemblance to objects the software has been trained to recognize. Another development is University of California, San Diego professor Harold Cohen's collaboration with his autonomous painting program, AARON. AARON operates by Cohen's principle that "making art [does not] have to require ongoing, minute-by-minute decision-making...that it should be possible to devise a set of rules and then, almost without thinking, make the painting by following the rule." Cohen thinks AARON exercises creativity, as "with no further input from me, it can generate unlimited numbers of images, it's a much better colorist than I ever was myself, and it typically does it all while I'm tucked up in bed."


Eye-Opening Optical Research Projects That Could Supercharge the Internet
Network World (02/15/16) John Edwards

Several research teams around the world are independently investigating optical technologies that have the potential to make networks--and the Internet--faster and more efficient. "Extending reach and/or capacity of the fiber-optic network is essential to accommodate growth," says Nikola Alic, a photonics researcher at the Qualcomm Institute at the University of California, San Diego. Alic's research team recently developed a method of increasing the maximum power at which optical signals can be sent through optical fibers, thereby lengthening the maximum distance the signals can travel. The approach could boost the data transmission rates of the fiber-optic cables used by the Internet and other types of networks. Université Laval professor Wei Shi and colleagues have developed a new type of tunable filter with the aim of integrating the device into a photonic chip. Tunable filters are key components in high-capacity optical networks. Meanwhile, Radan Slavik from the University of Southampton's Optoelectronics Research Center and colleagues say they have developed technology that could replace the costly and power-inefficient external modulators used to generate modulation format signals.


Automatic Contingency Planning
MIT News (02/15/16) Larry Hardesty

Researchers at the Massachusetts Institute of Technology (MIT) and Australian National University (ANU) have developed a planning algorithm that also generates contingency plans, in case the initial plan proves too risky. The algorithm also identifies the conditions that should trigger a switch to a particular contingency plan, and it provides mathematical guarantees that its plans' risk of failure falls below some threshold, which the user sets. The range of possible decisions a planner faces can be represented as a graph, consisting of nodes, which are represented as circles, and edges, represented as line segments connecting the nodes. In a planning system, each node of the graph represents a decision point, and a path through the graph can be evaluated according to the rewards it offers and the penalties it imposes. The optimal plan is the one that maximizes reward, and taking probabilities into account makes that type of reward calculation much more complex. Therefore, for even a relatively simple planning task, canvassing contingency plans can be prohibitively time-consuming. As part of the new MIT/ANU technique, the planner must ask the user to set risk thresholds before starting to construct the graph. The algorithm treats these thresholds as a "risk budget," which it spends as it explores paths through the graph.
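The "risk budget" idea described above can be illustrated with a deliberately tiny sketch: enumerate paths through a decision graph, discard any path whose accumulated failure probability exceeds the user-set threshold, and keep the highest-reward survivor. The graph, rewards, and probabilities below are invented, and brute-force enumeration is exactly what the MIT/ANU algorithm is designed to avoid on realistic problems.

```python
# Toy "risk budget" planner: search a small decision graph, pruning any
# partial plan whose probability of failure exceeds the user-set threshold,
# and return the feasible plan with the highest total reward.
def best_plan(graph, start, goal, risk_budget):
    """Return (reward, path) maximizing reward with failure prob <= budget."""
    best = None
    stack = [(start, 0.0, 0.0, [start])]  # node, reward, fail_prob, path
    while stack:
        node, reward, fail, path = stack.pop()
        if node == goal:
            if best is None or reward > best[0]:
                best = (reward, path)
            continue
        for nxt, edge_reward, edge_fail in graph.get(node, []):
            # Probability that at least one step in the plan fails.
            new_fail = 1.0 - (1.0 - fail) * (1.0 - edge_fail)
            if new_fail <= risk_budget:  # spend only within the risk budget
                stack.append((nxt, reward + edge_reward, new_fail, path + [nxt]))
    return best

# Hypothetical graph: node -> [(next_node, reward, failure_probability), ...]
graph = {
    "start": [("a", 5, 0.05), ("b", 8, 0.20)],
    "a": [("goal", 3, 0.05)],
    "b": [("goal", 4, 0.15)],
}

print(best_plan(graph, "start", "goal", risk_budget=0.15))
# → (8, ['start', 'a', 'goal']): the riskier, higher-reward route via "b"
# exceeds the budget and is pruned.
```

Loosening the budget changes the answer: with `risk_budget=0.5`, the route via "b" becomes feasible and wins on reward, which is the trade-off the user's threshold is meant to control.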


UMD-Led Team First to Solve Well-Known Game Theory Scenario
UMD Right Now (02/16/16) Matthew Wright

Researchers at the University of Maryland (UMD), Stanford University, and Microsoft Research say they have solved the "Colonel Blotto" game theory scenario, which has been used to analyze the potential outcomes of elections and other two-party conflicts. "As long as we have sufficient data on a given scenario, we can use our algorithm to find the best strategy for a wide variety of leaders, such as political candidates, sports teams, companies, and military leaders," says UMD professor Mohammad Hajiaghayi. Colonel Blotto pits two competitors against one another and requires each to make difficult decisions on how to deploy limited resources, and the new algorithm shows that such strategic behavior is computationally tractable. "Given a description of the competition, we can determine which strategies will maximize the outcomes for a given player," Hajiaghayi says. The algorithm represents an equilibrium in which both players have deployed the best strategy they possibly can in relation to their opponent's strategy. One of the major hurdles to finding a computational solution to the Colonel Blotto game was the large variety of possible strategies the players could employ. The researchers overcame this issue by limiting the total number of possible strategies to a smaller number of representative choices.
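A minimal Colonel Blotto instance makes the scale problem concrete: each colonel splits a fixed number of troops across battlefields, the larger allocation wins each battlefield, and the payoff is battlefields won minus battlefields lost. The brute-force best-response search below (my own toy formulation, not the UMD/Stanford/Microsoft algorithm) is only feasible because the instance is tiny; the number of pure strategies explodes combinatorially, which is precisely the hurdle the researchers' representative-strategy reduction addresses.

```python
# Toy Colonel Blotto: enumerate all troop splits ("stars and bars"), score a
# matchup battlefield by battlefield, and brute-force a best response.
from itertools import combinations

def allocations(troops, fields):
    """Yield every way to split `troops` across `fields` battlefields."""
    for cuts in combinations(range(troops + fields - 1), fields - 1):
        alloc, prev = [], -1
        for c in cuts:
            alloc.append(c - prev - 1)
            prev = c
        alloc.append(troops + fields - 2 - prev)
        yield tuple(alloc)

def payoff(a, b):
    """Battlefields won by player A minus those won by player B."""
    return sum((x > y) - (x < y) for x, y in zip(a, b))

def best_response(opponent, troops, fields):
    """Pure strategy maximizing payoff against a fixed opponent allocation."""
    return max(allocations(troops, fields), key=lambda a: payoff(a, opponent))

print(best_response((2, 1, 0), troops=3, fields=3))  # → (0, 2, 1)
```

Even this 3-troop, 3-battlefield game has 10 pure strategies per player; real applications involve vastly larger strategy spaces, and an equilibrium also requires randomizing over strategies rather than picking one fixed best response.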


Increasing Number of Women in Computing Hinges on Changes in Culture, Not Curriculum
CMU News (02/15/16) Byron Spice

Carnegie Mellon University (CMU) researchers Carol Frieze and Jeria Quesenberry recently published a book outlining how the number of women who pursue computer science will only increase if the culture of computer science departments changes. They note a cultural makeover at CMU's School of Computer Science is one of the reasons the school consistently attracts and graduates a higher percentage of female computer science students than the national average. "Here at CMU, in a more balanced environment, we've not seen the familiar, simplistic gender divide in computer science," Frieze says. "Rather we've found men and women relate to computer science through a spectrum of attitudes and with more similarities than differences." She and Quesenberry stress it is important to not marginalize women, and to make sure they are integrated into the school so they receive the same opportunities, visibility, and networking that have worked well for most men. CMU's approach began to change in 1999, when Lenore Blum joined the computer science faculty and formed Women@SCS, a faculty/student organization designed to connect women across the departments within the school. Although CMU's approach may not work for every computer science program, Frieze and Quesenberry hope their book will provide insights that help other programs become more inclusive.


Why Sarcasm Is Such a Problem in Artificial Intelligence
The Stack (UK) (02/11/16) Martin Anderson

Researchers in India and at Australia's Monash University recently published a paper surveying 10 years of efforts by groups interested in detecting sarcasm in online sources. The need to accurately identify sarcasm in online sources applies both to artificial intelligence (AI) used to assess archive material or interpret existing datasets, and to the field of sentiment analysis, in which a neural network seeks to interpret data based on publicly posted Web material. Researchers have struggled to quantify sarcasm because it may not be a discrete property in itself, but rather part of a wider range of data-distorting humor. As a result, the researchers say sarcasm may need to be treated as a subset of that humor in order to be identified programmatically. They note most of the research projects that have addressed the problem of sarcasm as a hindrance to machine comprehension have studied the problem as it relates to the English and Chinese languages. However, some work also has been done in identifying sarcasm in Italian-language and Dutch tweets. The new paper details the ways academia has approached the sarcasm problem over the last decade, and concludes the solution to the problem is a sophisticated matrix that has some ability to understand context.
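One signal that recurs in the sarcasm-detection literature is sentiment incongruity: positive wording colliding with a negative situation ("I love being ignored"). The sketch below is a deliberately crude illustration of that single feature; the word lists are toy stand-ins for real sentiment lexicons, and actual systems combine many such features with contextual models rather than relying on keyword overlap.

```python
# Crude sentiment-incongruity check: flag text that mixes positive wording
# with a negative situation, one weak signal of possible sarcasm.
POSITIVE = {"love", "great", "wonderful", "fantastic"}
NEGATIVE = {"ignored", "delayed", "cancelled", "broken"}

def incongruity(text):
    """True if the text contains both a positive and a negative cue word."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & POSITIVE) and bool(words & NEGATIVE)

print(incongruity("I just love it when my flight is cancelled!"))  # → True
print(incongruity("My flight is cancelled."))                      # → False
```

The second example shows why this is only a weak signal: genuine complaints, quoted speech, and negation all defeat a lexical check, which is why the survey concludes that real progress requires models with some grasp of context.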


Data Analysis of GitHub Contributions Reveals Unexpected Gender Bias
Ars Technica (02/11/16) Annalee Newitz

A rigorous analysis of millions of GitHub pull requests for open source projects found women's contributions were accepted more frequently than men's, but only if they were associated with gender-neutral profiles. However, women whose GitHub profiles indicated their genders had a much harder time. The researchers note they "augmented this GHTorrent data by mining GitHub's Web pages for information about each pull request status, description, and comments," but the lack of gender information in GitHub profiles was a challenge. The researchers met this challenge and determined the genders of more than 1.4 million users by linking their email addresses with Google+ profiles that list gender. They originally expected women's GitHub contributions to be accepted with less frequency, but the reverse trend was observed when they examined the "merge rate" of women's contributions. The researchers found 78.6 percent of women's pull requests were accepted and merged into the code, versus only 74.4 percent of men's pull requests. They also found contributions from unknown women were accepted less often than contributions from unknown men, leading to the theory that some sort of social bias is at work.
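The core statistic in the study is the "merge rate": the fraction of pull requests per group that were accepted. A minimal sketch of that computation, with made-up records sized to mirror the reported 78.6 percent versus 74.4 percent gap (the real study analyzed millions of pull requests):

```python
# Group pull requests by the contributor's (inferred) gender and compute the
# fraction accepted -- the "merge rate" comparison described above.
from collections import defaultdict

def merge_rates(pull_requests):
    """Map each group label to its fraction of accepted pull requests."""
    totals = defaultdict(lambda: [0, 0])  # group -> [accepted, total]
    for group, accepted in pull_requests:
        totals[group][0] += accepted
        totals[group][1] += 1
    return {g: accepted / total for g, (accepted, total) in totals.items()}

# Invented sample: 1,000 pull requests per group with rates near the study's.
sample = [("women", True)] * 786 + [("women", False)] * 214 \
       + [("men", True)] * 744 + [("men", False)] * 256
print(merge_rates(sample))  # → {'women': 0.786, 'men': 0.744}
```

The study's harder step, inferring gender at all, happened upstream of this computation, via the Google+ profile linkage the summary describes.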


My Robot Valentine: Could You Fall in Love With a Robot?
The Conversation (02/10/16) Kate Letheren; Jonathan Roberts

In the future, Valentine's Day for some could involve a romantic dinner with a robot, speculate Queensland University of Technology researchers Kate Letheren and Jonathan Roberts. They say machines could develop into something more sophisticated and more human-like, which could lead to people seeing them as potential romantic partners. Letheren and Roberts note recent opinion suggests people might even fall in love with their robot companions. They observe it is already normal to love and welcome pets as family members, and recent studies show people feel a similar amount of empathy for robot pain as they do for human pain. However, Letheren and Roberts say if scientists are to develop robots that can mirror and express their digital love for humans, a definition of love will need to be established. They also point out society may have a difficult time accepting human-robot relationships, and digital love may have a harmful effect on human relationships. Scientists will need to consider "whether robots should be programmed to have consciousness and real emotions so they can truly love us back," say Letheren and Roberts.


ScaAnalyzer: An Award-Winning Tool to Find Computing Bottlenecks
College of William & Mary (02/12/16) Joseph McClain

College of William & Mary researchers have developed ScaAnalyzer, a new tool they say could have considerable value to the supercomputing community. ScaAnalyzer can find elusive bugs in software and enable computers to run faster and more efficiently. The tool is designed to address scalability problems, which can prevent applications from expanding and taking advantage of the increased computing potential of a multi-core system. The researchers say the tool takes aim at a computer's memory subsystem and can help pinpoint trouble areas in both software and hardware. "The hardware designers can design different memory layers. A layer might have different features like size, speed, bandwidth," says William & Mary computer scientist Xu Liu, who developed the tool with Bo Wu, who is now a member of the faculty at the Colorado School of Mines. "We can give this feedback to the hardware vendors, and tell them, 'Maybe you should focus on this memory layer.'" Liu and Wu's paper on the tool was named Best Paper at the Supercomputing '15 conference. They plan to make ScaAnalyzer available for free as an open source utility.


Predicting Box Office Boffo or Bomb
Iowa Now (02/10/16) Tom Snee

University of Iowa (UI) researchers have developed an analytical system that predicts the probability of a movie's profitability at the box office. They identified several factors that contribute to a movie's success, including the people involved in making the film, the plot and genre, and when the film was released. The system uses a machine-learning, data-based algorithm to analyze those factors and determine the probability of a film earning a profit of at least $7.3 million, which the researchers considered a reasonable return on investment. They trained and tested the algorithm on the 2,506 movies released in the U.S. between 2000 and 2010, finding only 36 percent of them made money, while box office receipts had little correlation to profitability. For example, if a movie has several big stars, the film is likely to sell a lot of tickets. However, those stars cost so much to hire that it reduces the likelihood the movie will be profitable. "It's easier to predict the box office receipts if you have star power, but that doesn't help in predicting profitability because the actors charge such a hefty fee upfront that it reduces the profit," says UI professor Kang Zhao.
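The summary's key distinction is that the prediction target is a profit threshold, not raw receipts. A minimal sketch of how that binary label could be framed (the film records and the simple receipts-minus-budget profit definition are invented for illustration; the actual UI model and its features are not reproduced here):

```python
# Label each film as "profitable" only if it clears the study's $7.3 million
# threshold -- big box office receipts alone do not qualify.
PROFIT_THRESHOLD = 7.3e6

def label(film):
    """1 if the film clears the profit threshold, else 0."""
    profit = film["box_office"] - film["budget"]
    return 1 if profit >= PROFIT_THRESHOLD else 0

films = [  # invented examples
    {"title": "Star Vehicle", "box_office": 120e6, "budget": 115e6},
    {"title": "Sleeper Hit", "box_office": 30e6, "budget": 5e6},
]
labels = [label(f) for f in films]
print(labels)  # → [0, 1]
```

The first invented film illustrates Zhao's point exactly: large receipts, but the expensive cast leaves it short of the profit threshold, while the cheap film clears it easily.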


Shaping Tomorrow's Smart Machines: Q&A With Bioethicist Wendell Wallach
Yale News (02/15/16) Jim Shelton

In an interview, Yale University's Wendell Wallach says more consideration must be given to machines and morality as artificial intelligence (AI) continues its penetration into all aspects of society. "[Machines] making explicit moral judgments in many different contexts depends upon a clear and full understanding of the situation at hand," Wallach notes. He says realizing this entails giving AI consciousness and other capabilities, which scientists do not know how to instill. Wallach also observes smart machines are still highly primitive at exhibiting the same adaptive behavior as people in terms of wisdom, compassion, and creativity. He says intelligent machine creation has pressured scholars to adopt a comprehensive mindset about the many skills and capabilities that play a role in appropriate decision-making. "Reason alone is not sufficient to produce intelligent machines capable of acting appropriately in a world inhabited by other people, animals, and an environment worthy of care and consideration," Wallach argues. He recommends focusing 10 percent of AI/robotics research funding on studying and adapting to the societal effect of intelligent machines, establishing an oversight and governance coordinating panel for AI/robotics, and issuing a presidential directive that lethal autonomous weapons systems violate international humanitarian law. He stresses that without a ban on autonomous weaponry, the dangers of AI's evolution will rise exponentially.


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.

