Association for Computing Machinery
Welcome to the January 20, 2017 edition of ACM TechNews, providing timely information for IT professionals three times a week.

ACM TechNews mobile apps are available for Android phones and tablets, as well as for iPhones and iPads.

HEADLINES AT A GLANCE


New Smartwatch Application for Accurate Signature Verification Developed
Ben-Gurion University of the Negev (Israel) (01/19/17)

Researchers at Ben-Gurion University of the Negev (BGU) and Tel Aviv University (TAU) in Israel say they have developed a system that uses smartwatch devices and software to verify handwritten signatures and detect even the most skilled forgeries. The new method for online signature verification utilizes motion sensors found in conventional devices. The software employs motion data gathered from a user's wrist to identify the writer during the signing process. The system, using data compiled from accelerometer and gyroscope sensors, analyzes changes in rotational motion and orientation, and trains a machine-learning algorithm to distinguish between genuine and forged signatures. The researchers recruited 66 TAU students to record 15 samples of their genuine signature using a digital pen and a tablet while wearing a smartwatch on their writing hand. Each student then studied trace recordings of other people's genuine signatures and was asked to forge five of them. "The results for both random and skilled forgery tests were encouraging, and confirmed our system is able to successfully distinguish between genuine and forged signatures with a high degree of accuracy," says BGU's Ben Nassi.
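
The summary does not disclose the team's exact features or classifier, but the general recipe (summarize wrist-motion windows into features, then train a binary classifier) can be sketched as follows. The feature choices, synthetic data, and random-forest model here are illustrative assumptions, not the BGU/TAU implementation.

```python
# Minimal sketch of motion-based signature verification (illustrative only;
# the BGU/TAU feature set and model are not described in this summary).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def features(window):
    """Summary statistics over a 6-channel (accel xyz + gyro xyz) window."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

# Synthetic stand-in data: 200 signing sessions, 100 samples x 6 channels each.
# Forgeries are modeled here as genuine-looking motion plus extra jitter.
genuine = rng.normal(0.0, 1.0, size=(100, 100, 6))
forged = rng.normal(0.0, 1.0, size=(100, 100, 6)) + rng.normal(0.0, 0.5, size=(100, 100, 6))

X = np.array([features(w) for w in np.concatenate([genuine, forged])])
y = np.array([1] * 100 + [0] * 100)  # 1 = genuine, 0 = forged

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```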


Artificial Fingertip That 'Feels' Wins Harvard's Robotics Competition
University of Bristol News (01/18/17)

Researchers from the University of Bristol in the U.K. recently won Harvard University's international Soft Robotics competition for TacTip, an open source, 3D-printed fingertip that can feel in a way similar to the human sense of touch. The robotic fingertip has a unique design in which a webcam is mounted inside the fingertip to track internal pins that act like the touch receptors inside human fingertips. "An artificial sense of touch is the key for enabling future robots to have human-like dexterity," says Bristol's Nathan Lepora. "Applications of artificial touch span from the future robotization of manufacturing, food production, and healthcare, to prosthetic hands that restore a sense of touch." Bristol researcher Benjamin Ward-Cherrier says TacTip is an inexpensive artificial fingertip with a design that opens up the field of tactile robotics to many more researchers. More than 80 teams entered Harvard's 2016 Soft Robotics competition, which aims to develop and showcase novel robots and fundamental research related to soft robotics. The competition was divided into three categories: the most significant contribution to fundamental research in soft robotics, a design competition for college-level students, and a design competition for high school students.
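
As a rough illustration of the sensing principle (a camera watching pin-like markers move), here is a minimal blob-tracking sketch using OpenCV on a synthetic frame. TacTip's actual image-processing pipeline is not described in this summary, so everything below is an assumption-laden stand-in.

```python
# Illustrative pin tracking with OpenCV blob detection on a synthetic frame.
import cv2
import numpy as np

# Synthetic camera frame: white background with dark dots standing in for pins.
frame = np.full((200, 200), 255, dtype=np.uint8)
for (x, y) in [(50, 50), (100, 60), (150, 50), (60, 140), (140, 140)]:
    cv2.circle(frame, (x, y), 6, 0, -1)  # filled dark circle

params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 20
detector = cv2.SimpleBlobDetector_create(params)

keypoints = detector.detect(frame)
pins = [kp.pt for kp in keypoints]
print("detected pin centers:", pins)
# Contact could then be inferred from pin displacement relative to rest positions.
```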


After First Week, AI System Is Beating Human Poker Players
Computerworld (01/18/17) Sharon Gaudin

An artificial intelligence (AI) system called Libratus has gained the upper hand against human poker players in the first week of a 20-day tournament in Pittsburgh. As of Wednesday, Libratus had played more than 34,000 hands of poker with about 120,000 hands likely by the contest's conclusion. The program led by slightly more than $74,000 on the first day and had more than doubled that sum by the second day, while by the seventh day it had raised its lead to $231,329 in chips. Carnegie Mellon University professor and lead Libratus developer Tuomas Sandholm says poker is an accurate measure of AI's power because it is more complex than chess or Go. "Poker poses a far more difficult challenge than these games, as it requires a machine to make extremely complicated decisions based on incomplete information while contending with bluffs, slow play, and other ploys," Sandholm says. ZK Research analyst Zeus Kerravala says he assumes the longer the game lasts, the more data--and skill--the AI accumulates. "For humans, poker is a combination of skill, intuition, and emotion," he notes. "With the AI, it's based on learned information and data. Poker is a good game [to test AI against humans] because you play the other players as much as you play the cards."


Computing and the Fight Against Epidemics
The Huffington Post (01/18/17) Alessandro Vespignani

Addressing a resurgence in infectious diseases compounded by increased societal interconnectedness will require tapping a vast corpus of intelligence using computing and data science, writes Northeastern University professor Alessandro Vespignani. He says harnessing such "intel" could support the real-time acquisition and analysis of highly resolved digital data. "From mining Twitter posts to analyzing the flu season to using cellphone data and satellite imagery to understand the population movements driving the dissemination of epidemic diseases, a computational approach would strengthen the usual disease surveillance system and provide public health systems with new lenses on human social behavior," Vespignani writes. He also sees this approach enabling predictive epidemic modeling. "Large-scale data-driven models of infectious diseases provide real- or near-real-time forecasts of the size of epidemics, their risk of spreading, and the dangers associated with uncontained disease outbreaks," Vespignani notes. He says these models also deliver the justifications and quantitative analysis needed to support public health decisions and intervention plans. However, Vespignani cites the need to resolve political and scientific challenges, including validating new methodologies and defining appropriate data-sharing policies during health crises, before such forecasting techniques can be relied on for accuracy and utility.
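
For readers unfamiliar with what "epidemic modeling" computes, the classic SIR compartmental model below is the textbook starting point. It is offered only as a minimal illustration with assumed parameter values, not as Vespignani's far more detailed, data-driven machinery.

```python
# A minimal SIR compartmental model, the textbook building block behind
# large-scale epidemic forecasting (parameter values are assumptions).
N = 1_000_000           # population size
beta, gamma = 0.3, 0.1  # transmission and recovery rates per day
S, I, R = N - 10.0, 10.0, 0.0
dt = 1.0                # time step in days

for day in range(160):
    new_inf = beta * S * I / N * dt   # susceptibles newly infected
    new_rec = gamma * I * dt          # infected who recover
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    if day % 20 == 0:
        print(f"day {day:3d}: infected = {I:10.0f}")
```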


For Driverless Cars, a Moral Dilemma: Who Lives or Dies?
Associated Press (01/18/17) Matt O'Brien

Researchers at the Massachusetts Institute of Technology (MIT) are asking people how they think an autonomous car should handle life-or-death decisions. The study's goal is not only to inspire better algorithms and ethical tenets for self-driving cars, but also to understand what it will take for society to accept these vehicles. Most people indicate an autonomous car should act in the greater good and sacrifice its passenger to save a crowd of pedestrians, but many people balk at the idea of buying and riding in such a vehicle. A website created by MIT researchers called the Moral Machine asks users to judge who should live or die in various scenarios involving a self-driving car. Preliminary research based on millions of responses from more than 160 countries shows significant differences between participants in Eastern and Western countries. Responses from the U.S. and Europe tend to reflect the principle of minimizing total harm over all else. "There is a real risk that if we don't understand those psychological barriers and address them through regulation and public outreach, we may undermine the entire enterprise," says MIT professor Iyad Rahwan. "People will say they're not comfortable with this. It would stifle what I think will be a very good thing for humanity."


Testing the Methods of Neuroscience on Computer Chips Suggests They Are Wanting
The Economist (01/21/17)

University of California, Berkeley's Eric Jonas and Northwestern University professor Konrad Kording ran experiments to see whether applying neuroscience analysis techniques to a computer chip would produce data conforming to established knowledge about the microprocessor's functionality. They employed a MOS Technology 6502 chip to test several scenarios, only to generate some unusual false positives. For example, Jonas and Kording used algorithms to identify five transistors whose activity strongly correlated with the luminosity of the most recently displayed onscreen pixel. Yet none of those transistors played a direct role in drawing pictures on the screen; they were merely used by the part of the program that ultimately decided what would be displayed. In another experiment, Jonas and Kording found disabling specific transistors disrupted one video game while permitting other games to run, even though those transistors were not uniquely responsible for the disrupted game. Jonas notes the basic problem is that the neuroscience approaches failed to find many chip structures the researchers knew existed, and which are essential for understanding the chip's operations. Jonas compares this challenge to the Human Genome Project, which proved far more complicated than anticipated when it came to extracting important medical insights.
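
A minimal sketch of the kind of analysis described, run on synthetic traces: correlate each "transistor" activity trace with a luminosity signal and rank the strongest correlates. The data and setup are invented; the point is only that strong correlation can emerge without any causal role in drawing pixels.

```python
# Sketch of the correlational analysis described above, on synthetic traces
# (the actual study used a simulated MOS 6502).
import numpy as np

rng = np.random.default_rng(1)
T, n_transistors = 5000, 50

luminosity = rng.random(T)
traces = rng.random((n_transistors, T))
# Make a few transistors track luminosity indirectly (e.g., via shared
# upstream program state), without any direct role in drawing pixels.
for i in (3, 17, 42):
    traces[i] = 0.7 * luminosity + 0.3 * rng.random(T)

corrs = np.array([np.corrcoef(tr, luminosity)[0, 1] for tr in traces])
top = np.argsort(-np.abs(corrs))[:5]
print("most correlated transistors:", top, corrs[top].round(2))
# A high correlation here says nothing about causal involvement: that is
# the paper's warning about interpreting such analyses in neuroscience.
```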


DARPA Wants to Simulate How Social Media Spreads Info Like Wildfire
Network World (01/18/17) Michael Cooney

The Computational Simulation of Online Social Behavior (SocialSim), a new program from the U.S. Defense Advanced Research Projects Agency (DARPA), aims to better understand the spread and evolution of information online. SocialSim will develop technologies to support computational simulation of online social media activities, as existing approaches to social and behavioral simulation are limited. The U.S. government currently employs experts to speculate about how information spreads, and the accuracy of their conclusions is unknown. DARPA scientists say highly accurate and scalable computerized simulations could help analyze strategic disinformation campaigns by adversaries, deliver critical information to populations affected by disasters, and contribute to other information-gathering missions. Specific SocialSim objectives include the development of large-scale simulation technologies and efficient methods for providing data to support simulation. Methods and metrics for assessing the accuracy and scalability of simulations also will be explored. SocialSim is the most recent of DARPA's social media projects. Last year, the agency announced the Force Protection in the Online Information Environment program, which seeks to develop automated software to detect online threats against U.S. service members stationed overseas.
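
DARPA has not said what simulation machinery SocialSim will use; the independent-cascade model below is simply a standard, minimal way to simulate information spreading over a follower graph, with all parameters invented for illustration.

```python
# Minimal independent-cascade simulation of information spread (a standard
# textbook model, not DARPA's; SocialSim's actual methods are unspecified).
import random

random.seed(0)
n, p_edge, p_spread = 200, 0.03, 0.2

# Random directed "follower" graph: edges[u] lists who sees u's posts.
edges = {u: [v for v in range(n) if v != u and random.random() < p_edge]
         for u in range(n)}

active, frontier = {0}, [0]   # node 0 posts the initial message
while frontier:
    nxt = []
    for u in frontier:
        for v in edges[u]:
            # Each newly informed user gets one chance to pass the message
            # to each follower, succeeding with probability p_spread.
            if v not in active and random.random() < p_spread:
                active.add(v)
                nxt.append(v)
    frontier = nxt

print(f"message reached {len(active)} of {n} users")
```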


Brainwaves Could Act as Your Password--but Not if You're Drunk
New Scientist (01/19/17) Nicole Kobie

Rochester Institute of Technology (RIT) researchers tested the theory that although electroencephalogram (EEG) readings can accurately authenticate someone's identity about 94 percent of the time, confounding factors, such as whether the user is inebriated, could undermine that accuracy. "Brainwaves can be easily manipulated by external influences such as drugs [like] opioids, caffeine, and alcohol," says consultant Tommy Chin. "This manipulation makes it a significant challenge to verify the authenticity of the user because they drank an immense amount of alcohol or caffeinated drink." The RIT researchers analyzed people's brainwaves before and after they drank shots of whiskey, and found brainwave authentication accuracy could fall to 33 percent in inebriated users. Separately, University of California, Berkeley professor John Chuang found EEG authentication accuracy degrades immediately following a workout, and he suggests other factors such as hunger, stress, or fatigue also could reduce reliability. He notes that if accuracy under different conditions were required, it could be possible to collect multiple brainwave "templates" for a user by separately mapping their EEG signature under a range of conditions. Chin and RIT graduate student Peter Muller also found it is possible to modify the EEG data analysis using machine learning to improve the results for participants who were drunk.
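
Neither study's pipeline is detailed here, but a minimal template-matching sketch conveys why a confounder hurts: distort the probe signal and its match to the enrolled template drops below the acceptance threshold. Everything below (the correlation test, the threshold, noise standing in for alcohol) is an illustrative assumption.

```python
# Sketch of template-based EEG authentication (illustrative; the RIT and
# Berkeley pipelines are not described in this summary). Added noise
# stands in for a confounder such as alcohol.
import numpy as np

rng = np.random.default_rng(2)
template = rng.normal(size=512)          # enrolled EEG "signature"

def authenticate(probe, template, threshold=0.8):
    r = np.corrcoef(probe, template)[0, 1]
    return r, r >= threshold

sober = template + rng.normal(scale=0.3, size=512)
impaired = template + rng.normal(scale=1.5, size=512)  # heavier distortion

print("sober:    r=%.2f accepted=%s" % authenticate(sober, template))
print("impaired: r=%.2f accepted=%s" % authenticate(impaired, template))
# One mitigation the article mentions: enroll multiple templates recorded
# under different conditions and match the probe against all of them.
```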


Pioneering AI Researcher to Advise RBC's Machine Learning Lab
CBC News (Canada) (01/18/17) Matthew Braga

University of Alberta professor Richard Sutton, hailed by peers as "the father of reinforcement learning," will advise the Royal Bank of Canada's (RBC) new machine-learning research unit, the bank's second artificial intelligence (AI) research lab. Sutton says researchers in the world of finance have "only scratched the surface of what reinforcement learning can do." Scientists in major Canadian cities have set up AI research hubs. "All of these efforts really began, not only because AI is critical for us overall...but also because we've seen a lot of developments come out of Canadian universities that have shaken up the industry," says RBC Research director Foteini Agrafioti. As chief of RBC's machine-learning lab at the University of Toronto, Agrafioti notes machine learning is "one of those types of technology that you would really find underpinning many different applications across our businesses." She says it is often used for managing fraud, risk, and client services. Meanwhile, an RBC spokesperson says the bank's AI investment in the coming years "will be in the tens of millions of dollars."


Engineers Eat Away at Ms. Pac-Man Score With Artificial Player
Cornell Chronicle (01/17/17) Syl Kacapyr

Engineers at Cornell University have developed an artificial "Ms. Pac-Man" player using a new approach for computing real-time game strategy. A decision-tree method was used to derive the computer's optimal moves from a series of geometric and dynamic equations that predict the movements of the player's adversaries with 94.6-percent accuracy. As the game continues, the decision tree is updated in real time, and the researchers have produced a laboratory score of 43,720 on the game, beating the record score at an annual competition. The artificial player could not outscore advanced players of the arcade game overall, but it did achieve higher scores than beginner and intermediate players, and it beat advanced players in the upper levels of the game, where speed and spatial complexity are most challenging. Researchers are interested in designing artificial players because they provide a foundation for developing new computation methods for surveillance, search and rescue, and other robotics applications. "Engineering problems are so complicated, they're very difficult to translate across applications," says Cornell professor Silvia Ferrari. "But games are very understandable and can be used to compare different algorithms unambiguously because every algorithm can be applied to the same game."
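
The Cornell method builds a decision tree over predicted adversary trajectories; the toy one-step lookahead below only gestures at that idea. Everything here (the last-velocity ghost predictor, the scoring rule, the grid coordinates) is an invented simplification, not the published model.

```python
# Toy sketch: pick Ms. Pac-Man's move by predicting adversary positions
# and scoring each candidate move (far simpler than the Cornell tree).
MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def predict_ghosts(ghosts):
    """Stand-in predictor: assume each ghost repeats its last velocity."""
    return [(x + vx, y + vy) for (x, y, vx, vy) in ghosts]

def score(pos, pellets, ghosts_next):
    nearest_ghost = min(abs(pos[0] - gx) + abs(pos[1] - gy)
                        for gx, gy in ghosts_next)
    pellet_gain = 10 if pos in pellets else 0
    return pellet_gain + nearest_ghost  # prefer pellets and ghost distance

def best_move(pacman, pellets, ghosts):
    ghosts_next = predict_ghosts(ghosts)
    candidates = {m: (pacman[0] + dx, pacman[1] + dy)
                  for m, (dx, dy) in MOVES.items()}
    return max(candidates, key=lambda m: score(candidates[m], pellets, ghosts_next))

print(best_move(pacman=(5, 5), pellets={(5, 4), (8, 8)},
                ghosts=[(2, 5, 1, 0), (9, 9, 0, -1)]))  # prints "up"
```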


Google Brain Team Prepares for Machine-Learning-Driven Future
SD Times (01/13/17) Madison Moore

The Google Brain team responsible for several machine-learning milestones in 2016 says it will continue its research in areas including healthcare, the safety of artificial intelligence (AI), and natural-language comprehension. Google Brain's Jeff Dean, who shared the 2012 ACM-Infosys Foundation Award in the Computing Sciences (now the ACM Prize in Computing) with Sanjay Ghemawat, says last year the team demonstrated new methods for improving people's lives with advanced software systems. In one innovation, the team built on research into sequence-to-sequence neural network learning and applied it to machine translation, replacing Google Translate's translation algorithms with a new end-to-end learned system. Dean says the new system "closed the gap between the old system and human-quality translations by up to 85 percent for some language pairs." He also notes the system later demonstrated "zero-shot translation" abilities by learning to translate between languages for which it had never been given example sentence pairs. Google Brain also probed the medical benefits of machine learning, and the team demonstrated a machine-learning-driven system that diagnoses diabetic retinopathy from a retinal image. Dean says this year's aspirations for Google Brain include building tools that help humans understand the output of machine-learning systems, and ensuring AI systems that make complex decisions are fair as well as safe.
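
In the team's published multilingual translation work, the mechanism behind zero-shot translation is disarmingly simple at the data level: every training source sentence is prefixed with a token naming the desired target language, so one model learns all directions at once. The sketch below shows only that data-preparation step; the token spelling and example sentences are illustrative, and the neural model itself is omitted.

```python
# Data preparation behind multilingual/"zero-shot" translation: prepend a
# token naming the target language to each source sentence (token spellings
# and examples here are illustrative).
def tag_example(src_sentence, target_lang):
    return f"<2{target_lang}> {src_sentence}"

# Train on, e.g., English<->Japanese and English<->Korean pairs...
train = [
    (tag_example("How are you?", "ja"), "お元気ですか"),
    (tag_example("お元気ですか", "en"), "How are you?"),
    (tag_example("How are you?", "ko"), "잘 지내세요?"),
]

# ...then at inference, request a direction never seen in training
# (Japanese -> Korean): this is the zero-shot case.
print(tag_example("お元気ですか", "ko"))
```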


Training Computers to Differentiate Between People With the Same Name
IUPUI Newsroom (01/12/17) Richard Schneider; Cindy Fox Aisen

Researchers from Indiana University-Purdue University Indianapolis (IUPUI) have developed a machine-learning method to better distinguish between people with the same name. Existing solutions can disambiguate an individual only if the person's records were present in the machine's training data. The new method can perform non-exhaustive classification, meaning the computer can detect when a previously unseen record should be differentiated from records with the same name. For each name value, the machine-learning algorithm was trained using records of different individuals with the same name collected from a variety of sources, such as Facebook, blog posts, and public records. Three types of features pulled from people's online posts offer some degree of predictive power in defining a specific individual. First, relational or association features reveal the persons with whom an individual is associated; second, text features such as keywords in a document can be analyzed to find repeated usage of certain types of terms; and finally, venue features help define the institutions, memberships, or events with which an individual is associated. The new method is scalable, and the IUPUI researchers believe the algorithm will be able to compile records belonging to a single person even if thousands of people share the same name.
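
A hedged sketch of the "non-exhaustive" idea: assign a new record to the closest known same-name individual only if similarity clears a threshold, otherwise open a new identity. The feature vectors below are mock stand-ins for the relational, text, and venue features described; the IUPUI model is more elaborate.

```python
# Sketch of non-exhaustive disambiguation for one name value (illustrative;
# not the IUPUI algorithm).
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def assign(record, people, threshold=0.8):
    """Match to the closest known individual, or create a new one."""
    if people:
        best = max(people, key=lambda pid: cosine(record, people[pid]))
        if cosine(record, people[best]) >= threshold:
            return best
    new_id = f"person_{len(people)}"
    people[new_id] = record
    return new_id

people = {}  # known individuals named, say, "John Smith"
r1 = np.array([0.90, 0.10, 0.00, 0.80])  # mock relational/text/venue features
r2 = np.array([0.85, 0.15, 0.05, 0.75])  # similar -> same person
r3 = np.array([0.00, 0.90, 0.80, 0.10])  # dissimilar -> new person

for r in (r1, r2, r3):
    print(assign(r, people))  # person_0, person_0, person_1
```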


Harvard/MIT Report Analyzes 4 Years of MOOC Data
Campus Technology (01/12/17) Dian Schaffhauser

A new study by Massachusetts Institute of Technology (MIT) professor Isaac Chuang and Harvard University professor Andrew Ho analyzed 2.3 billion events logged by 4.5 million participants in massive open online courses (MOOCs) over four years. The study, covering 290 MOOCs from fall 2012 through summer 2016, noted the continued growth of course registrations, with 159,000 individuals earning 245,000 free and paid certifications. The study also found the median MOOC has 7,902 active participants, while a typical course certifies about 500 of them. Moreover, although computer science (CS) MOOCs comprised only a fraction of the course total, the average number of participants in a CS course was 21,040, versus 7,905 in a non-CS science, technology, engineering, or math course. The data also suggests continued slippage in enrollment when a MOOC is offered again, although this varies by topic. "Some courses increase in enrollment and others decline by 50 percent or more," Chuang and Ho note. The new report builds on earlier benchmark studies examining the first two years of MOOCs deployed on MIT and Harvard's nonprofit edX learning platform. Ho says the MIT/Harvard report "helps institutions, faculty, students, and the public learn more about these unprecedented global classrooms."


Abstract News © Copyright 2017 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]