Welcome to the December 2, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.
Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.
HEADLINES AT A GLANCE
AI Program Beats Humans on College Acceptance Test
NextGov.com (12/01/15) Patrick Tucker
Researchers in Japan have created an artificial intelligence (AI) program that outperforms people on a national standardized college entry exam. The program performs sufficiently well to have an 80-percent probability of gaining admission to 33 national universities, reports the National Institute of Informatics. The technology, developed by the Todai Robot Project, scored 511 points out of 950 on the National Center Test for University Admissions, significantly topping the national average of 416. The project aims to devise an AI that can win admission to the elite University of Tokyo (Todai) by 2021, a challenge requiring the program to process language and concepts in a uniquely humanistic manner. "What makes the University of Tokyo entrance exam harder [than other intelligence challenges] is that the rules are less clearly defined than they are for shogi [a Japanese strategy game similar to chess] or a quiz show," says project sub-director Yusuke Miyao. "From the perspective of using knowledge and data to answer questions, the university entrance exam requires a more human-like approach to information processing. However, it does not rely as much on common sense as an elementary school exam or everyday life, so it's a reasonable target for the next step in artificial intelligence research."
Japanese Scientists Create Touchable Holograms
Japanese researchers have created touchable holograms, three-dimensional virtual objects that can be manipulated by a human hand. The "Fairy Lights" system developed by the researchers at the Utsunomiya University Center for Optical Research and Education uses femtosecond laser technology, which is capable of firing high-frequency laser pulses that last one millionth of one billionth of a second. The pulses respond to human touch so the hologram's pixels can be manipulated in mid-air. Tsukuba University's Yoichi Ochiai, one of the lead researchers on the project, says the technology could have a variety of possible applications, including entertainment, medicine, and architecture. He notes current visual technologies such as video limit the extent to which humans can interact with them, but this technology could change that. "So, if we can project an image in a three-dimensional form, and if you can touch it, then you can make something where you'll think that there actually is something there." Further development of the technology could enable the creation of keyboards made of light beamed onto a person's lap, or visual chat that would enable users to experience the virtual touch of the person with whom they are communicating.
Computer System Will Be an Angel on Your Shoulder, Whispering Advice, Step-by-Step Instruction
Carnegie Mellon News (PA) (12/01/15) Byron Spice
Carnegie Mellon University (CMU) researchers are developing Gabriel, a computer system that will give users instructions for a wide range of tasks. Gabriel uses a wearable vision system and leverages the cloud via a separate CMU innovation called a "cloudlet," a data center that provides some of the computational power of the cloud and supports multiple mobile users. CMU researchers, led by professor Mahadev Satyanarayanan, recently have developed proof-of-concept implementations that guide the assembly of LEGO models, teach freehand sketching, and coach Ping-Pong. Satyanarayanan says Gabriel "gives you instructions when you need them, corrects you when you make a mistake and, most of the time, shuts up so it doesn't bug you." Recent advances in computer-vision technology make it possible for computers to recognize objects and understand the context of scenes. In addition, cognitive algorithms enable computers to direct tasks and cloud computing helps perform the intensive computations to run such algorithms. The speed and agility necessary for these applications are made possible by cloudlets, which are situated close to users so they are just one wireless "hop" away, reducing the round-trip time of communications to just a few tens of milliseconds.
UW Researchers Estimate Poverty and Wealth From Cellphone Metadata
University of Washington News and Information (11/30/15) Peter Kelley
University of Washington (UW) researchers have developed a method for estimating the distribution of wealth and poverty in an area by studying metadata from calls and texts made on cellphones. "Quantitative, rigorous measurements are key to making important decisions about social welfare allocation and the distribution of humanitarian aid, but in a lot of developing countries high-quality data doesn't exist," says UW professor Joshua Blumenstock. The researchers found that wealthier people in Rwanda tended to make more calls than poorer people, and those buying $10 worth of pre-paid phone time tend to be wealthier than those who buy 50 cents of time. In addition, those making calls during daytime business hours are systematically different from those who make irregular calls, and poorer people tend to receive more calls than they make because in Rwanda the caller pays for the call. "We use supervised machine-learning algorithms to sort through thousands of patterns to figure out what is most correlated with wealth and poverty," Blumenstock says. The phone metadata was overlaid onto area maps to create a visual representation of the geographic distribution of wealth. "We are hopeful that this broad approach to detecting signals means that the methodology would work even on different call networks from different countries," says UW graduate student Gabriel Cadamuro.
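The supervised approach Blumenstock describes can be sketched with a toy model: a plain logistic regression trained on synthetic per-subscriber features. The features (calls made per day, average top-up amount, daytime-call fraction) echo the patterns the article mentions, but the data and feature set here are invented for illustration, not taken from the study.

```python
import math
import random

# Synthetic per-subscriber features, invented for illustration:
# (calls made per day, average top-up in dollars, daytime-call fraction).
# Label 1 = surveyed as wealthier, 0 = poorer.
random.seed(0)

def synth(label, n):
    rows = []
    for _ in range(n):
        if label:   # wealthier: more calls, bigger top-ups, more daytime use
            f = [random.gauss(8, 2), random.gauss(10, 2), random.gauss(0.7, 0.1)]
        else:
            f = [random.gauss(3, 1), random.gauss(0.5, 0.2), random.gauss(0.4, 0.1)]
        rows.append((f, label))
    return rows

data = synth(1, 50) + synth(0, 50)

def sigmoid(z):
    z = max(-35.0, min(35.0, z))   # clamp to dodge float overflow
    return 1.0 / (1.0 + math.exp(-z))

# Plain logistic regression trained with stochastic gradient descent.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.01
for _ in range(300):
    for f, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, f)) + b)
        g = p - y
        w = [wi - lr * g * xi for wi, xi in zip(w, f)]
        b -= lr * g

def predict(features):
    """Probability that a subscriber with these features is in the wealthier class."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, features)) + b)

print(predict([9, 10, 0.8]))    # wealthy-looking usage pattern: high
print(predict([2, 0.5, 0.35]))  # poorer-looking usage pattern: low
```

The real study overlaid such per-subscriber predictions onto maps; this sketch only shows the classification step.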
Making 3D Imaging 1,000 Times Better
MIT News (12/01/15) Larry Hardesty
Massachusetts Institute of Technology (MIT) scientists have demonstrated an ability to boost the resolution of conventional three-dimensional (3D) imaging devices by up to 1,000-fold via the polarization of light. The Polarized 3D system developed by the MIT Media Lab team began with a Microsoft Kinect combined with a polarizing photographic lens positioned in front of its camera. The researchers took three photos of an object, rotating the polarizing filter each time, and then used algorithms to compare the light intensities of the produced images. By itself, the Kinect can resolve physical features as minuscule as a centimeter or so across from several meters away. The added polarization data enables it to resolve features in the range of hundreds of micrometers. The incorporation of such a system into a smartphone camera could be possible thanks to commercially available grids of small polarization filters that can overlay individual pixels in the light sensor. The MIT team also found in some simple test cases the Polarized 3D system can leverage information contained in interfering light waves to address scattering.
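The three-rotation measurement can be worked through concretely. For an ideal linear polarizer, transmitted intensity follows I(theta) = I_avg * (1 + rho * cos 2(theta - phi)), where rho is the degree of polarization and phi the polarization angle; readings at 0, 45, and 90 degrees are enough to solve for all three unknowns. A minimal sketch under that idealized model (the actual MIT pipeline fuses these cues with Kinect depth data, which is not shown here):

```python
import math

def intensities(i_avg, rho, phi, angles_deg):
    """Simulate polarizer readings: I(theta) = i_avg*(1 + rho*cos(2*(theta - phi)))."""
    return [i_avg * (1 + rho * math.cos(2 * (math.radians(a) - phi)))
            for a in angles_deg]

def recover(i0, i45, i90):
    """Invert three readings (at 0, 45, 90 degrees) back to the polarization state."""
    i_avg = (i0 + i90) / 2.0          # DC term
    c_cos = (i0 - i90) / 2.0          # i_avg * rho * cos(2*phi)
    c_sin = i45 - i_avg               # i_avg * rho * sin(2*phi)
    phi = 0.5 * math.atan2(c_sin, c_cos)
    rho = math.hypot(c_cos, c_sin) / i_avg
    return i_avg, rho, phi

i0, i45, i90 = intensities(1.0, 0.6, 0.3, [0, 45, 90])
print(recover(i0, i45, i90))  # recovers approximately (1.0, 0.6, 0.3)
```

The recovered polarization angle constrains the surface normal at each pixel, which is what lets the added data sharpen the Kinect's coarse depth map.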
Artificial Intelligence Aims to Make Wikipedia Friendlier and Better
Technology Review (11/30/15) Tom Simonite
The Wikimedia Foundation has developed a new artificial intelligence-based tool to aid its editors. Wikipedia's flagship English-language edition has seen a dramatic drop in the number of people who serve as editors for the site: over the last eight years, the number of active editors has fallen by about 40 percent to 30,000. Research has found that part of the problem is Wikipedia's complex bureaucracy and the often hard-line response of veteran editors to newcomers' mistakes. The hope is the new automated system, called ORES (Objective Revision Evaluation Service), will help editors judge whether edits are made in good faith, and make the Wikipedia community more welcoming to newcomers. ORES' abilities include tools that will direct editors to review the most significant changes and features that will encourage editors to treat newcomers' innocent mistakes more appropriately; for example, by sending a message to the user about the error rather than just erasing it. The system was trained on data gathered by an online tool editors use to tag examples of previous edits. ORES currently is available on the English, Portuguese, Turkish, and Farsi versions of Wikipedia.
How to Encrypt a Message in the Afterglow of the Big Bang
New Scientist (11/30/15) Jesse Emspak
Researchers want to use the afterglow of the Big Bang to make encryption keys. Physical randomness is seen as the route to truly random keys, and researchers say using the cosmic microwave background (CMB) would take that to the ultimate extreme. There are several ways to extract numbers from the thermal radiation left over from the Big Bang, according to Jeffrey Lee and Gerald Cleaver at Baylor University. For example, a patch of sky could be divided into pixels and the strength of the CMB's radio signal, which is never duplicated exactly, could be measured in each one. Over time, each pixel would generate a string of measured strengths; concatenating the strings from all the pixels would produce a very large random number. "An adversary measuring the same patch of sky exactly the same way and at exactly the same time could not get exactly the same values," Lee says. He points out another layer of difficulty in breaking the encryption would be matching the pattern of digits in a CMB measurement, which cannot be obtained by any other observer.
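A hedged sketch of the pipeline Lee and Cleaver describe, with synthetic per-pixel readings standing in for real CMB measurements. The final hashing step is this sketch's own assumption (a common way to whiten biased physical measurements), not a detail from the researchers:

```python
import hashlib
import random

random.seed(42)  # synthetic stand-in; real CMB noise is unpredictable

# Each "pixel" of a sky patch yields a series of measured signal strengths:
# roughly the 2.725 K background plus tiny fluctuations (simulated here).
pixels, samples = 16, 8
readings = [f"{2.725 + random.gauss(0, 1e-4):.9f}"
            for _ in range(pixels) for _ in range(samples)]
raw = "".join(readings)

# Raw physical measurements are biased and correlated, so run them
# through a cryptographic hash as a simple randomness extractor.
key = hashlib.sha256(raw.encode()).hexdigest()
print(key)  # 256-bit key derived from the measurements
```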
Online Tracking by News Organizations Is Excessive, Say Researchers
TechRepublic (11/27/15) Michael Kassner
Researchers at the University of Pennsylvania have found that news organizations permit far more use of third-party tracking than the average website. Professor Victor Pickard and Ph.D. candidate Tim Libert used software called webXray, developed by Libert, to detect third-party tracking. Libert says webXray works by looking for third-party HTTP requests and matching them to the companies that receive user data. "In other words, webXray allows you to see which companies are monitoring which pages," he says. Pickard and Libert used webXray to analyze the top 100,000 websites as rated by Alexa, and found users of the sites were exposed to an average of eight external servers. However, the number was far higher among media companies. Pickard and Libert found that, "among the 2,000-plus news-related websites identified by Alexa, readers are, on average, connected to over 19 third-party servers--twice as many as the 100,000 most popular sites." Some major media companies had even higher numbers. The New York Times' homepage, for example, connected to 44 third-party servers and The Los Angeles Times' website connected to 32. The researchers say such connections make it difficult for Web users to avoid third-party tracking of their online activities without resorting to the use of ad-blocking software.
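webXray itself instruments a real browser, but the core bookkeeping Libert describes, spotting HTTP requests whose domain differs from the page's, can be sketched in a few lines. The URLs below are made up, and the registered-domain heuristic is deliberately naive compared to a proper Public Suffix List lookup:

```python
from urllib.parse import urlparse

def registered_domain(url):
    """Naive eTLD+1: the last two labels of the hostname.
    A real tool would consult the Public Suffix List instead."""
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

def third_party_domains(page_url, resource_urls):
    """Domains, other than the page's own, that receive HTTP requests."""
    page_base = registered_domain(page_url)
    return {registered_domain(u) for u in resource_urls
            if registered_domain(u) != page_base}

# Hypothetical page and the resources it loads:
page = "https://www.example-news.com/story"
resources = [
    "https://cdn.example-news.com/article.js",   # first party
    "https://tracker.adnet.com/pixel.gif",       # third party
    "https://api.social-widgets.com/embed.js",   # third party
]
print(third_party_domains(page, resources))
```

Counting the size of that set per page is, in essence, how the "eight external servers on average" and "over 19 for news sites" figures are tallied.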
Engineers Create Droid That Could Replace Firefighters, Soldiers, and Bomb Disposal Experts
Daily Mail (United Kingdom) (11/27/15) Stacy Liberatore
Researchers at the Italian Institute of Technology and the University of Pisa have developed Walk-Man, a humanoid robot the researchers say can operate human tools and interact with its environment in the same way a person would. They say Walk-Man will be a more effective design for search-and-rescue situations that are too dangerous for humans. Walk-Man can use its hands, arms, legs, and feet for stability and balance by reaching out to support itself while overcoming obstacles. The researchers want to make the robot demonstrate human-type locomotion, balance, and manipulation capabilities, according to lead researcher Nikos Tsagarakis. Walk-Man is six feet tall, weighs about 260 pounds, and its head is equipped with a stereo vision system and a rotating laser scanner to help it interpret its environment. The researchers are developing algorithms to give the robot more rapid manipulation skills, in addition to reflexive behaviors that will help it navigate uneven terrain. The goal is to equip the robot with enough perception and cognitive ability so it can operate autonomously, but with the option for a human operator to remotely take control when more advanced problem solving is needed. "The robot will transfer data, like perception data, back to the operator, and the operator will take the actions and decide what the next movement for the robot is," Tsagarakis says.
IC Professors and Grad Students Pair Infants With Robots
The Ithacan (11/26/15) Maura Aleardi
Ithaca College researchers, as part of their ongoing "Tots on Bots" study, are nearing the end of their first session with five-month-old infants, during which they studied the infants' abilities to use a robotic wheelchair. The researchers attached a Wii Balance Board to a flat surface on the top of the robots with a small seat for the baby. The robotic wheelchair enables infant users to move around a room by leaning in the direction they want to go. The Wii Balance Board is calibrated when the infant is sitting up straight and can sense extra pressure in any area of the board. In addition, the robot has panels on the front that can sense other objects and stop before hitting them, according to the researchers. They also developed software that enables the robot to communicate with the Wii Balance Board to determine which way the baby is leaning. "We have the sense that if we are able to demonstrate having the option to move makes a difference in one's thinking abilities, that provides more evidence for why we should provide mobility options to babies with disabilities much earlier than we do now," says Ithaca College professor Carole Dennis.
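The lean-detection logic the researchers describe, comparing corner loads on the Balance Board against a calibration taken while the infant sits upright, might look roughly like this. The sensor tuple layout, units, and deadband are illustrative assumptions, not details from the study:

```python
def lean_direction(sensors, baseline, deadband=0.5):
    """sensors/baseline: corner loads as (top_left, top_right,
    bottom_left, bottom_right); units and deadband are illustrative."""
    tl, tr, bl, br = (s - b for s, b in zip(sensors, baseline))
    x = (tr + br) - (tl + bl)   # positive = leaning right
    y = (tl + tr) - (bl + br)   # positive = leaning forward
    if max(abs(x), abs(y)) < deadband:
        return "stop"           # sitting up straight: robot stays put
    if abs(x) > abs(y):
        return "right" if x > 0 else "left"
    return "forward" if y > 0 else "back"

calibration = (1.0, 1.0, 1.0, 1.0)   # recorded while the infant sits upright
print(lean_direction((2.0, 3.0, 1.0, 1.0), calibration))  # extra load up front
print(lean_direction(calibration, calibration))           # balanced: "stop"
```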
Why Bartenders Have to Ignore Some Signals
Bielefeld University (11/25/15)
Bielefeld University researchers are leading a cooperative study funded by the European Union into how a robotic bartender, called James, can understand human communication and appropriately serve drinks socially. As part of the study, the researchers asked participants to look through the robot's eyes and ears and select actions from its repertoire. "We teach James how to recognize if a customer wishes to place an order," says Jan de Ruiter, who leads Bielefeld's Psycholinguistics Research Group. The robot does not automatically recognize which behavior indicates a customer near the bar wishes to be served, but instead perceives a list of details that is updated as soon as something changes. Each piece of information is processed independently and as equally important, and in order to understand the customers the robot must sort through the data it receives. "We designed the study as a role-playing game such that it was approachable," de Ruiter says. The customers' behavior was presented in step-by-step turns, forcing participants to decide in each step what they would do as the robotic bartender. The researchers found body language and eye contact were good initial indications a customer wants to be served, but once it is established the customer wishes to place an order, body language becomes less important.
Visual Authoring Tool Helps Non-Experts Build Their Own Digital Story Worlds
EurekAlert (11/24/15) Jennifer Liu
Researchers at ETH Zurich, Disney Research, and Rutgers University are developing a video-game interface that guides users through the process of creating a story world, helping them populate the domain with "smart" characters and objects, determine their relationships, and define events that can drive compelling narratives. "As the boundaries between content consumers and content creators continue to blur, we want to democratize story world creation and expand the pool of authors by making it possible for both experts and novice users to construct a space for compelling narrative content," says Disney Research's Markus Gross. He notes the new story world creator is designed to build up components of a full story world with the semantics required for automatic story generation. The graphical interface guides users through three main steps. In the first, story world creation, the user configures the scene and establishes all of the possible states and relationships. In the second, users create "smart characters" and "smart objects" by defining how characters and objects interact with each other. Finally, the user designs Parameterized Behavior Trees, which provide a graphical, hierarchical representation for complex, multi-character interactions. "Our long-term mission is to empower anyone to create their own digital stories by providing easy-to-use, intuitive visual authoring interfaces," says Rutgers professor Mubbasir Kapadia.
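A behavior tree composes character actions hierarchically, with parameters binding the nodes to particular participants. A stripped-down sketch of the idea (the node set, character names, and the two-character greeting are invented for illustration; the researchers' Parameterized Behavior Trees are considerably richer):

```python
class Action:
    """Leaf node: runs a function bound to named participants (the parameters)."""
    def __init__(self, fn, *participants):
        self.fn, self.participants = fn, participants
    def tick(self, log):
        return self.fn(log, *self.participants)

class Sequence:
    """Composite node: succeeds only if every child succeeds, in order."""
    def __init__(self, *children):
        self.children = children
    def tick(self, log):
        return all(child.tick(log) for child in self.children)

def walk_to(log, actor, target):
    log.append(f"{actor} walks to {target}")
    return True

def greet(log, actor, target):
    log.append(f"{actor} greets {target}")
    return True

# A parameterized "meet and greet" interaction between two smart characters.
meet = Sequence(Action(walk_to, "Alice", "Bob"),
                Action(greet, "Alice", "Bob"),
                Action(greet, "Bob", "Alice"))
log = []
meet.tick(log)
print(log)
```

Because the tree is parameterized by its participants, the same "meet and greet" structure can be reused for any pair of characters the author creates.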
Machine Learning and Big Data Know It Wasn't You Who Just Swiped Your Credit Card
The Conversation (11/25/15) Jungwoo Ryoo
Modern electronic payment fraud detection cannot continue to rely on the traditional method of data analysis paired with human participation, which is why financial companies are turning to machine learning and cloud computing to deal with a flood of big data from a multitude of transactions, writes Pennsylvania State University professor Jungwoo Ryoo. He says a machine-learning fraud detection algorithm requires training by first feeding it the normal payment data of many cardholders, and then running transactions through it--preferably in real time--to yield a probability number. The algorithm weighs numerous variables to qualify a transaction as fraudulent, such as vendor trustworthiness and a cardholder's purchasing behavior, including time, location, and IP addresses. The more information the algorithm has, the greater its accuracy in determining whether a payment is legitimate or not. Ryoo notes such systems are making heavy human interventions less necessary. Nevertheless, people can still contribute, either when validating a fraud or following up with a rejected payment. Meanwhile, cloud computing is being deployed to relieve organizations' computing infrastructure from the burden of sifting through vast volumes of transaction data. Cloud computing furnished by off-site computing resources is scalable, and is not restrained by the company's own computational limits.
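Ryoo's description, scoring each transaction against the cardholder's own history and squashing the result into a probability, can be sketched in plain Python. The features and weights here are invented for illustration; as the article notes, a production system learns them from training data rather than hard-coding them:

```python
import math
from statistics import mean, stdev

def fraud_probability(history, txn):
    """history: past legitimate (amount, hour, country) tuples for one
    cardholder; txn: the transaction to score. Weights are illustrative."""
    amounts = [a for a, _, _ in history]
    mu, sigma = mean(amounts), (stdev(amounts) or 1.0)
    amount_z = abs(txn[0] - mu) / sigma          # how unusual is the amount?
    odd_hour = txn[1] not in {h for _, h, _ in history}
    new_country = txn[2] not in {c for _, _, c in history}
    score = 0.8 * amount_z + 1.5 * odd_hour + 2.0 * new_country - 3.0
    return 1.0 / (1.0 + math.exp(-score))        # squash into a probability

history = [(20, 12, "US"), (35, 13, "US"), (25, 18, "US"), (30, 12, "US")]
print(fraud_probability(history, (28, 12, "US")))  # fits the pattern: low
print(fraud_probability(history, (900, 3, "RU")))  # anomalous everything: high
```

Transactions above some probability threshold would be routed to the human follow-up Ryoo mentions rather than rejected outright.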
Abstract News © Copyright 2015 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: firstname.lastname@example.org
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.