Association for Computing Machinery
Welcome to the May 9, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Correction: In Friday’s issue of TechNews, the story “Artificial Intelligence: Where’s the Philosophical Scrutiny?” stated: “The American Association for the Advancement of Science is calling for 10 percent of the AI research budget to be channeled into examining its societal effects.” It should have said: “There was a call at a meeting of the American Association for the Advancement of Science to devote 10 percent of the research budget on AI to the study of its societal impact.” We apologize for the error.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE


Imagine Discovering That Your Teaching Assistant Really Is a Robot
The Wall Street Journal (05/06/16) Melissa Korn

With online learning and the large numbers of students it supports inundating teaching assistants with often-routine questions, researchers at schools such as the Georgia Institute of Technology (Georgia Tech) are testing artificial intelligence (AI) to relieve the burden. At Georgia Tech, professor Ashok Goel tapped IBM technology to develop and deploy "Jill Watson," software that helps students in his Knowledge-Based Artificial Intelligence class design programs that enable computers to meet certain challenges. Students note "Ms. Watson" would engage with them in a conversational format, reminding them of assignment due dates and posting mid-week questions to encourage dialogue. In 2015, Georgia Tech researchers began developing the AI by sifting through about 40,000 questions on a discussion forum and training the program to answer based on prior responses. Ms. Watson was deployed in January and by late March had started posting responses to questions live. Goel says the program answers a question only if it has a minimum confidence rate of 97 percent, making its expertise far superior to that of the average online customer-service chatbot. He predicts Ms. Watson will be capable of answering 40 percent of all students' questions within a year.
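
The 97-percent rule is essentially a confidence gate on a retrieval model: the assistant posts only when its best match to previously answered forum questions clears the threshold, and otherwise stays silent. The Python sketch below illustrates that gating logic only; the names and the `rank_candidates` scorer are hypothetical, not Georgia Tech's implementation.

```python
# Illustrative sketch (not Georgia Tech's code): answer only when a
# retrieval-based scorer is at least 97 percent confident; otherwise stay silent.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Candidate:
    answer: str
    confidence: float  # similarity to a previously answered forum question, 0..1


CONFIDENCE_THRESHOLD = 0.97  # the threshold reported for "Jill Watson"


def maybe_answer(question: str,
                 rank_candidates: Callable[[str], List[Candidate]]) -> Optional[str]:
    """Return an answer only if the best candidate clears the threshold.

    `rank_candidates` is a hypothetical callable that scores stored
    question/answer pairs against the new question.
    """
    candidates = rank_candidates(question)
    if not candidates:
        return None
    best = max(candidates, key=lambda c: c.confidence)
    return best.answer if best.confidence >= CONFIDENCE_THRESHOLD else None


# Example: a toy scorer that returns one low-confidence candidate, so the
# assistant declines to answer.
print(maybe_answer("When is assignment 3 due?",
                   lambda q: [Candidate("It is due Friday.", 0.80)]))  # -> None
```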


Building AI Is Hard--So Facebook Is Building AI That Builds AI
Wired (05/06/16) Cade Metz

Facebook is building artificial intelligence (AI) algorithms that can help build AI algorithms by automating much of the trial and error that goes into their testing. "We wanted to build a machine-learning assembly line that all engineers at Facebook could use," says Facebook engineer Hussein Mehanna, whose team built a tool known as Flow. With Flow, engineers can build, test, and execute machine-learning algorithms on a huge scale, enabling the testing of a limitless stream of AI concepts across Facebook's data center network. Mehanna says the company uses Flow to train and test approximately 300,000 machine-learning models every month. He notes this has made it possible for Facebook to launch several new AI models onto its social network every week, whereas it used to deploy a new model onto the network about every 60 days. Mehanna says Facebook intends eventually to make Flow open source so the rest of the world, including companies such as Twitter and Uber, can use it. Another tool from his team, AutoML, runs atop Flow to automatically "clean" data needed to train algorithms so they are ready for testing without human intervention. AutoML can apply the outcomes of tests on machine-learning models to train another model that can optimize the training of other models.
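
Flow and AutoML are internal Facebook systems and their interfaces are not public, but the core idea of an automated train-and-test "assembly line" can be illustrated with a small, hypothetical sweep that trains many model configurations, evaluates each automatically, and keeps the best one:

```python
# Minimal sketch of an automated train/evaluate loop; purely illustrative,
# not Facebook's Flow or AutoML.
from itertools import product

from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score

# A synthetic dataset stands in for real training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# A tiny, invented search space of model configurations.
search_space = {
    "alpha": [1e-4, 1e-3, 1e-2],
    "loss": ["hinge", "modified_huber"],
}

best_score, best_params = -1.0, None
for alpha, loss in product(search_space["alpha"], search_space["loss"]):
    model = SGDClassifier(alpha=alpha, loss=loss, random_state=0)
    score = cross_val_score(model, X, y, cv=3).mean()  # automated evaluation
    if score > best_score:
        best_score, best_params = score, {"alpha": alpha, "loss": loss}

print(best_params, round(best_score, 3))
```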


Digital Media May Be Changing How You Think
Dartmouth College (05/08/16) Amy Olson

Dartmouth College researchers have found that using digital platforms such as tablets and laptops for reading could make users more inclined to focus on concrete details instead of interpreting information more abstractly. To examine the basic question of whether processing the same information on digital versus non-digital platforms triggers a different baseline "interpretive lens" that influences how users interpret information, the researchers attempted to hold as many factors as possible constant between the digital and non-digital platforms. The study, comprising four tests with more than 300 participants, evaluated how information processing is affected by each platform. The researchers found reading comprehension and problem-solving success were affected by the type of platform used. For abstract questions, participants using the non-digital platform scored higher on inference questions, with 66 percent correct, compared with 48 percent correct for those using the digital platform. For concrete questions, participants using the digital platform scored better, with 73 percent correct, versus 58 percent correct for those using the non-digital platform. When presented with a table of information, 66 percent of participants using the non-digital platform reported the correct answer, compared with 43 percent of those using the digital platform. The researchers will present their work this week at ACM CHI 2016 in San Jose, CA.


Design Tool Enables Novices to Create Bendable Input Devices for Computers
Phys.org (05/06/16)

Researchers at ETH Zurich and Disney have developed DefSense, software that enables non-experts to design and build flexible objects that can sense when they are being deformed and be used to control other electronic devices. The research team will present the optimization-based algorithm this week at ACM CHI 2016 in San Jose, CA. "[Three-dimensional-printed] objects that can sense their own deformation will open the door to a range of exciting applications, such as personalized toys, custom game controllers, and electronic musical instruments," says Disney Research's Markus Gross. The objects can sense when they are bent because they contain embedded piezoresistive wires; the wires' electrical resistivity changes when they are bent, enabling the software to infer the amount of deformation from changes in measured resistance. "Determining how to route the sensors within the object and how to interpret their readings is a complex design problem in all but the most trivial cases," says Disney researcher Moritz Bacher. The researchers developed a design process in which the user creates the shape of the object and specifies the deformations that need to be sensed. An optimization algorithm then computes sensor layouts based on those example deformations and iteratively guides the designer in placing the wires within the object.
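
As a rough illustration of the sensing principle (not the DefSense algorithm itself), a bend estimate can be recovered from embedded piezoresistive wires by calibrating a simple linear map from measured resistance changes to known example deformations; all numbers below are invented:

```python
# Illustrative sketch of piezoresistive deformation sensing, not Disney/ETH code.
import numpy as np

# Hypothetical calibration data: each row is the fractional resistance change
# (delta_R / R0) of three embedded wires; targets are known bend angles (deg).
delta_r = np.array([
    [0.00, 0.00, 0.00],
    [0.02, 0.01, 0.00],
    [0.05, 0.02, 0.01],
    [0.08, 0.04, 0.01],
])
bend_deg = np.array([0.0, 10.0, 25.0, 40.0])

# Least-squares fit of a linear map: bend ~= delta_r @ w
w, *_ = np.linalg.lstsq(delta_r, bend_deg, rcond=None)

# Estimate the deformation implied by a new resistance reading.
reading = np.array([0.04, 0.02, 0.01])
print(f"estimated bend: {reading @ w:.1f} degrees")
```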


The Scientists Who Simulate the End of the World
Co.Design (05/06/16) Kelsey Campbell-Dollaghan

The U.S. National Infrastructure Simulation and Analysis Center (NISAC) models how national infrastructure and human behavior would be affected by attacks or catastrophes ranging from cyber-sieges to global pandemics to severe weather. The analytical discipline developed at NISAC, founded in 1999, is known as Complex Adaptive Systems of Systems (CASoS), which applies chaos theory and other concepts to real-world problems as they transpire. CASoS uses modern computing resources to simulate not only billions of actors and systems, but also how those systems interact globally and how they adapt to ecosystem-wide dynamic changes. One application of CASoS by NISAC was a model of a global avian flu pandemic in 2005, built to test the hypothesis that the best strategy for averting a pandemic would be thinning the potential network of infectees. NISAC reflects the importance to the federal government of predicting how adaptive systems will respond to and evolve from disruptions, and some of its work involves simulating human behavior in times of crisis. NISAC's modus operandi is to develop a program with the U.S. Department of Homeland Security outlining the topics it might study for that year, creating a document other federal agencies can use in a crisis. In the decades ahead, for example, NISAC's work could focus on developing predictive tools cities can use to model and adapt to extreme weather.


AIs Are Starting to Learn Like Human Babies by Grasping and Poking Objects
Quartz (05/05/16) Joon Ian Wong

A project at Carnegie Mellon University (CMU) could enable artificial intelligences (AIs) to learn in a more human way. The CMU researchers note babies learn by poking and pushing, and say their goal is to use physical robotic interactions to teach an AI to recognize objects. The team programmed a robotic arm to grasp, push, poke, and perceive objects from multiple angles; the arm interacted with 100 objects, yielding 130,000 data points. The researchers fed the data into a convolutional neural network to train it to learn a visual representation of each of the 100 objects. The neural network was able to classify images of the objects in the ImageNet research database more accurately with the touch data than without it. "The overall idea of robots learning from continuous interaction is a very powerful one," says University of Washington professor Sergey Levine. "Robots collect their own data, and in principle they can collect as much of it as they need, so learning from continuous active interaction has the potential to enable tremendous progress in machine perception, autonomous decision-making, and robotics."
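
A hedged sketch of the general approach, written in PyTorch with invented layer sizes and labels rather than the CMU team's actual model: a small visual encoder is trained jointly on object identity and on the outcome of a physical interaction (here, a fake grasp-success signal), so the learned features are shaped by the touch data.

```python
# Illustrative only: a visual encoder with an auxiliary "interaction outcome"
# head, standing in for training visual features from robotic grasp/poke data.
import torch
import torch.nn as nn


class InteractionNet(nn.Module):
    def __init__(self, num_classes: int = 100):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classify = nn.Linear(32, num_classes)  # which of the 100 objects
        self.grasp_head = nn.Linear(32, 1)          # did the grasp succeed?

    def forward(self, images):
        feats = self.encoder(images)
        return self.classify(feats), self.grasp_head(feats)


model = InteractionNet()
images = torch.randn(8, 3, 64, 64)            # fake batch of image crops
obj_labels = torch.randint(0, 100, (8,))      # fake object identities
grasp_labels = torch.rand(8, 1)               # fake grasp outcomes in [0, 1]

logits, grasp_pred = model(images)
loss = nn.CrossEntropyLoss()(logits, obj_labels) \
     + nn.BCEWithLogitsLoss()(grasp_pred, grasp_labels)
loss.backward()  # both objectives shape the shared visual features
```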


Not Lost in Translation: Researchers 'Teach' Computers to Translate Accurately
IDG News Service (05/06/16) Agam Shah

Scientists are seeking to improve the capabilities of online translation programs by embedding new artificial intelligence methods that could help accurately build complete sentences. University of Liverpool researchers have developed algorithms that translate words and languages while incorporating a human-like touch that could potentially boost accuracy. The algorithms enable a computer to translate a word from an unknown language and then add context around it so it can build a proper sentence from the surrounding words. The algorithms are designed to look up the meaning of words via services such as WordNet, and use a scoring mechanism to gauge the correlation of words when constructing a sentence. Liverpool researcher Danushka Bollegala says the programs' ability to help computers understand words is similar to "teaching languages to computers." He says the technology is one step toward creating a precise universal translator. It is currently possible to translate words with high rates of accuracy via Google Translate, but that technique can still yield poor sentence structures and frequently misconstrued meanings.
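
The Liverpool algorithms themselves are not described in detail, but the kind of WordNet-based scoring the article mentions can be sketched with NLTK: score a candidate word by its best sense-to-sense similarity to the words already placed in the sentence. The helper names below are illustrative, not the researchers' code.

```python
# Sketch of WordNet-based word-correlation scoring using NLTK.
# Requires: pip install nltk; then nltk.download("wordnet") once.
from nltk.corpus import wordnet as wn


def relatedness(word_a: str, word_b: str) -> float:
    """Best path similarity between any senses of the two words (0..1)."""
    scores = [
        s1.path_similarity(s2) or 0.0
        for s1 in wn.synsets(word_a)
        for s2 in wn.synsets(word_b)
    ]
    return max(scores, default=0.0)


def score_candidate(candidate: str, context_words: list) -> float:
    """Average relatedness of a candidate word to the sentence built so far."""
    if not context_words:
        return 0.0
    return sum(relatedness(candidate, w) for w in context_words) / len(context_words)


# The word "bank" scores differently depending on the surrounding context.
print(score_candidate("bank", ["river", "water"]))
print(score_candidate("bank", ["money", "loan"]))
```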


Chameleon: Why Computer Scientists Need a Cloud of Their Own
HPC Wire (05/05/16) Tiffany Trader

The U.S. National Science Foundation-funded Chameleon cloud testbed has, in less than a year of operation, contributed to innovative research in high-performance computing (HPC) containerization, exascale operating systems, and cybersecurity. Chameleon principal investigator Kate Keahey, a Computation Institute fellow at the University of Chicago, describes the tool as "a scientific instrument for computer science where computer scientists can prove or disprove hypotheses." Co-principal investigator Dan Stanzione, executive director of the Texas Advanced Computing Center at the University of Texas at Austin, says Chameleon can meet the software and computer science research community's oft-denied request to make fundamental changes to the way the machine operates. With Chameleon, users can configure and test distinct cloud architectures on various problems, such as machine learning and adaptive operating systems, climate modeling, and flood prediction. Keahey says support for research at multiple scales was a key design element of the instrument. One project using Chameleon compares the performance of containerization and virtualization as they apply to HPC applications; Keahey says it is "a good example of a project that really needs access to scale." Another major Chameleon user is the Argo Project, an initiative for designing and prototyping an exascale operating system and runtime.


NYU Tandon Doctoral Student's Cochlear Implant Technology Banishes Ambient Babble
NYU Tandon School of Engineering (05/03/16)

People with cochlear implants and hearing aids often have difficulty understanding what someone is saying due to "babble," or the mix of speech and other ambient sounds. New York University (NYU) researchers have devised an algorithmic approach that filters the talker's voice from a background of noise. NYU doctoral student Roozbeh Soleymani developed a technology called Speech Enhancement using Decomposition Approach (SEDA) with professors Ivan Selesnick and David Landsberger in the NYU Tandon Department of Electrical and Computer Engineering and the NYU Langone Department of Otolaryngology, respectively. SEDA works by decomposing a speech signal into waveforms that differ not just in frequency but also in how many oscillations each wave contains. "Some waveforms in the SEDA process comprise many oscillations while others comprise just one," says Selesnick, whose U.S. National Science Foundation-funded research in 2010 helped trigger Soleymani's work. "Waveforms with few oscillations are less sensitive to babble, and SEDA is based on this underlying principle," Soleymani says. Selesnick notes this powerful signal-analysis method is practical only now because of the computational power available in today's electronic devices. The potential uses for SEDA could encompass cellphones as well, according to Landsberger.
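
SEDA itself is not publicly available, but the general idea of splitting a signal into components with different oscillatory behavior and suppressing the babble-sensitive ones can be approximated, purely for illustration, with a multilevel wavelet decomposition (PyWavelets below); the test signals and the crude sub-band suppression are invented for the example.

```python
# Illustrative stand-in for decomposition-based speech enhancement; not SEDA.
import numpy as np
import pywt

fs = 16000                                    # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
speech_like = np.sin(2 * np.pi * 220 * t)     # sustained, highly oscillatory tone
babble_like = np.random.randn(t.size) * 0.3   # broadband noise stand-in
signal = speech_like + babble_like

# Decompose into sub-bands, then suppress the finest (least speech-like) bands.
coeffs = pywt.wavedec(signal, "db8", level=5)
coeffs[-1] = np.zeros_like(coeffs[-1])        # crude denoising for illustration
coeffs[-2] = np.zeros_like(coeffs[-2])
enhanced = pywt.waverec(coeffs, "db8")[: signal.size]

print("noise power before:", round(float(np.var(signal - speech_like)), 3))
print("noise power after: ", round(float(np.var(enhanced - speech_like)), 3))
```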


A New Mobile Phone App for Grassroots Mapping
University of Exeter (05/04/16)

University of Exeter researchers have developed a mobile phone application that uses geographic data to map landscapes and help humanitarian rescue workers in disaster-struck regions. The app enables a standard smartphone to be converted into a self-contained remote sensing device. It uses the conventional sensors already in existing smartphones to generate ready-to-use spatial data when the device is suspended from lightweight aerial platforms such as drones or kites. The app gathers the data and enables the smartphone to operate autonomously, so once airborne it can capture images on its own. In addition, the app can be "live-coded," meaning its functionality is not fixed; the user can program it to behave as desired and capture images according to specific criteria. "We found that the best results were obtained when the phone was attached to a stable single-line kite or to a gliding drone so as to limit the vibrations, but there will undoubtedly be a wide range of ways of capturing high-quality data using this app and we are really keen to learn about the ways it is being used," says FoAM Kernow director Dave Griffiths, who collaborated on the research.


New Wi-Fi-based Network Keeps Facebook Servers at Bay
The Stack (UK) (05/04/16) John Bensalhia

University of California, Los Angeles researchers have developed an application designed to increase the security and privacy of people using online social networks such as Facebook. DiscoverFriends is a Wi-Fi-based app that bypasses the servers of an online social network to enable users to communicate with their friends. Special features such as location-based services enable online social networks to track an individual's activities. The app uses a Bloom filter-based approach with hybrid encryption to strengthen security. DiscoverFriends can set up communication between friends on the same local Wi-Fi network, and it enables online social network users to send multi-hop text messages and other data without going through the social network's server. The app also gives users a mechanism to check in their location anonymously on an online social network by using a new cryptographic primitive, Function Secret Sharing, which puts up a strong defense against traffic-analysis attacks by blocking user ID information.
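
As background on the data structure the article mentions, the sketch below is a minimal Bloom filter in Python; DiscoverFriends' actual construction, which layers hybrid encryption and Function Secret Sharing on top, is considerably more involved, and the identifiers used here are invented.

```python
# Minimal Bloom filter: a compact, probabilistic set membership structure.
import hashlib


class BloomFilter:
    def __init__(self, num_bits: int = 1024, num_hashes: int = 4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits)

    def _positions(self, item: str):
        # Derive several bit positions by salting a cryptographic hash.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item: str) -> bool:
        # False positives are possible; false negatives are not.
        return all(self.bits[pos] for pos in self._positions(item))


# A user could advertise a filter built from hashed friend identifiers; nearby
# devices can test membership without seeing the full friend list.
friends = BloomFilter()
friends.add("friend-id-alice")
print(friends.might_contain("friend-id-alice"))    # True
print(friends.might_contain("friend-id-mallory"))  # almost certainly False
```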


What Readers Think About Computer-Generated Texts
Ludwig Maximilians University of Munich (05/03/16)

A study conducted by researchers at Ludwig Maximilians University of Munich (LMU Munich) found readers like to read texts generated by computers, particularly when they are unaware that what they are reading was created by an algorithm. In the study, 986 subjects were asked to read and evaluate online news stories; articles the participants believed to have been written by journalists were consistently given higher marks for readability, credibility, and journalistic expertise than those deemed computer-generated. LMU Munich's Andreas Graefe and Hans-Bernd Brosius and colleagues selected two texts from the online editions of popular German news outlets: one was a report on a soccer match, the other covered the market performance of shares issued by an automotive supplier. They also used an algorithm developed at the Fraunhofer Institute for Communication, Information Processing and Ergonomics to generate texts on the same subjects. "The automatically generated texts are full of facts and figures--and the figures are listed to two decimal places," says LMU Munich's Mario Haim. "We believe that this impression of precision strongly contributes to the perception that they are more trustworthy." However, with respect to readability, readers always rated articles attributed to real journalists more favorably, even when the attribution was deliberately false.


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe