Association for Computing Machinery
Welcome to the July 29, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE


Is Tech Racist? The Fight Back Against Digital Discrimination
New Scientist (07/27/16) Aviva Rutkin

Racial discrimination appears to be rife in some of the most popular computer applications and algorithms, attributable either to user bias unwittingly infecting the apps or to designers overlooking entire groups of potential customers. Some of the most notable examples involve prejudices baked directly into code, concealed by a patina of mathematical accuracy. For example, there are widespread complaints about U.S. courts' use of a sentencing algorithm that predicts greater recidivism among black offenders than among white offenders. In some cases bias arises from the data the software is given to work with, although the University of Michigan Law School's Sonja Starr says race does not have to be explicitly included to produce a racially biased result. She notes algorithms such as the sentencing software often factor in variables that are correlated with race. University of California, Davis professor Anupam Chander sees discrimination as frequently embedded within the data itself, making the resulting technology an agent for the viral spread of prejudice. Many people think technology designers have a duty to practice what Chander calls "algorithmic affirmative action" by explicitly focusing on race in the underlying data and in algorithmic results, and taking remedial action when necessary.
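
The point about proxy variables can be made concrete with a small, hypothetical example. The sketch below (synthetic data only, not any court's actual sentencing model) trains a classifier that never sees race, yet produces racially skewed risk scores because one of its features correlates with race.

```python
# Synthetic illustration only (not any court's actual sentencing model): a classifier
# that never sees race can still produce racially skewed risk scores when one of its
# features (here, a hypothetical "neighborhood" code) correlates with race.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
race = rng.integers(0, 2, n)                          # protected attribute, never a feature
neighborhood = (race + (rng.random(n) < 0.2)) % 2     # proxy, ~80% correlated with race
prior_arrests = rng.poisson(1 + 0.5 * neighborhood)   # historical data already skewed

# Labels reflect past enforcement patterns tied to the proxy, not race directly.
recidivated = (rng.random(n) < 0.2 + 0.1 * neighborhood).astype(int)

X = np.column_stack([neighborhood, prior_arrests])    # race is deliberately excluded
scores = LogisticRegression().fit(X, recidivated).predict_proba(X)[:, 1]

print("mean risk score, group 0:", round(scores[race == 0].mean(), 3))
print("mean risk score, group 1:", round(scores[race == 1].mean(), 3))  # higher, despite no race feature
```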


First Major Database of Non-Native English
MIT News (07/29/16) Larry Hardesty

Researchers at the Massachusetts Institute of Technology (MIT) have released the first major database of fully annotated English sentences written by non-native English speakers, in the hope it will inform the design of applications that improve how computers handle the spoken or written language of non-native speakers. The database comprises more than 5,100 sentences taken from exam essays written by English as a second language (ESL) students, with each sentence featuring at least one grammatical error. The annotators were trained for weeks on both correct and error-ridden sentences, after which the researchers mapped the syntactic relationships between the words in the corrected and uncorrected versions using the Universal Dependencies formalism. MIT graduate student Yevgeni Berzak says most writers and speakers of English are non-native speakers, a fact that "is often overlooked when we study English scientifically or when we do natural language processing for English." Berzak notes most machine learning-based language systems look for patterns in training data written only in standard English. He says systems also trained on non-standard English could better handle non-native speakers' linguistic quirks. Uppsala University professor Joakim Nivre notes that, with both corrected and uncorrected sentences annotated, correction "could be cast as a machine-translation task, where the system learns to translate from ESL to English."
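
As a rough illustration of the kind of annotation involved (this is not MIT's annotation pipeline), the sketch below uses the off-the-shelf spaCy parser, whose English dependency labels closely follow the Universal Dependencies style, to compare the relations assigned to a made-up learner sentence and its corrected form.

```python
# Illustration only (not MIT's annotation pipeline). Requires:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

esl = "He go to school every days."          # made-up learner sentence with errors
corrected = "He goes to school every day."   # its corrected counterpart

for text in (esl, corrected):
    print(text)
    for token in nlp(text):
        # each word, its dependency relation, and the head word it attaches to
        print(f"  {token.text:10} {token.dep_:10} -> {token.head.text}")
```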


16 Wild Research Experiments That Could Change Design
Co.Design (07/25/16) Mark Wilson

This week's ACM SIGGRAPH 2016 conference in Anaheim, CA, featured 119 technical papers describing projects that could transform design. Among them is a process developed by Disney Research for three-dimensionally (3D) printing meshed structures of varying density. Another notable 3D printing method is CofiFab, or coarse-to-fine fabrication of large objects from a two-dimensional (2D) laser-cut base by layering segments. Also highlighted is a system from the Institute of Science and Technology Austria that converts 3D models into a wire frame that offers stability when the schematics are fed into a wire bender. Physics-driven pattern adjustment for direct 3D garment editing is described as a system that can be used to attire a virtual mannequin with virtual fabric, after which it produces a sewable 2D cloth pattern. Researchers also detailed a technology for scanning categories of objects to find operational similarities that designers could use to create versatile "functional hybrids," while a similar project produces "saliency maps" of objects from an analysis of 3D files to highlight places where touch-friendly materials could be incorporated. An artificial intelligence trained to identify 12,500 objects contained in more than 750,000 sketches could represent a breakthrough for computer vision, while a new algorithm that can render generalized information as street or office schematics could enhance automated urban planning.


Revolutionary Web Browser Lets You Lead a Smarter Life When You Get a HAT
Engineering & Physical Sciences Research Council (07/27/16)

The public launch this month of the University of Warwick's RUMPEL hyperdata Web browser will enable users to browse their own private and secure "personal data wardrobe" known as a Hub-of-all-Things (HAT). HAT compiles online data about them and lets them control, combine, and share it in whatever manner they see fit. Warwick professor Irene Ng says the browser is designed to help people "claim their data from the Internet. The aim of RUMPEL is to empower users and enable them to be served by the ocean of data about them that's stored in all kinds of places online, so that it benefits them and not just the businesses and organizations that harvest it." The development of RUMPEL is part of the HAT initiative funded by the Engineering and Physical Sciences Research Council of the U.K. The researchers also plan to add automated and personalized suggestions, prompts, and reminders based on users' needs, habits, and lifestyles to RUMPEL. "We want to get thousands of people all over the world to try out RUMPEL and experience for themselves how it can help them make better decisions, save them time, and save them money by exchanging their personal data in a privacy-preserving manner," Ng says. "We hope this initial rollout is just the first step in a process that puts people right at the heart of the Internet in future."


A Research Project Coordinated by UC3M Helps Reduce the Cost of Parallel Computing
Carlos III University of Madrid (Spain) (07/29/16)

Europe's Reengineering and Enabling Performance and poweR of Applications (REPARA) research project, coordinated by Carlos III University of Madrid (UC3M), is almost complete. REPARA was launched to improve parallel computing applications by lowering costs, boosting performance, enhancing energy efficiency, and easing source code maintenance. "We hope to help transform code so that it can be run in heterogeneous parallel platforms with multiple graphic cards and reconfigurable hardware," says UC3M professor and project coordinator Jose Daniel Garcia. He notes the project's semiautomatic process allows improvements to be engineered within days instead of months. The goal of REPARA is to make the energy and performance benefits of these systems available to users without the arduous development effort such architectures typically demand. Achieving this involves source code "refactoring," in which the internal structure of a program is improved without changing its observable behavior. Garcia says this improves the application's performance and energy efficiency as well as the maintainability of its source code. He notes the researchers have devised and registered three technological products they may commercialize with a European firm. "These software products can help developers to offer engineering services to third parties by simplifying the development process," Garcia says.
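
To illustrate the refactoring idea in miniature (REPARA itself targets C++ code and heterogeneous hardware, so this Python sketch is only an analogy), the example below rewrites a serial loop as a parallel map while leaving the program's observable behavior unchanged.

```python
# Python analogy only; REPARA itself refactors C++ for heterogeneous parallel hardware.
# The refactoring idea: change the program's internal structure (serial loop -> parallel
# map) while keeping its observable behavior identical.
from concurrent.futures import ProcessPoolExecutor

def simulate(param: float) -> float:
    # stand-in for an expensive, independent computation
    return sum((param * i) ** 0.5 for i in range(1, 200_000))

params = [float(p) for p in range(1, 33)]

def run_serial():
    return [simulate(p) for p in params]

def run_parallel():
    with ProcessPoolExecutor() as pool:
        return list(pool.map(simulate, params))   # same results, in the same order

if __name__ == "__main__":
    assert run_serial() == run_parallel()         # observable behavior is unchanged
```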


Symposium on Accelerating Science: A Grand Challenge for AI
CCC Blog (07/28/16) Vasant G. Honavar

The argument that the emergence of big data heralds the obsolescence of the time-honored scientific method is insupportable; in fact, its advent widens the gap between our ability to obtain, store, and process data and our ability to use that data effectively to facilitate discovery, writes Pennsylvania State University professor Vasant G. Honavar. He says accelerating science to keep pace with the rate of data acquisition and processing requires developing a suite of computational lenses, such as algorithmic or information-processing abstractions, combined with formal tools and techniques for modeling and simulating natural processes. Researchers also need cognitive tools, which demands formalizing, developing, and analyzing algorithmic or information-processing abstractions of various facets of the scientific process, as well as creating computational artifacts that embody such understanding. Honavar says the integration of these cognitive tools into collaborative human-machine systems and infrastructure to advance science also is needed. He says the development of these tools is a grand challenge for artificial intelligence (AI), as it calls for basic, integrative, and coordinated advances across all subfields of AI, including perception, knowledge representation, automated inference, information integration, machine learning, natural language processing, planning, decision-making, distributed problem-solving, robotics, and human-human and human-machine communication, interaction, and coordination.


Machines v. Hackers: Cybersecurity's Artificial Intelligence Future
The Christian Science Monitor (07/25/16) Paul F. Roberts

Experts predict machines will perform increasingly complex cybersecurity operations over time, causing demand for human analysts to further decline and facilitating a paradigm shift in cybersecurity. SparkCognition CEO Amir Husain notes artificial intelligence is already being employed to secure information with tasks such as file analysis, while computers are capable of performing many of the response functions currently handled by people, only much faster. The U.S. Defense Advanced Research Projects Agency (DARPA) next month will host the first-ever hacking contest pitting automated supercomputers against each other. DARPA's Mike Walker says the competition seeks to build impetus for the construction of "autonomous systems that can arrive at their own insights, do their own analysis, make their own risk equity decisions of when to patch and how to manage that process." Meanwhile, technology companies are working toward the same objective, with IBM in May announcing plans to teach a cloud-based version of its Watson cognitive technology to spot cyberattacks and computer crimes. Experts note much of cybersecurity work entails extracting insights from a vast corpus of unimportant data. "You're looking around your infrastructure and studying [network traffic] for machines that are talking to some [Internet] address or region that your network hasn't talked to before," says the SANS Institute's John Pescatore.
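
The triage Pescatore describes can be sketched in a few lines; the flow records and addresses below are made-up examples, not any product's data format.

```python
# Made-up flow records, not any product's log format: flag internal hosts that start
# talking to destinations the network has never contacted during a baseline period.
baseline_flows = [("10.0.0.5", "52.1.2.3"), ("10.0.0.7", "52.1.2.3"),
                  ("10.0.0.5", "104.16.0.9")]
todays_flows = [("10.0.0.5", "52.1.2.3"), ("10.0.0.9", "185.220.101.4")]

known_destinations = {dst for _, dst in baseline_flows}

for src, dst in todays_flows:
    if dst not in known_destinations:
        print(f"ALERT: {src} contacted previously unseen destination {dst}")
```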


Violent Groups Revealed on Twitter: Tool Keeps Track Without Having to Geolocate the Tweets
Spanish Foundation for Science and Technology (07/26/16)

Sentiment analysis algorithms developed by researchers at the University of Salamanca (USAL) in Spain are able to conduct semantic analyses of Twitter posts in order to identify and study violent groups. Hybrid neural-symbolic artificial intelligence systems and multiple algorithms were used to develop the tool, which the team believes could help law enforcement track threats and potentially dangerous situations. The application can understand sentiments in six different languages and study changes in individual sentiments, physical location, and group relationships. "It can establish where a dangerous user is located with reasonable precision, based on what they share on Twitter and how and with whom they are connecting at any time, without the need of geolocating tweets," says USAL professor Juan Manuel Corchado. He notes the tool also can be used to identify members and leaders of a group and track its evolution. A prototype has been developed, and Spanish police already have expressed interest in the final product, according to Corchado. He also says the application could be used to prevent bullying and racism, as well as to analyze consumer sentiment about companies, brands, and individuals on social media.
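
A toy sketch of multilingual sentiment scoring appears below; USAL's actual tool is a hybrid neural-symbolic system, so this lexicon lookup, with made-up word lists and posts, only illustrates the general idea.

```python
# Toy lexicon lookup only; USAL's tool is a hybrid neural-symbolic system.
# Word lists and example posts are invented for illustration.
LEXICONS = {
    "en": {"hate": -2, "attack": -2, "love": 2, "peace": 2},
    "es": {"odio": -2, "atacar": -2, "amor": 2, "paz": 2},
}

def score(text: str, lang: str) -> int:
    """Sum per-word sentiment values from the lexicon for the given language."""
    lexicon = LEXICONS[lang]
    return sum(lexicon.get(word.strip(".,!?").lower(), 0) for word in text.split())

print(score("We love peace", "en"))    #  4
print(score("Odio esa gente.", "es"))  # -2
```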


Selfie Righteous: New Tool Corrects Angles and Distances in Portraits
Princeton University (07/27/16) Adam Hadhazy

Princeton University researchers have developed an editing tool that can correct distortions in self-portrait photographs by manipulating a digital image so a subject's face appears as if it were photographed from a longer distance or a different angle. The researchers combined a model that generates digital, three-dimensional (3D) heads with a program that identifies more than 70 facial reference points. The 3D head is fitted to the reference points detected in the two-dimensional image, so a selfie's facial reference points can then be shifted to approximate changes in distance or orientation. The synthetic image looks realistic because the exact pixel colors from the original image are still present. Before pursuing commercial development, the researchers will focus on perfecting the tool's handling of hair, which often looks contorted in the synthetic images because of varied hair texture and color. Moreover, body features that are not visible in the original picture would appear to be missing or distorted in the altered pose. "As humans, we have evolved to be very sensitive to subtle cues in other people's faces, so any artifacts or glitches in synthesized imagery tend to really jump out," notes Princeton professor Adam Finkelstein. The work was presented this week at the ACM SIGGRAPH 2016 conference in Anaheim, CA.
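
The distortion the tool corrects follows from simple pinhole-camera geometry, sketched below (this is not Princeton's code; the 10-cm nose-to-ear depth difference is an assumed figure for illustration): apparent size scales as one over depth, so at selfie range the nose is noticeably magnified relative to the ears.

```python
# Geometry sketch only (not Princeton's tool): under a pinhole camera, apparent size
# scales as 1/depth, so features closer to the lens are magnified. The 10 cm
# nose-to-ear depth difference is an assumed figure.
NOSE_OFFSET_M = 0.10   # nose tip sits ~10 cm closer to the camera than the ears

for camera_distance in (0.3, 1.5):   # selfie range vs. portrait range, in meters
    nose_scale = 1.0 / (camera_distance - NOSE_OFFSET_M)
    ear_scale = 1.0 / camera_distance
    print(f"camera at {camera_distance:.1f} m: nose magnified "
          f"{nose_scale / ear_scale:.2f}x relative to the ears")
```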


NUS in Quest to Create Next-Generation 'Quantum Music'
The Straits Times (07/25/16) Lin Yangchen

High-frequency vibrations produced by cooled atoms can be translated into musical sounds audible to humans, as shown by a large-scale project funded by a grant from the European Union's Creative Europe program. The National University of Singapore's (NUS) Center for Quantum Technologies (CQT) sourced the quantum music from atoms of rubidium, sodium, and other elements cooled to extremely low temperatures, at which point the atoms vibrate at frequencies exceeding 50 MHz. These frequencies are above the audible range for humans, but CQT researcher Andrew Garner developed software to translate the vibrations into the audible frequency range. The cooling experiments and the computational translations of the data currently must be done separately, but the CQT's goal is to create the music in real time. For inspiration, the project's researchers are looking to the methodology of CQT professor Alexander Ling, who used photons of light to generate noise-resistant quantum effects and was able to drastically shrink his experimental rig to a weight of 100 grams. Eventually, the researchers want to hold a concert featuring the quantum music for the Singapore public. "The intermingling of ideas from art and science is necessary in the modern world," says Santha Bhaskar, an artistic director at the NUS Center for the Arts.
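
One simple way to carry out such a translation, shown in the sketch below, is to fold a measured frequency down by whole octaves until it lands in the audible band and then synthesize that pitch; the 50-MHz figure comes from the article, but the octave-folding mapping and the tone synthesis are assumptions, not CQT's actual software.

```python
# Assumed mapping for illustration (not CQT's software): fold the measured frequency
# down by octaves until it is audible, then write a 2-second tone as a WAV file.
import math
import struct
import wave

measured_hz = 50e6             # vibration frequency reported for the cooled atoms
audible_hz = measured_hz
while audible_hz > 4000:       # halve (drop an octave) until comfortably audible
    audible_hz /= 2
octaves = round(math.log2(measured_hz / audible_hz))
print(f"{measured_hz:.0f} Hz mapped to {audible_hz:.1f} Hz ({octaves} octaves down)")

rate, seconds = 44100, 2
frames = bytearray()
for i in range(rate * seconds):
    sample = int(32767 * 0.3 * math.sin(2 * math.pi * audible_hz * i / rate))
    frames += struct.pack("<h", sample)

with wave.open("quantum_tone.wav", "w") as f:
    f.setnchannels(1)          # mono
    f.setsampwidth(2)          # 16-bit samples
    f.setframerate(rate)
    f.writeframes(bytes(frames))
```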


Computer Scientist Develops Smart App for Growers
Capital Press (07/22/16) Tim Hearden

A digital application called SmartFarm, developed by University of California, Santa Barbara professor Chandra Krintz, aims to help growers monitor real-time field conditions and weather patterns. Krintz says the phone or tablet app collects data from small sensors placed around individual plants, giving growers valuable information about soil health and irrigation needs. SmartFarm then combines the sensor data with weather forecasts to create a complete environmental profile and predict when growers need to take action to prevent damage from frost or other weather conditions. Krintz's team currently is testing the system on an experimental farm in Santa Barbara and collaborating with about 20 growers throughout the state. Krintz notes the SmartFarm technology is provided free to growers, the required hardware will be inexpensive, and the software will be available online for people to try by the end of the year. Krintz says adding data analytics to agricultural strategies can solve many of the problems growers face. "We have to produce enough food to feed 9 billion people by 2050, and 7 billion people today," she notes. "We think automation and computing can really simplify what farmers do today."
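
A minimal sketch of that sensor-plus-forecast logic appears below; the field names, readings, and thresholds are invented for illustration and do not reflect SmartFarm's actual data model.

```python
# Invented field names, readings, and thresholds; not SmartFarm's actual data model.
sensor_readings = [
    {"plant_id": "row3-p17", "soil_moisture_pct": 22, "air_temp_c": 4.5},
    {"plant_id": "row3-p18", "soil_moisture_pct": 35, "air_temp_c": 4.8},
]
forecast_overnight_low_c = -1.0   # from a weather feed

for reading in sensor_readings:
    expected_low = min(reading["air_temp_c"], forecast_overnight_low_c)
    if expected_low <= 0:
        action = "run frost protection overnight"
    elif reading["soil_moisture_pct"] < 25:
        action = "schedule irrigation"
    else:
        action = "no action needed"
    print(reading["plant_id"], "->", action)
```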


Plumbing the Possibilities of 'Seeing Around Corners'
University of Wisconsin-Madison (07/21/16) Brian Mattmiller

Researchers from the Morgridge Institute for Research and the University of Wisconsin-Madison (UW-Madison) are exploring the possibilities of using scattered-light technology to recreate images hidden from a human line of sight. Although typical cameras rely on an initial burst of light on the subject in view, the joint project focuses on the indirect light that scatters through the scene. The technology recaptures these pulses of scattered photons through finely tuned sensors. The data collected from the sensors is then used to digitally rebuild a three-dimensional (3D), unobstructed scene. Morgridge researcher Andreas Velten originally pioneered and demonstrated the technology in 2012 to recreate human figures and other shapes from around corners. Velten and UW-Madison professor Mohit Gupta are now pushing the limits of their imaging technology to see if they can recapture movement, determine an object's composition, and differentiate between similar shapes. A theoretical framework was developed to study the effect of bouncing light several times through a space to better capture scenes out of sight. "The more times you can bounce this light within a scene, the more possible data you can collect," Velten says. "Since the first light is the strongest, and each proceeding bounce gets weaker and weaker, the sensor has to be sensitive enough to capture even a few photons of light."
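
The timing principle can be worked through numerically, as in the sketch below (the geometry is a made-up example, not the researchers' reconstruction code): a photon's arrival time fixes the total length of its laser-to-wall-to-object-to-wall-to-sensor path, and many such measurements constrain where the hidden object can be.

```python
# Made-up geometry, not the researchers' reconstruction code: the arrival time of a
# three-bounce photon fixes the total path length laser -> wall -> hidden object ->
# wall -> sensor; many such timings constrain where the hidden object can be.
import math

C = 299_792_458.0                 # speed of light, m/s

laser = (0.0, 0.0, 0.0)
wall_spot = (2.0, 0.0, 0.0)       # visible point on the relay wall
hidden_obj = (2.0, 1.5, 1.0)      # object around the corner (unknown in practice)
sensor = (0.0, 0.1, 0.0)

path_m = (math.dist(laser, wall_spot) + math.dist(wall_spot, hidden_obj)
          + math.dist(hidden_obj, wall_spot) + math.dist(wall_spot, sensor))
print(f"total path: {path_m:.3f} m, arrival time: {path_m / C * 1e9:.3f} ns")
```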


Using Models in Developing Software for Self-Driving Cars
InfoQ (07/28/16) Ben Linders

In an interview, University of Arizona professor Jonathan Sprinkle says software for self-driving vehicles is shifting from a monolithic to a composable model. Moreover, the sensor fusion technique of merging the various sensor data into a static map is moving toward composable perception, in which the car can infer the possible movement of things in its environment. "While functional behaviors may be [fairly] easy to test, nonfunctional behaviors rarely compose--and these behaviors are the ones that must be guaranteed for complex cyber-physical systems that interact with humans," Sprinkle notes. He says software modeling offers a way to reason about system behavior without regard to exact inputs, and notes most autonomous capabilities are being deployed on a task-by-task basis. Sprinkle says reactive sequence models serve best for planning and control software. "In the case of control, most software can be modeled as a component [functional block] where inputs are transformed into outputs," he says. As an example of how he uses test data to validate that self-driving car software is operating properly, Sprinkle cites checking "whether [when driving under human control], if the autonomous controllers make reasonable decisions with respect to, say, desired velocity."
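
Sprinkle's functional-block view of control can be sketched as below (this is not his group's software; the log values and tolerance are invented): a component maps inputs such as current and desired speed to an output throttle command, and logged human driving can be replayed to check whether the controller's desired velocity stays reasonable.

```python
# Not Sprinkle's actual software; log values and the 2 m/s tolerance are invented.
def speed_controller(current_mps: float, desired_mps: float, gain: float = 0.5) -> float:
    """Functional block: inputs (current, desired speed) in, throttle command out."""
    return max(-1.0, min(1.0, gain * (desired_mps - current_mps)))

# Replay log from human driving: (measured speed, controller's desired speed), in m/s
human_log = [(12.0, 12.4), (12.5, 12.6), (13.0, 15.9)]

for measured, desired in human_log:
    throttle = speed_controller(measured, desired)
    reasonable = abs(desired - measured) < 2.0
    print(f"measured {measured} m/s, desired {desired} m/s, "
          f"throttle {throttle:+.2f}, reasonable={reasonable}")
```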


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe