Association for Computing Machinery
Welcome to the July 25, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.


Machine Vision's Achilles' Heel Revealed by Google Brain Researchers
Technology Review (07/22/16)

Machine-vision algorithms have a weakness that enables them to be deceived by image modifications that would not fool a human observer, according to Google Brain and OpenAI researchers. "An adversarial example for the face recognition domain might consist of very subtle markings applied to a person's face, so that a human observer would recognize their identity correctly, but a machine learning system would recognize them as being a different person," the researchers note. Their efforts to systematically study adversarial images have uncovered machine-vision systems' vulnerability. The team began with ImageNet, a database of images classified according to what they display; a standard test involves training a machine-vision algorithm on part of this database and then assessing how well it classifies another part. The team developed an adversarial image database by modifying 50,000 pictures from ImageNet in three distinct ways. One algorithm makes small changes to an image that maximize the cross-entropy loss, while another applies such changes repeatedly via iteration. The third algorithm alters an image to steer the machine-vision system toward a specific misclassification. Testing Google's Inception v3 algorithm on these images showed the first two methods lower its top-5 and top-1 accuracy substantially, while the third cuts accuracy to zero for all of the images.
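The first attack described above, perturbing an image in the direction that maximizes cross-entropy loss, can be sketched on a toy classifier. The sketch below uses a hand-built logistic model rather than Inception v3, and the weights and inputs are invented for illustration; it shows only the core idea of stepping each input feature along the sign of the loss gradient.

```python
import math

# Toy linear classifier: score = w . x + b; sigmoid gives P(class 1).
# An adversarial perturbation nudges each input feature in the direction
# that increases the cross-entropy loss for the true label (here, label 1).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def adversarial(w, b, x, eps):
    """Shift x by eps in the sign of the loss gradient w.r.t. the input.

    For cross-entropy with true label 1, d(loss)/d(x_i) = (p - 1) * w_i.
    """
    p = predict(w, b, x)
    grad = [(p - 1.0) * wi for wi in w]
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

w, b = [2.0, -1.0], 0.0
x = [1.0, 0.5]                       # classified confidently as class 1
p_clean = predict(w, b, x)
x_adv = adversarial(w, b, x, eps=1.5)
p_adv = predict(w, b, x_adv)         # confidence in class 1 collapses
```

In a real attack the perturbation budget (`eps` here) is kept small enough that the change is imperceptible, yet the gradient direction still flips the classifier's decision.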

Computer Scientists Find Way to Make All That Glitters More Realistic in Computer Graphics
University of California, San Diego (07/21/16) Ioana Patringenaru

An algorithm developed by University of California, San Diego professor Ravi Ramamoorthi and colleagues promises to make the surfaces of a wide range of materials look a lot more realistic. Ramamoorthi says the method improves how computer graphics software reproduces the way light interacts with extremely small details, called glints, on the surface of materials, including metallic car paints, metal finishes for electronics, and injection-molded plastic finishes. The research team says the approach is 100 times faster than the current state of the art. The researchers' solution was to break down each pixel of an uneven, intricate surface into pieces covered by thousands of microfacets--light-reflecting points smaller than a pixel. Ramamoorthi says the algorithm's speed is based on its ability to approximate the distribution of microfacet normals at each surface location, called a "position-normal distribution." The method requires minimal computational resources and can be used in animations, while the team notes current methods can only reproduce these glints in stills. The researchers will present the work this week at the ACM SIGGRAPH 2016 conference in Anaheim, CA.

Researcher Proposes Social Emotions Test for Artificial Intelligence
National Research Nuclear University (07/25/16)

Recent brain studies point to a need to instill emotional responsiveness within artificial intelligence (AI) to make it truly human-like, and professor Alexei Samsonovich with the Moscow Engineering Physics Institute National Research Nuclear University proposes using a computer game as the template for an AI test. The game involves both a program and a person manipulating virtual people on a computer display, using actions with emotional content. The players interact in different types of social relationships, such as mutual trust, subordination, and leadership. Samsonovich believes that giving the machine an emotional edge over the average human player will lead players to help the machine out of traps first. In addition, multiple behavioral parameters of the player and the machine will be calculated in the course of the game, defining both participants' inner worlds and yielding machine parameters that are statistically identical to human behavior. Samsonovich envisions his AI test being one element of a challenge to construct an artificial brain that replicates the precepts and mechanisms of emotional awareness in humans. He says this brain would successfully pass the emotion test and be accepted by people as having emotional experience. "That implies such mechanisms as narrative thinking, autonomous goal setting, creative reinterpreting, active learning, and the ability to generate emotions and maintain interpersonal relationships," Samsonovich says.

Mars Rover Uses AI to Decide What to Zap With a Laser
Computerworld (07/22/16) Sharon Gaudin

The U.S. National Aeronautics and Space Administration (NASA) last week announced its Mars rover Curiosity can now decide what targets it should hit with a laser or take photos of without human intervention, thanks to artificial intelligence (AI) software developed by the Jet Propulsion Laboratory. Following the Autonomous Exploration for Gathering Increased Science (AEGIS) software's upload, the rover can choose scientifically interesting rocks and regions by itself. "This autonomy is particularly useful at times when getting the science team in the loop is difficult or impossible--in the middle of a long drive, perhaps, or when the schedules of Earth, Mars, and spacecraft activities lead to delays in sharing information between the planets," notes NASA's Tara Estlin. AEGIS previously was employed by the rover Opportunity, which has been operating on Mars for the last 12 years. NASA says Curiosity's use of AEGIS will entail analyzing images normally captured at the end of each trek, using the vehicle's navigation camera. AEGIS uses criteria such as size, shape, and brightness, uploaded by scientists, to determine if the rover should use its laser and telescopic camera to probe something of interest. Estlin says the AI system also directs Curiosity's ChemCam instruments in their work.
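Criterion-based target selection of the kind described, scoring candidates on uploaded features such as size, shape, and brightness, can be sketched as a weighted ranking. The feature values, weights, and rock names below are invented for illustration; AEGIS's actual criteria and scoring are defined by the mission scientists.

```python
# Hypothetical illustration: each candidate rock gets a weighted score from
# its features, and the rover probes the highest-scoring one.

def score(rock, weights):
    return sum(weights[k] * rock[k] for k in weights)

rocks = [
    {"name": "A", "size": 0.9, "shape": 0.4, "brightness": 0.7},
    {"name": "B", "size": 0.5, "shape": 0.9, "brightness": 0.8},
    {"name": "C", "size": 0.2, "shape": 0.3, "brightness": 0.4},
]
# Weights stand in for the scientist-uploaded criteria.
weights = {"size": 0.5, "shape": 0.3, "brightness": 0.2}

target = max(rocks, key=lambda r: score(r, weights))
```

Because the weights are uploaded rather than hard-coded, the same selection loop can be retargeted as mission priorities change.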

Error Fix for Long-Lived Qubits Brings Quantum Computers Nearer
New Scientist (07/20/16) Jacob Aron

Researchers at Yale University have achieved a 20-fold increase in quantum bit (qubit) lifetime. The researchers were able to store quantum data for 320 microseconds, marking the first time quantum error correction (QEC) reached the "break-even" point, according to Yale professor Rob Schoelkopf. The researchers built their QEC system to be as simple as possible, which reduces the chances of errors creeping in. Schoelkopf says the system consists of a superconductor inside an aluminum box, with a chamber on either side. One chamber is filled with microwave photons, which are linked to the superconductor, while the other is used to write data to and read it from the superconducting qubit; the two chambers together encode a single quantum bit of information. Schoelkopf says the system relies on the number of photons in the first chamber being even, so an error is introduced if one photon is lost, which happens naturally over time. The team uses quantum-state tomography to check the evenness or oddness of the photons. The technique allows for a quick look at a quantum object without destroying it, and with continuous monitoring the researchers were able to count the number of errors and compensate for them when finally reading out the data from the superconducting qubit, according to Schoelkopf.
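The error-counting logic described, watching a parity that flips whenever a photon is lost and compensating at final readout, can be illustrated with a classical toy model. This is only an analogy for the bookkeeping, not the Yale group's cavity physics; the loss probability and step count below are invented.

```python
import random

# Toy model of parity-based error tracking: the stored state is valid when
# the photon count is even, and each photon loss flips the parity.
# Continuously monitoring parity (without reading the data itself) lets the
# controller count losses and compensate when the data is finally read out.

def monitor(losses):
    """Track parity over a sequence of per-step loss events
    (True = one photon lost that step); return (flip count, final parity)."""
    parity = 0          # 0 = even (no error), 1 = odd
    flips = 0
    for lost in losses:
        if lost:
            parity ^= 1
            flips += 1
    return flips, parity

random.seed(1)
losses = [random.random() < 0.1 for _ in range(50)]   # ~10% loss per step
flips, parity = monitor(losses)
# At readout, the recorded flip count tells the controller whether the
# stored bit must be re-interpreted: an odd total signals an error.
corrected_ok = (flips % 2) == parity
```

The key point mirrored here is that parity is measured continuously while the encoded data itself is never disturbed until the final readout.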

Artificial Intelligence Camp Bridges STEM Gender Gap
Stanford Daily (07/25/16) Vivian Chaing

The Stanford Artificial Intelligence Outreach Summer (SAILORS) was created last summer by Stanford University professor Fei-Fei Li and postdoctoral researcher Olga Russakovsky. The program brings together high school girls from 20 U.S. states and three countries to listen to lectures and conduct research with faculty in Stanford's Artificial Intelligence Lab. "SAILORS is built on the hypothesis that a humanistic mission statement would attract more diverse students," Li says. "In turn, their values and perspectives are injected into the technology that will impact our society." Women hold less than 25 percent of science, technology, engineering, and math (STEM) jobs, and women majoring in STEM are less likely to end up working in STEM fields than their male counterparts, according to the U.S. Department of Commerce. The department attributes these trends to a "lack of female role models, gender stereotyping, and less family-friendly flexibility in the STEM fields." The SAILORS participants listened and took notes during a lecture on computer vision in the morning, and participated in team bonding activities before lunch. Li says the lectures and research projects provide an engaging and comfortable environment for the girls to explore their interests.

Adaptive Rendering Method Reduces Discolored Pixels in Photo-Realistic Images
EurekAlert (07/20/16) Jennifer Liu

Disney researchers have developed a method for improving the rendering of high-quality images from three-dimensional (3D) models by significantly reducing the noise contained in animated images while preserving fine detail. The researchers found they could improve the performance of a popular technique for producing photo-realistic animations, called Monte Carlo ray tracing, by varying the polynomial functions used to control image reconstruction based on the complexity of each region of the image. "Our new method outperforms existing state-of-the-art denoising techniques in terms of both numerical accuracy and visual quality," says Disney Research's Markus Gross. Monte Carlo ray tracing renders 3D scenes by randomly tracing the possible light paths for each pixel in the image. Denoising algorithms are used to filter out as much noise as possible during image reconstruction. The Disney researchers found they could reduce noise by choosing the most appropriate polynomial function for each region within the image. "The main observations of this new work are that polynomial functions can accurately approximate small image regions of varying complexity and, further, that automatically choosing the correct polynomial function for each region is important," says Edinburgh Napier University professor Kenny Mitchell. The researchers presented their method this week at the ACM SIGGRAPH 2016 conference in Anaheim, CA.
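The core idea reported above, automatically choosing a polynomial of appropriate complexity for each image region, can be sketched in one dimension. The sketch below fits a constant and a straight line to each region and picks the one with the smaller complexity-penalized residual; the penalty weight, region data, and restriction to degree at most one are simplifications invented here, not the Disney method itself.

```python
# Per-region model selection: fit candidate polynomials by least squares and
# keep the one whose residual plus a complexity penalty is smallest. Without
# the penalty the higher-degree fit would always win, overfitting the noise.

def fit_constant(ys):
    c = sum(ys) / len(ys)
    return [c] * len(ys)

def fit_line(ys):
    n = len(ys)
    xs = range(n)
    xbar, ybar = (n - 1) / 2.0, sum(ys) / n
    denom = sum((x - xbar) ** 2 for x in xs)
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / denom
    return [ybar + slope * (x - xbar) for x in xs]

def residual(ys, fit):
    return sum((y - f) ** 2 for y, f in zip(ys, fit))

def choose(ys, lam=0.01):
    """Return (name, fit) minimizing residual + lam * number_of_parameters."""
    candidates = [("constant", fit_constant(ys), 1), ("line", fit_line(ys), 2)]
    name, fit, _ = min(candidates,
                       key=lambda t: residual(ys, t[1]) + lam * t[2])
    return name, fit

flat_choice, _ = choose([1.0, 1.1, 0.9, 1.0])   # noisy but flat region
ramp_choice, _ = choose([0.0, 1.0, 2.1, 2.9])   # region with a real gradient
```

The same trade-off drives the paper's result: simple regions get simple reconstruction functions that average away noise, while complex regions get higher-order fits that preserve detail.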

Artificial Muscle for Soft Robotics: Low Voltage, High Hopes
Harvard University (07/20/16) Leah Burrows

A dielectric elastomer developed by Harvard University researchers could potentially be used as artificial muscles that move soft robots. The team combined two known materials that worked well individually--an elastomer based on one developed at the University of California, Los Angeles that eliminated the need for rigid components, and an electrode of carbon nanotubes developed in the Clarke Lab. The nanotubes neither increase the stiffness of the elastomer nor decrease the energy density, meaning the elastomer can still stretch and provide significant force. The team fabricated the elastomers one on top of the other, creating a multilayer sandwich of elastomer, electrode, elastomer, electrode, and so on. In this way, each electrode gets double usage, powering the elastomer above and below, according to the researchers. They note the new device outperforms standard dielectric elastomer actuators. The technology could be used in applications ranging from wearable devices and soft grippers to laparoscopic surgical tools, entirely soft robots, and artificial muscles in more complex robotics. "We think this has the potential to be the holy grail of soft robotics," says Harvard graduate student Mishu Duduta.

Scientists Glimpse Inner Workings of Atomically Thin Transistors
UT News (07/18/16) Christine Sinatra

A team of physicists at the University of Texas at Austin (UT-Austin) says it has had the first-ever glimpse into molybdenum disulfide, an atomically thin new semiconducting material that would allow for on-off signaling on a single flat plane. Using two-dimensional (2D) transistors, the team found electrical currents move in a more phased way than in silicon transistors, beginning first at the edges before appearing in the interior. UT-Austin professor Keji Lai says this suggests the same current could be sent with less power and in an even smaller space, using a one-dimensional edge instead of the two-dimensional plane. He notes this could promote future energy savings in electronic devices. "In the future, if we can engineer this material very carefully, then these edges can carry the full current," Lai says. "We don't really need the entire thing, because the interior is useless. Just having the edges running to get a current working would substantially reduce the power loss." Researchers have been trying for years to learn what happens inside a 2D transistor, which is key to understanding the material's potential uses, such as paper-thin computers and cellphones.

Can Robots Recognize Faces Even Under Backlighting?
Toyohashi University of Technology (07/19/16)

Researchers at the Toyohashi University of Technology have developed a technique to adaptively adjust the effect of lighting on human faces by utilizing an extended reflectance model. They say the model has one variable, the illumination ratio, which is controlled by a fuzzy inference system (FIS). The FIS rule was optimized using a genetic algorithm in order to handle the vast variety of illumination conditions, the researchers note. "By just adding this contrast adjustment to present face-recognition systems, we can largely improve the accuracy and performance of face detection and recognition," says Toyohashi professor Jun Miura. In addition, he says this adjustment runs in real time, making it appropriate for real-time applications such as robotics and human-interaction systems. A face reveals a person's identity as well as other information, such as their focus of attention and degree of tiredness. The researchers say this information could be useful for comfortable human-machine interaction. The proposed contrast adjustment method also could be useful in various situations, especially under severe illumination conditions, according to the researchers.
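The single-knob adjustment described, a face region rescaled by an illumination ratio before detection, can be sketched with a fixed brightness target in place of the fuzzy inference system. The target mean, pixel values, and linear rescaling below are invented simplifications; the actual system tunes the ratio with an FIS whose rules were optimized by a genetic algorithm.

```python
# Hypothetical sketch: estimate an illumination ratio from the face region's
# mean brightness versus a target, then rescale the region so a backlit
# (underexposed) face reaches a brightness level a detector can work with.

def adjust_contrast(pixels, target_mean=128.0):
    mean = sum(pixels) / len(pixels)
    ratio = target_mean / mean            # the single "illumination ratio" knob
    return [min(255.0, p * ratio) for p in pixels]

backlit_face = [20, 35, 25, 40, 30]       # dark pixels under backlighting
adjusted = adjust_contrast(backlit_face)  # region brought up to target mean
```

In the published system this ratio is chosen adaptively per image, which is what lets the preprocessing run in real time ahead of an unmodified face-recognition pipeline.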

Researchers Invent 'Smart' Thread That Collects Diagnostic Data When Sutured Into Tissue
Tufts Now (07/18/16) Patrick Collins

Tufts University researchers have integrated nanoscale sensors, electronics, and microfluidics into cotton and synthetic threads that can be sutured through multiple layers of tissue to wirelessly gather diagnostic data in real time. The team used a variety of conductive threads, which were dipped in physical and chemical sensing compounds and connected to wireless electronic circuitry, to create a flexible platform they sutured into tissue in rats as well as in vitro. The threads collected data on tissue health, pH, and glucose levels, which can be used to determine such things as how a wound is healing, whether infection is emerging, or whether the body's chemistry is out of balance. The results were wirelessly transmitted to a cellphone and computer. The three-dimensional platform is able to conform to complex structures such as organs, wounds, or orthopedic implants. The researchers think the advance raises the possibility of optimizing patient-specific treatments. "We think thread-based devices could potentially be used as smart sutures for surgical implants, smart bandages to monitor wound healing, or integrated with textile or fabric as personalized health monitors and point-of-care diagnostics," says Sameer Sonkusale, director of Tufts' interdisciplinary Nano Lab.

Federal Grant Supports UTSA Research in Espionage Prevention
UTSA Today (07/18/16) Joanna Carver

Researchers at the University of Texas at San Antonio (UTSA) have received a $649,172 grant from the U.S. Department of Homeland Security to strengthen insider threat detection. "The ability to detect threats within an organization and to keep sensitive information from getting into the wrong hands has become vital to national security," says Nicole Beebe, director of UTSA's Center for Education and Research in Information. The research will involve building an insider threat-detection system to prepare for real-world situations in which a disgruntled employee or corporate spy could steal valuable information. "The goal is to be able to detect an insider threat before that person commits their crimes," says UTSA professor Daijin Ko. Most organizations have protocols to detect these kinds of incidents, but several other factors that could signal an information breach are often overlooked. Beebe and Ko say in order to close this gap, they will detect digital forensic traces that can be used to signal a possible insider threat. "Essentially, we're watching for an outlier based on how long they're using the computer, when they are using it, and how they are using it, among other variables," Ko says.
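The outlier-watching Ko describes can be sketched as a z-score check of a user's current behavior against that user's own history. The feature (login hour), history values, and threshold below are invented for illustration; the UTSA system would draw on far richer digital-forensic traces.

```python
# Illustrative outlier check: flag a day whose feature value deviates from the
# user's historical mean by more than `threshold` standard deviations.

def mean_std(xs):
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, var ** 0.5

def is_outlier(history, today, threshold=3.0):
    m, s = mean_std(history)
    if s == 0:                       # no historical variation at all
        return today != m
    return abs(today - m) / s > threshold

login_hours = [9, 9, 10, 8, 9, 10, 9, 8]   # typical start-of-day logins
normal_day = is_outlier(login_hours, 9)     # 9 a.m. login: expected
odd_day = is_outlier(login_hours, 3)        # 3 a.m. login: flagged
```

Per-user baselines matter here: a 3 a.m. login is anomalous for this profile but would be routine for a night-shift employee, which is why the system models each user's own behavior rather than a global norm.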

Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.

To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe