Association for Computing Machinery
Welcome to the July 27, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets (click here) and for iPhones (click here) and iPads (click here).


Connecting Artificial Intelligence With the Internet of Things
The Guardian (07/24/15) Andy Meek

Oblong Industries CEO John Underkoffler does not believe a connection between the Internet of Things (IoT) and artificial intelligence (AI) will become a threat to mankind, provided basic questions are answered before any constraints on AI capabilities are considered. He notes the IoT will require an entirely new user interface, asking, "how do we talk to all these objects in a coherent way? That's a really great design problem." Beyond that, Underkoffler says "a minimal Bill of Rights" should, at the very least, be attached to the IoT. Underkoffler and colleagues believe excessive worry about AI going out of control and commandeering the IoT to inflict harm is causing people to ask the wrong questions. For example, Massachusetts Institute of Technology professor Sanjay Sharma warns a more insidious threat than a super-intelligent AI is the potential for "AI stupidity," in which "we will build artificial intelligence systems that are too smart by half, where they do something really dumb--for example, a cascading series of events that results in a power shutdown." Sharma also thinks the hacking of a flawed IoT system to cause widescale damage is inevitable, given that "we don't have a clear architectural understanding of what the IoT is."

UTS Data Arena: How Raw Data Transforms Into 3D, 360-Degree Visualization
CIO Australia (07/22/15) Rebecca Merrett

Software developers at the University of Technology, Sydney (UTS) have built a three-dimensional, 360-degree data visualization room. Called the Data Arena, the room consists of six projectors placed around curved walls that form a round drum shape. UTS Center for Autonomous Systems professor Jaime Valls Miro is using the Data Arena to assess the condition of water pipes and predict when they are likely to break, and says the visualization room provides a better understanding of what is happening because raw data alone can lead to a limited interpretation. Miro can rotate, turn, and zoom in and out to see the fine details of cracks and holes that have formed in a pipe. The Data Arena is driven by nine NVIDIA Quadro K6000 graphics processing units (GPUs) with 27,000 CUDA parallel processing cores, and uses the open source Equalizer framework for parallel rendering and workload balancing. Ben Simons, one of the Data Arena's developers, says the use of Houdini software, which is employed for visual effects in feature films, is what makes the visualization room unique. "What we are doing is we are taking that silo of capability in the visual effects industry and the high-performance computing, real-time GPU and we are building a bridge between the two," Simons says. "And that bridge is unique in our Data Arena, no one else is doing that."

The Rise of Computer-Aided Explanation
Quanta Magazine (07/23/15) Michael Nielsen

Important strides in the form of computer language translation and computer-generated proofs are driving progress in computer-aided explanation, writes Recurse Center computer scientist Michael Nielsen. He says statistical models in computer translation offer circumstantial explanations of translations, while computer-assisted proofs could arguably be defined as computer-generated explanations of mathematical theorems. In the second instance, Nielsen notes in mathematics, a proof is not only a justification for a mathematical outcome, but an explanation of why an outcome is true. "Thus, we can view both statistical translation and computer-assisted proofs as instances of a much more general phenomenon: the rise of computer-assisted explanation," he contends. "Such explanations are becoming increasingly important, not just in linguistics and mathematics, but in nearly all areas of human knowledge." Nielsen acknowledges these explanations may not be fully satisfying, or real explanations. He cites the views of skeptics who say these computer methods do not provide the kind of insight an orthodox approach can yield. Nielsen argues the optimal approach should be to give due consideration to both the objections and the computer-assisted explanations.

Deep Neural Nets Can Now Recognize Your Face in Thermal Images
Technology Review (07/24/15)

One of the problems with infrared surveillance video is that it can be difficult to recognize the individuals it captures, because a face seen in the infrared light emitted by the body looks different from the same face seen in the visible light reflected off of it. However, researchers at the Karlsruhe Institute of Technology in Germany have developed a method of matching infrared and visible-light images of the same face. Saquib Sarfraz and Rainer Stiefelhagen trained a deep neural network to match infrared and visible-light images of faces using a University of Notre Dame database of 4,585 infrared and visible-light images of 82 people. The database included images of the participants wearing different expressions and images taken on different days, to mirror the ways people's faces change over time and under different circumstances. Once trained, the network was able to match infrared and visible-light faces with 80 percent accuracy, provided it had multiple visible-light images against which to compare a thermal image. Accuracy fell to 55 percent when comparisons were one to one. The researchers say improving the system will require assembling a much larger database and building a more powerful network.
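The gap between 80 percent (multiple gallery images) and 55 percent (one-to-one) accuracy reflects a general property of gallery-based recognition: with several reference images per person, the probe only needs to match the best one. The sketch below illustrates that idea with made-up two-dimensional embeddings and cosine similarity; it is not the researchers' network, and all names and values are hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(probe, gallery):
    """Return the identity whose best-matching gallery embedding is
    closest to the probe (max similarity over each person's images)."""
    best_id, best_sim = None, -1.0
    for person, embeddings in gallery.items():
        sim = max(cosine(probe, e) for e in embeddings)
        if sim > best_sim:
            best_id, best_sim = person, sim
    return best_id

# Toy gallery: two identities, three visible-light embeddings each.
gallery = {
    "alice": [[1.0, 0.1], [0.95, 0.12], [1.05, 0.08]],
    "bob":   [[0.1, 1.0], [0.12, 0.95], [0.08, 1.05]],
}
probe = [0.9, 0.2]  # hypothetical thermal-derived embedding of "alice"
print(identify(probe, gallery))  # prints "alice"
```

With only one gallery image per person, an off-angle or noisy probe has fewer chances to find a close match, which is one plausible reason one-to-one comparison scores lower.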

Ulster University Scientists Develop App With U.S. Colleagues That Could Prevent Onset of Alzheimer's
Belfast Live (07/23/15) Sarah Scott

A new application unveiled last week at the Alzheimer's Association International Conference in Washington, D.C., could help prevent the onset of Alzheimer's disease. Developed by scientists at Ulster University in collaboration with Utah State University, the Gray Matters smartphone app is designed to educate and empower people to make positive lifestyle adjustments. Users can set lifestyle goals such as exercise, nutrition, stress management, and brain management, which are known to impact the onset and progression of the disease. The app enables users to track their lifestyle in areas ranging from diet and physical activity to mental well-being and social engagement. Moreover, the app provides daily facts linking healthy lifestyle behaviors and improved cognitive well-being, and also provides visual progress reports. Research findings show the app leads to "increases in intrinsic motivation for, and actual achieved changes in, health-related behaviors, with accompanying reductions in subjective memory complaints," says Utah State professor Maria Norton. "Secondary outcomes of interest are physical and cognitive health indicators including body mass index, blood pressure, blood-based biomarkers, and cognitive test scores." The researchers plan to make Gray Matters available to the general public in the future.

Artificial Intelligence Expert Likens AI Dangers to Nuclear Weapons
Naked Security (07/24/15) Mark Stockley

In an interview, University of California professor Stuart Russell, 2005 recipient of the ACM Karl V. Karlstrom Outstanding Educator Award, says the risks of artificial intelligence (AI) research are as grave as those of nuclear technology. In particular, he stresses the fundamental danger of "explicit or implicit value misalignment--AI systems given objectives that don't take into account all the elements that humans care about." Among the scenarios Russell envisions causing such misalignment are competition between nations or companies seeking a super-technological edge, or subtler "slow-boiled frog" evolution that leaves humans dependent and weak. Russell and hundreds of other AI experts signed an open letter in January urging research to "maximize the societal benefit of AI" instead of focusing on all possible applications. Refutation of their fears by Linux founder Linus Torvalds and others does not sway Russell, who believes human values and goals should be the core priority of AI technology development. He says adopting this perspective now is critical, because although AI that surpasses human intelligence is not likely to arrive soon, it presents other, more immediate dangers. He cites as one example AI-enabled lethal autonomous weapons systems, or robots that can acquire and destroy targets without human oversight.

Web App Helps Researchers Explore Cancer Genetics
News from Brown (07/23/15) Kevin Stacey

Brown University researchers have developed Mutation Annotation and Genome Interpretation (MAGI), an interactive tool designed to help researchers and clinicians explore the genetic underpinnings of cancer. MAGI is an open source Web application that enables users to search, visualize, and annotate large public cancer genetics datasets. "MAGI lets users explore these data in a regular Web browser and with no computational expertise required," says Brown Ph.D. student Max Leiserson, who led the development of the tool. MAGI also lets researchers upload data they have collected on their own, compare their findings to those in the larger databases, and leverage the large public datasets to help interpret their own results. The project started as a means of examining the output of algorithms that comb through large genome datasets, helping to pick out the mutations important to cancer development and distinguish them from benign mutations. "As we were developing tools to visualize our own results, we realized that other researchers might also find these tools useful," says Ben Raphael, director of Brown's Center for Computational and Molecular Biology. The lab is making MAGI available for free, in the hope that many in the cancer genomics community will take advantage of it.

Computers Can Now Replicate Handwriting
Tom's Hardware Guide (07/22/15) Kevin Carbotte

A Google Research article on University of Toronto fellow Alex Graves' work on long short-term memory (LSTM) recurrent neural networks cites a demonstration of these networks' ability to generate discrete and real-valued sequences with complex, long-range structure via next-step prediction. The method lends itself to handwriting synthesis: given some arbitrary text, the network can replicate a writer's style by predicting the pen's trajectory one data point at a time. The program Graves developed to illustrate this application draws on a set of writing samples from different people, and it can reproduce any phrase in the selected style with an unusual degree of accuracy. The tool currently offers five distinct writing styles and lets users type up to 100 characters to convert.
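Next-step prediction works autoregressively: each generated pen point is fed back as the input for predicting the following one. The sketch below shows that loop with a trivial stand-in for the trained model; in Graves' actual work the network outputs a mixture-density distribution over pen offsets, which this toy replaces with a mean plus Gaussian noise.

```python
import random

def toy_model(prev_point, state):
    """Stand-in for the trained LSTM: returns the predicted mean of the
    next pen offset and an updated hidden state."""
    new_state = [0.9 * s + 0.1 * p for s, p in zip(state, prev_point)]
    return new_state, new_state

def synthesize(steps=5, noise=0.05, seed=0):
    """Autoregressive sampling: each sampled point becomes the input
    used to predict the next one."""
    rng = random.Random(seed)
    point = [1.0, 0.0]        # initial pen offset (dx, dy)
    state = [0.0, 0.0]        # hidden state
    stroke = []
    for _ in range(steps):
        mean, state = toy_model(point, state)
        point = [m + rng.gauss(0, noise) for m in mean]  # sample next offset
        stroke.append(point)
    return stroke

print(len(synthesize()))  # 5 sampled (dx, dy) points
```

Conditioning the prediction on the text to be written, as Graves' system does, is what steers this free-running sampler into legible handwriting in a chosen style.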

Cyborg Cockroach and Drone Teams Can Locate Disaster Survivors
New Scientist (07/22/15) Hal Hodson

Researchers at North Carolina State University have developed cyborg cockroaches to serve as search-and-rescue scouts during disasters. The cyber insects have tiny electrodes implanted to serve as an electronic bridle. "This stimulates the antennae, which they use to understand their physical environment," says team leader Alper Bozkurt. The cyber insects would collectively rely on a drone to beam an invisible radio "fence" for them to search. In addition, sensor backpacks network them together and enable data gathered to be relayed to the cyber insect closest to the drone for uploading. Some of the cyber insects would use low-resolution directional microphones to listen for sounds as they move around the disaster area. Infrared sensors can help them find warm bodies, but they also can use propane sensors to detect gas leaks and a Geiger counter to assess radioactivity, according to Bozkurt. The team has run simulations, but plans to perform real-world tests in the next two months.

Hitachi Developed Basic Artificial Intelligence Technology That Enables Logical Dialogue
The FINANCIAL (07/22/15)

Hitachi announced it has developed artificial intelligence technology that can analyze huge volumes of text and determine whether the topics it covers have a positive or negative relationship to values such as health, economics, and public safety. The technology grew out of a system Hitachi developed in 2014 for analyzing medical records, which could extract specified information, such as an illness or the affected area, from electronic medical records with a high degree of accuracy. The new system is based on the "Value Dictionary," which organizes ideas about values such as health and economics by recording affirmative and negative opinions expressed in the documents it analyzes. For example, the Value Dictionary enables the system to understand that an article stating "noise is harmful to health" posits a negative relationship between "noise" and the value of "health." So far the system has created approximately 250 million such correlations from its analysis of 9.7 million news articles. Hitachi believes the technology could be useful for analyzing the contents of company documents, published reports, or medical records. The system was developed in conjunction with the Inui-Okazaki Laboratory at Tohoku University's Graduate School of Information Sciences.
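At its simplest, the "correlations" described above can be pictured as a lookup table from (topic, value) pairs to a polarity. The miniature sketch below is purely illustrative: the entries, names, and three-way polarity scheme are assumptions, not Hitachi's actual data structure.

```python
# Hypothetical miniature "Value Dictionary": each (topic, value) pair
# maps to the polarity of the relationship mined from text.
VALUE_DICTIONARY = {
    ("noise", "health"): "negative",      # e.g. "noise is harmful to health"
    ("exercise", "health"): "positive",   # e.g. "exercise improves health"
    ("tourism", "economics"): "positive", # e.g. "tourism boosts the economy"
}

def relation(topic, value):
    """Look up the mined polarity for a topic-value pair, defaulting
    to 'unknown' when no opinion about the pair has been recorded."""
    return VALUE_DICTIONARY.get((topic, value), "unknown")

print(relation("noise", "health"))  # prints "negative"
```

The real system builds roughly 250 million such entries by mining affirmative and negative statements from millions of articles, rather than hand-writing them as here.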

Researchers Enlist Machine Learning in Malware Detection
Dark Reading (07/22/15) Kelly Jackson Higgins

Cylance researchers are using machine learning to improve malware detection. Cylance's Matt Wolff and Andrew Davis are using deep-learning techniques to train software to quickly spot and ultimately stop malware infections. They train a machine-learning module on legitimate and malicious files to teach the application the difference between the two. The algorithm uses static analysis of a piece of code to quickly spot malware in a file it has never seen before. "We don't run [the malware], so the malware doesn't have a chance," Wolff says. The researchers note the approach also is faster than sandboxing and analyzing malware. Deep learning is particularly helpful in keeping pace with the increasingly polymorphic nature of malware. "If a malware author two months later comes up with a new [variant], there's a high probability the module you wrote is going to detect that," Wolff says. "It has a predictive capability." Wolff and Davis say the deep-learning system could ultimately replace existing malware detection tools, and they plan to feed the module some live malware during a presentation at the Black Hat USA 2015 conference in August.
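The key idea in the paragraph above is classifying a file from static features alone, without executing it. The sketch below illustrates that pattern with a toy linear scorer; the features, weights, and thresholds are all hypothetical stand-ins for what a real engine (and Cylance's actual deep network) would learn from training data.

```python
# Minimal sketch of static-analysis classification, not Cylance's model:
# featurize the raw bytes without running them, then score with a
# hypothetical pre-fit linear model.

FEATURES = ["entropy", "import_count", "packed"]

# Made-up "learned" weights: high entropy and packing push the score
# toward malicious; the bias keeps ordinary files below the threshold.
WEIGHTS = {"entropy": 1.2, "import_count": -0.05, "packed": 2.0}
BIAS = -6.0

def extract_features(file_bytes):
    """Toy static features; a real engine derives far richer features
    from headers, imports, strings, and section layout."""
    entropy = len(set(file_bytes)) / 32  # crude 0-8 byte-diversity proxy
    return {
        "entropy": entropy,
        "import_count": 10,              # placeholder constant
        "packed": 1 if entropy > 6 else 0,
    }

def is_malicious(file_bytes):
    """Score the file statically and compare against a 0 threshold."""
    f = extract_features(file_bytes)
    score = BIAS + sum(WEIGHTS[k] * f[k] for k in FEATURES)
    return score > 0

print(is_malicious(bytes(range(256))))   # high-entropy blob -> True
print(is_malicious(b"hello world" * 100))  # low-entropy text -> False
```

Because the score depends on statistical properties of the file rather than an exact signature, a new variant that preserves those properties can still be flagged, which is the "predictive capability" Wolff describes.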

How Much Information Can Earth Hold?
Scientific American (08/15) Vol. 313, No. 2, P. 72 Cesar A. Hidalgo

Exploring how much information the Earth can store offers insights into the manifestation of order in the universe, in particular that information expands over time, writes Massachusetts Institute of Technology (MIT) professor Cesar A. Hidalgo, who leads the Macro Connections group at the MIT Media Lab. Hidalgo says applying MIT professor Seth Lloyd's formula for calculating the information storage capacity of physical systems leads to the conclusion the planet can store approximately 1 trillion trillion trillion trillion gigabits, and the current volume of stored information equals only a fraction of Earth's capacity. The observation demonstrates the difficulty of generating, maintaining, and combining information, in keeping with the universe's hostility to the emergence of order. However, order still emerges, not least because of the capacity of matter to compute. Moreover, the gradual growth of information is explained by the expansion on Earth of both biomass and cultural information. Matter, and order, existed before the emergence of humans, but since then much more order has been added via the solidification of imagined objects. Contributing to the growth of information requires people to form networks that can compute products, since any one system's computational capacity is finite. Humans' ability to produce information via networking is partly limited by historical, institutional, and technological forces, but hyperconnectivity enabled by technological advancement and human-machine integration should help overcome these obstacles and continue the growth of information.

Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.

To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe