Association for Computing Machinery
Welcome to the April 6, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE


Website Seeks to Make Government Data Easier to Sift Through
The New York Times (04/04/16) Steve Lohr

The Massachusetts Institute of Technology Media Lab on April 4 announced Data USA, a project designed to make it easier for people to sift through vast troves of government information. The freely accessible website is described as "the most comprehensive visualization of U.S. public data," and it uses open source software code so developers can build custom applications by adding other data. Data USA project director and Media Lab professor Cesar A. Hidalgo says the site aims to "transform data into stories" typically rendered as graphics, charts, and written summaries. In one example, typing "New York" into the search box summons a drop-down menu of choices such as the city, the metropolitan area, and the state. When a user selects an option, the page displays related images and basic statistics; further down, six icons for related subject categories link to data stories enhanced with graphics. Deloitte contributed funding and expertise to the project, and Deloitte's Patricia Buckley says the purpose of Data USA is to "organize and visualize data in a way that a lot of people think about it." Northwestern University professor Kris Hammond predicts the type of data analysis Data USA uses, in which the site makes assumptions about users and codes those assumptions within its software, will become more commonplace.


How Facebook Is Helping the Blind 'See' Pictures Their Friends Share Online
The Washington Post (04/05/16) Andrea Peterson

Facebook on Tuesday launched Automatic Alternative Text, a tool designed to enable sight-impaired users to "see" pictures posted by friends online. For the feature to work, users must have Apple's built-in screen reader turned on and select an image. Facebook applies artificial intelligence (AI) algorithms to identify basic features in the image and generate alternative text the screen reader shares with the user. The feature will initially be rolled out for the English-language version of Facebook's main iOS app, and it will only identify about 100 fundamental concepts because Facebook only wants to suggest an image contains objects its AI technology can reliably recognize, says the company's Jeffrey Wieland. Automatic Alternative Text developer Matt King says more work must be done online to improve picture accessibility, noting AI and facial-recognition data could be used to tell blind users who the people in a photo are, instead of only how many there are. Facebook is among many tech firms and several universities in a working group founded to better prepare students to invent inclusive technology. Other technologies in development take a similar AI approach to images, with Microsoft recently showcasing its SeeingAI app at a developers conference.
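The conservative tagging strategy described above, mentioning only objects the model can reliably recognize, can be sketched as a confidence threshold over classifier output. The concept names, scores, and threshold below are illustrative assumptions, not Facebook's actual model or values.

```python
# Hypothetical sketch of threshold-based alt-text generation.
# Concepts, scores, and the 0.8 cutoff are invented for illustration.

CONFIDENCE_THRESHOLD = 0.8  # only mention concepts the model is sure about

def build_alt_text(concept_scores):
    """Turn (concept, confidence) pairs into a conservative caption."""
    confident = [c for c, score in concept_scores if score >= CONFIDENCE_THRESHOLD]
    if not confident:
        return "Image may contain: no recognizable objects"
    return "Image may contain: " + ", ".join(confident)

# Example classifier output for one photo (made-up scores).
scores = [("tree", 0.95), ("outdoor", 0.9), ("dog", 0.55)]
print(build_alt_text(scores))  # low-confidence "dog" is omitted
```

The design choice mirrors Wieland's point: a wrong caption is worse for a blind user than an incomplete one, so low-confidence concepts are simply dropped.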


Using Data Science to Solve Society's Problems
Baseline (04/05/16) Samuel Greengard

Developing data science solutions to address real-world societal challenges is the purpose of the Data Science Bowl, a 90-day contest launched by Booz Allen Hamilton and Kaggle in 2015. "What is compelling about the competition is that there are researchers from around the world working on a very real problem with very real benefits to society," says Booz Allen Hamilton principal Steven Mills. "Although the prize money is substantial, most people enter the competition because they are passionate about data science and making the world a better place. Many of the participants are passionate about what they do, and they are eager to contribute to society." This year's competition focused on heart disease diagnosis and drew 1,392 algorithm submissions from 993 participants. The winning software from hedge fund analysts Qi Liu and Tencia Lee can enable real-time heart disease diagnosis from a magnetic resonance imaging scan, a milestone that could speed up scanning, cut medical costs, and facilitate new research methods. Last year's contest focused on rapidly assessing ocean health on a vast scale, and challenged entrants to examine a massive image archive. The winning algorithm from Ghent University can automatically classify more than 100,000 underwater images of plankton.


IBM Introduces Cognitive Storage
eWeek (04/04/16) Darryl K. Taft

IBM researchers have developed Cognitive Storage, a new approach to storage they say helps computers learn what they should remember and what they can forget. The concept distinguishes between what the human brain would view as memories and what it would view as information, a differentiation that could be used to determine what is stored, where it is stored, and for how long, according to IBM. The idea is based on a metric known as data value, which is analogous to determining the value of a piece of art. To determine this value, IBM tracked data access patterns, that is, the frequency with which each piece of data is used. The researchers also added metadata tags to the data to help train the system, depending on the context in which the data is used. The company says the cognitive storage initiative could be available soon. "With rising costs in energy and the explosion in big data, particularly from the Internet of Things, this is a critical challenge as it could lead to huge savings in storage capacity, which means less media costs and less energy consumption," IBM says in a blog post.
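A data-value metric of the kind described above can be sketched as a score built from access frequency, recency, and metadata tags, which then maps to a storage tier. The weights, tags, thresholds, and tier names below are assumptions for illustration, not IBM's actual formula.

```python
# Illustrative "data value" scoring and tiering sketch.
# All constants here are invented; IBM's real metric is not public in this form.

def data_value(access_count, age_days, tags, tag_weights):
    """Frequently and recently accessed, highly tagged data scores higher."""
    recency = 1.0 / (1.0 + age_days)
    tag_score = sum(tag_weights.get(t, 0.0) for t in tags)
    return access_count * recency + tag_score

def choose_tier(value):
    """Map a value score to a hypothetical storage tier."""
    if value >= 10.0:
        return "fast"      # e.g., flash
    if value >= 1.0:
        return "capacity"  # e.g., disk
    return "archive"       # e.g., tape

weights = {"project-active": 5.0, "compliance": 2.0}
v = data_value(access_count=40, age_days=3, tags=["project-active"], tag_weights=weights)
print(choose_tier(v))  # hot, recently used data lands on the fast tier
```

The point of the sketch is the shape of the idea: value is learned from usage and context, and placement (and eventual deletion) follows from value rather than from a fixed quota.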


At MIT, a Glimpse Into Our Techno Future
Computerworld (04/06/16) Patrick Thibodeau

Speaking at Tuesday's "Connected Things" Enterprise Forum, Massachusetts Institute of Technology (MIT) Media Lab researcher David Rose discussed futuristic systems that employ Internet of Things (IoT) technologies to enable a hyper-connected world. Among the concepts Rose cited were home-based aeroponic systems for growing vegetables, the incorporation of collapsible systems and movable walls into home and furniture design to support as-needed repurposing, mesh-networked residential systems, and connected-sharing services. Rose and other participants also raised questions about what goals these developments are supposed to fulfill, and about their societal implications. "There has to be a standard about how these [IoT] devices communicate," says Itamco owner Joel Neideig. McKinsey Global Institute partner Michael Chui says about 60 percent of technology's value in the business-to-business sector cannot be realized without interoperability. Meanwhile, Sypris Electronics president John Walsh says the lack of a "perimeter" for cyber-physical systems thwarts traditional security measures, and he stresses "we want to get the carbon [humans] out of the loop." MIT professor Sanjay Sarma predicts a few calamities stemming from breaches are likely unavoidable, because security improvements will not come quickly enough. Consultant Rasmus Blom makes a case for IoT systems by arguing that instrumenting systems brings us "much closer to the real need of people and society."


Data-Mining Algorithm Reveals the Stormy Evolution of Mathematics Over 700 Years
Technology Review (04/01/16)

University of Namur researcher Floriana Gargiulo and colleagues are using network science to map the links between mathematicians of the last 700 years to understand how the discipline of mathematics has evolved and spread. They analyzed the Mathematical Genealogy Project database, which lists each scientist's dates, geographical location, mentors, students, and discipline. The analysis began by using a machine-learning algorithm to check and update the data against other sources of information, and then the researchers built a network in which each scientist was a node and connections existed when one was a mentor or student of another. The team analyzed the resulting webs to detect in-network clusters, tipping points, and influential nodes. Standard clustering algorithms determined math can be split into 84 family trees, and 65 percent of the scientists in the database descend from only 24 of these trees. The biggest tree originated in 1415 under the mentorship of a medical doctor in Italy, and the analysis also revealed individual countries' roles in producing mathematicians and how those roles have shifted over time. Other notable trends include the tendency of science-poor nations to import mathematicians, while those with stronger math traditions are exporters. Another key finding concerns the merger of math fields into new disciplines, such as the integration of statistics and probability between 1930 and 1940.
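The network construction described above can be sketched in a few lines: scientists are nodes, mentor-student links are edges, and "family trees" fall out as connected components of the mentorship graph. The names and links below are invented for illustration; the real study used the Mathematical Genealogy Project data and more sophisticated clustering.

```python
# Minimal sketch: mentorship links -> graph -> connected components ("trees").
from collections import defaultdict

def family_trees(mentor_links):
    """Group scientists into connected components of the mentorship graph."""
    graph = defaultdict(set)
    for mentor, student in mentor_links:
        graph[mentor].add(student)
        graph[student].add(mentor)  # treat links as undirected for grouping
    seen, trees = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, component = [node], set()
        while stack:  # iterative depth-first search
            current = stack.pop()
            if current in component:
                continue
            component.add(current)
            stack.extend(graph[current] - component)
        seen |= component
        trees.append(component)
    return trees

links = [("A", "B"), ("B", "C"), ("X", "Y")]  # two separate lineages
print(len(family_trees(links)))  # 2
```

On the full database the same idea, plus standard clustering algorithms, yields the 84 trees the article reports.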


Scientists Push Valleytronics One Step Closer to Reality
Berkeley Lab News Center (04/04/16) Dan Krotz

Scientists have experimentally demonstrated the ability to electrically generate and control valley electrons in a two-dimensional semiconductor. The team from the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) coupled a host ferromagnetic semiconductor with a monolayer of transition metal dichalcogenide (TMDC). Electrical spin injection from the ferromagnetic semiconductor localized the charge carriers to one momentum valley in the TMDC monolayer. Valleytronic devices have the potential to transform high-speed data communications and enable lower-power devices, but gaining electrical control over the population of valley electrons has so far proven challenging for researchers. The Berkeley Lab breakthrough also is important because it involved TMDCs, which are considered to be more device-ready than other semiconductors that exhibit valleytronic properties. The team's research could lead to a new type of electronics that utilizes all three degrees of freedom--charge, spin, and valley--which together could encode an electron with eight values of information compared with two in today's electronics. Future computer chips based on the technology would enable faster and more energy-efficient computing devices. "This is the first demonstration of electrical excitation and control of valley electrons, which will accelerate the next generation of electronics and information technology," says Berkeley Lab's Xiang Zhang, who led the study.
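The "eight values" figure is simple combinatorics: three independent binary degrees of freedom (charge, spin, valley) give 2**3 = 8 distinguishable states, versus 2**1 = 2 for charge alone. The state labels below are illustrative shorthand.

```python
# Counting states from three binary degrees of freedom.
from itertools import product

states = list(product(["charge-0", "charge-1"],
                      ["spin-up", "spin-down"],
                      ["valley-K", "valley-K'"]))
print(len(states))  # 8, versus 2 states when only charge is usable
```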


New Algorithm by Engineering Professor Could Optimize Netflix Recommendations
Columbia Spectator (04/04/16) Jerica Tan

Columbia University professor Shipra Agrawal has proposed an algorithm that could improve content recommendation systems such as the one used by Netflix. Agrawal's algorithm would base recommendations not only on what users watched in the past, but also on their unexplored preferences. She says such an approach is especially important for platforms looking to improve long-term recommendations, although there is risk associated with exploring too much when users are already confident in their likes and dislikes. Agrawal's algorithm can calculate how much exploration a given user should receive, along with which genres the user has already sufficiently explored. She says the algorithm differs from others currently in use because it can account for the many complicating factors that arise in real-life applications, ranging from the changing preferences of a single user to the limits of a finite amount of content. "The algorithm automatically figures out which are the areas that you need to explore, which are the areas you are confident about," Agrawal says. "It will observe your response, and it will then tell whether the recommendation was good or not. But it will also keep account of what kind of recommendations it doesn't have data for."
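The explore/exploit trade-off the article describes is the classic multi-armed bandit problem. The snippet below is a generic Thompson-sampling sketch of that trade-off, one standard technique in this area, not Agrawal's actual algorithm; the genre names and observation counts are invented.

```python
# Thompson sampling over genres: sample a plausible success rate for each
# genre from its Beta posterior and recommend the highest draw. Genres with
# little data have wide posteriors, so they still get explored sometimes.
import random

def thompson_pick(stats):
    """stats maps genre -> [successes, failures]; returns the genre to recommend."""
    return max(stats, key=lambda g: random.betavariate(stats[g][0] + 1,
                                                       stats[g][1] + 1))

# (successes, failures) observed so far per genre (made-up data).
stats = {"drama": [20, 5], "comedy": [2, 2], "documentary": [0, 0]}

random.seed(0)
genre = thompson_pick(stats)
stats[genre][0] += 1  # pretend the user liked the recommendation
print(genre in stats)  # True
```

This captures the quoted behavior: well-understood genres usually win, but the algorithm "keeps account" of genres it has no data for by occasionally trying them.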


TAU Uses 'Deep Learning' to Assist Overburdened Diagnosticians
American Friends of Tel Aviv University (04/04/16)

Tel Aviv University (TAU) researchers have developed a range of tools to facilitate computer-assisted diagnosis from x-rays, computed tomography (CT) scans, and magnetic resonance imaging (MRI) scans. The researchers say the new system will enable radiologists to attend to complex cases that require their full attention, without spending as much time on simpler cases. "Our goal is to use computer-assisted 'Deep Learning' technologies to differentiate between healthy and non-healthy patients, and to categorize all pathologies present in a single image through an efficient and robust framework that can be adapted to a real clinical setting," says TAU professor Hayit Greenspan. The researchers want to use deep learning to develop diagnostic tools for the automated detection and labeling of pathologies in radiographic images. They have already developed deep-learning technology to support automated chest x-ray pathology identification, liver lesion detection, MRI lesion analysis, and other tasks. "Such systems can improve accuracy and efficiency in both basic and more advanced radiology departments around the world," Greenspan says. The system is based on transfer learning, in which networks originally trained on regular images are used to categorize medical images. Greenspan notes the features and parameters that represent millions of general images provide a good signature for the analysis of medical images as well.
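The transfer-learning setup described above can be sketched in miniature: features from a network pretrained on everyday images are reused as-is, and only a small classifier is fit on the medical data. The feature vectors and labels below are tiny stand-ins; a real system would extract thousands of features per image from a pretrained convolutional network.

```python
# Toy transfer-learning sketch: frozen features + a tiny classifier head.
# Here the "head" is a nearest-centroid classifier, chosen only for brevity.

def nearest_centroid_fit(features, labels):
    """Average the frozen feature vectors per class; this is the only training."""
    sums, counts = {}, {}
    for vec, label in zip(features, labels):
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, x in enumerate(vec):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc] for label, acc in sums.items()}

def nearest_centroid_predict(centroids, vec):
    """Assign the class whose centroid is closest in feature space."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], vec))

# Pretend these vectors came from a frozen pretrained network.
train_feats = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
train_labels = ["healthy", "lesion-free", "lesion", "lesion"]
train_labels = ["healthy", "healthy", "lesion", "lesion"]
centroids = nearest_centroid_fit(train_feats, train_labels)
print(nearest_centroid_predict(centroids, [0.85, 0.15]))  # healthy
```

The appeal Greenspan describes is that the expensive part (learning the feature representation from millions of general images) is done once, while adapting to a new medical task needs only modest labeled data.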


To Beat Go Champion, Google's Program Needed a Human Army
The New York Times (04/04/16) George Johnson

Behind the Google AlphaGo computer program's recent defeat of human Go champion Lee Se-dol were the many human brains that developed and executed the software. In AlphaGo, deep neural networks were trained on a database of past Go maneuvers by human players, and the algorithm played itself over and over again to refine this knowledge and become more proficient at the game. The combination of insensate learning with the Monte Carlo tree search method enabled the program to defeat Lee. Despite this victory, experts say artificial intelligence has a long way to go before it can compete with human intelligence. "Humans can learn to recognize patterns on a Go board--and patterns related to faces and patterns in language--and even patterns of patterns," notes Portland State University researcher Melanie Mitchell. "This is what we do every second of every day. But AlphaGo only recognizes patterns related to Go boards and has no ability to generalize beyond that--even to games similar to Go but with different rules." Computer scientists are attempting to create programs capable of more efficient generalizing, but for now the human mind remains superior to neural nets in this capacity.
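The Monte Carlo idea behind AlphaGo's tree search can be illustrated on a toy game: evaluate each legal move by playing many random games to the end and pick the move with the best win rate. The game below is simple Nim (take 1-3 sticks, taking the last stick wins), not Go, and unlike AlphaGo there is no neural network guiding or pruning the playouts.

```python
# Pure random-playout move evaluation, the core Monte Carlo ingredient of MCTS.
import random

def random_playout(sticks, my_turn):
    """Play random moves until the sticks run out; return True if we win."""
    while sticks > 0:
        take = random.randint(1, min(3, sticks))
        sticks -= take
        if sticks == 0:
            return my_turn  # whoever just moved took the last stick
        my_turn = not my_turn
    return not my_turn  # no sticks left on entry: our previous move won

def monte_carlo_move(sticks, playouts=2000):
    """Estimate each move's win rate by random playouts; pick the best."""
    best_take, best_rate = 1, -1.0
    for take in range(1, min(3, sticks) + 1):
        wins = sum(random_playout(sticks - take, my_turn=False)
                   for _ in range(playouts))
        rate = wins / playouts
        if rate > best_rate:
            best_take, best_rate = take, rate
    return best_take

random.seed(1)
print(monte_carlo_move(5))  # taking 1 leaves 4, the worst position to inherit
```

AlphaGo's advance was to replace these uniform random playouts with moves suggested by neural networks trained on human games and self-play, which is what made the search tractable on a board as large as Go's.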


The Twittersphere Does Listen to the Voice of Reason--Sometimes
UW Today (04/04/16) Jennifer Langston

Tweets from the official accounts of government agencies, emergency responders, media, or companies at the center of a fast-moving story can slow the spread of rumors and correct misinformation that has taken on a life of its own, according to researchers at the University of Washington. The team from the Emerging Capacities of Mass Participation Laboratory in the Department of Human Centered Design & Engineering and the Information School's DataLab documented the spread of two online rumors that initially spiked on Twitter--alleged police raids in a Muslim neighborhood during a hostage situation in Sydney, Australia, and the rumored hijacking of a WestJet flight to Mexico. The vast majority of tweets both affirming and denying the two rumors were retweets of a small number of Twitter accounts, and were largely driven by "breaking news" accounts that offer the veneer of officialdom but do not necessarily follow standard journalistic practices of confirming information. The rumors were successfully quashed by denials from official accounts. The team says the case studies offer crisis management lessons for organizations. The researchers presented their findings at the ACM CSCW 2016 conference in March.


Gestures Improve Communication, Even With Robots
Science Daily (04/04/16)

A new study by researchers in the U.K. found it is easier for people to understand robot avatars when they use "iconic" hand gestures together with speech. Moreover, people are able to understand avatars using such multi-modal communication as well as they do other humans. Scientists Paul Bremner and Ute Leonards used a Microsoft Kinect sensor to track the arm gestures of a human actor, and then an avatar used the recorded data to mimic the movements. Bremner and Leonards filmed the human actor reading out a series of phrases while performing specific gestures, and then filmed the avatar using these recorded phrases and mimicking the gestures. Experiment participants watched the video and tried to identify what the human and avatar were trying to communicate. The researchers say getting a message across with an avatar is more important than ever. They note avatars are used by millions of people worldwide, and now are employed to entertain, teach, sell products, and solve problems.


Hardware, Software Tools Created to Debug Intermittently Powered Energy-Harvesting Devices
Phys.org (04/04/16)

Researchers at Carnegie Mellon University (CMU) and Disney Research have developed the Energy-interference-free Debugger (EDB), a system for finding computer bugs in small devices that scavenge energy from their environment and are subject to intermittent power failures. The researchers built a hardware and software platform that can monitor and debug these intermittent systems without interfering with the device's energy state. "The use of energy-harvesting devices will only proliferate as increasing numbers of sensor networks are deployed and other devices such as solar-powered microsatellites are invented," says Disney Research vice president Jessica Hodgins, who notes it is important these devices have reliable software, and thus that there be tools to help detect and correct bugs. The researchers say the hardware-software debugging tool is the first to bring essential, familiar application development support to intermittent devices. "The key to our approach is that we provide flexible debugging support without interfering with the target device's power system," says CMU professor Brandon Lucia. The EDB system can passively monitor an energy-harvesting device for its energy level, input/output events, and program events, as well as manipulate the amount of energy stored on the device, enabling engineers to inject or remove power based on code execution.


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe