Association for Computing Machinery
Welcome to the April 13, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, iPhones, and iPads.

HEADLINES AT A GLANCE


Universities Aren't Doing Enough to Train the Cyberdefenders America Desperately Needs
The Washington Post (04/12/16) Andrea Peterson

U.S. universities may not be doing enough to prepare the next generation of cyberdefenders, suggest the findings of a new analysis from CloudPassage. None of the top 10 computer science programs as ranked by U.S. News & World Report in 2015 requires graduates to take even one cybersecurity course, and three of the top 10 programs do not even offer an elective cybersecurity course. Among the top 36 computer science programs, only the University of Michigan requires students to take a security course to graduate. The research reinforces what many have said about the gap in information technology (IT) security skills, says CloudPassage CEO Robert Thomas. "But what we've revealed is that a major root cause is a lack of education and training at accredited schools," Thomas notes. Although it is possible students could be learning cybersecurity skills as a component of other classes, universities should be doing more to emphasize security, according to David Raymond, deputy director of Virginia Polytechnic Institute and State University's IT security lab. "We're not operating on the same network we were operating on 10 years ago or even five years ago," Raymond says.


Predicting Gentrification Through Social Networking Data
University of Cambridge (04/13/16) Sarah Collins

A multi-university study led by the University of Cambridge and presented today at the 25th International World Wide Web Conference used social networking data to predict neighborhood gentrification. The team employed data from about 37,000 users and 42,000 venues in London to construct a network of Foursquare places and the parallel Twitter social network of visitors, accumulating more than 500,000 check-ins over 10 months. The data was used to measure the social diversity of various neighborhoods and venues by distinguishing locales that bring together strangers from those that bring together friends, as well as places that draw diverse individuals versus those that attract regulars. Correlating these metrics with well-being indicators for various neighborhoods revealed that rising housing prices, reduced crime rates, and other gentrification indicators were most pronounced in deprived areas with high social diversity. "We found that the most socially cohesive and homogeneous areas tend to be either very wealthy or very poor, but neighborhoods with both high social diversity and high deprivation are the ones which are currently undergoing processes of gentrification," reports Cambridge's Desislava Hristova. The researchers say gentrification prediction could help local governments and policymakers improve urban development plans and mitigate the adverse impact of gentrification while benefiting from economic expansion.
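As a rough sketch of this kind of analysis (not the study's actual pipeline), the Python below computes a simple entropy-based social-diversity score per neighborhood from check-in records and correlates it with a deprivation index; the field names, toy data, and choice of a single entropy metric are assumptions for illustration only.

```python
# Illustrative sketch: entropy of a neighborhood's visitor distribution as a
# crude social-diversity score, correlated with a (hypothetical) deprivation index.
from collections import defaultdict
from math import log2
from statistics import mean

def shannon_entropy(counts):
    """Entropy of the visitor distribution: higher means more diverse visitors."""
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c)

def neighborhood_diversity(checkins):
    """checkins: iterable of (user_id, neighborhood) pairs."""
    visits = defaultdict(lambda: defaultdict(int))
    for user, hood in checkins:
        visits[hood][user] += 1
    return {hood: shannon_entropy(users.values()) for hood, users in visits.items()}

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vary = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (varx * vary)

# Toy usage with made-up check-ins and deprivation scores.
checkins = [("u1", "Hackney"), ("u2", "Hackney"), ("u3", "Hackney"),
            ("u1", "Chelsea"), ("u1", "Chelsea"), ("u2", "Chelsea")]
deprivation = {"Hackney": 0.8, "Chelsea": 0.1}
div = neighborhood_diversity(checkins)
hoods = sorted(div)
print(pearson([div[h] for h in hoods], [deprivation[h] for h in hoods]))
```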


CCC Announces Industry-Academic Program With the Big Data Regional Hubs
CCC Blog (04/11/16) Khari Douglas

The Computing Community Consortium (CCC) has announced its sponsorship of a program on Industry-Academic Collaboration to catalyze and nurture alliances between industry and academic research by producing mechanisms for early-career researchers and industry representatives to engage and explore collaborative efforts. The CCC will administer the program via the four Big Data Regional Innovation Hubs (BD Hubs) sponsored by the U.S. National Science Foundation. The Northeast BD Hub will concentrate on the Young Innovator Internship and the Knowledge Exchange, and also host a Data Science Best Practices Workshop to review and establish a set of best practices for data sharing and associated issues. The South BD Hub will offer the Data Start internship program, which gives graduate students the opportunity to work with data-related startups, and the Program to Empower Partnerships with Industry to support early-career professionals in summer exchange visits with data-related industry partners. The West BD Hub will spearhead initiatives to inspire cross-sector partnerships and boost public engagement with data science. One upcoming meeting sponsored by the West BD Hub will host a "Collaboratory Faire" showcasing open source tools and teams exploiting open data or open platforms. Finally, the Midwest BD Hub will cover a wider spectrum of activities, including workshops and summer visits to data-related industry partners, travel grants to data-related venues, and other academia-industry community-building efforts benefiting early-career researchers.


Turing Tests and the Problem of Artificial Olfaction
Technology Review (04/07/16)

The ability to reproduce scent artificially, which involves measuring an odor at one point in space and then replicating it in another, is surprisingly complex, and the work of the Weizmann Institute of Science's David Harel sheds light on the issue. Harel says smell reproduction consists of three components: a "sniffer" device that converts an input odor into a digital signature, a "whiffer" device containing a spectrum of fixed smells that can be combined and issued in carefully measured quantities and concentrations, and the sniffer/whiffer interface. "[This] analyzes the signature coming from the sniffer and instructs the whiffer as to how it should mix its palette odorants to produce an output odor that is perceived by a human to be as close as possible to the original input," Harel says. However, he notes that unlike with images or sounds, approximation of odors is not good enough. Using the Turing test as a rough template, Harel has developed an approach in which a human is asked to differentiate real odors from those generated by an artificial olfactory system. Via immersion in audio and video of the locale where the smell was collected, the subject can decide whether the related odor is real or artificial, with the system's performance gauged by repeating the procedure with many different samples and testers.
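A minimal sketch of the evaluation protocol described here, under the assumption that each trial blindly presents either the real odor or the whiffer's reproduction and records the tester's guess; accuracy near 50 percent over many trials would mean the reproduction is effectively indistinguishable. The trial format and tester model are illustrative, not Harel's implementation.

```python
# Turing-test-style odor evaluation: repeat blind trials and measure how often
# the tester correctly identifies which sample is real.
import random

def run_trials(tester, n_trials=100):
    correct = 0
    for _ in range(n_trials):
        truth = random.choice(["real", "artificial"])
        guess = tester(truth)          # the tester smells the presented sample
        correct += (guess == truth)
    return correct / n_trials

# A tester who cannot tell the difference just guesses at random.
accuracy = run_trials(lambda sample: random.choice(["real", "artificial"]))
print(f"accuracy = {accuracy:.2f}  (near 0.5 means the system passes)")
```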


Soft Robotic Fingers Recognize Objects by Feel
Product Design & Development (04/12/16) Kaylie Duffy

Daniela Rus at the Massachusetts Institute of Technology's (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL) has led the development of bendable, stretchable robot fingers that can lift and handle delicate objects. The silicone rubber digits do not need specific commands for grasping various objects, but instead expand to accommodate an item and grasp radially. The three fingers in the robotic hand each have special "bend sensors," which estimate the size and shape of the object with sufficient accuracy to identify it from a list. The sensors feed the robot data on the location and curvature of the grasped object so the system can pick up an unfamiliar object and compare it to existing clusters of data points from past items. The robot is currently able to obtain three data points from a single grasp, enabling its algorithms to distinguish between similarly sized objects. The robotic hand also can hold an object by the tips of its fingers or envelop it completely. The CSAIL researchers say further sensor innovation should eventually enable the system to differentiate between dozens of objects. Rus' Distributed Robotics Lab is developing a second-generation, four-fingered hand whose resistive force sensors change their values as they are compressed.
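A minimal sketch of the identification step described above, assuming the three bend-sensor readings from a grasp form a feature vector that is matched to the nearest cluster of past grasps; the centroid values and object names are invented, and CSAIL's actual clustering is more sophisticated.

```python
# Nearest-centroid matching of a grasp's three bend-sensor readings against
# clusters built from previously grasped objects.
import math

KNOWN_OBJECTS = {              # centroid of past grasps: (bend1, bend2, bend3)
    "paper cup":   (0.62, 0.60, 0.58),
    "tennis ball": (0.45, 0.44, 0.46),
    "CD case":     (0.15, 0.80, 0.16),
}

def identify(grasp, known=KNOWN_OBJECTS):
    """Return the known object whose centroid is closest to this grasp."""
    return min(known, key=lambda name: math.dist(grasp, known[name]))

print(identify((0.44, 0.47, 0.45)))  # -> "tennis ball"
```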


Why That Emoji Grin You Sent Might Show Up As a Grimace
The Washington Post (04/12/16) Andrea Peterson

People interpret emojis differently, a disparity exacerbated by differences in how an emoji displays on one device compared to another, according to research from the University of Minnesota's GroupLens Lab. Companies are the final arbiters of what an emoji should look like for users, as the Unicode computer industry character/text standard does not provide an actual emoji image, but only a code and description. The GroupLens study examined 22 distinct emoji and how they appear on five different smartphones, with more than 300 participants viewing a random subset of 15 of the emoji versions, describing them, and rating them on a scale of -5 to 5. Respondents rated the Apple version of the "grinning face with smiling eyes" emoji as negative, while the grinning emojis from Google, Microsoft, LG, and Samsung were rated positive, on average. Moreover, while more respondents considered Apple's version of "grinning face with smiling eyes" to be negative than positive, there was strong disagreement among them even though they were all viewing the same image. Across-the-board differences in how people interpreted the emoji they studied suggest emoji users could be putting themselves at risk of misinterpretation, according to the GroupLens researchers.
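For a sense of the kind of summary statistics such a study produces, the sketch below averages hypothetical -5-to-5 ratings per vendor rendering and measures disagreement as the standard deviation; the numbers are placeholders, not the GroupLens data.

```python
# Mean sentiment and disagreement (spread) of ratings for one emoji as rendered
# by different vendors; ratings here are invented for illustration.
from statistics import mean, stdev

ratings = {   # hypothetical ratings for "grinning face with smiling eyes"
    "Apple":     [-3, -1, -4, 2, -2],
    "Google":    [4, 3, 5, 2, 4],
    "Microsoft": [3, 2, 4, 3, 1],
}

for vendor, scores in ratings.items():
    print(f"{vendor:<10} mean={mean(scores):+.1f}  disagreement(stdev)={stdev(scores):.1f}")
```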


World's Largest Project to Expand Our Understanding of Evolution
University of Southampton (United Kingdom) (04/08/16)

An international multi-disciplinary team that includes University of Southampton professor Richard Watson is engaged in the world's largest project focused on expanding the theory of evolution with new viewpoints on relationships between genes, organisms, and environment. It is oriented around extended evolutionary synthesis, a new way of thinking about evolutionary biology that runs parallel to traditional thinking. "The main difference from traditional perspectives is that the extended evolutionary synthesis includes a greater set of causes of evolution," says project leader and University of St. Andrews professor Kevin Laland. "This shifts the burden of explanation for adaptation and diversification away from a one-sided focus on natural selection and towards the constructive processes of development." Watson will direct two initiatives to broaden the understanding of both evolutionary developmental biology and evolutionary ecology feedbacks using theoretical computer science tools. "In computer science, these feedbacks are well understood in the framework of learning systems," Watson says. His recent research characterizes the formal evolution-learning connections that enable results to be transferred from computer science to expand the understanding of biological evolution. "This work suggests that these feedbacks are not just 'a complication' but change the capabilities of Darwinian evolution; specifically, evolution is smarter than we realized," Watson says.


How Microsoft Conjured Up Real-Life Star Wars Holograms
Wired (04/08/16) Brian Barrett

Microsoft researchers in the HoloLens unit sought to develop a more engaging communications platform for children than the usual media, and their solution is holoportation, in which a live hologram of a person is projected into another room, enabling real-time interaction with whoever is present in the room. The system begins with three-dimensional (3D) capture cameras positioned strategically around a given space, which capture every possible viewpoint; custom software then stitches the views together into a 3D model. Project leader Shahram Izadi says this step is continuous, as more frames of data lead to models of higher quality, with data-crunching handled by off-the-shelf graphics-processing units. "We want to do all of this processing in...around 33 milliseconds to process all the data coming from all of the cameras at once...and also create a temporal model, and then stream the data," Izadi notes. Compression also is an essential step of holoportation, given the massive volumes of data produced and transmitted. One problem with the process is the overlap of the 3D image with furniture and other objects in rooms, but this can be addressed by training the cameras to only focus on the items the user wants to holoport. "The end goal and vision for the project is really to boil this down to something that's as simple as a home cinema system," Izadi says.
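As a rough illustration of the real-time constraint Izadi describes, the sketch below checks whether a sequence of per-frame processing stages fits inside a roughly 33-millisecond budget; the stage names and timings are hypothetical, not Microsoft's pipeline.

```python
# Per-frame budget check: fuse camera feeds, update the temporal model,
# compress, and stream, all inside the frame budget for real-time playback.
import time

FRAME_BUDGET_MS = 33.0

def process_frame(stages):
    start = time.perf_counter()
    for name, fn in stages:
        fn()
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > FRAME_BUDGET_MS:
        print(f"frame overran budget: {elapsed_ms:.1f} ms > {FRAME_BUDGET_MS} ms")
    return elapsed_ms

stages = [("fuse",     lambda: time.sleep(0.010)),   # placeholder workloads
          ("temporal", lambda: time.sleep(0.008)),
          ("compress", lambda: time.sleep(0.007)),
          ("stream",   lambda: time.sleep(0.005))]
print(f"{process_frame(stages):.1f} ms")
```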


What Social Media Data Could Tell Us About the Future
Northeastern University News (04/07/16) Thea Singer

Northeastern University researchers are working with a group of scientists to develop a method to map how tweets about large-scale social events spread. Knowing the characteristics of that buildup could enable researchers to prepare ahead of time for undesirable repercussions from such events, says Nicola Perra, a former research associate at Northeastern's Network Science Institute. "What we are trying to understand is the presence of precursors: Can we find a signal in the flow of information that will tell us something big is about to happen?" says Northeastern professor Alessandro Vespignani. The researchers used network-modeling techniques from neuroscience to conduct the study, with nodes representing cities and links representing the pathways the tweets take over time. For example, in 2011, Spanish protests sparked the Occupy Wall Street movement in the U.S., and the tweets gained in volume and intensity until they reached a "social tipping point of collective phenomenon" on May 20, 2011. "You create a system that starts from a few nodes that then drive others, and so on, until everybody is talking to everybody else in a full coordination of the information," Vespignani says. The researchers focused their study on the 2011 protest in Spain, the Brazilian Autumn protest in 2013, the release of a Hollywood blockbuster movie in 2012, and Google's acquisition of Motorola in 2014.
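A toy illustration of the modeling idea, assuming cities are nodes and a retweet across cities adds a link; tracking how fast link density grows gives a crude proxy for the "everybody talking to everybody" coordination Vespignani describes. The records and field layout are hypothetical.

```python
# Build a city-to-city link set from retweet records and track link density
# over time as a rough coordination signal.
from itertools import combinations

retweets = [                     # (hour, source_city, retweeting_city)
    (1, "Madrid", "Barcelona"),
    (2, "Madrid", "Valencia"),
    (3, "Barcelona", "Valencia"),
    (4, "Valencia", "Seville"),
    (5, "Madrid", "Seville"),
    (5, "Seville", "Barcelona"),
]

cities = {c for _, a, b in retweets for c in (a, b)}
possible = len(list(combinations(cities, 2)))

links, density_over_time = set(), []
for hour, src, dst in sorted(retweets):
    links.add(frozenset((src, dst)))
    density_over_time.append((hour, len(links) / possible))

print(density_over_time)   # density approaching 1.0 ~ full coordination
```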


Scientists Invent Robotic 'Artist' That Spray Paints Giant Murals
Dartmouth College (04/07/16) John Cramer

Dartmouth College researchers have invented a "smart" paint spray can that robotically reproduces photographs as large-scale murals. Researchers from ETH Zurich, Disney Research Zurich, and Columbia University collaborated on the project. The system uses an ordinary paint spray can, tracks the can's position relative to the wall or canvas, and recognizes what image it "wants" to paint. As the artist waves the pre-programmed spray can around the canvas, the system automatically operates the spray on/off button to reproduce the specific image as a spray painting. The prototype includes two webcams and quick-response (QR)-coded cubes for tracking, and a small actuation device for the spray can, attached via a three-dimensional-printed mount. Paint commands are transmitted via a radio directly connected to a servo-motor operating the spray nozzle. A real-time algorithm determines the optimal amount of paint of the current color to spray. "In this research, we show that by combining computer graphics and computer-vision techniques, we can bring such assistance technology to the physical world even for this very traditional painting medium, creating a somewhat unconventional form of digital fabrication," says Dartmouth professor Wojciech Jarosz.
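A simplified sketch of the control decision described above, assuming the system compares the target image's paint density at the tracked can position with what has already been deposited and opens the valve only where more is needed; the thresholding rule and data layout are illustrative, not the published algorithm.

```python
# Open the spray valve only where the target image still needs more of the
# current color than has been painted so far.
def should_spray(target, deposited, x, y, threshold=0.1):
    """Spray if the target density at (x, y) exceeds what has been painted so far."""
    return (target[y][x] - deposited[y][x]) > threshold

# Tiny 3x3 example: target darkness vs. paint already applied.
target    = [[0.9, 0.2, 0.0],
             [0.8, 0.5, 0.1],
             [0.7, 0.4, 0.0]]
deposited = [[0.3, 0.2, 0.0],
             [0.8, 0.1, 0.0],
             [0.0, 0.0, 0.0]]

print(should_spray(target, deposited, x=0, y=0))  # True: more paint needed here
print(should_spray(target, deposited, x=0, y=1))  # False: already covered
```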


Google, Facebook CAPTCHAs Beat by Bot
InformationWeek (04/08/16) Thomas Claburn

Columbia University researchers have created an automated system to bypass the Completely Automated Public Turing tests to tell Computers and Humans Apart (CAPTCHAs) used by Google and Facebook. The researchers say their system "is extremely effective, automatically solving 70.78 percent of the image reCAPTCHA challenges, while requiring only 19 seconds per challenge." The CAPTCHA research is an example of a security arms race in which defenses get compromised and then hardened, only to be overcome again, notes Columbia University's Iasonas Polakis. In the past few years, advances in generic solvers against text CAPTCHAs have made distorted-text challenges obsolete, and CAPTCHA designers have turned to more advanced tasks such as extracting semantic information from images. The novel attacks developed by the Columbia researchers underscore the difficulty facing those trying to design functional CAPTCHAs. "We believe that the capabilities of computer vision and machine learning have finally reached the point where expectations of automatically distinguishing between humans and bots with existing CAPTCHA schemes, without excluding a considerable number of legitimate users in the process, seem unrealistic," Polakis says. "As these capabilities can only improve, it will become even more difficult to devise CAPTCHAs that can withstand automated attacks."


Trawling the Net to Target Internet Trolls
Lancaster University (04/05/16)

A team from Lancaster University's Center for Corpus Approaches to Social Science (CASS) has developed the Filter, Identify, Report, and Export Analysis Tool (FireAnt), free software that can pinpoint trolls on social networks. The CASS researchers, led by Claire Hardaker, say FireAnt quickly downloads, analyzes, and filters out the noise from millions of messages, leaving users with relevant and useful information for further investigation, all at the touch of a button. FireAnt can handle data from Twitter and other online sources such as Facebook and Google+, and uses practical filters such as username, location, time, and content. "The filtered information can then be presented as raw data, a time-series graph, a geographical map, or even a visualization of the network interactions," Hardaker says. She notes the data contains signals that can be used to identify accounts, texts, and behaviors of interest. Hardaker says trying to identify relevant messages from websites such as Twitter is a challenge because they produce data at very high volumes. "It will allow the ordinary user to download Twitter data for their own analyses," she says. "Once this is collected, FireAnt then becomes an intelligent filter that discards unwanted messages and leaves behind data that can provide all-important answers."
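As a rough illustration of the filtering step (this is not FireAnt's code or API), the sketch below applies username, location, time-window, and keyword filters to tweet records stored as dictionaries.

```python
# Filter a stream of tweet records by username, location, time window, and
# content keyword; any filter left as None is skipped.
from datetime import datetime

def filter_tweets(tweets, username=None, location=None,
                  start=None, end=None, keyword=None):
    for t in tweets:
        if username and t["user"] != username:
            continue
        if location and t["location"] != location:
            continue
        if start and t["time"] < start:
            continue
        if end and t["time"] > end:
            continue
        if keyword and keyword.lower() not in t["text"].lower():
            continue
        yield t

tweets = [
    {"user": "alice", "location": "London", "time": datetime(2016, 4, 1, 9), "text": "morning commute"},
    {"user": "troll42", "location": "London", "time": datetime(2016, 4, 1, 23), "text": "abusive message"},
]
print(list(filter_tweets(tweets, location="London", keyword="abusive")))
```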


Why Robots Need to Be Able to Say 'No'
The Conversation (04/08/16) Matthias Scheutz

Robots' blind obedience to human instructions can lead to harmful results and unwanted outcomes, creating a case for the machines to be programmed to detect the potential harm their actions could cause and respond by either trying to avoid it or refusing to obey the order, writes Tufts University professor Matthias Scheutz. He says his lab has begun developing robot controls "that make simple inferences based on human commands. These will determine whether the robot should carry them out as instructed or reject them because they violate an ethical principle the robot is programmed to obey." Understanding the potential hazards of instructions requires substantial background knowledge, and the robot must gauge not only the outcomes of actions themselves, but also the intentions of the people giving the instructions. Scheutz says this involves making robots capable of explicitly reasoning through the consequences of actions and comparing the results to established social and moral precepts dictating what is and is not desirable or legal. "In general, robots should never perform illegal actions, nor should they perform legal actions that are not desirable," he notes. "Hence, they will need representations of laws, moral norms, and even etiquette in order to be able to determine whether the outcomes of an instructed action, or even the action itself, might be in violation of those principles."
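A minimal sketch of this reject-or-execute idea, assuming a command's predicted outcomes are checked against a small set of forbidden effects before execution; the rules, outcome model, and command format are invented, and the Tufts controllers reason over far richer representations.

```python
# Check a command's predicted outcomes against forbidden effects; refuse if
# any would be violated, otherwise execute.
FORBIDDEN_OUTCOMES = {"harms_human", "damages_property", "breaks_law"}

def predict_outcomes(command):
    """Hypothetical outcome model mapping a command to the effects it would cause."""
    if command == "walk forward":      # e.g., the robot is at the edge of a table
        return {"damages_property"}
    return set()

def execute(command):
    bad = predict_outcomes(command) & FORBIDDEN_OUTCOMES
    if bad:
        return f"Refusing '{command}': would violate {', '.join(sorted(bad))}"
    return f"Executing '{command}'"

print(execute("walk forward"))
print(execute("wave hello"))
```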


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe