Association for Computing Machinery
Welcome to the June 2, 2014 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, iPhones, and iPads.


NSA Collecting Millions of Faces From Web Images
The New York Times (06/01/14) James Risen; Laura Poitras

The U.S. National Security Agency (NSA) is intercepting massive numbers of facial images from communications tapped by its global surveillance operations for use in facial-recognition programs, according to top-secret documents acquired from former NSA contractor Edward J. Snowden. The documents show the agency deems facial images and other physical identifiers to be just as valuable in tracking down intelligence targets as written and oral communications. NSA stands apart from other agencies in its ability to match images with huge caches of private communications, although the University of Massachusetts' Dalila B. Megherbi says facial-recognition algorithms can be affected by images taken from different angles and at different resolutions, which can lead to errors. In 2010, the NSA made a breakthrough in facial recognition when analysis matched images compiled separately in two databases, and this cross-referencing ability has led to a boom in analytical uses within the agency. NSA has established identity intelligence analysis teams that develop profiles of intelligence targets by matching facial images with other records about individuals. The documents' disclosure deepens civil-liberties proponents' concerns that improving technology used by government and industry jeopardizes privacy. NSA ramped up its use of facial-recognition technology under the Obama administration following several attempted attacks.

How to Make Robots Seem Less Creepy
The Wall Street Journal (06/01/14) Adam Waytz; Michael Norton

Recent research has shown that the "uncanny valley" hypothesis for human-robot interaction is overstated, and that when emotional jobs must be "botsourced," people actually prefer robots that seem capable of conveying some degree of human emotion. The latest human-robot interaction research combines breakthroughs in robotics and psychology to suggest five important design features. The first is that giving robots faces improves human-robot interaction. For example, the Massachusetts Institute of Technology's Nexi robot has more of a "baby face" and appears more capable of feeling than robots with longer chins, which appear more professorial. In addition, research shows child-faced robots are less likely to threaten elderly individuals, who will be the primary users of future robotic healthcare assistants. Research also shows a robot's voice is very important to its acceptance. One study found people trusted and enjoyed self-driving cars much more when the car had a voice than when it drove intelligently but silently. People also prefer robots that mimic their behavior. Mimicry provides a type of empathy that is important for human-robot interaction, and research shows it can be conveyed even by robots that do not have a physical presence. Finally, research shows giving robots some element of unpredictability can enhance their acceptance.

Team Develops Software Able to Identify and Track a Specific Individual Within a Group
Spanish National Research Council (CSIC) (06/01/14)

Researchers from the Spanish National Research Council (CSIC) say they have taken a new approach to monitoring animals that move in groups, with hopes of learning their rules of interaction. A team from the Cajal Institute developed algorithms that enable the identification of each animal in a group, and then developed software called idTracker. The software first searches the video for frames in which the animals are separated and can be differentiated, then identifies and recognizes each individual's image in every frame of the video. The image of the individual, with its unique features, serves as the particular "footprint" of each animal, and even if animals hide and temporarily disappear, the program recognizes them when they reenter the scene. "From now on, we will be able to quantitatively determine the rules of animal behavior in groups taking into account the individuality of each animal," says Gonzalo G. de Polavieja, who led the study. He says the software can be used with a variety of different species, and could eventually be used to recognize people in large crowds or even vehicles or parts in a factory.
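The fingerprint-matching idea described above can be illustrated with a minimal sketch. This is not the actual idTracker code; the histogram fingerprint, distance measure, and toy data are all illustrative assumptions standing in for idTracker's richer appearance model.

```python
# Sketch of identification by per-individual appearance "fingerprints":
# build a reference histogram for each animal from frames where it appears
# alone, then label later detections by the nearest reference.

def histogram(pixels, bins=8):
    """Coarse intensity histogram used as an appearance fingerprint."""
    counts = [0] * bins
    for p in pixels:  # p is a grayscale value in 0..255
        counts[min(p * bins // 256, bins - 1)] += 1
    total = float(len(pixels))
    return [c / total for c in counts]

def distance(h1, h2):
    """Sum of absolute differences between two normalized histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def identify(detection, references):
    """Return the ID whose reference fingerprint is closest to the detection."""
    h = histogram(detection)
    return min(references, key=lambda name: distance(h, references[name]))

# Toy data: animal A is dark, animal B is bright.
references = {
    "A": histogram([30, 40, 35, 50] * 25),
    "B": histogram([200, 210, 190, 220] * 25),
}
print(identify([45, 38, 42, 55] * 25, references))  # a dark detection -> A
```

Because the fingerprint depends only on appearance, a matching scheme like this can re-identify an individual after it has been hidden, which is the property the article highlights.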

Security and Privacy? Now They Can Go Hand in Hand
CORDIS News (05/30/14)

Attribute-based Credentials for Trust (ABC4Trust), a European Union-funded research project, has developed a new way to keep systems secure while protecting the privacy of people online. The researchers say ABC4Trust makes it easy to manage online identities that increase privacy while maintaining security. The technology has been used in a pilot at a secondary school in Sweden to enable students to access counseling services online without identifying themselves by name. The pilot issued each student a "deck" of digital certificates that validate information such as their enrollment status and date of birth, and the students could use one of the certificates that uses a pseudonym to verify they are enrolled at the school. Another pilot enabled students at the University of Patras in Greece to give anonymous feedback on courses and lectures. The technology also ensured that only registered students could take part in polls. Project coordinator Kai Rannenberg expects more public services and other organizations will use the technology in the near future.

Think Fast, Robot
MIT News (05/30/14) Larry Hardesty

Researchers at the Massachusetts Institute of Technology (MIT) and the University of Zurich say they have developed the first state-estimation algorithm to process data from event-based sensors. The researchers say a robot running the new algorithm could update its location about every 1/1,000 of a second, enabling it to perform much more nimble maneuvers. With an event-based sensor, "each pixel acts as an independent sensor," says Andrea Censi, a research scientist in MIT's Laboratory for Information and Decision Systems. Censi says one advantage of the algorithm is that it does not have to identify features because every event is intrinsically a change in luminance, which is what defines a feature. Moreover, since the events are reported so quickly, the problem of matching becomes much simpler because there are not as many candidate features to consider. The algorithm also does not try to match all the features in an image at once. Instead, for each event, it generates a set of hypotheses about how far the robot has moved, corresponding to several candidate features. During testing, the researchers say the algorithm proved just as accurate as existing state-estimation algorithms. "This has quite some potential for specific high-speed robot motions," says Swiss Federal Institute of Technology professor Roland Siegwart.
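The per-event hypothesis step described above can be sketched in a few lines. This is not MIT's algorithm, just a one-dimensional illustration of the idea: each event, instead of a full image, contributes a small set of displacement hypotheses, and the estimate is the displacement on which the most hypotheses agree.

```python
# Each event at pixel x hypothesizes the shifts that would map a known
# feature onto x; the most-voted shift becomes the displacement estimate.

from collections import Counter

def update_estimate(events, feature_positions, max_shift=3):
    """events: pixel positions that fired; feature_positions: where
    features were previously located. Returns the consensus shift."""
    votes = Counter()
    for x in events:
        for f in feature_positions:
            shift = x - f
            if abs(shift) <= max_shift:  # only plausible candidate features
                votes[shift] += 1
    return votes.most_common(1)[0][0] if votes else 0

# Features previously at pixels 10, 20, 30; the robot moved right by 2,
# so events fire at 12, 22, 32 (plus one spurious event at 15).
print(update_estimate([12, 22, 32, 15], [10, 20, 30]))  # -> 2
```

Because each event is processed independently, an update like this can run at the per-event rate the article mentions rather than waiting for whole frames.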

How MIT and Caltech's Coding Breakthrough Could Accelerate Mobile Network Speeds
Network World (05/29/14) Steven Max Patterson

Researchers from the Massachusetts Institute of Technology, the California Institute of Technology, and Aalborg University say they have successfully transmitted data without link layer flow control overloading throughput with retransmission requests, and have optimized transmission size for network efficiency and application latency limitations. They achieved this stateless transmission using Random Linear Network Coding (RLNC), and the universities are collaborating to commercialize the technology via the Code On Technologies effort. Data issued by Code On indicates RLNC was 13 percent to 465 percent faster than industry-standard Reed-Solomon encoding in storage area network erasure applications testing. Code On also released data demonstrating RLNC enhanced the throughput of mobile video over Wi-Fi. An RLNC transmission can recover from errors with neither sender nor receiver storing and updating transmission-state information and requesting retransmission of lost packets, as RLNC can reproduce any packet lost on the receiving side from a later sequenced packet. The sender can continuously transmit at near-wire speed optimized for latency and network throughput because the RLNC encoding sender does not have to listen for acknowledgements of successful transmission and perhaps resend. Moreover, RLNC encoding can ride atop the TCP/IP protocol, so implementation does not necessitate replacement of communications equipment.
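The recovery property described above can be shown with a hedged sketch (not Code On's implementation): the sender transmits linear combinations of the original packets, and any set of combinations with an invertible coefficient matrix lets the receiver recover every packet, so a lost transmission is replaced by any later one with no retransmission request. Real RLNC draws coefficients at random over a finite field such as GF(2^8); fixed integer coefficients and exact rational arithmetic keep this sketch simple.

```python
from fractions import Fraction

def encode(packets, coeffs):
    """One coded symbol per position: sum of coefficient * packet symbol."""
    return [sum(c * p[i] for c, p in zip(coeffs, packets))
            for i in range(len(packets[0]))]

def decode(coeff_rows, coded, k):
    """Recover k packets from k coded transmissions via Gauss-Jordan
    elimination on the augmented coefficient matrix."""
    a = [[Fraction(x) for x in row + payload]
         for row, payload in zip(coeff_rows, coded)]
    for col in range(k):
        pivot = next(r for r in range(col, k) if a[r][col] != 0)
        a[col], a[pivot] = a[pivot], a[col]
        a[col] = [x / a[col][col] for x in a[col]]
        for r in range(k):
            if r != col and a[r][col] != 0:
                a[r] = [x - a[r][col] * y for x, y in zip(a[r], a[col])]
    return [[int(x) for x in a[r][k:]] for r in range(k)]

packets = [[1, 2, 3], [4, 5, 6]]             # two original packets
rows = [[1, 1], [1, 2], [2, 1]]              # coefficient vectors sent in headers
coded = [encode(packets, c) for c in rows]   # three coded transmissions
# Suppose the first transmission is lost: any two of the three suffice.
print(decode(rows[1:], coded[1:], k=2))      # -> [[1, 2, 3], [4, 5, 6]]
```

This is why the sender needs no acknowledgements: it does not matter which particular transmissions arrive, only that enough linearly independent ones do.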

Google Invests in Satellites to Spread Internet Access
The Wall Street Journal (06/01/14) Alistair Barr; Andy Pasztor

In an effort to overcome financial and technical problems associated with past efforts, Google plans to spend more than $1 billion on a group of satellites designed to extend Internet access to unwired regions of the world. The project will start with 180 small, high-capacity satellites orbiting the Earth at low altitudes, and then will expand with more, larger satellites. Google also is working on Project Loon, which involves high-altitude balloons providing broadband service to remote parts of the world. "Google and Facebook are trying to figure out ways of reaching populations that thus far have been unreachable," says analyst Susan Irwin. Google's efforts to deliver Internet service to unserved regions via multiple projects are consistent with how it approaches other new markets, because even if one or more projects do not succeed, the company can use what it learned from those failures in other areas. Google's satellite effort will have to overcome regulatory hurdles, such as coordinating with other satellite operators so the new fleet does not interfere with others. However, if Google succeeds, it "could amount to a sea change in the way people will get access to the Internet, from the Third World to even some suburban areas of the U.S.," says analyst Jeremy Rose.

ASC14 Marks Seventh Win for GPUs
HPC Wire (05/29/14) Tiffany Trader

A team of Shanghai Jiao Tong University (SJTU) researchers recently took the top spot at ASC14, the world's largest student supercomputer challenge, which was held in April at Sun Yat-sen University in Guangzhou, China. The SJTU team, which built a cluster comprising eight NVIDIA K20 graphics-processing unit (GPU) accelerators, earned the highest combined scores for a series of six tests, including an elastic wave modeling application, 3D-EW; a quantum chemistry application, Quantum ESPRESSO; and other real-world scientific programs. The SJTU team practiced running code on an SJTU supercomputer, which is equipped with 100 NVIDIA Tesla K20 GPUs that make up 50 percent of the system's computational power. The SJTU researchers also reviewed the source codes used in the competition and identified the best optimization methods. Although SJTU's team did the best overall, China's Sun Yat-sen University team achieved a record 9.27 teraflops using 216 processor cores and eight NVIDIA K40 GPUs. Competitors focused on deep and fine strategic optimization for LINPACK testing that could best exploit the heterogeneous acceleration technology and improve floating-point computing capacity, says Sun Yat-sen team adviser Ye Weicai.

How EBay's Research Laboratories Are Tackling the Tricky Task of Fashion Recommendations
Technology Review (05/29/14)

EBay Research Labs' Anurag Bhardwaj and colleagues have developed two distinct fashion recommendation algorithms and then crowdsourced opinions about whether the recommendations they provide are worthwhile. The algorithms are trained on a data set of more than 13,000 photographs of fashion models found online. In each image, the model is wearing a top-and-bottom ensemble, allowing the algorithms to seek correlations between different top and skirt combinations. The deterministic fashion recommender algorithm appraises the colors in the top and compares them to the colors in the bottom, and then assigns each combination a rating that can be compared to other top-bottom combinations. The second algorithm employs the predefined rule that patterned clothing coordinates well with apparel that has a solid color, which guarantees that when it is presented with a patterned top, for example, all of its recommendations will be for a skirt with a solid color. To determine if the recommendations were useful, the researchers asked 150 people to rate them, and found people generally prefer a solid-colored skirt with a patterned top. Another finding was that users favored simple patterns over complex patterns.
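The rule-based approach described above can be sketched briefly. The article gives only the rule (pattern coordinates with solid), not the code, so the item data, the color-contrast ranking, and all names here are illustrative assumptions.

```python
# A patterned top is paired only with solid-colored bottoms; candidates
# are then ranked by a simple color-contrast score (illustrative choice).

def color_distance(c1, c2):
    """Euclidean distance between two RGB colors."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def recommend(top, bottoms):
    # Rule from the article: at least one of the pair must be solid.
    candidates = [b for b in bottoms if b["solid"] or top["solid"]]
    # Rank candidates by how strongly their colors contrast with the top.
    return sorted(candidates,
                  key=lambda b: -color_distance(top["rgb"], b["rgb"]))

top = {"name": "striped blouse", "solid": False, "rgb": (200, 40, 40)}
bottoms = [
    {"name": "floral skirt", "solid": False, "rgb": (60, 60, 200)},
    {"name": "navy skirt",   "solid": True,  "rgb": (20, 20, 90)},
    {"name": "white skirt",  "solid": True,  "rgb": (250, 250, 250)},
]
for b in recommend(top, bottoms):
    print(b["name"])  # white skirt, then navy skirt; floral is filtered out
```

Hard rules like this guarantee the property the article mentions: a patterned top can never be matched with a patterned skirt.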

Sex Harassment App Helps Women Map Abuse
New Scientist (05/29/14) Paul Marks

A team at Cornell University has collaborated with researchers at the Bangladesh University of Engineering and Technology and North South University, both in Dhaka, on Protibadi, a smartphone app designed to combat sexual harassment. Protibadi, which means "one who protests" in Bengali, can send a shrill rape alarm via an onscreen button. The app also will forward help and location text messages to emergency contacts. In addition, the app can collate incident data from all users to create a heat map showing areas where harassment is at its worst. Users will be able to annotate the data with a brief blog post about the type of harassment experienced. Women who have signed up to try the app said they felt safer having it installed on their phone. "They loved the fact that they had one-touch emergency access to their friends any time they needed help," says Cornell's Ishtiaque Ahmed. "Most of the participants considered the map useful in choosing their routes around Dhaka city."

Google Turns to Machine Learning to Build a Better Data Center
ZDNet (05/28/14) Nick Heath

Google is looking to neural networks to improve the efficiency of its data centers. Neural networks are machine-learning algorithms that imitate the functioning of the human brain, specifically the interactions between neurons. Google mechanical engineer and data analyst Jim Gao says a typical large-scale data center generates millions of data points across thousands of sensors daily, but this data is primarily used for monitoring purposes only. However, Gao says advances in processing power and monitoring capabilities open a large opportunity for machine learning to improve efficiency. Forecasting data center efficiency with traditional engineering formulas is challenging due to the volume of data generated by mechanical and electrical equipment, and the complex interactions of connected systems. Neural networks can create models that predict outcomes from complicated systems because they can search for patterns within data and interactions between systems without requiring an upfront definition of interactions. Gao says Google can use models to forecast data center power usage effectiveness, automatically flag problems, discover energy-saving opportunities, and test new configurations.
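The fit-a-model-to-sensor-data idea described above can be shown with a minimal stand-in. Gao's actual model is a neural network over real data center telemetry; this sketch trains a single linear neuron by gradient descent on toy sensor readings, and all data, names, and coefficients are illustrative assumptions.

```python
# Fit one linear neuron (weights + bias) to predict a toy power usage
# effectiveness (PUE) value from two "sensor" features, by stochastic
# gradient descent on squared error.

def train(samples, targets, lr=0.1, epochs=5000):
    """samples: list of feature tuples; targets: observed PUE values."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, targets):
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Toy relationship: PUE rises with IT load and outside temperature.
samples = [(0.2, 0.1), (0.5, 0.4), (0.8, 0.9), (0.3, 0.7)]
targets = [1.0 + 0.3 * load + 0.2 * temp for load, temp in samples]
w, b = train(samples, targets)
pred = sum(wi * xi for wi, xi in zip(w, (0.6, 0.5))) + b
print(round(pred, 2))  # close to 1.0 + 0.3*0.6 + 0.2*0.5 = 1.28
```

The point of the article is that the same fitting procedure, scaled to a real network and millions of sensor readings, captures interactions between systems that no one wrote down as an upfront formula.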

Digital Actors Go Beyond the Uncanny Valley
IEEE Spectrum (05/27/14) Tekla S. Perry

Graphics specialists are close to developing interactive and photo-realistically lifelike digital humans that will transform acting, entertainment, and computer games. Experts say over the next decade computer-game characters will become indistinguishable from filmed humans, and a convergence of movies and games will result in new forms of entertainment. The entertainment industry hopes to seamlessly blend real and virtual worlds so that if people can imagine something, they will be able to actually see it. Filmmakers already use computer-generated humans, but doing so currently requires enormous amounts of computing resources. Five minutes of film requires 7,200 frames, based on the standard rate of 24 frames per second, making realistic digital doubles currently unfeasible for video-game creators. However, game makers believe realistic digital doubles will appear in games within a decade due to technological advances. Bringing this closer to a reality, the University of Southern California has created a real-time, photo-real digital human character called Digital Ira. Digital Ira starts with image data created by a three-dimensional scanning system, and then graphics processors turn the data into a moving image using calculations that determine movement and scene lighting. Individual frames appear on screen as they become available, at a rate of 60 frames per second.

Abstract News © Copyright 2014 INFORMATION, INC.
Powered by Information, Inc.

To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe