Welcome to the December 5, 2014 edition of ACM TechNews, providing timely information for IT professionals three times a week.
HEADLINES AT A GLANCE
Volunteers Contribute Spare Cycles to Ebola Research
HPC Wire (12/04/14) Tiffany Trader
IBM announced that its World Community Grid will provide free virtual supercomputing power to The Scripps Research Institute's (TSRI) "Outsmart Ebola Together" volunteer computing project, which seeks to find a cure for the deadly disease. World Community Grid is a virtual supercomputer project established 10 years ago. Users download an IBM client onto their Apple or Windows PC or Android mobile device, which ties them into a "grid computer" that pools the collective computing power of the devices to offer supercomputer-equivalent processing power. Since its inception, the World Community Grid has harnessed the computing power of nearly 3 million devices and assisted in more than 20 research projects. For the new project, users will download TSRI-developed software called AutoDock and AutoDock Vina, which will screen millions of chemical compounds for their effect on the Ebola virus. The most promising candidate compounds identified by the World Community Grid will be physically tested in the lab, with the aim of leading to drug trials and an approved drug. The use of the virtual supercomputer will "dramatically accelerate the process of identifying a cure," according to IBM.
Computers That Teach by Example
MIT News (12/05/14) Larry Hardesty
At an upcoming Conference on Neural Information Processing Systems, Massachusetts Institute of Technology (MIT) researchers will detail a new system designed to improve decision-making by enabling pattern recognition that can be rendered into simple-to-understand examples. "We were looking at whether we could augment a machine-learning technique so that it supported people in performing recognition-primed decision-making," says MIT professor Julie Shah. Shah and colleagues sought to enhance unsupervised machine learning, in which the system looks for common elements in unstructured data, yielding a set of data clusters whose members are related in possibly non-obvious ways. The researchers modified an algorithm often used in unsupervised learning, basing the clustering on both the data items' shared features and their similarity to some representative example, or prototype. In addition, instead of ranking shared features by importance, the tweaked algorithm attempts to winnow the list of features down to a representative set, or subspace. The algorithm's performance on classic machine-learning tasks was mostly equal to that of its precursor, while later experiments showed people using the new algorithm were more than 20 percent better at classification than those using a similar system based on current algorithms.
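The prototype idea can be sketched in a few lines of code. This is a deliberately simplified, hypothetical illustration — not the MIT researchers' actual algorithm — showing why anchoring each cluster to a real data point makes clusters explainable by example:

```python
# Toy prototype-based clustering: each cluster is anchored to an actual
# data point (its prototype), so the cluster can be explained simply by
# showing that example. Hypothetical simplification for illustration only.

def dist(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def cluster_by_prototypes(points, prototypes):
    """Assign every point to its nearest prototype (a real data point)."""
    clusters = {tuple(p): [] for p in prototypes}
    for pt in points:
        nearest = min(prototypes, key=lambda p: dist(pt, p))
        clusters[tuple(nearest)].append(pt)
    return clusters

points = [(1, 1), (1, 2), (8, 8), (9, 8), (2, 1)]
clusters = cluster_by_prototypes(points, prototypes=[(1, 1), (8, 8)])
```

A user shown the cluster anchored at (1, 1) sees a concrete member of the data, not an abstract centroid — the intuition behind rendering pattern recognition as easy-to-understand examples.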
Technique Captures Unique Eye Traits to Produce More Realistic Faces
Eyes have proven a major challenge for computer animators trying to create realistic digital faces, but researchers at Disney Research Zurich have developed a technique they say will make the process much easier and faster. The technique uses multiple cameras and varied lighting to capture the shape and texture of an actor's eye. Features such as the shape and texture of the white sclera, the shape and refraction of the cornea, and the shape and color of the iris are unique to every individual and contribute significantly to what makes eyes look lifelike. The researchers' technique enables them not just to realistically model these features, but also to duplicate the ways they change in different lighting conditions and as the muscles around the eye move to create expressions. Pascal Berard, a computer graphics student at Disney Research Zurich, says although current techniques for digitally modeling eyes require a significant amount of manual effort, "our reconstruction technique can greatly reduce the time spent and help increase the realism of the eye." The researchers presented their technique at this week's ACM SIGGRAPH Asia conference in Shenzhen, China.
Alan Turing Institute for Data Science to Be Based at British Library
London Guardian (United Kingdom) (12/04/14) Caroline Davies
The collection and analysis of big data will be the prime focus of the new Alan Turing Institute for Data Science, to be located at the British Library in London's new Knowledge Quarter and to be built at a cost of 42 million British pounds, according to chancellor George Osborne. The center is named after the pioneering mathematician credited as the father of modern computer science, who also played a crucial role in the cracking of the Enigma code that accelerated the end of World War II. "I think it is a fitting tribute to his name and memory that here, in the center of our capital, there is an institute that is named after him," Osborne says. Among the 35 academic, cultural, research, scientific, and media organizations participating in the Knowledge Quarter revitalization project are the British Library, Google, the Wellcome Trust, Camden Council, the British Museum, Central Saint Martins, University College London, the Francis Crick Institute, the Royal College of Physicians, and the Guardian newspaper.
Intelligence Agency Wants an Even More Super Supercomputer
NextGov.com (12/03/14) Frank Konkel
A new program from the U.S. Intelligence Advanced Research Projects Activity (IARPA) could fundamentally change the field of supercomputing. The Cryogenic Computer Complexity (C3) program will use recent breakthroughs in superconducting technologies to find a long-term successor to machines based on complementary metal-oxide semiconductor technology, whose energy and cooling demands are becoming unmanageable. "Computers based on superconducting logic integrated with new kinds of cryogenic memory will allow expansion of current computing facilities while staying within space and energy budgets, and may enable supercomputer development beyond the exascale," says IARPA's Marc Manheimer. IARPA has awarded research contracts to teams led by IBM, Raytheon-BBN, and Northrop Grumman. C3 will develop critical components for memory and logic subsystems and plan a prototype computer, with the goal of integrating the components into the world's first superconducting supercomputer. The machine would be smaller, require less physical infrastructure to cool, and have a smaller energy footprint than current supercomputers.
Jefferson Science Fellowship Experience
CCC Blog (12/03/14) Helen Vasaly; Stephanie Forrest
University of New Mexico computer science professor Stephanie Forrest describes her one-year Jefferson Science Fellowship at the U.S. State Department as an enriching experience. The fellowships, which annually place 13 science and engineering advisers at State and the U.S. Agency for International Development, are awarded to tenured professors across a broad range of science and engineering fields, and Forrest says she elected to concentrate on cyberpolicy. Her focus topics during the fellowship included Internet governance, botnet takedowns, cybersecurity, cloud computing, privacy, and big data. Forrest says of particular interest to her was participating in the inter-agency process that culminated in the U.S. announcement of plans to globalize responsibility for the Internet Assigned Numbers Authority functions. The concept is to place responsibility for these functions within a multi-stakeholder community, and discussions are underway to determine how to organize this community and how to make sure all Internet stakeholders are represented fairly. Forrest notes another project she worked on aimed to mitigate the risk of unintentional cyber conflict between countries via a series of confidence-building measures for cyberspace. She says she hopes her contributions "changed the way some people think and convinced them that deep technical knowledge about computer networks and cyberattacks is a crucial component for policymaking in this domain and that the civilian viewpoint counts."
How to Get More Latinas Into Tech
USA Today (12/03/14) Laura Mandaro
Silicon Valley could be more innovative if it drew from an even richer pool of ideas, suggests media startup Vyv co-founder Laura Gomez. Gomez, 35, came to the U.S. from Mexico at age 10, enrolled as a computer science student at the University of California, Berkeley at 17, and appeared headed for a career in the field. However, Gomez felt overwhelmed in an introductory computer science course with few other women in the class. She decided to pursue a master's degree in sociology and Latin American studies, then traveled, and finally reconsidered a career in technology when she was faced with having to pay off her student loans. Over the years, Gomez has worked as a contractor for YouTube, led Twitter en espanol, and headed internationalization and localization for Jawbone. She says Latinas and African-Americans can take a similar path, but believes tech companies also need to break out of their hiring patterns. Gomez says women and minorities must first be encouraged to pursue tech and science, and then recruited, supported, and promoted. "Tech is not just done by programming," she notes. "I want to see more girls like me in 20 years."
Stanford Engineers Take Big Step Toward Using Light Instead of Wires Inside Computers
Stanford Report (CA) (12/02/14) Chris Cesare
Stanford University researchers have developed a prism-like device that can split a beam of light into different colors and bend the light at right angles, a breakthrough they say could lead to computers that use optics to carry data. The device includes an optical link made of a tiny slice of silicon etched with a barcode-like pattern that splits waves of light in the manner of a small-scale prism. When a beam of light is shined at the link, two different wavelengths of light split off at right angles to the input, forming a "T" shape. "Light can carry more data than a wire, and it takes less energy to transmit photons than electrons," notes Stanford professor Jelena Vuckovic. The researchers also developed an algorithm that automates the process of designing optical structures, enabling them to create new nanoscale structures to control light. The algorithm designed a structure that alternated strips of silicon and gaps of air in a specific way, directing one wavelength of light to go left and a different wavelength to go right, giving researchers a tool to create optical components that perform specific functions.
DARPA Looks to Connect Complex System Security Dots and Wipe Out Malicious Cyberattacks
Network World (12/02/14) Michael Cooney
Greater visibility into the internal workings of computing systems could help boost security, according to researchers at the U.S. Defense Advanced Research Projects Agency (DARPA). Later this month, DARPA plans to detail a program that seeks to develop technologies that would record and preserve the provenance of all system elements and dynamically track interactions and causal dependencies among cyber-system components. Moreover, the technologies developed for the Transparent Computing (TC) program would assemble these dependencies into end-to-end system behaviors, and would be able to reason over the behaviors forensically and in real time. DARPA researchers liken modern computing systems to a black box; the lack of visibility into their real machinery and software means researchers have a limited understanding of cyber behaviors, hampering efforts to detect and counter attacks such as advanced persistent threats (APTs). "By automatically or semi-automatically 'connecting the dots' across multiple activities that are individually legitimate but collectively indicate malice or abnormal behavior, TC will enable the prompt detection of APTs and other cyber threats, and allow complete root cause analysis and damage assessment once adversary activity is identified," according to DARPA.
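The "connecting the dots" idea can be illustrated with a toy provenance chain. The events and structure below are entirely hypothetical — a sketch of the general concept, not the actual TC design: if every system event records what caused it, a detected malicious event can be walked back to its root cause.

```python
# Toy provenance chain: each event records the event that caused it.
# Individually legitimate steps form a chain that, taken together,
# reveals the attack. Purely illustrative; not DARPA's TC design.

events = {
    "open_doc":   None,          # root event: user opens a document
    "spawn_sh":   "open_doc",    # the document spawned a shell
    "download":   "spawn_sh",    # the shell fetched a payload
    "exfiltrate": "download",    # the payload sent data out
}

def root_cause_trace(event, parents):
    """Walk parent links from a detected event back to its root cause."""
    chain = [event]
    while parents[event] is not None:
        event = parents[event]
        chain.append(event)
    return chain

trace = root_cause_trace("exfiltrate", events)
```

Each step in the chain (opening a document, running a shell, downloading a file) is innocuous alone; only the preserved causal trail exposes the end-to-end behavior.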
How Google "Translates" Pictures Into Words Using Vector Space Mathematics
Technology Review (12/01/14)
Google is applying the techniques it uses to translate text between languages to a system that automatically creates captions for images. Google Translate uses a method known as vector space mathematics to translate text. The machine-learning technique focuses on where words appear in a sentence and in relation to other words, defining words as vectors in relation to one another and sentences as combinations of vectors. Google Translate converts a given sentence into a vector equation and then maps that equation into the desired language. Google now is using this technique in a new system it calls Neural Image Caption (NIC). The system started by studying a dataset of 100,000 images and their captions to learn how to classify images and their contents. It then converts these captions into vector equations to learn how an image relates to the words used to describe it. When shown an unfamiliar image, the system generates a vector equation that can be plugged into the existing Google translation algorithm to generate a caption in any language. Judged by human evaluators on Amazon's Mechanical Turk, NIC's captions received a rating of 59, more than double the 25 rating of current state-of-the-art captioning techniques, although still behind human captioners at 69.
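The vector-space intuition can be shown with a toy example. The two-dimensional vectors below are invented for illustration — real systems learn hundreds of dimensions from huge corpora — but the mechanism is the same: a word (or image) maps to whatever sits nearest to it in the target vocabulary's vector space.

```python
import math

# Toy vector-space translation: words are vectors, and a word translates
# to its nearest neighbor in the other language's space. The vectors are
# invented for illustration; they are not Google's learned embeddings.

en = {"cat": (0.9, 0.1), "dog": (0.1, 0.9)}
fr = {"chat": (0.88, 0.12), "chien": (0.15, 0.85)}

def cosine(a, b):
    """Cosine similarity between two vectors (1.0 = same direction)."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

def translate(word, src, dst):
    """Map a source word to the closest vector in the target vocabulary."""
    return max(dst, key=lambda w: cosine(src[word], dst[w]))

result = translate("cat", en, fr)
```

NIC's twist is that the "source vocabulary" is images rather than words: an image is embedded as a vector, and the translation machinery decodes that vector into a caption.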
Cleaning Bot Operators Get Censored View of Your Home
New Scientist (12/03/14) No. 2998 Mark Harris
Household cleaning robots may soon be controlled remotely via the Internet while preserving their owners' privacy by blurring the video feed, suggests University of Washington in Seattle professor Maya Cakmak. Willow Garage plans to spin out the technology as a business, linking domestic robots to "digital immigrant" workers worldwide. To test the system, Cakmak deployed a PR2 robot, equipped with digital filters on its video feed, in a private home in Arizona. The most robust filter was an algorithm that pixelated parts of an image and added different colors to hide brands and logos. "It makes everything more abstract," Cakmak says. "Your house doesn't seem like your house anymore, it seems like any house." The filter also works for autonomous robots with cameras. Oregon State University's Bill Smart is similarly studying telerobotic privacy. He built a system for unskilled remote operators to change bed sheets using a PR2, but instead of blurring the image, Smart lets homeowners specify three-dimensional areas to censor, such as a bedside table, where operators would see a black space. He also developed a mat that automatically erases anything placed on it, and a hat that makes the wearer invisible in the robot's video feed.
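The core of a pixelation filter is simple to sketch. This is a bare-bones stand-in for the idea, assuming a grayscale image as a list of rows — it is not the researchers' implementation, which also recolors regions to hide brands and logos:

```python
# Toy pixelation filter: average each block of a grayscale image so fine
# detail (text, logos, faces) becomes unrecoverable while coarse shapes
# survive. Simplified illustration, not the researchers' actual filter.

def pixelate(image, block):
    """Replace each block x block tile with its average intensity."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = [image[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            avg = sum(tile) // len(tile)
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    out[y][x] = avg
    return out

img = [[0, 255, 0, 255],
       [255, 0, 255, 0]]
blurred = pixelate(img, 2)
```

Larger block sizes destroy more detail, which is the dial between "operator can still do the task" and "your house seems like any house."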
A 24-Hour Social Innovation Hackathon to Fight World Hunger
University of California, Berkeley (12/02/14)
In late November, 36 people gathered at the University of California, Berkeley to participate in a Social Innovation Hackathon to support Heifer International's global activities to curb hunger. Participants came from the university as well as from across the San Francisco Bay Area, and were divided into groups to strategize, design, and code. Seven teams emerged with working prototypes following the 24-hour hacking marathon. The winning project was an Android app to support program monitoring and evaluation. The app was designed by William Wu and Jiehua Chen of Quantitative Engineering Design, and enables users to collect a standard set of data while in the field and cache it locally on their devices. When the devices detect an Internet connection, the stored information syncs to the database at Heifer headquarters, making data available online in the form of tables and maps to facilitate data analysis and visualization. The app also automatically georeferences and timestamps the data, and supports the inclusion of photos taken in the field. Many areas where Heifer International operates have poor or no Internet connectivity, requiring staff to use paper documentation and manual data entry at Heifer offices. The winning app enables the collection of accurate data in the field.
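The cache-then-sync pattern the winning app uses can be sketched as follows. Class and field names here are hypothetical, invented for illustration — this is not the hackathon team's code, just the general offline-first design:

```python
import time

# Toy offline-first collector: records are cached locally with location
# and timestamp at capture time, then flushed to the server whenever a
# connection appears. Hypothetical names; not the hackathon app's code.

class FieldCollector:
    def __init__(self):
        self.cache = []   # local store while offline
        self.synced = []  # stands in for the headquarters database

    def record(self, data, lat, lon):
        """Georeference and timestamp each record when it is captured."""
        self.cache.append({"data": data, "lat": lat, "lon": lon,
                           "ts": time.time()})

    def sync(self, online):
        """Flush the local cache to the server once a connection exists."""
        if online:
            self.synced.extend(self.cache)
            self.cache = []

c = FieldCollector()
c.record({"goats": 4}, lat=34.7, lon=112.4)
c.sync(online=False)   # no connection: the record stays cached
c.sync(online=True)    # connection found: the cache flushes
```

Timestamping and georeferencing at capture time, rather than at sync time, is what keeps the data accurate even when the upload happens days later.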
How Can the Global Internet Be Governed?
Brown University (12/02/14) Kevin Stacey
There is little governance for the Internet outside of domain name and Internet Protocol address management, and Brown University professor John Savage offered a working paper on tackling Internet governance at this week's fifth Global Cyberspace Cooperation Summit in Berlin. Savage says the paper focuses on the meaning of Internet governance, which can vary among stakeholders. The paper outlines five wide-ranging issues: network architecture, content control, cybercrime, cyberattacks, and human rights. Savage argues existing organizations set up to deal with issues such as human rights and crime should be authorized to manage Internet-related governance matters that fall within their specialty. A key challenge is making these agencies Internet-savvy, and to do this Savage suggests the organizations convene multi-stakeholder forums with Internet experts to air these issues. "As a consequence, they can make better decisions and move forward more rapidly," he says. Savage cites decentralized Internet governance as beneficial, given the Internet's complexity and the greater likelihood of consensus on key issues among stakeholders under such a framework. "If one organization governs all of the Internet, some governments may be tempted to try to capture control of this organization and, thereby, have too great a say about Internet governance," he warns.
Abstract News © Copyright 2014 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.