Association for Computing Machinery
Welcome to the February 26, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, as well as for iPhones and iPads.

HEADLINES AT A GLANCE


Pioneering Stanford Computer Researcher and Educator Edward McCluskey Dies
Stanford Report (02/25/16) Tom Abate

Stanford University professor Edward J. McCluskey, a pioneering researcher in electronics and computing, died on Feb. 13 at the age of 86. "He was the father of modern digital design," says IBM Research's Arvind Krishna. As a staff researcher at Bell Telephone Laboratories, McCluskey contributed significantly to the design of logic chips, and the Quine-McCluskey algorithm that bears his name paved the way for the automated design of complex chips and helped facilitate the success of the semiconductor industry. McCluskey's notable research accomplishments include founding the Stanford Digital Systems Laboratory, co-founding the Stanford Computer Engineering Program, and establishing the Center for Reliable Computing (CRC). The CRC made key contributions to the testing of computer chips and the design of fault-tolerant systems to prevent computer crashes. In 2012, McCluskey was honored with the IEEE John von Neumann Medal for lifetime achievement, and in 2008 he received the Pioneering Achievement Award from ACM's Special Interest Group on Design Automation (SIGDA). McCluskey also served as the first president of the IEEE Computer Society. Synopsys chairman Aart de Geus compares McCluskey "to a great oak tree that we suddenly see fall."
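
For readers unfamiliar with it, the Quine-McCluskey algorithm minimizes a Boolean function by repeatedly merging minterms that differ in a single bit until no further merges are possible; the surviving terms are the prime implicants from which a minimal circuit is then chosen. The sketch below illustrates only that first stage and is a simplified rendition, not the algorithm's original tabular formulation; the prime-implicant chart that selects the final cover is omitted.

    from itertools import combinations

    def quine_mccluskey_prime_implicants(minterms, n_bits):
        """Find prime implicants of a Boolean function given as a set of minterms.

        Each implicant is a string over {'0', '1', '-'}; '-' marks a merged bit.
        Only the first stage of Quine-McCluskey: choosing a minimum cover from
        the prime implicants (the implicant chart) is not shown here.
        """
        # Start with each minterm written as a fixed-width binary string.
        terms = {format(m, f'0{n_bits}b') for m in minterms}
        prime_implicants = set()

        while terms:
            merged = set()
            used = set()
            # Merge every pair of terms that differ in exactly one literal.
            for a, b in combinations(sorted(terms), 2):
                diff = [i for i in range(n_bits) if a[i] != b[i]]
                if len(diff) == 1 and a[diff[0]] != '-' and b[diff[0]] != '-':
                    i = diff[0]
                    merged.add(a[:i] + '-' + a[i+1:])
                    used.update({a, b})
            # Terms that could not be merged any further are prime implicants.
            prime_implicants.update(terms - used)
            terms = merged

        return prime_implicants

    # Example: f(A, B, C) with minterms 0, 1, 2, 5, 6, 7.
    print(quine_mccluskey_prime_implicants({0, 1, 2, 5, 6, 7}, 3))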


Why Tech Degrees Are Not Putting More Blacks and Hispanics Into Tech Jobs
The New York Times (02/25/16) Quoctrung Bui; Claire Cain Miller

Technology firms say a lack of black and Hispanic workers is mainly the result of too few qualifying graduates and applicants, but new research indicates black and Hispanic computer science and engineering majors outnumber the black and Hispanic workers in tech jobs. An analysis of U.S. Education Department data by University of Connecticut professor Maya A. Beasley found black students who studied science and technology were less likely than white students to stay with their majors when they felt they were underperforming, while those who stuck with their majors were less likely to apply for technical jobs. Beasley says they often pursued nonprofit or business work instead, sometimes because they were discouraged by negative stories about tech company culture and by the fact that few black people work there. "Any student of color looking at the numbers from the tech giants is going to be turned off and wary about taking a job there because it tells you something about what the climate is," she says. "They don't want to be the token." Meanwhile, there are suggestions that tech companies' recruitment efforts often exclude or overlook historically black colleges, as well as black and Hispanic students at prestigious institutions outside their mainstream recruiting networks. Research also has found that, during hiring, managers are prejudiced against black-sounding names on resumes, among other factors.


The Incredible Things Google's New Robot Can Do
The Washington Post (02/25/16) Matt McFarland

Atlas, a two-legged, untethered humanoid robot created by Google subsidiary Boston Dynamics, can walk on rough and unpredictable ground and always catch its balance when it slips. "It's a huge step towards getting robots that can actually operate in our world in unstructured environments on uneven terrain," says Georgia Institute of Technology professor Aaron Ames. Atlas also can place boxes on shelves, open doors, and get back up after being knocked to the ground. Although the robot has no immediate consumer use, experts see it as a step closer to machines that can prepare meals, care for senior citizens, and perform other assistive functions. "That robot is out of the lab, outdoors, with no wires," reports Carnegie Mellon University roboticist Chris Atkeson. "It slips, but doesn't fall down. We're turning a corner in making robots better. The great unknown is when are we going to build hands and robot skin that is useful?"


Big-Data Visualization Experts Make Using Scatter Plots Easier for Today's Researchers
NYU Polytechnic School of Engineering (02/18/16)

The big data era complicates the use of scatter plots because of the vast datasets involved, requiring significant streamlining if researchers are to glean useful information. Although algorithmic methods have been developed to identify plots with one or more patterns of interest to researchers, little attention has been devoted to validating their results against those obtained when human observers and analysts read large sets of plots and their patterns. A new study conducted by data-visualization researchers at New York University's Tandon School of Engineering found outcomes acquired via algorithmic methods do not always correlate well with human perceptual assessments when grouping scatter plots according to similarity. The team, led by professor Enrico Bertini, cited variables influencing such perceptual judgments, but argues further work is necessary to develop perceptually balanced measures for analyzing large sets of plots, in order to better guide researchers who must routinely deal with high-dimensional data. The researchers' paper will receive an honorable mention at the ACM CHI 2016 conference, which takes place May 7–12 in San Jose, CA.
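
To give a sense of what an "algorithmic method" for grouping scatter plots can look like, the toy measure below bins two plots onto a shared grid and compares their density histograms. It is purely illustrative, not the measure evaluated in the NYU study, and the bin count and histogram-intersection score are arbitrary choices.

    import numpy as np

    def plot_similarity(points_a, points_b, bins=16):
        """Toy scatter-plot similarity: compare binned 2D density histograms.

        Returns a value in [0, 1], where 1 means identical binned densities.
        This is NOT the perceptual measure from the NYU study; it is the kind
        of purely algorithmic baseline such studies compare against.
        """
        a = np.asarray(points_a, dtype=float)
        b = np.asarray(points_b, dtype=float)

        # Use a shared bounding box so both plots are binned on the same grid.
        lo = np.minimum(a.min(axis=0), b.min(axis=0))
        hi = np.maximum(a.max(axis=0), b.max(axis=0))
        bounds = [[lo[0], hi[0]], [lo[1], hi[1]]]

        ha, _, _ = np.histogram2d(a[:, 0], a[:, 1], bins=bins, range=bounds)
        hb, _, _ = np.histogram2d(b[:, 0], b[:, 1], bins=bins, range=bounds)

        # Histogram intersection after normalizing each histogram to sum to 1.
        ha, hb = ha / ha.sum(), hb / hb.sum()
        return float(np.minimum(ha, hb).sum())

    # Example: a linear trend vs. a structureless noisy cloud.
    gen = np.random.default_rng(0)
    x = gen.uniform(0, 1, 500)
    linear = np.column_stack([x, x + gen.normal(0, 0.05, 500)])
    cloud = gen.uniform(0, 1, (500, 2))
    print(plot_similarity(linear, cloud))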


Will the NSA Finally Build Its Superconducting Spy Computer?
IEEE Spectrum (02/24/16) David C. Brock

The U.S. National Security Agency's vision of a superconducting supercomputer that could consume vastly less power while crunching data and deciphering codes much faster than transistor-based systems may leap forward with the U.S. Intelligence Advanced Research Projects Activity's (IARPA) Cryogenic Computing Complexity (C3) program. The first stage of C3 involves fabricating and assessing, at the Massachusetts Institute of Technology's Lincoln Laboratory, 64-bit superconducting logic circuits that operate at a 10-GHz clock rate and memory systems that can store about 250 MB. C3 teams led by IARPA-chosen "performers" such as Northrop Grumman, Hypres, and Raytheon BBN Technologies are concentrating on different components. One technology, reciprocal quantum logic circuits, can consume 1/100,000 the power of the best equivalent complementary metal-oxide-semiconductor (CMOS) circuits. Also being developed is a cryogenic memory system that controls, reads, and writes to high-density, low-power magnetoresistive random-access memory. If the first phase of C3 meets with success, a two-year second phase will integrate these core components into an operating cryogenic computer prototype. C3 director Marc Manheimer thinks a true superconducting supercomputer could be realized in another five to 10 years if the prototype shows promise. He speculates such a system could run at 100 petaflops while consuming 200 kilowatts.
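
Manheimer's projection implies an energy efficiency far beyond conventional machines of the era; a quick back-of-the-envelope check (the comparison figure for 2016-era systems is an approximation, not from the article):

    # Back-of-the-envelope efficiency implied by Manheimer's projection.
    target_flops = 100e15      # 100 petaflops
    target_power_w = 200e3     # 200 kilowatts

    gflops_per_watt = (target_flops / 1e9) / target_power_w
    print(f"Projected efficiency: {gflops_per_watt:.0f} GFLOPS per watt")  # 500

    # For comparison (approximate figure, stated here as an assumption): the most
    # efficient supercomputers on the 2016 Green500 list delivered on the order of
    # 5-10 GFLOPS per watt, so the projection is roughly 50-100x beyond them.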


Google Unveils Neural Network With 'Superhuman' Ability to Determine the Location of Almost Any Image
Technology Review (02/24/16)

Google computer scientists have trained a deep-learning machine called PlaNet to determine the location of almost any photo using only the pixels it contains. The researchers initially divided the world into a grid consisting of more than 26,000 squares of varying size depending on the number of images taken in that location. They then created a database of geolocated images from the Internet and used the location data to determine the grid square in which each image was taken. The dataset consisted of 126 million images as well as their corresponding Exif location data, and the researchers used 91 million of these images to teach a neural network to determine the grid location using only the image itself. They validated the neural network using the remaining 34 million images in the dataset. Finally, the researchers measured the accuracy of the machine by feeding it 2.3 million geotagged images from Flickr to see if it could correctly determine their location. "PlaNet is able to localize 3.6 percent of the images at street-level accuracy and 10.1 percent at city-level accuracy," says Google researcher Tobias Weyand. In addition, the machine can determine the country of origin in 28.4 percent of the photos and the continent in 48 percent of them.
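
The key framing is that geolocation becomes a classification problem: the network predicts a distribution over grid cells rather than raw coordinates, with photo-dense regions receiving smaller cells. The sketch below shows a crude version of such an adaptive grid and the training labels it yields; it is a simplified stand-in (PlaNet uses Google's S2 cell hierarchy and a large convolutional network), not the paper's implementation.

    import numpy as np

    def build_adaptive_grid(photo_latlons, max_photos_per_cell=10000, min_size=0.05):
        """Recursively split lat/lon cells that contain too many photos.

        Crude stand-in for PlaNet's adaptive partition: denser regions end up with
        smaller cells, sparser regions with larger ones. Returns a list of cells
        as (lat_min, lat_max, lon_min, lon_max) tuples.
        """
        cells = []

        def split(lat0, lat1, lon0, lon1, pts):
            if len(pts) <= max_photos_per_cell or (lat1 - lat0) <= min_size:
                if len(pts) > 0:
                    cells.append((lat0, lat1, lon0, lon1))
                return
            lat_mid, lon_mid = (lat0 + lat1) / 2, (lon0 + lon1) / 2
            for la0, la1 in ((lat0, lat_mid), (lat_mid, lat1)):
                for lo0, lo1 in ((lon0, lon_mid), (lon_mid, lon1)):
                    mask = ((pts[:, 0] >= la0) & (pts[:, 0] < la1) &
                            (pts[:, 1] >= lo0) & (pts[:, 1] < lo1))
                    split(la0, la1, lo0, lo1, pts[mask])

        split(-90.0, 90.0, -180.0, 180.0, np.asarray(photo_latlons, dtype=float))
        return cells

    def cell_label(lat, lon, cells):
        """Training label for an image: index of the grid cell containing its location."""
        for i, (la0, la1, lo0, lo1) in enumerate(cells):
            if la0 <= lat < la1 and lo0 <= lon < lo1:
                return i
        return None

    # Toy example: most photos clustered near one city, a few spread worldwide.
    gen = np.random.default_rng(1)
    city = gen.normal([40.7, -74.0], 0.5, size=(5000, 2))
    world = np.column_stack([gen.uniform(-90, 90, 200), gen.uniform(-180, 180, 200)])
    cells = build_adaptive_grid(np.vstack([city, world]), max_photos_per_cell=500)
    print(len(cells), "cells; label for a New York photo:", cell_label(40.7, -74.0, cells))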


Mapping Out a New Role for Cognitive Computing in Science
HPC Wire (02/25/16) John Russell

Dealing with the flood of data generated by experimental instruments and sensors requires a community-wide agenda to devise cognitive computing tools that can conduct scientific activities currently performed by humans, according to a new white paper from the Computing Community Consortium (CCC). The authors argue such innovation will "leverage and extend the reach of human intellect, and partner with humans on a broad range of tasks in scientific discovery." Co-author and Pennsylvania State University researcher Vasant Honavar says the tools must be developed through close collaboration between disciplinary scientists and computer scientists. The authors contend transforming cognitive computing into "smart assistants" and "smart colleagues" for human researchers is a necessary paradigm shift, one that also will lead to new modes of scientific research. The CCC report identifies two key agenda components. The first is the development, analysis, integration, sharing, and modeling of algorithmic or information-processing abstractions of natural processes, combined with formal methods and tools for their analysis and simulation. The second is to spur innovations in cognitive tools that enhance and extend human intellect and partner with humans in all aspects of science. The CCC's value proposition cites the ability to assemble a research team that is optimally equipped to answer a given scientific question, as well as the ability to monitor scientific progress, the maturation of scientific disciplines, and scientific impact.


IBM Watson Machine Learns the Art of Writing a Good Headline
TechRepublic (02/22/16) Nick Heath

The latest milestone for IBM researchers in their development of the Watson platform is the creation of a "state-of-the-art" system for automatically abstracting documents. Using a deep-learning strategy, the research team tasked with improving Watson's question-answering algorithms generated short summaries of millions of English newswire reports. "In this work, we focus on the task of text summarization, which can also be naturally thought of as mapping an input sequence of words in a source document to a target sequence of words called summary," the researchers note. The deep learning-based sequence-to-sequence approach they employed is more commonly used for machine translation. The researchers point out that summarization differs significantly in that the summary is usually short and does not depend heavily on the length of the document, and it is acceptable to omit all but the core concepts in the source material. The researchers report that using an attentional encoder-decoder recurrent neural network to summarize text offers superior performance over a recent cutting-edge model used by Facebook to generate summaries. "They are surprisingly good and would easily pass muster for a human-generated summary in most cases," the team says. The ability of machines to summarize text in a way that captures its key meaning is important if computers are to attain a human-like understanding of language.
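
To make the "attentional encoder-decoder" idea concrete, the sketch below runs a tiny, untrained recurrent encoder-decoder with dot-product attention over randomly initialized weights. It only shows the data flow; the real system is trained on millions of article/summary pairs and is far larger, and none of the dimensions or details here reflect IBM's actual model.

    import numpy as np

    gen = np.random.default_rng(0)

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    # Toy dimensions; real models are far larger and are trained, not random.
    vocab, d = 50, 8                           # vocabulary size, hidden size
    E = gen.normal(0, 0.1, (vocab, d))         # shared word embeddings
    W_enc = gen.normal(0, 0.1, (2 * d, d))     # encoder recurrence
    W_dec = gen.normal(0, 0.1, (2 * d, d))     # decoder recurrence
    W_out = gen.normal(0, 0.1, (2 * d, vocab)) # output projection over the vocabulary

    def encode(source_ids):
        """Simple recurrent encoder, keeping one hidden state per source word."""
        h, states = np.zeros(d), []
        for i in source_ids:
            h = np.tanh(np.concatenate([h, E[i]]) @ W_enc)
            states.append(h)
        return np.stack(states)

    def decode_step(prev_word_id, h_dec, enc_states):
        """One decoder step: attend over encoder states, then predict the next word."""
        h_dec = np.tanh(np.concatenate([h_dec, E[prev_word_id]]) @ W_dec)
        attn = softmax(enc_states @ h_dec)     # attention weights over source words
        context = attn @ enc_states            # weighted summary of the source
        logits = np.concatenate([h_dec, context]) @ W_out
        return int(np.argmax(logits)), h_dec

    # Greedy decoding of a (meaningless, untrained) "summary" for a toy source sequence.
    enc_states = encode([3, 17, 8, 42, 5])
    word, h = 0, np.zeros(d)                   # word id 0 plays the role of a start token
    summary = []
    for _ in range(4):
        word, h = decode_step(word, h, enc_states)
        summary.append(word)
    print(summary)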


'Black Girls Code' Aims to Reboot Diversity in Tech
CNBC.com (02/22/16) Jodi Gralnick

Black Girls Code is working to bring more diversity to the tech industry. Founded by biotech industry veteran Kimberly Bryant in 2011, the nonprofit group brings together girls between 7 and 17 years of age for weekend coding workshops. Participants learn how to develop websites, create mobile apps, and build robots. Eight girls participated in the first coding workshop in San Francisco, but the nonprofit now counts chapters in Atlanta, Chicago, Detroit, Memphis, New York, Raleigh-Durham, San Francisco, Washington D.C., and Johannesburg, South Africa. Bryant decided to create Black Girls Code after her daughter expressed interest in following in her mother's footsteps; Bryant herself had felt culturally isolated as an electrical engineering student and found few people of color when she began her career. Bryant says black women make up less than 3 percent of the workforce at the biggest tech companies in the country, and notes more than 5,000 girls have participated in the coding workshops. "There is a lack of role models, a flat-out lack of exposure, a stereotype bias of what is a technical person," says Ruthe Farmer of the K-12 Alliance at the National Center for Women & Information Technology (NCWIT). "It's held by parents, it's held by the media, it's held by teachers."


Yahoo Releases CaffeOnSpark Deep Learning Software to Open Source Community
ZDNet (02/26/16) Charlie Osborne

Members of Yahoo!'s Big ML Team on Wednesday announced the release of the CaffeOnSpark deep-learning software to the open source community for further development. According to Yahoo!, CaffeOnSpark extends the Apache Spark open source cluster-computing framework, advancing the ability of data-classification algorithms to use dataframes to mine predictions and models from user-generated data. "We believe that deep learning should be conducted in the same cluster along with existing data-processing pipelines to support feature engineering and traditional (non-deep) machine learning," the team reports. "We created CaffeOnSpark to allow deep-learning training and testing to be embedded into Spark applications." Yahoo! says CaffeOnSpark also eliminates unwanted data movement and enables the direct execution of deep learning on big data clusters, improving the speed and efficiency of such tasks. The software was recently applied to Yahoo!'s Flickr image search service to enhance image-recognition capabilities via training with Hadoop clusters.
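
The pattern CaffeOnSpark implements, training embedded in the same cluster and dataframe pipeline that holds the data, can be illustrated with a generic PySpark sketch. This is not the CaffeOnSpark API; it is a toy data-parallel loop (a linear model standing in for a Caffe network) in which each partition computes local gradients that the driver averages, so the data never leaves the cluster.

    # Conceptual sketch only: NOT the CaffeOnSpark API, just the pattern it implements.
    import numpy as np
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("train-in-pipeline-sketch").getOrCreate()

    # Feature engineering and training share one dataframe-based pipeline.
    df = spark.createDataFrame(
        [(float(x), 2.0 * float(x) + 1.0) for x in np.linspace(0.0, 1.0, 1000)], ["x", "y"]
    )

    def partition_gradient(rows, w, b):
        """Squared-error gradient for a toy linear model, computed on one partition."""
        g_w = g_b = n = 0.0
        for r in rows:
            err = (w * r.x + b) - r.y
            g_w += err * r.x
            g_b += err
            n += 1
        yield (g_w, g_b, n)

    # Data-parallel loop: every partition computes local gradients on its shard of the
    # dataframe, and the driver averages them and updates the (tiny) model.
    w, b, lr = 0.0, 0.0, 1.0
    for _ in range(50):
        stats = df.rdd.mapPartitions(
            lambda rows, w=w, b=b: partition_gradient(rows, w, b)
        ).collect()
        n = sum(s[2] for s in stats)
        w -= lr * sum(s[0] for s in stats) / n
        b -= lr * sum(s[1] for s in stats) / n
    print(w, b)  # approaches the true slope 2.0 and intercept 1.0

    spark.stop()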


Google Wants Less Reliable Hard Disks
InformationWeek (02/25/16) Thomas Claburn

In a research paper published Tuesday at the USENIX File and Storage Technologies (FAST 2016) conference, Google researchers called on academia and industry to work together to adapt hard disk drives to current data center needs. Google wants designs that are more affordable, more error-prone, and better suited to collective operation. Google's Eric Brewer says conventional disks are designed for traditional servers instead of large-scale data centers supporting cloud computing. Brewer expects the rate of video uploading to grow 10-fold every five years, and in order to accommodate this massive amount of data, he argues hard disks should be optimized to function as collections of disks instead of discrete devices associated with a single server. "This shift has a range of interesting consequences, including the counter-intuitive goal of having disks that are actually a little more likely to lose data, as we already have to have that data somewhere else anyway," Brewer says. The Google researchers also say security must be improved as new use cases for storage are considered. They say security must be hardened to prevent unauthorized firmware changes and encryption must be adapted to collections of disks through the support of multiple keys, which would make it easier to secure data from different customers in shared disk space.
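
Brewer's "counter-intuitive goal" rests on the fact that durability already comes from redundancy across the collection rather than from any single disk. A rough illustration of that tradeoff follows; the failure rates are made-up assumptions, not Google's figures.

    # Rough illustration of why collection-level redundancy changes per-disk requirements.
    # The failure rates below are made-up assumptions, not Google's figures.

    def independent_loss_probability(per_disk_annual_failure, replicas):
        """Probability that all replicas of a piece of data fail in the same year,
        crudely assuming independent failures and no re-replication in between."""
        return per_disk_annual_failure ** replicas

    for afr in (0.02, 0.05):          # 2% vs. a "less reliable" 5% annual failure rate
        for replicas in (1, 3):
            p = independent_loss_probability(afr, replicas)
            print(f"AFR={afr:.0%}, replicas={replicas}: annual loss probability ~ {p:.2e}")

    # With three replicas, even the deliberately less reliable disk keeps the chance of
    # losing every copy tiny: durability comes from the collection, so individual disks
    # can trade some reliability for cost, capacity, or latency.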


Chameleon Adapts to Secure the Cloud
Science Node (02/24/16) Makeda Easter

A minority-led group of researchers from universities in Arkansas, North Carolina, and Louisiana is looking to the U.S. National Science Foundation-funded Chameleon cloud platform as a tool for developing cyberattack prevention. The team, led by University of Arkansas at Pine Bluff (UAPB) graduate student Leonardo Vieira, is developing and testing approaches on cloud computing ecosystems and simulating and visualizing multi-stage intrusion attacks to better understand how hackers can compromise systems and steal data. The Chameleon architecture operates through 12 standard cloud units, and offers researchers a total of 13,056 cores, 66 tebibytes of random-access memory, and 1.5 petabytes of configurable storage. Data from Chameleon is spread between the Texas Advanced Computing Center and the University of Chicago via 100-Gbps Internet2 connections so users can study the effects of a distributed cloud. The team models attacks using a core server at a UAPB CyberSecurity Research lab and another at North Carolina A&T State University's lab, and concurrently runs intrusion detection and prevention systems to gain an understanding of how large-scale cyberattacks can be spotted when an intruder is attempting to hide in everyday network traffic. The system also is being used to visualize logs, which document a network's incoming and outgoing traffic.


A New Algorithm From MIT Could Protect Ships From 'Rogue Waves' at Sea
IDG News Service (02/26/16) Katherine Noyes

Massachusetts Institute of Technology (MIT) researchers have developed a predictive tool that could give ships and their crews a two- to three-minute advance warning of approaching rogue waves that swell up seemingly out of nowhere and can be eight times higher than the surrounding sea. The tool is based on the observation that waves sometimes cluster in a group, rolling through the ocean together. Certain wave groups end up "focusing" or exchanging energy in a way that eventually leads to a rogue wave. "It's not just bad luck," says MIT professor Themis Sapsis. "It's the dynamics that create this phenomenon." The researchers combined ocean-wave data from measurements taken by ocean buoys with a nonlinear analysis of the underlying water wave equations. They quantified the range of wave possibilities for a given body of water, and developed a simpler and faster way to predict which wave groups will evolve into rogue waves. The resulting tool is based on an algorithm that sifts through the data from surrounding waves. The algorithm factors in a wave group's length and height, and computes the probability the group will turn into a rogue wave within the next few minutes. "It's precise in the sense that it's telling us very accurately the location and the time that this rare event will happen," Sapsis says.
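
The sketch below is only meant to show the shape of such a pipeline, detecting wave groups in a buoy's surface-elevation record and reporting each group's duration and peak height; it is not Sapsis's algorithm, and the envelope and threshold choices are arbitrary assumptions.

    import numpy as np

    def detect_wave_groups(elevation, dt, envelope_window=64, group_threshold=1.5):
        """Very rough wave-group detection from a surface-elevation time series.

        Not the MIT algorithm -- just an illustration of its inputs and outputs:
        find stretches where the wave envelope stays well above the typical wave
        scale, and report each group's duration and peak height.
        """
        sigma = elevation.std()
        # Crude envelope: moving RMS of the elevation signal.
        kernel = np.ones(envelope_window) / envelope_window
        envelope = np.sqrt(np.convolve(elevation**2, kernel, mode="same"))

        groups, start = [], None
        for i, e in enumerate(envelope):
            if e > group_threshold * sigma and start is None:
                start = i
            elif e <= group_threshold * sigma and start is not None:
                groups.append({"duration_s": (i - start) * dt,
                               "peak_m": elevation[start:i].max()})
                start = None
        return groups

    # Toy record: background sea plus one energetic, localized wave group.
    gen = np.random.default_rng(2)
    t = np.arange(0, 600, 0.5)                       # 10 minutes sampled at 2 Hz
    sea = gen.normal(0, 1.0, t.size)
    burst = 4.0 * np.exp(-((t - 300) / 20.0) ** 2) * np.sin(2 * np.pi * t / 10.0)
    for g in detect_wave_groups(sea + burst, dt=0.5):
        print(g)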


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.

