Association for Computing Machinery
Welcome to the May 4, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.


Artificial Intelligence Experts Are in High Demand
Wall Street Journal (05/01/15) Amir Mizroch

Demand for artificial intelligence (AI) know-how has exploded in recent years, and major technology firms are turning to the ranks of academia to find that expertise. This demand is being driven by the falling cost of computing power and the need for methods of analyzing the mountains of data being generated every day. Amazon, for example, is advertising for more than 50 AI positions in the U.S. and Europe and is searching for doctorate-holders to fill them. The quest for talent often means poaching academia's best and brightest. The University of Washington (UW), for example, recently lost seven AI-related professors to Google. "Virtually every professor at the UW computer science department has been called many times to work at these companies, and frankly it's a very compelling pitch," says Oren Etzioni, who is on leave from UW's computer science faculty while he runs the Allen Institute for Artificial Intelligence. Many tech firms also are pouring funds into major centers for academic research into AI, endowing professorships and funding research. However, some academicians have complained about the new status quo, citing tech giants' unwillingness to share their mountains of data. "The high value of this work encourages companies like Google to keep their progress more secret," notes Tom Mitchell, head of the computer science department at Carnegie Mellon University.

Vint Cerf on ACM, Internet Issues, and Quantum and Machine Computing
IT World Canada (05/01/15) Stephan Ibaraki

In a wide-ranging interview, Vint Cerf, co-creator of the Internet and vice president at Google, discusses the modern challenges of the Internet, the technologies of the future, and the Association for Computing Machinery (ACM). Asked what he sees as the main challenges and controversies surrounding the Internet today, Cerf, co-recipient of the 2004 ACM A.M. Turing Award, identified the need to ensure users' safety, security, and privacy. He also reiterated his frequent warnings about a "digital Dark Age" that could result as software continues to advance and the means of interacting with older software and data falls away. Finally, he pointed to the Internet of Things, particularly the need to ensure the security of all Internet-connected devices. Cerf also commented on a number of speculative topics, saying he thinks the singularity envisioned by Ray Kurzweil is "a stretch," but that he sees a great deal of promise in current research into quantum computing and quantum entanglement. He also comments on the need for professionalism and credentialing in software development and discusses his time as president of ACM. Cerf says ACM's main challenges today are helping to establish 21st-century business models, staying relevant to computer science practitioners, and promoting computer science as a discipline.

"Fingerprinting" Chips to Fight Counterfeiting
MIT News (04/30/15) Rob Matheson

The manufacturing process of silicon chips causes microscopic variations in the chips that are unpredictable, permanent, and essentially impossible to recreate. Massachusetts Institute of Technology (MIT) researchers are using these variations, called physical unclonable functions (PUFs), to "fingerprint" silicon chips used in consumer-product tags to combat product counterfeiting. The fingerprint consists of minute speed differences in a chip's response to electrical signals caused by the PUFs. The MIT researchers assigned manufactured chips sets of 128-bit numbers, which are stored in a database in the cloud. The chips can be scanned by a mobile device that will search the database to determine if the tag is authentic. PUFs are created when wires vary in thickness, and the chemical vapor deposition process creates microscopic bumps. The bumps cause electrons to flow with more or less resistance through different paths of the chip, varying the processing speed. The PUF technology works by "racing" signals through two different paths across the chips. The output is a 1 if one path is faster, and a 0 if the other is faster. Repeating the process with different input signals for each race creates the 128-bit number.
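The race-based bit generation described above can be sketched in a toy Python simulation. This is purely illustrative, assuming an arbiter-style delay model: the class, seeds, and challenge scheme are invented for this sketch, and a real PUF derives its bits from physical manufacturing variation, not a software random-number generator.

```python
import random

class SimulatedPUF:
    """Toy model of an arbiter-style PUF (illustrative only).

    Each chip instance gets fixed, random per-stage delay differences,
    standing in for the permanent manufacturing variations described above.
    """

    def __init__(self, seed, n_stages=64):
        rng = random.Random(seed)  # the seed stands in for one physical chip
        # Delay contribution of each stage for the (straight, crossed) wiring.
        self.deltas = [(rng.gauss(0, 1), rng.gauss(0, 1))
                       for _ in range(n_stages)]

    def respond(self, challenge):
        """Race two signals through the stages; 1 if one path wins, else 0."""
        diff = 0.0
        for bit, (straight, crossed) in zip(challenge, self.deltas):
            diff += straight if bit == 0 else crossed
        return 1 if diff > 0 else 0

    def fingerprint(self, n_bits=128):
        """Repeat the race with different inputs to build a 128-bit number."""
        bits = []
        for i in range(n_bits):
            rng = random.Random(i)  # challenge i is public and chip-independent
            challenge = [rng.randint(0, 1) for _ in range(len(self.deltas))]
            bits.append(self.respond(challenge))
        return bits

chip_a = SimulatedPUF(seed=1)
chip_b = SimulatedPUF(seed=2)
fp_a = chip_a.fingerprint()  # stable per chip, differs between chips
fp_b = chip_b.fingerprint()
```

Because the delay differences are fixed per chip, the same chip always reproduces the same 128-bit number, while two different chips almost certainly produce different ones, which is what makes the number usable as a database key for authenticity checks.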

SDSC's 'Comet' Supercomputer Enters Early Operations Phase
UCSD News (CA) (04/30/15) Jan Zverina

Comet, a new petascale supercomputer designed to expand access to scientific computing resources among traditional and non-traditional research domains, has entered an early operations phase at the University of California, San Diego's (UCSD) San Diego Supercomputer Center (SDSC). Comet is a Dell-integrated cluster that features 1,944 compute nodes, each with two 12-core Intel processors. Comet also features 37 graphics-processing unit (GPU) nodes, each with four NVIDIA GPUs and two processors, and will soon have four large-memory nodes, each with four processors and 1.5 terabytes of memory. The new cluster has an overall peak performance of more than two petaflops. SDSC director Michael Norman, the project's principal investigator, says the new cluster is "specifically configured to meet the needs of researchers in domains that have not traditionally relied on supercomputers to solve their problems." SDSC deputy director Richard Moore says the domains include disciplines such as genomics, the social sciences, and economics. Comet is funded by a $21.6-million U.S. National Science Foundation (NSF) grant and, along with SDSC's Gordon supercomputer, is part of NSF's eXtreme Science and Engineering Discovery Environment (XSEDE). Researchers will be able to request allocations on Comet via XSEDE.

Ears, Grips, and Fists Take On Mobile Phone User ID
Phys.Org (04/26/15) Nancy Owano

Yahoo Labs' Bodyprint is a biometric authentication system that could be used to replace PIN codes for smartphones, via its ability to recognize users' biometric features--such as ears, palm grips, and fists--when they are pressed against an off-the-shelf capacitive touchscreen. The researchers deployed Bodyprint on an LG Nexus 5 phone equipped with a Synaptics ClearPad 3350 touch sensor, and engaged 12 participants to test the system for each of five poses. The users held the Nexus 5 phone and conducted 12 trial repetitions, placing the phone on a table between trials. Bodyprint identified users with 99.5-percent precision; the false rejection rate was 26.8 percent across all body parts, but as low as 7.8 percent for ear-only authentication. "In the case that future touchscreens support higher input resolutions, up to a point where they may detect the fine structure of fingerprints, Bodyprint will readily incorporate the higher level of detail of sensor data, which will not only extend our approach to further body parts, but likely reduce false rejection rates at the same high levels of authentication precision," the researchers note.

Rehab Robot HARMONY Introduced by UT Austin Engineers
UT News (04/30/15) Ashley Lindstrom

University of Texas at Austin (UT Austin) researchers have developed HARMONY, a first-of-its-kind, two-armed, robotic rehabilitation exoskeleton the researchers say could provide a new method of high-quality, data-driven therapy to patients suffering from spinal and neurological injuries. The researchers say HARMONY's software gives therapists and doctors the ability to deliver precise therapy while tracking and analyzing data. "Not only does the exoskeleton adjust to patient size, it can also be programmed to be gentle or firm based on the individual's therapy needs," says UT Austin researcher Ashish Deshpande. The exoskeleton connects to patients at three places on each side of the upper body and features 14 axes for a wide range of natural motion. The robot is equipped with a series of sensors that collect data at 2,000 times per second. The data are then fed back into the robot's program for personalized robotic interaction. HARMONY could reduce a patient's recovery time because it can adapt to the specific, corrective ways that humans learn, according to the researchers. Now that the exoskeleton is complete, the researchers will continue to develop the software and prepare for an upcoming trial period this summer.

NYU & NVIDIA Team Up for Multi-GPU Cluster-Led Deep Learning Research
Product Design & Development (04/30/15) Jacob Meister

New York University's (NYU) Center for Data Science (CDS) seeks to improve deep-learning applications and algorithms for large-scale graphics processing unit (GPU)-accelerated systems via a partnership with NVIDIA. Distributing deep learning across many GPUs can boost the size of the models researchers test, and the freedom they have to test them. "Multi-GPU machines are a necessary tool for future progress in [artificial intelligence] and deep learning," says CDS founder Yann LeCun. "Potential applications include self-driving cars, medical image-analysis systems, real-time speech-to-speech translation, and systems that can truly understand natural language and hold dialogues with people." NYU's ScaLeNet computing system will enable researchers to perform more complex tasks, using an eight-node Cirrascale cluster with 64 NVIDIA Tesla K80 dual-GPU accelerators. ScaLeNet will be used by faculty members, research scientists, postdoctoral fellows, and graduate students for numerous projects and educational initiatives. "CDS has research projects that apply machine and deep learning to the physical, life, and social sciences," LeCun notes. "This includes Bayesian models of cosmology and high-energy physics, computational models of the visual and motor cortex, deep-learning systems for medical and biological image analysis, as well as machine-learning models of social behavior and economics."

Deep Learning Machine Solves the Cocktail Party Problem
Technology Review (04/29/15)

University of Surrey researchers have separated human voices from the background in a wide range of songs using some of the latest advances associated with deep neural networks. The researchers say they have solved the cocktail party problem, which is the ability to focus on a specific human voice while filtering out other voices or background noise, a task that has long challenged computer engineers. The new method involves a database of 63 songs that are available as a set of individual tracks that each contains a different instrument or voice, as well as the fully mixed version of the song. The researchers divided each track into 20-second segments and created a spectrogram for each showing how the frequencies in the sound vary over time, resulting in a unique fingerprint that identifies the instrument or voice. The researchers then trained a deep convolutional neural network to pick the voice's unique spectrogram from the other spectrograms that were present. The researchers used 50 songs, generating more than 20,000 spectrograms, to train the network while keeping the remaining 13 to test it on. "These results demonstrate that a convolutional deep neural network approach is capable of generalizing voice separation, learned in a musical context, to new musical contexts," the researchers say.
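The core idea above, computing spectrograms of the separate tracks and the mixture, then keeping the time-frequency cells where the voice dominates, can be sketched without any neural network. This toy example uses two synthetic sine tones as stand-ins for a voice and an instrument track, and an "ideal binary mask" of the kind such separation networks are typically trained to approximate; the sample rate, frequencies, and window sizes are all invented for illustration.

```python
import numpy as np

def spectrogram(signal, win=256, hop=128):
    """Magnitude spectrogram via a sliding windowed FFT (minimal STFT)."""
    frames = [signal[i:i + win] * np.hanning(win)
              for i in range(0, len(signal) - win, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T  # (freq, time)

# Synthetic stand-ins for one 'voice' track and one 'instrument' track.
sr = 8000
t = np.arange(sr * 2) / sr                     # two seconds of audio
voice = np.sin(2 * np.pi * 220 * t)            # hypothetical voice: 220 Hz tone
backing = 0.5 * np.sin(2 * np.pi * 1000 * t)   # hypothetical backing: 1 kHz tone
mix = voice + backing                          # the fully mixed 'song'

S_voice, S_back, S_mix = map(spectrogram, (voice, backing, mix))

# Ideal binary mask: keep a time-frequency cell of the mixture whenever the
# isolated voice track is stronger there than the accompaniment.
mask = (S_voice > S_back).astype(float)
S_est = mask * S_mix                           # estimated voice spectrogram
```

In the real system the mask (or an equivalent separation function) is what the convolutional network learns from the 50 training songs, so that it can be applied to mixtures for which no isolated voice track exists.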

Inspired by Humans, a Robot Takes a Walk in the Grass
Oregon State University News (04/29/15) David Stauth

Oregon State University (OSU) researchers have developed ATRIAS, a bipedal robot they say resembles human locomotion more closely than any previous machine. The human-sized robot has six electric motors powered by a lithium polymer battery. The researchers have improved its energy efficiency over other robots using an elastic leg design and by taking advantage of the energy retention that is natural in animal movement. "This will ultimately allow a much wider range of robotic uses and potential than something which requires larger amounts of energy," says OSU professor Jonathan Hurst. The researchers say ATRIAS already is three times more energy-efficient than other human-sized bipedal robots. "This is part of a continuous march toward running robots that are going to be useful and practical in the real world," says OSU researcher Christian Hubicki. The robot is based on research into how animals move so effectively. Animals can combine sensory inputs from nerves, vision, muscles, and tendons to enable a level of locomotion that scientists are still trying to replicate. The research could be used to develop prosthetic limbs, or machines that can move around in places that are too dangerous for people.

Enron Becomes Unlikely Data Source for Computer Science Researchers
NCSU News (04/29/15) Matt Shipman

North Carolina State University (NCSU) researchers have turned to unlikely sources for assembling huge collections of spreadsheets that can be used to study how people use spreadsheet software, in an attempt to make it more useful. "We study spreadsheets because spreadsheet software is used to track everything from corporate earnings to employee benefits, and even simple errors can cost organizations millions of dollars," says NCSU professor Emerson Murphy-Hill. The researchers are making two new collections, with more than 250,000 spreadsheets combined, available to the public. "Our focus is on how users interact with spreadsheets, and these spreadsheets actually tell us a lot about how users represent and manipulate data," Murphy-Hill says. One of the collections consists of 15,000 spreadsheets taken from internal Enron emails, which were made public after the emails were subpoenaed by prosecutors. The other collection, called Fuse, was created using an NCSU-developed technique to identify and extract spreadsheets from an online archive of more than 5 billion Web pages. "Fuse used cloud infrastructure to search through billions of Web pages to identify and extract the spreadsheets we write about in this paper," says NCSU researcher Titus Barik.

Cellular Sensing Platform Supports Next-Generation Bioscience and Biotech Applications
Georgia Tech News Center (04/30/15) John Toon

Georgia Institute of Technology (Georgia Tech) researchers have developed a cellular sensing platform that could expand the use of semiconductor technology in future bioscience and biotech applications. The platform is fabricated in a standard low-cost complementary metal-oxide semiconductor (CMOS) process, and the researchers say each sensor pixel can concurrently monitor multiple different physiological parameters of the same cell and tissue samples to achieve holistic and real-time physiological characterizations. "Fully understanding the physiological behaviors of living cells or tissues is a prerequisite to further advance the frontiers of bioscience and biotechnology," says Georgia Tech professor Hua Wang. CMOS sensor array chips can provide built-in computation circuits for in-situ signal processing and sensor fusion on multi-modality sensor data. In addition, the chips eliminate the need for external electronic equipment and enable their use in general biology labs without dedicated electronic or optical setups. The researchers note thousands of these sensor array chips can operate in parallel to achieve high-throughput scanning of chemicals or drug candidates, which represents a major improvement over sequential scanning with limited fluorescent scanners. "Georgia Tech's research combines semiconductor integrated circuits and living cells to create an electronics-biology hybrid platform, which has tremendous societal and technological implications that can potentially lead to better and cheaper healthcare solutions," says the Semiconductor Research Corporation's Victor Zhirnov.

The Billion-Dollar Race to Reinvent the Computer Chip
Scientific American (05/15) Vol. 312, No. 5, P. 58 John Pavlus

Chipmakers are spending billions of dollars to research and develop fundamentally new computing architectures and processor designs as the ability to build more and more transistors into a chip inevitably approaches its physical limit. Hewlett-Packard, for example, has constructed a prototype computer, known as the Machine, that incorporates memristors, which enable the integration of storage and random-access memory functionality. This combination promises to significantly boost efficiency and performance, and mitigate the von Neumann bottleneck. Meanwhile, IBM is exploring post-silicon computing, using graphene and carbon nanotubes as possible materials. Graphene transistors have been shown to switch far faster than silicon devices, and at reasonable power density; however, they cannot reliably encode digital logic. Graphene sheets rolled into carbon nanotubes have silicon-like semiconducting properties, but their delicacy can be a disadvantage. Another IBM project is the TrueNorth chip, a device with more than 5 billion transistors configured to model 1 million neurons and 256 million synaptic links, so it can emulate cortical columns in the mammalian brain with no bus bottlenecking the connection. Some researchers believe the general-purpose model of computation will be succeeded by a specialized approach that can give cars, network routers, and other formerly "dumb" objects and systems the semiautonomous flexibility and context-specific proficiency of domestic animals.

Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.

To submit feedback about ACM TechNews, contact:
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe