Association for Computing Machinery
Welcome to the October 7, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.

HEADLINES AT A GLANCE


Can We Open the Black Box of AI?
Nature (10/05/16) Davide Castelvecchi

Knowing precisely how deep-learning artificial intelligence (AI) functions is key to trusting the insights it yields, and scientists are attempting to gain that understanding as neural-network algorithms become more complex. When given a training set of data accompanied by the right answers, a neural network can progressively enhance its performance by modifying the strength of each connection until its top-level outputs are also right, eventually producing a network that can successfully classify new data not included in its training set. However, this process encodes information diffusely across the network. "Even though we make these networks, we are no closer to understanding them than we are a human brain," says University of Wyoming researcher Jeff Clune. His team learned two years ago that neural networks can be easily fooled by images that to people resemble random noise or abstract geometric imagery, using methods that maximize the response of any neuron rather than only the top-level ones. The danger this presents has prompted researchers such as Zoubin Ghahramani at the U.K.'s University of Cambridge to create non-deep-learning AIs that emphasize functional transparency. However, computer scientists say such research should complement, not replace, deep learning.
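
The training loop described above can be illustrated with a minimal sketch in Python; the toy data, two-layer network, and learning rate below are invented for the example and stand in for the far larger networks the article discusses.

    # Minimal sketch of supervised training: adjust connection
    # strengths until the top-level outputs match the right answers.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))              # training data
    y = (X.sum(axis=1) > 0).astype(float)      # the "right answers"
    W1 = rng.normal(scale=0.5, size=(4, 8))    # input-to-hidden strengths
    W2 = rng.normal(scale=0.5, size=(8, 1))    # hidden-to-output strengths

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(2000):
        h = np.tanh(X @ W1)                    # hidden activations
        p = sigmoid(h @ W2).ravel()            # top-level outputs
        d_out = (p - y)[:, None] / len(y)      # output error
        d_hid = (d_out @ W2.T) * (1 - h ** 2)  # error pushed down a layer
        W2 -= 0.5 * h.T @ d_out                # modify connection strengths
        W1 -= 0.5 * X.T @ d_hid

    print("training accuracy:", ((p > 0.5) == y).mean())

After training, the weights encode what was learned, but diffusely: no single connection explains any classification, which is the opacity the article describes.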


'Virtual Physiotherapist' Helps Paralyzed Patients Exercise Using Computer Games
Imperial College London (10/06/16) Francesca Davenport; Martin Sayers; Thomas Angus

Researchers at the U.K.'s Imperial College London recently demonstrated that the gripAble device, which enables paralyzed stroke patients to play computer games, boosted the proportion of patients able to direct movements on a PC tablet screen by 50 percent compared with standard methods. The device features a lightweight electronic handgrip that wirelessly interacts with the tablet to enable users to play arm-training games. Users squeeze, turn, or lift the handgrip, which vibrates in response to their gameplay. The researchers note the device can detect the minuscule flicker movements of severely paralyzed patients and channel them into controlling the game. They say gripAble enabled more than half of severely disabled patients to engage with arm-training software, whereas none of the patients could employ conventional control methods such as swiping and tapping on tablets and smartphones. "The use of mobile gaming could provide a cost-effective and easily available means to improve the arm movements of stroke patients, but in order to be effective patients of all levels of disability should be able to access it," says Imperial College researcher Paul Bentley. He notes the low-cost gripAble device will be developed further "so we can help more patients who are currently suffering from the effects of poor arm and upper body mobility."


Your Next Nurse Could Be a Robot
ScienceDaily (10/05/16)

Researchers from Italy's Polytechnic University of Milan led an international team that trained a robot to imitate natural human actions. The work demonstrates humans and robots can effectively coordinate their actions during high-stakes events such as surgeries. Over time, the research could lead to improvements in safety during medical procedures because robots do not tire and can complete an endless series of precise movements. Robotic co-workers "will just allow us to decrease workload and achieve better performances in several tasks, from medicine to industrial applications," says Polytechnic University of Milan's Elena De Momi. As part of the experiment, the researchers photographed a human performing numerous reaching motions, similar to handing instruments to a surgeon. The photographs were fed into the neural network of the robotic arm. A human operator then guided the robotic arm in mimicking the reaching motions initially performed by the human subject. Finally, several humans observed the robotic arm making numerous motions, and determined about 70 percent of the movements were "biologically inspired."
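
The article gives few implementation details, but one common way to train a robot from human demonstrations is behavioral cloning: fit a model that maps observed arm states to the demonstrated motion. The sketch below is a hedged illustration of that idea; the data shapes, placeholder demonstrations, and choice of regressor are assumptions, not the Milan team's method.

    # Behavioral-cloning sketch: learn a mapping from arm state to
    # the human's demonstrated next pose.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Real demonstration frames would supply these arrays; random
    # placeholders keep the sketch runnable.
    states = np.random.rand(500, 6)    # e.g., six joint angles per frame
    actions = np.random.rand(500, 6)   # the demonstrated next pose

    policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    policy.fit(states, actions)        # learn human-like reaching

    def imitate(robot_state):
        """Return a human-like next pose for the current arm state."""
        return policy.predict(robot_state.reshape(1, -1))[0]

    print(imitate(np.random.rand(6)))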


Basic Common Sense Is Key to Building More Intelligent Machines
New Scientist (10/05/16) Sally Adee

A key problem with artificial intelligence (AI) is computers' inability to obtain meaningful knowledge beyond the problem they are set, even though they can learn without human guidance. Imperial College London researchers in the U.K. are working to circumvent this problem via symbolic AI combined with contemporary machine learning. Symbolic AI involves describing or labeling everything for the AI, and the labor this originally entailed has been greatly mitigated with neural networks, says Imperial College professor Murray Shanahan. By linking symbolic AI with neural networks' autonomous learning, he hopes to effect a knowledge transfer and enable fast learning that demands less data about the world. Working with Imperial College's Marta Garnelo, Shanahan has built a hybrid architecture that combines neural networks' ability to interpret the world independently with basic assumptions about how humans understand the world, giving it rudimentary common sense. Shanahan says these common-sense rules enable the system to handle unexpected situations that neural networks cannot. Testing the hybrid AI on a simple board game showed it could transfer its acquired knowledge across games. Researchers say the architecture could potentially enable machines to render their representations as reusable symbols, a step toward artificial general intelligence.
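
A hedged sketch of the hybrid idea: a neural network would map raw input to discrete symbols, and hand-written common-sense rules then reason over those symbols, so the rules transfer to a new game unchanged. The grid world, symbols, and rules below are invented for illustration and are not Shanahan and Garnelo's architecture.

    # Symbolic stage of a neural-symbolic hybrid: rules over symbols.
    # A trained network (not shown) would map raw pixels to these
    # symbols; the rules need no retraining when the game changes.
    SYMBOLS = ["empty", "wall", "goal"]

    def neighbors(grid, pos):
        """Yield (move_name, symbol) for the four adjacent cells."""
        r, c = pos
        moves = {"up": (-1, 0), "down": (1, 0),
                 "left": (0, -1), "right": (0, 1)}
        for move, (dr, dc) in moves.items():
            if 0 <= r + dr < len(grid) and 0 <= c + dc < len(grid[0]):
                yield move, grid[r + dr][c + dc]

    def choose_move(grid, pos):
        for move, cell in neighbors(grid, pos):
            if cell == "goal":
                return move               # rule: always take an adjacent goal
        safe = [m for m, c in neighbors(grid, pos) if c != "wall"]
        return safe[0] if safe else None  # rule: never walk into a wall

    grid = [["empty", "wall"], ["empty", "goal"]]
    print(choose_move(grid, (1, 0)))      # -> "right"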


The Google Lab That's Building a Legion of Diverse Coders
Wired (10/06/16) Davey Alba

Google has launched Code Next, a community computer laboratory in Oakland, CA, as part of an initiative to introduce black and Latino students to coding and technology careers. The project focuses on cultivating next-generation computer scientists from underrepresented communities. Blacks and Latinos comprise about 7 percent and 8 percent of the technology sector, despite being 12 percent and 16 percent of the U.S. population, respectively. About half of all black and Latino students lack access to computer science education, and few students who do not receive computer training early on will pursue careers in the field. As the U.S. Bureau of Labor Statistics predicts the industry will have 1 million more computer science jobs than people to fill them by 2020, companies such as Google have a vested interest in encouraging potential talent. The Oakland lab's pilot program began last January with 70 ninth-graders visiting the lab twice a week through June. The curriculum is open-ended and includes coding, game development, and three-dimensional modeling. Google joins other organizations working in the region to diversify the tech industry, including Hack the Hood, The Hidden Genius Project, and Black Girls Code. Meanwhile, a second Code Next lab is slated to open in Harlem, NY, next year.


Monitoring Parkinson's Symptoms at Home
MIT News (10/05/16) Helen Knight

Researchers at the Massachusetts Institute of Technology (MIT) Research Laboratory of Electronics and elsewhere have developed a method to monitor the progression of Parkinson's disease as subjects interact with a computer keyboard to perform ordinary tasks. "This approach...does not add any additional burden or take time away from daily activities," says MIT researcher Luca Giancardo. Current methods to evaluate the severity of Parkinson's symptoms are usually limited to a clinical setting, so the investigators explored using keystroke dynamics to track the disease's motor effects on patients at home. At three Spanish clinics, 42 patients with early-stage Parkinson's and 43 healthy subjects were asked to type a text of their choosing for 10 to 15 minutes on a computer equipped with software that measured the timing of each press and release on the keyboard. Giancardo says an analysis of the data showed significant variation in the timing of each press and release in patients with early-stage Parkinson's, while that of the control group was more uniform. Studying this variation helped identify a signature to detect the disease in the cohort while preserving patient privacy by not monitoring the typed words. The researchers say it might be possible to use the same approach to produce algorithms that identify symptoms of other motor-based or neurological disorders.
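
The core measurement lends itself to a short sketch: from timestamped key-down/key-up events, compute how variable the key hold times are, since the study associated higher timing variability with early-stage Parkinson's. The event format and the specific variability score below are assumptions for illustration, not the MIT group's algorithm.

    # Keystroke-dynamics sketch: hold-time variability from raw
    # press/release events. Only timings are analyzed; the typed
    # text itself is never stored, mirroring the privacy point above.
    import numpy as np

    def hold_times(events):
        """events: (key, action, seconds) tuples, action 'down' or 'up'."""
        pending, holds = {}, []
        for key, action, t in events:
            if action == "down":
                pending[key] = t
            elif action == "up" and key in pending:
                holds.append(t - pending.pop(key))
        return np.array(holds)

    def variability_score(events):
        """Spread of hold times, normalized by the typist's median."""
        h = hold_times(events)
        return np.std(h) / np.median(h)

    events = [("a", "down", 0.00), ("a", "up", 0.09),
              ("b", "down", 0.30), ("b", "up", 0.45),
              ("c", "down", 0.70), ("c", "up", 0.78)]
    print(variability_score(events))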


Australia Says Its First Self-Driving Car Can 'Navigate Without Driver Input'
TechRepublic (10/05/16) Hope Reese

An autonomous automobile developed by German manufacturer Bosch in collaboration with the Australian government will be tested on Australia's public roads this week. The cars, which employ video cameras, radar, LiDAR, and global-positioning systems to sense their environment, reportedly are designed to navigate roads with or without driver input. University of Southern California professor Jeffrey Miller suggests the six LiDAR sensors in the vehicle indicate Bosch depends more on LiDAR than on cameras for situational awareness. Carnegie Mellon University's John Dolan says the sensors are "fairly standard for research-level autonomous cars," although he notes the stereo cameras are less common. Miller and other experts agree it is difficult to assess how advanced the cars are without specifics on how many test miles they have driven, and it also remains uncertain whether they are superior to other models currently on the road. University of South Carolina School of Law professor Bryant Walker Smith observes that the claims made for the technology's superiority are less distinctive than the caveats, especially considering a person is still needed for possible supervision or intervention. Smith also stresses the need to use proper terminology when discussing the vehicle and to keep the facts separate from the hype.


'Atomic Sandwiches' Could Make Computers 100X Greener
University of Michigan News (10/04/16) Gabe Cherry

University of Michigan (U-M) researchers say they have engineered a new material that could enable computing devices to pack more processing power while consuming a fraction of the energy of today's electronics. Known as a magnetoelectric multiferroic material, it combines electrical and magnetic properties at room temperature and relies on a phenomenon called "planar rumpling." U-M researchers started with thin, atomically precise films of hexagonal lutetium iron oxide, and then used a technique called molecular-beam epitaxy to add one extra monolayer of iron oxide to every 10 atomic repeats of the single-monolayer pattern. The new material sandwiches together individual layers of atoms, producing a thin film with magnetic polarity that can be flipped from positive to negative or vice versa with small pulses of electricity. The researchers say device-makers could use this property in the future to store digital 0s and 1s. "Before this work, there was only one other room-temperature multiferroic whose magnetic properties could be controlled by electricity," says U-M professor John Heron. He says a viable multiferroic device is likely several years away.


'Security Fatigue' Can Cause Computer Users to Feel Hopeless and Act Recklessly, New Study Suggests
NIST News (10/04/16) Jennifer Huergo; Evelyn A. Brown

Most computer users are so weary of following myriad procedures to keep their systems secure that they tend to engage in risky computing behavior on the job and in their everyday lives, according to a study from the U.S. National Institute of Standards and Technology (NIST). The study "is critical because so many people bank online, and since healthcare and other valuable information is being moved to the Internet," says cognitive psychologist and study co-author Brian Stanton. "If people can't use security, they are not going to, and then we and our nation won't be secure." Computer scientist Mary Theofanos notes the data culled from interviews with subjects pointed to an "overwhelming feeling of weariness." She says having to remember 25 or 30 online passwords at work is now typical, and how this affects people is a factor few researchers consider. NIST found people suffering from security fatigue are more likely to feel they are not in control; this can lead to decision avoidance, impulsive behavior, and lax compliance with security rules. The study suggests fatigue could be mitigated by limiting the number of security decisions users must make, simplifying their ability to choose the correct security action, and designing for consistent decision-making whenever possible.


Google's Next Big Step for AI: Getting Robots to Teach Each Other New Skills
ZDNet (10/04/16) Liam Tung

Researchers at Google are experimenting with how robots can share their experiences and teach each other basic skills. Google Research, DeepMind, Google's U.K.-based artificial intelligence (AI) lab, and Google X are exploring three approaches to accelerate skills acquisition in robots. Multiple robots were first tasked with opening a door using reinforcement learning; each robot was connected to a central server that used the robots' individual actions and outcomes to build a better, shared neural network. Twenty minutes into training, the robotic arms struggled to grasp the handle but eventually opened the door. Within three hours, the robots were able to easily reach for the handle and pull open the door. In a second experiment, the robots developed an understanding of cause and effect by pushing objects around a table and observing how the objects moved in response to specific actions; the machines were then able to share their collective past experiences to predict the outcome of a given action. Finally, robots were tested on their ability to learn from human guidance. Each robot was moved by researchers through the steps to open a door, and these actions were encoded into a shared neural network, enabling the robots to improve at the task within hours.
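
The first experiment's setup can be caricatured in a few lines: every robot appends its (state, action, reward) experience to one pooled store, and a single shared model is updated from everyone's data, so a trial by any robot improves all of them. The five-state toy "door" task and the bandit-style value update below are invented stand-ins for Google's actual reinforcement-learning system.

    # Shared-experience sketch: many workers, one pooled model.
    import random

    N_STATES, ACTIONS = 5, ["push", "pull"]
    shared_q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def robot_trial():
        """One robot's attempt; reward only when the 'door' opens."""
        experience = []
        for s in range(N_STATES - 1):
            # Pick the currently best-looking action, plus noise to explore.
            a = max(ACTIONS, key=lambda act: shared_q[(s, act)] + random.random())
            reward = 1.0 if (a == "pull" and s == N_STATES - 2) else 0.0
            experience.append((s, a, reward))
        return experience

    for trial in range(200):            # stands in for many robots' trials
        for s, a, r in robot_trial():   # pooled experience updates one model
            shared_q[(s, a)] += 0.1 * (r - shared_q[(s, a)])

    print(max(ACTIONS, key=lambda act: shared_q[(N_STATES - 2, act)]))  # "pull"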


Fujitsu Memory Tech Speeds Up Deep-Learning AI
IEEE Spectrum (10/04/16) Jeremy Hsu

Japan-based Fujitsu has developed an approach to accelerate parallel computing driven by deep-learning neural network algorithms, enlarging the networks that can fit on a single chip. The method trimmed the amount of internal graphics-processing unit (GPU) memory needed for neural network calculations by 40 percent via an efficiency shortcut, says Yasumoto Tomita with Fujitsu Laboratories' Next-Generation Computer Systems Project. Tomita says Fujitsu determined how to reuse certain segments of the GPU's memory by calculating intermediate error data from weighted data and producing weighted error data from intermediate data, independently but simultaneously. Tomita estimates the 40-percent memory usage reduction lets a larger neural network with "roughly two times more layers or neurons" run on one GPU. He notes this method avoids some of the performance bottlenecks that occur when neural networks diffused across numerous GPUs must share data during training. In addition, Fujitsu is developing software to expedite data exchange across multiple GPUs, which could be merged with the memory-efficiency technology to advance the company's deep-learning capabilities. "By combining the memory-efficiency technology...with GPU parallelization technology, fast learning on large-scale networks becomes possible, without model parallelization," Tomita says.
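
The reported trick can be shown in miniature. In a layer's backward pass, the gradient for the weights and the gradient passed down to the previous layer depend on different stored tensors, so they can be computed independently and concurrently, after which the buffer holding the layer input can be recycled. The sketch below illustrates that independence; it is not Fujitsu's implementation.

    # Backward pass for one dense layer: two independent products.
    import numpy as np

    def backward_layer(x, w, grad_out):
        """x: saved layer input; w: weights; grad_out: error from above."""
        grad_w = x.T @ grad_out   # "weighted error data" (needs only x)
        grad_x = grad_out @ w.T   # "intermediate error data" (needs only w)
        # Neither product depends on the other's result, so on a GPU they
        # can run simultaneously, and x's buffer can be reused as soon as
        # grad_w is formed, cutting peak memory per layer.
        return grad_w, grad_x

    x = np.random.rand(32, 64)          # saved input activations
    w = np.random.rand(64, 16)          # layer weights
    grad_out = np.random.rand(32, 16)   # error arriving from above
    grad_w, grad_x = backward_layer(x, w, grad_out)
    print(grad_w.shape, grad_x.shape)   # (64, 16) (32, 64)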


Turning to the Brain to Reboot Computing
Sandia National Laboratories (10/03/16) Mollie Rappe

As computer chips approach their physical performance limits, scientists are seeking ways to surpass this barrier. Sandia National Laboratories researchers will present three papers at this month's IEEE International Conference on Rebooting Computing in San Diego, CA, spotlighting non-traditional neural computing applications. The idea is to extend neural algorithms with rigor and predictability, demonstrating they may have a role to play in high-performance scientific computing. Noting that most machine-learning algorithms have a learning phase followed by a separate testing and operation phase, one paper proposes continual learning and taps game theory to bring precision to the decision of when an algorithm should learn. The second paper argues for computing using dynamical systems. "The idea behind using dynamical systems for computation is to build a machine such that its dynamics--which has to do with the structure of the machine or the structure of the math--will lead it to the answer based on feeding it the question," says study author Fred Rothganger. The third paper highlights three algorithms that use the careful configuration of spiking neuron-like nodes to execute precise computations, which co-author William Severa says "can push the envelope of what [one] can expect a neural network to do."
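
For readers unfamiliar with the building block the third paper configures, here is a minimal leaky integrate-and-fire neuron, a standard model of a spiking neuron-like node; the threshold and leak constants are arbitrary choices for the example, not values from the Sandia papers.

    # Leaky integrate-and-fire node: accumulate leaking input and
    # emit a spike each time the membrane potential crosses threshold.
    def lif_neuron(input_currents, threshold=1.0, leak=0.9):
        v, spikes = 0.0, []
        for current in input_currents:
            v = leak * v + current      # integrate input with leak
            if v >= threshold:
                spikes.append(1)        # fire
                v = 0.0                 # reset potential after a spike
            else:
                spikes.append(0)
        return spikes

    # A steady sub-threshold input makes the node fire periodically,
    # showing how spike timing can be configured to encode computation.
    print(lif_neuron([0.4] * 10))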


Gone Phishin': CyLab Exposes How Our Ability to Spot Phishing Emails Is Far From Perfect
Carnegie Mellon University (10/03/16) Daniel Tkacik

A new study from Carnegie Mellon University's CyLab Security and Privacy Institute examines the extent to which people can spot phishing emails, an ability CyLab researcher Casey Canfield describes as "poor enough to jeopardize computer systems." Canfield and colleagues showed a set of participants information about phishing before asking them to assess 38 separate emails, half of which were phishing attempts. On average, participants accurately identified slightly more than 50 percent of the phishing emails presented to them, although approximately 75 percent of the phishing links were left unclicked. Canfield attributes some users' ability to correctly identify most phishing emails to a bias toward treating every email as an attack, "So they didn't necessarily have a high ability to tell the difference between phishing and legitimate emails," she says. The study authors suggest interventions such as providing users with feedback on their abilities and stressing the consequences of phishing attacks. Canfield says one effective education method companies commonly use involves sending out fake phishing emails and teaching users about such scams if they open the email. She notes this "embedded training" method was originally developed by the CyLab Usable Privacy and Security Lab.
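
Canfield's distinction between flagging phish and telling phish from legitimate mail is the classic sensitivity-versus-bias split, which signal detection theory quantifies as d'. The worked example below is a hedged illustration with invented counts (19 phishing and 19 legitimate emails, matching the study's 38-email design), not figures from the paper.

    # d' separates detection ability from a suspicious-of-everything bias.
    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        hit_rate = hits / (hits + misses)    # phish correctly flagged
        fa_rate = false_alarms / (false_alarms + correct_rejections)
        z = NormalDist().inv_cdf             # convert rates to z-scores
        return z(hit_rate) - z(fa_rate)

    # Flags nearly everything: high hit rate, but little real ability.
    print(round(d_prime(18, 1, 17, 2), 2))   # ~0.37
    print(round(d_prime(14, 5, 4, 15), 2))   # ~1.44: genuine discrimination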


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
