Association for Computing Machinery
Welcome to the June 29, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Please note: In observance of the Independence Day holiday, TechNews will not be published on Friday, July 1 and Monday, July 4. Publication will resume Wednesday, July 6.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, as well as for iPhones and iPads.

HEADLINES AT A GLANCE


Obama Administration Releases $150M in Grants for TechHire
eWeek (06/28/16) Chris Preimesberger

The Obama administration on Monday issued $150 million in U.S. Department of Labor grants for 39 technology-related partnerships in 25 states and Washington, D.C. Awardees will use the money to launch innovative training and placement models that foster tech talent as a way to retain and generate jobs in local economies. President Barack Obama emphasizes the grants will enable more communities to broaden their own local tech sectors, while the White House cites a large and expanding unmet demand for tech workers. This demand can be addressed via advancements in training and hiring using programs such as "coding bootcamps." Last year Obama established TechHire, a multi-sector initiative and call to action for cities, states, and rural regions to work with employers so coding bootcamps and other new tech training opportunities can be designed and implemented within a few months. Common practices in the TechHire partnerships include expanding access to accelerated learning programs that provide a rapid path to jobs, and stressing inclusion by leveraging the high demand for tech jobs and new training and hiring strategies to broaden access to tech careers for all citizens. The partnerships also seek to use data and innovative hiring practices to expand openness to non-traditional hiring, working with employers to build robust data on where demand is greatest.


Google Fellow Talks Neural Nets, Deep Learning
EE Times (06/28/16) Jessica Lipsky

In his keynote address at the 2016 ACM SIGMOD/PODS Conference in San Francisco, Google senior fellow Jeff Dean on Tuesday discussed the pressing need for machine-learning (ML) systems that can derive meaning from vast datasets and computation resources. "We can store tons of interesting data but what we really want is understanding about that data," he says. Dean says ML's progress at Google includes developing accelerator chips for artificial intelligence, known as tensor processing units, following the open source TensorFlow software it released in 2015. "This has led to really incredible growth in use of the technology across hundreds of teams at Google," he notes. Dean cites the Google speech-recognition team's use of neural networks to cut word errors by 30 percent, and its application of the networks to replace the acoustic model of the speech-recognition pipeline, which yielded "the biggest single improvement in two decades." Dean also says the abundant parallelism in the speech-recognition models can be harnessed to tackle other challenges, such as translating signs into a different language in real time via pixel identification. Obstacles ML and neural networks still face include the need for models to learn unsupervised, engage in multitasking and transfer learning, and undergo reinforcement learning. Dean says using "high-level descriptions of machine-learning computations and [mapping] these efficiently onto [a] wide variety of different hardware" is the next challenge from a systems perspective.
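
For readers who want a concrete sense of the programming model behind TensorFlow, the sketch below (in the graph-and-session style of the 1.x-era API) shows a computation written as a high-level description of tensor operations that the runtime can then place on CPUs, GPUs, or TPUs. The layer sizes and data are illustrative assumptions, not anything from Dean's talk.

    # Minimal TensorFlow 1.x-style sketch: the model is a dataflow graph of tensor ops,
    # and the session maps that graph onto whatever hardware is available.
    import tensorflow as tf

    x = tf.placeholder(tf.float32, shape=[None, 784])   # input batch (e.g., flattened images)
    w = tf.Variable(tf.random_normal([784, 10]))        # weights; sizes are illustrative
    b = tf.Variable(tf.zeros([10]))                     # biases
    logits = tf.matmul(x, w) + b                        # one fully connected layer

    with tf.Session() as sess:                          # the session places ops on devices
        sess.run(tf.global_variables_initializer())
        # sess.run(logits, feed_dict={x: batch}) would evaluate the graph on real data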


Music of the (Data) Spheres
MIT News (06/28/16) Margaret Evans

A new sonification platform called Quantizer streams music based on real-time data drawn from the Large Hadron Collider (LHC), the world's largest particle accelerator. Massachusetts Institute of Technology (MIT) researchers were granted access to a live feed of particle collisions at the LHC, which were then routed to Quantizer's sonification engine to convert the data into sounds, notes, and rhythms. The platform is programmed to select which range of energy readings to sonify from different detector subsystems. Quantizer scales the data to audible levels and assigns output to musical scales or other sonic parameters not tied to musical conventions. Users can access the stream on the platform's website and listen to three "genres" of the data feed--cosmic, sitar samba, and house. MIT researcher Juliana Cherston says the project will depend upon collaborations with composers to improve audio-mapping features and other parameters. "The sonification engine allows composers to select the most interesting data to sonify, and provides both default tools for producing more structured audio as well as the option for composers to build their own synthesizers compatible with the processed data streams," Cherston notes. Previous musical interpretations of physics data exist, but the Quantizer team believes theirs is the first to run in real time with support for different compositions.
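
As a rough illustration of the sonification step described above, the Python sketch below scales a stream of readings into an audible range and quantizes each value onto a musical scale. The energy range, scale, and MIDI mapping are illustrative assumptions, not Quantizer's actual parameters.

    # Hypothetical mapping from detector readings to MIDI notes on a major scale.
    C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of a major scale

    def energy_to_midi(energy, e_min=0.0, e_max=100.0, base_note=48, octaves=3):
        """Map an energy reading (arbitrary units) to a MIDI note on the scale."""
        frac = min(max((energy - e_min) / (e_max - e_min), 0.0), 1.0)
        steps = int(frac * (len(C_MAJOR) * octaves - 1))
        octave, degree = divmod(steps, len(C_MAJOR))
        return base_note + 12 * octave + C_MAJOR[degree]

    readings = [3.2, 47.9, 88.1, 12.5]            # stand-in for a live collision feed
    print([energy_to_midi(e) for e in readings])  # one note per reading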


Bill on Research Policy Seeks to Engage Women, Minorities
FedScoop (06/28/16) Samantha Ehlinger

The U.S. Senate Committee on Commerce, Science, and Transportation on Wednesday will mark up a science policy bill aiming to address the lack of diversity in the science, technology, engineering, and mathematics (STEM) workforce. The committee reports a substitute amendment to the bill presented on Tuesday features technical corrections and a 4-percent boost in authorization for Fiscal Year 2018 for the U.S. National Science Foundation and the National Institute of Standards and Technology (NIST). Association for Women in Science CEO Janet Bandows Koster says the measure's focus on retaining women in STEM professions is new and sorely needed. "This is the first time that we've really seen anything that addresses retention," she notes. Bandows Koster also says that although men and women graduate from some STEM programs in comparable numbers, the women are not entering the STEM workforce. The Senate bill instructs NIST and the National Academy of Sciences to jointly set up a postdoctoral fellowship program, with a focus on including more underrepresented minorities. The bill also provides for a program to award grants for activities that would increase the STEM participation of women and underrepresented groups, such as online workshops, mentoring programs, internships, outreach programs to elementary and middle schools, and faculty recruitment. The House passed its version of the bill last year.


Beyond Video Games: New Artificial Intelligence Beats Tactical Experts in Combat Simulation
UC Magazine (06/27/16) M.B. Reilly

The ALPHA artificial intelligence (AI) created by a University of Cincinnati doctoral graduate is a milestone in the use of genetic-fuzzy systems, with specific application to unmanned combat aerial vehicles (UCAVs) in simulated air-combat missions. ALPHA's programming involved deconstructing the challenges of aerial fighter deployment into sub-decisions consisting of high-level tactics, firing, evasion, and defensiveness. The language-based fuzzy-logic algorithms cover a multitude of variables and ease the transfer of expert knowledge to the AI; ALPHA's programming also can be improved from one generation to the next. The earliest version of ALPHA consistently beat the other AI opponents the U.S. Air Force Research Laboratory uses for research purposes. Subsequent matches pitting a human opponent against a more mature iteration further demonstrated the AI's dominance, as retired U.S. Air Force Colonel Gene Lee could not defeat ALPHA and was consistently bested by the program during protracted engagements in a flight simulator. ALPHA also has repeatedly beaten other experts, even when the UCAVs it controls are deliberately impaired. ALPHA is so fast it can consider and coordinate the optimal tactical plan and precise responses within a dynamic setting more than 250 times faster than its human adversaries can blink. Experts say this breakthrough raises the probability that AI-controlled UCAVs will serve as wingmen for manned aircraft on combat missions.


Software for fMRI Yields Erroneous Results
Linkoping University (06/28/16) Monica Westman Svenselius

Researchers at Linkoping University (LiU) and the University of Warwick have shown that common statistical methods used to analyze brain activity through images taken with magnetic resonance imaging scanners cannot be trusted. The researchers tested the analysis methods by using them on known, reliable data, and found the methods showed false activity in the brain in 60 percent of the cases. The statistical methods are built on several assumptions, and if just one of these assumptions is incorrect, the results also will be incorrect. The researchers have proposed another method in which few assumptions are made and 1,000 times more calculations are done, which produces a significantly more certain result. In addition, modern graphics cards enable the processing time to be reduced so the method is more practical. "If you've spent months gathering data at great cost, you should be more interested in letting the analysis take time so that it's correct," says LiU researcher Anders Eklund. He notes although some studies may need to be redone, the most important aspect of this research is that it shows researchers need to think about what method they use in the future.
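
The "few assumptions, many more calculations" approach described above is in the spirit of a nonparametric permutation test, sketched below in Python. The data and number of permutations are illustrative; the actual fMRI analyses operate on full brain images and are far more involved.

    # Toy permutation test: estimate a p-value by repeatedly relabeling the data,
    # avoiding the distributional assumptions of parametric methods.
    import numpy as np

    rng = np.random.default_rng(0)
    group_a = rng.normal(0.0, 1.0, 20)   # stand-in for per-subject activity estimates
    group_b = rng.normal(0.3, 1.0, 20)

    observed = group_a.mean() - group_b.mean()
    pooled = np.concatenate([group_a, group_b])

    n_perm, count = 10_000, 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                                  # relabel subjects at random
        diff = pooled[:20].mean() - pooled[20:].mean()
        if abs(diff) >= abs(observed):
            count += 1

    print(f"permutation p-value: {count / n_perm:.4f}")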


Harmonized Security Across Devices
CORDIS News (06/27/16)

The European Union-funded SECURity at the network EDge (SECURED) project is focused on the design of a solution to offload the execution of security applications onto a programmable device at the network's edge. Project coordinator and Polytechnic University of Turin professor Antonio Lioy says the initiative aims to delegate security, wholly or partially, to a trusted and secure network node developed to run security applications selected by the user and configured according to a user-specified protection policy. "When the user connects his mobile device to the [Network Edge Device (NED)], a proof of the identity and integrity status of the NED is provided so that the user can trust the NED to work on his behalf, and a dedicated virtual execution domain is created for the user," Lioy says. "The NED will then download the selected security controls from user-specified repositories and will configure them according to his/her protection profile retrieved from a policy repository." Lioy says the solution is compatible with existing networks because NEDs can be inserted within the network, and users can be asked to connect via a virtual private network to their chosen NED. Lioy notes the SECURED model is suitable for the Internet of Things (IoT) because a NED can be used to connect IoT nodes to the external network.


Computer Model Demonstrates How Human Spleen Filters Blood
CMU News (06/27/16) Jocelyn Duffy

A study led by researchers at Carnegie Mellon University (CMU) and the Massachusetts Institute of Technology provides new insights into how the spleen filters blood and determines the geometric characteristics of red blood cells. A healthy spleen's filtration process depends upon red blood cells being able to fit through the organ's interendothelial slits, which are no larger than 1.2 micrometers tall and 4 micrometers wide, about 5 percent of the thickness of a human hair. The process cannot be observed in the body because of the slits' size, so researchers developed a computer simulation model enabling them to determine the range of red blood cells that could pass through the slits. The study found only healthy cells were the right size to be filtered by the spleen. The researchers say their results could provide fundamental insights into diagnostics and drug treatments for malaria, anemia, and other diseases that affect the shape of red blood cells. "The computational and analytical models from this work, along with a variety of experimental observations, point to a more detailed picture of how the physiology of human spleen likely influences several key geometrical characteristics of red blood cells," says CMU president Subra Suresh.


New, Better Way to Build Circuits for World's First Useful Quantum Computers
Penn State News (06/25/16) Barbara K. Kennedy

Researchers led by Pennsylvania State University professor David S. Weiss performed a specific single quantum operation on individual atoms in a P-S-U pattern on three separate planes stacked within a cube-shaped configuration. The team then utilized crossed laser light beams to selectively sweep away the atoms that were not targeted for that operation, and produced photos of the results by successively focusing on each of the cube's planes. The pictures are the sum of 20 implementations of this process, and they display bright spots where the atoms are in focus and fuzzy spots if they are out of focus in an adjacent plane. The images demonstrate both the success of the method and the comparatively small number of targeting errors. Weiss says the technique highlights the potential for using atoms as the building blocks of circuits in future quantum computers. "Our result is one of the many important developments that still are needed on the way to achieving quantum computers that will be useful for doing computations that are impossible to do today, with applications in cryptography for electronic data security and other computing-intensive fields," Weiss says. He notes a future research goal is to coax the quantum bits to "have entangled quantum wave functions where the state of one particle is implicitly correlated with the state of the other particles around it."


Computer Sketches Set to Make Online Shopping Much Easier
Queen Mary, University of London (06/24/16) Neha Okhandiar

Researchers at Queen Mary University of London (QMUL) say they have developed software that recognizes sketches and could help consumers shop more efficiently. They say the proliferation of touchscreens enables more users to sketch accurate depictions of objects, making sketch-based search a technique that could be more effective than text-based or photo searches. "What's great about our system is that the user doesn't have to be an artist for the sketch to be accurate, yet is able to retrieve images in a more precise manner than text," says QMUL researcher Yi-Zhe Song. The program, called fine-grained sketch-based image retrieval (SBIR), overcomes problems with using text to describe visual objects in words, especially when dealing with precise details, and with using photos, which can make the search far too narrow. "For the first time users are presented with a system that can perform fine-grained retrieval of images--this leap forward unlocks the commercial adaptation of image search for companies," says QMUL's Timothy Hospedales. SBIR is designed to mimic the human brain's processing through arrays of simulated neurons. The system was trained to match sketches to photos based on about 30,000 sketch-photo comparisons, learning how to interpret the details of photos and how people try to depict them in hand drawings.
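
The retrieval step of such a system can be pictured with the short Python sketch below: given an embedding of the user's sketch and precomputed embeddings of catalogue photos (both produced by trained networks that are not reproduced here), photos are ranked by similarity. The vectors are random stand-ins, not output of the QMUL model.

    # Hypothetical ranking of photos by cosine similarity to a sketch embedding.
    import numpy as np

    def rank_photos(sketch_vec, photo_vecs):
        """Return photo indices sorted from best to worst match."""
        s = sketch_vec / np.linalg.norm(sketch_vec)
        p = photo_vecs / np.linalg.norm(photo_vecs, axis=1, keepdims=True)
        return np.argsort(-(p @ s))

    sketch = np.random.rand(8)               # illustrative 8-dimensional embeddings
    photos = np.random.rand(100, 8)
    print(rank_photos(sketch, photos)[:5])   # indices of the five closest photos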


Securely Resetting Passwords
Ruhr-University Bochum (Germany) (06/23/16) Raffaela Romer

Researchers at Ruhr-University Bochum (RUB) and the University of California, Berkeley say they have developed a method for securely resetting passwords. Normally when a password is lost, the user will be sent a new one via email, or will have to provide a correct answer to a security question. However, emails are usually unencrypted and can be intercepted, and security questions can be guessed or answered with some research. The new method uses Mooney images, a term referring to black-and-white images that were edited using a special filter. At first glance, it is impossible to tell what a Mooney image is showing; only after viewing the original picture will a user be able to recognize the motif. Instead of coming up with a security question and answer to prepare for a lost password, the user is presented 10 Mooney images and the respective original pictures during the priming phase; if the password is forgotten, the user will be shown 20 Mooney images and will have to state which ones are recognized. "The true account holder will recognize the 10 Mooney images for which he had been primed, but he won't be able to identify the other 10," says RUB professor Markus Durmuth. Meanwhile, a hacker would not recognize the correct set of Mooney images, and therefore would not have access to a new password.
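
The reset check the researchers describe can be sketched as follows: the account holder should recognize most of the primed Mooney images and few of the unprimed ones. The scoring thresholds below are illustrative assumptions, not the published parameters of the RUB/Berkeley scheme.

    # Hypothetical verification logic for the Mooney-image reset protocol.
    def verify_reset(primed_ids, shown_ids, recognized_ids, min_hits=8, max_false_alarms=2):
        """Grant a reset only if primed images are recognized and unprimed ones are not."""
        primed = set(primed_ids)
        recognized = set(recognized_ids) & set(shown_ids)
        hits = len(recognized & primed)            # primed images correctly recognized
        false_alarms = len(recognized - primed)    # unprimed images claimed as known
        return hits >= min_hits and false_alarms <= max_false_alarms

    primed = list(range(10))    # images the account holder was primed on
    shown = list(range(20))     # the 10 primed images mixed with 10 new ones
    print(verify_reset(primed, shown, recognized_ids=[0, 1, 2, 3, 4, 5, 6, 7, 8, 11]))  # True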


Computer Vision System Studies Word Use to Recognize Objects It Has Never Seen Before
EurekAlert (06/23/16) Jennifer Liu

Computers can learn to recognize objects they have never seen before, based in part on studying vocabulary, according to Disney Research. Just as people can get an idea of what something looks like by reading a description of it, so too can a computer that already has been taught to recognize certain objects. For example, a computer that has learned to recognize an apple can analyze word use to get hints about the existence of other fruits, such as pears and peaches, and about how they might differ from apples, according to Disney's Leonid Sigal. In addition, the knowledge that other fruits exist can be helpful in teaching the computer about important characteristics of apples themselves. By reducing the need to train vision systems with thousands of labeled images, the approach could help minimize the time necessary for computers to learn new objects and expand the number of object categories computers can recognize. "Vocabulary-informed learning promises to break that bottleneck and make computer vision more useful and reliable," says Disney Research's Jessica Hodgins. As part of the study, the computer learned its vocabulary by being trained on all of the articles in Wikipedia and on UMBC WebBase, a dataset with 3 billion English words. The computer used those articles to develop more than 300,000 object categories and determine statistical associations among them.
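
One way to picture vocabulary-informed recognition is with word embeddings: if a vision model can map image features into a word-vector space learned from large text corpora, an image from a category it was never trained on can still be labeled by its nearest word vector. The toy vectors below are stand-ins for both the text-derived embeddings and the image-to-embedding mapping.

    # Hypothetical zero-shot labeling by nearest word vector (toy 3-d "embeddings").
    import numpy as np

    word_vecs = {
        "apple": np.array([0.9, 0.1, 0.0]),
        "pear":  np.array([0.8, 0.3, 0.1]),
        "car":   np.array([0.0, 0.1, 0.9]),
    }

    def nearest_label(image_embedding, vocab):
        """Label an image by the closest word vector, even for untrained categories."""
        def cos(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return max(vocab, key=lambda w: cos(image_embedding, vocab[w]))

    # A vision model places an unfamiliar fruit near, but not exactly on, "apple".
    print(nearest_label(np.array([0.75, 0.35, 0.1]), word_vecs))   # "pear"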


Genetic Algorithms Can Improve Quantum Simulations
Phys.org (06/23/16) Lisa Zyga

Researchers at the University of the Basque Country have applied genetic algorithms to digital quantum simulations, demonstrating they can reduce quantum errors and outperform existing optimization techniques. A major challenge of quantum simulation is information loss due to decoherence, which occurs when a quantum system interacts with its environment. To protect quantum simulation against this loss, researchers use quantum error-correction protocols, which store information in entangled states of multiple quantum bits (qubits) using quantum gates. Optimization techniques are then used to analyze the gate arrangement and find the architecture that minimizes the error. The researchers showed genetic algorithms can identify gate designs for digital quantum simulations that outperform designs identified by standard optimization techniques, resulting in the lowest levels of digital quantum errors ever achieved. In addition, genetic algorithms can reduce the digital errors created by the reduced number of steps used for approximating the algorithms, as well as another type of error that arises from the imperfections in the construction of each of the gates. The genetic algorithms perform so well because of their adaptability. "Their adaptability allows for a flexible and clever technique to solve different problems in different quantum technologies and platforms," says University of the Basque Country professor Enrique Solano.
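
The general shape of such a genetic algorithm is sketched below: candidate gate sequences are recombined and mutated, and the candidates with the lowest error survive to the next generation. The gate set and "error model" are toy stand-ins, not the group's actual quantum simulation.

    # Toy genetic algorithm over gate sequences; the error function is illustrative only.
    import random

    GATES = ["X", "Y", "Z", "H", "CZ"]

    def error(seq):
        # Pretend adjacent identical gates partially cancel and so contribute less error.
        return sum(0.02 if a == b else 0.1 for a, b in zip(seq, seq[1:]))

    def mutate(seq):
        i = random.randrange(len(seq))
        return seq[:i] + [random.choice(GATES)] + seq[i + 1:]

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    population = [[random.choice(GATES) for _ in range(12)] for _ in range(40)]
    for _ in range(200):
        population.sort(key=error)                  # fittest (lowest error) first
        survivors = population[:10]
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(30)]
        population = survivors + children

    print(population[0], error(population[0]))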


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe