Association for Computing Machinery
Welcome to the December 16, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE


Deep-Learning Machine Listens to Bach, Then Writes Its Own Music in the Same Style
Technology Review (12/14/16)

Gaetan Hadjeres and Francois Pachet at the Sony Computer Science Laboratories in France have created a deep-learning neural network that has learned to produce chorale cantatas in the manner of Johann Sebastian Bach. To train the DeepBach network, the researchers compiled 352 Bach chorales and transposed them to other keys within a predefined vocal range, yielding a set of 2,503 chorales. The network was trained on 80 percent of the dataset to identify Bach harmonies, while the other 20 percent was used for validation. DeepBach then generated its own harmonies in the same style. Hadjeres and Pachet tested the algorithm by providing a melody, which the machine used to produce harmonies for the alto, tenor, and bass voices. The researchers say DeepBach can fool human experts into thinking the pieces are the actual work of Bach about 50 percent of the time. "We consider this to be a good score knowing the complexity of Bach's compositions," Hadjeres and Pachet note. They say the method can be applied not only to Bach chorales, but also to a broad spectrum of polyphonic chorale music.
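As a rough illustration of the data preparation described above, the following Python sketch (not DeepBach's actual code; the vocal-range bounds and the toy corpus are assumptions for illustration) transposes each chorale to every key that stays within a fixed range, then makes the 80/20 train/validation split.

# Illustrative sketch (not DeepBach's code): augment a chorale corpus by
# transposing each piece to nearby keys, then split 80/20 for train/validation.
import random

VOCAL_RANGE = (36, 81)  # assumed MIDI bounds, bass to soprano; illustrative only

def transpositions(chorale, max_shift=6):
    """Yield copies of a chorale (a list of MIDI pitches) shifted by every
    semitone offset that keeps all notes inside the predefined vocal range."""
    for shift in range(-max_shift, max_shift + 1):
        shifted = [p + shift for p in chorale]
        if all(VOCAL_RANGE[0] <= p <= VOCAL_RANGE[1] for p in shifted):
            yield shifted

corpus = [[60, 64, 67, 72], [57, 62, 65, 69]]  # toy stand-ins for the 352 chorales
augmented = [t for c in corpus for t in transpositions(c)]

random.seed(0)
random.shuffle(augmented)
cut = int(0.8 * len(augmented))
train, validation = augmented[:cut], augmented[cut:]  # 80% train, 20% validation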


California to Regulate Energy Use of Desktop Computers and Monitors
The New York Times (12/15/16) Tatiana Schlossberg

The California Energy Commission on Wednesday agreed to pass new regulations for energy efficiency in desktop computers and monitors; the rules are the U.S.'s first attempt to regulate the energy use of desktop computers. The standards would cut carbon dioxide emissions by about 730,000 tons, less than 1 percent of total statewide emissions, and save consumers about $370 million a year on electric bills. The measures include improving the devices' power supplies so they save energy when operational. The energy commission forecasts the standards will save about as much electricity as 350,000 households use in a year. State officials say the new rules mark an important step in California's effort to combat climate change, with Gov. Jerry Brown vowing to slash the state's emissions 40 percent below 1990 levels by 2030. California is the most populous state, which means the new standards could become a model for the entire U.S., and perhaps the global market. About 6 percent of desktops and 14 percent of monitors currently comply with the standards, and the commission expects all computers and monitors now in use in California to eventually be replaced. National adoption would lead to average savings of about $3 billion on consumers' electricity bills and would eliminate about 14 million metric tons of carbon pollution yearly, according to the Natural Resources Defense Council.


Fast Track Control Accelerates Switching of Quantum Bits
McGill Newsroom (12/15/16) Chris Chipello

Researchers at the University of Chicago, Argonne National Laboratory, McGill University, and Germany's University of Konstanz have demonstrated a new architecture for faster control of a quantum bit, using a single electron in a diamond chip. Their research could potentially lead to quantum devices that run at high speeds while incurring fewer errors. "To accurately change the state of a quantum particle at high speeds, you need to design the right track to impart the right forces," says McGill professor Aashish Clerk. He and several McGill doctoral fellows theorized that faster quantum dynamics could be enabled by absorbing the detrimental accelerations felt by the quantum particle. The researchers constructed the quantum fast track by firing synchronized laser pulses at single electrons corralled in defects within their diamond chips. "We demonstrated that these new protocols could flip the state of a quantum bit, from 'off' to 'on,' 300 percent faster than conventional methods," says University of Chicago professor David Awschalom. "Shaving every nanosecond from the operation time is essential to reduce the impact of quantum decoherence." University of Konstanz professor Guido Burkard says the method shows promise outside the laboratory because it remains functional even when the system is not perfectly isolated.
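To make the timing argument concrete, here is a toy Python simulation (an illustrative stand-in, not the fast-track protocol itself): under a constant resonant drive of strength Omega, a qubit flips from |0> to |1> in time pi/Omega, so any protocol that tolerates stronger or better-shaped control fields shaves time off every flip.

# Toy illustration (not the paper's protocol): a resonant pi-pulse flips a qubit
# in time t = pi/Omega, so faster flips require stronger or smarter driving.
import numpy as np
from scipy.linalg import expm

SIGMA_X = np.array([[0, 1], [1, 0]], dtype=complex)

def flip_probability(omega, t):
    """Probability of |1> after evolving |0> under H = (omega/2) * sigma_x
    for time t (hbar = 1, rotating frame)."""
    U = expm(-1j * (omega / 2) * SIGMA_X * t)
    psi = U @ np.array([1, 0], dtype=complex)  # start in |0>
    return abs(psi[1]) ** 2

for omega in (1.0, 2.0, 4.0):      # drive strengths in arbitrary units
    t_flip = np.pi / omega         # pi-pulse duration shrinks as omega grows
    print(omega, t_flip, flip_probability(omega, t_flip))  # P(|1>) = 1 each time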


Making Big Data Manageable
MIT News (12/14/16) Larry Hardesty

Researchers from the Massachusetts Institute of Technology's (MIT) Computer Science and Artificial Intelligence Laboratory and the University of Haifa in Israel presented a new coreset-generation technique for handling big data at the Neural Information Processing Systems (NIPS 2016) conference in Barcelona, Spain. The technique, which works with sparse data and uses a merge-and-reduce procedure, examines every data point in a huge dataset, yet remains computationally efficient because it deals with only small collections of points at a time. The researchers say the technique is useful for tools such as singular-value decomposition, principal-component analysis, and nonnegative matrix factorization. They note that for applications involving an array of common dimension-reduction tools, the method provides a very good approximation of the full dataset. The researchers say the technique could be used to winnow a dataset with millions of variables down to just thousands. The approach is tailored to data-analysis tools with applications in natural-language processing, computer vision, signal processing, recommendation systems, weather prediction, finance, and neuroscience.
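The merge-and-reduce idea can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the paper's construction: here the "reduce" step is a plain truncated SVD, whereas the paper's coresets come with provable guarantees for sparse data.

# Minimal merge-and-reduce sketch: stream a tall matrix in small chunks and,
# after each merge, "reduce" the working set back to k rows via truncated SVD,
# so only O(k) rows are ever held in memory.
import numpy as np

def reduce_rows(block, k):
    """Compress a block to k rows that approximately preserve its
    covariance structure (top-k SVD with singular values folded in)."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    return s[:k, None] * Vt[:k]

def merge_and_reduce(data, k=10, chunk=200):
    sketch = np.empty((0, data.shape[1]))
    for start in range(0, len(data), chunk):
        merged = np.vstack([sketch, data[start:start + chunk]])  # merge
        sketch = reduce_rows(merged, k)                          # reduce
    return sketch

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 50))
S = merge_and_reduce(X)  # (10, 50): a small proxy for X in PCA/SVD-style analyses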


This Flying Robot Is the Newest Expert Inspecting Your City's Bridges
Carnegie Mellon University (12/14/16)

Researchers at Carnegie Mellon University (CMU) and Northeastern University have developed the Aerial Robotic Infrastructure Analyst (ARIA), a drone that uses photo- and video-capture techniques and state-of-the-art laser scanners to create a high-resolution three-dimensional (3D) model of a bridge, which can then be analyzed by an inspector on the ground. "Using drones to scan bridges for structural problems could provide data on the conditions of the bridge without putting people in high-risk situations," says CMU professor Burcu Akinci. However, ARIA is designed to be more than just a means of data gathering; as the drone flies autonomously around the bridge, it processes the data it gathers and provides feedback and suggestions to the inspector. After landing, the drone's onboard software uses the data to build a 3D model of the bridge that inspectors can use to accurately visualize the structure. The researchers predict ARIA will lead to other robotic infrastructure-inspection technologies. "The unique aspect of this team is that it combines the robotics perspective, the vision-based data processing perspective, and the civil engineering condition assessment and structural analysis perspectives," Akinci says.


A Vision for Micro and Macro Location Aware Services
CCC Blog (12/14/16) Helen Wright

University of Virginia doctoral researcher Abdeltawab Hendawi and colleagues' proposal for micro and macro location-aware services was honored at the Computing Community Consortium-sponsored Blue Sky Ideas Track Competition at the ACM International Conference on Advances in Geographic Information Systems (SIGSPATIAL 2016) in San Francisco, CA. The researchers envision their proposal as appropriate for the era of smart cities and the Internet of Things. They say these services should anticipate and consider users' needs, as well as environmental conditions both present and upcoming. Moreover, smart location-aware services will be able to work with each other harmoniously, so that when a conflict or disruption arises, an intelligent integration mechanism can address and ameliorate the consequences. Hendawi's team says this vision can be realized in a full-fledged system of holistic, smart location-aware services. Steps the team has taken toward this vision include developing a smart personalized routing system that combines multiple preferences, such as safety, attractions, travel time, distance, and trip start time, and then delivers a personalized optimal route; a sketch of this idea appears below. The researchers also developed solutions to predict future destinations for many users. Hendawi expects this could lead to a globally optimized service that improves the overall quality of location-aware services.
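A minimal Python sketch of the routing idea, assuming a user-weighted sum of per-edge criteria (the graph, criteria, and weights are hypothetical; the team's system is far richer):

# Personalized routing sketch: fold several per-edge criteria (time, risk, ...)
# into one cost via user-chosen weights, then run ordinary Dijkstra.
import heapq

def personalized_route(graph, src, dst, weights):
    """graph: {node: [(neighbor, {criterion: value}), ...]};
    weights: {criterion: importance chosen by the user}."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, crit in graph[u]:
            nd = d + sum(weights[c] * crit[c] for c in weights)  # combined cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

g = {"A": [("B", {"time": 5, "risk": 1})],
     "B": [("C", {"time": 2, "risk": 4})],
     "C": []}
print(personalized_route(g, "A", "C", {"time": 1.0, "risk": 2.0}))  # ['A', 'B', 'C']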


Quake-Detection App Captured Nearly 400 Temblors Worldwide
Berkeley News (12/14/16) Robert Sanders

The MyShake app has recorded nearly 400 earthquakes since it was made available for download in February. The app harnesses a smartphone's motion detectors to measure earthquake ground motion, then sends the data back to the Berkeley Seismological Laboratory for analysis. The smartphones' accelerometers and the density of phones in many places are sufficient to provide data quickly enough for early warning. University of California, Berkeley professor Richard Allen, graduate student Qingkai Kong, and a team at the Silicon Valley Innovation Center in Mountain View, CA, developed the MyShake app and the algorithm behind it. The team believes the app's performance shows it can complement traditional seismic networks. Nearly 220,000 people have downloaded the app so far, and on average 8,000 to 10,000 phones are turned on and ready to respond at any given time. An updated version of the app, which adds an option for push notifications of recent quakes within a user-defined distance, was made available for download this week. "The notifications will not be fast initially--not fast enough for early warning--but it puts into place the technology to deliver the alerts and we can then work toward making them faster and faster as we improve our real-time detection system within MyShake," Allen says.
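MyShake's actual detector is a trained algorithm for telling earthquakes from everyday motion; as a much simpler stand-in, the classic STA/LTA trigger below (a standard seismology heuristic, not MyShake's method) shows how an accelerometer stream can flag sudden shaking.

# STA/LTA trigger sketch: flag samples where the short-term average amplitude
# jumps well above the long-term background level.
import numpy as np

def sta_lta_trigger(signal, sta_n=50, lta_n=1000, threshold=4.0):
    """Return indices where short-term avg / long-term avg exceeds threshold."""
    energy = np.abs(signal)
    sta = np.convolve(energy, np.ones(sta_n) / sta_n, mode="same")
    lta = np.convolve(energy, np.ones(lta_n) / lta_n, mode="same")
    return np.flatnonzero(sta / np.maximum(lta, 1e-9) > threshold)

rng = np.random.default_rng(1)
trace = rng.normal(0, 0.01, 5000)            # background accelerometer noise
trace[3000:3200] += rng.normal(0, 0.5, 200)  # burst of sudden shaking
print(sta_lta_trigger(trace)[:5])            # indices near sample 3000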


U.S. Moves Exascale Goalpost, Targets 2021 Delivery
HPC Wire (12/12/16) Tiffany Trader

Exascale Computing Project (ECP) director Paul Messina recently confirmed the U.S. timeline for achieving exascale supercomputer performance will be accelerated by one year. The update has the U.S. fielding at least two exascale systems in the next seven years: one high-performance computer (HPC) now targeting delivery in 2021 and acceptance in 2022, and a second aiming for a 2022 delivery and a 2023 acceptance. The U.S. Department of Energy (DoE) has clarified the first system will use a novel architecture. In addition, the original 10-year timeline provided three years after delivery of the machines for applications and software tuning, but now that work will start during the final year of the ECP and continue afterward. The updated timeline will increase the project's cost and power requirements. "The benefit to offsetting a year is that we don't have to deal with two systems simultaneously," Messina says. For the novel architecture, the DoE wants a larger-scale implementation, which Messina says could come from a mix of ideas--including sensors and interconnects--proposed by the PathForward project. "There are emerging processor architectures that seem quite promising--one would expect that there will be some fairly large systems with those processor architectures available within a couple years," Messina says.


Researchers' Discovery of New Verbal Working Memory Architecture Has Implications for Artificial Intelligence
New York University (12/12/16) James Devitt

New York University (NYU) researchers have found artificial intelligence (AI) systems, such as speech translation tools, need multiple working memory networks. Their research focuses on working memory, which is critical for thinking, planning, and creative reasoning, and which involves holding in mind and transforming the information necessary for speech and language. In examining human patients undergoing brain monitoring to treat drug-resistant epilepsy, the researchers found the underlying neural structure is more complex than previously understood: processing information in working memory involves two different networks in the brain rather than one. The team says one network encoded the rule the patients were using to guide the utterances they made, while the process of using the rule to transform sounds into speech was handled by a second, transformation network. Current AI systems that replicate human speech typically assume the computations involved in verbal working memory are performed by a single neural network. The finding suggests a new way to make machines more intelligent, says NYU professor Bijan Pesaran.


International Students Studying STEM in U.S. Jumps 10 Percent
Campus Technology (12/12/16) Richard Chang

The number of international students studying science, technology, engineering, and mathematics (STEM) in the U.S. grew 10.1 percent from November 2015 to November 2016, according to a new U.S. Immigration and Customs Enforcement (ICE) study. ICE's most recent SEVIS data shows there are 1.23 million international students with F (academic) or M (vocational) status studying at 8,697 U.S. institutions. SEVIS is a Web-based system that the U.S. Department of Homeland Security uses to maintain information on Student and Exchange Visitor Program (SEVP)-certified schools. The 1.23-million total represents a 2.9-percent increase over November 2015, and almost 42 percent of those students were studying STEM subjects. Eighty-seven percent of the international STEM students came from Asia, while only 4 percent of STEM students came from Africa. Meanwhile, North American and European STEM students accounted for 3 percent each, and 2 percent of international STEM students were from South America. Just 0.25 percent of international STEM students came from Australia and the Pacific Islands, but that figure is up 15 percent from November 2015 to November 2016.


Researchers Develop New Approach for Better Big Data Prediction
Columbia University (12/12/16) Jessica Guenzel

Researchers at Columbia, Princeton, and Harvard universities have created a theoretical framework for analyzing big data that more accurately predicts outcomes in a variety of applications. Current approaches to prediction generally involve using a significance-based criterion for evaluating variables and analyzing variables and models simultaneously using cross-validation or independent test data. However, significant variables may not necessarily be predictive, and good predictors may not appear to be statistically significant. In a recent study, the researchers introduced the Influence score (I-score) to better measure a variable's ability to predict. The I-score can be used to compute a measure that asymptotically approaches predictivity and to differentiate between noisy and predictive variables. "Using the I-score prediction framework allows us to define a novel measure of predictivity based on observed data, which in turn enables assessing variable sets for, preferably high, predictivity," says Columbia professor Shaw-Hwa Lo. Unlike traditional approaches, the I-score method does not rely heavily on cross-validation data or testing data to evaluate the predictors. The research team says the new prediction framework could be useful in formulating predictions about diseases and medicine, social science phenomena, and financial markets.
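The flavor of the I-score can be conveyed in a short Python sketch for discrete predictors. The normalization below is an assumption for illustration; consult the paper for the exact definition.

# Influence-score-style statistic (illustrative normalization, not the exact
# published formula): partition observations by their joint value on a set of
# discrete predictors, then score the between-cell deviation of the response
# mean, weighting large cells quadratically. Predictive variables score far
# above noise variables.
import numpy as np
from collections import defaultdict

def i_score(X, y):
    """X: (n, k) discrete predictors; y: (n,) response."""
    y = np.asarray(y, dtype=float)
    n, ybar, var = len(y), y.mean(), y.var()
    cells = defaultdict(list)
    for row, yi in zip(map(tuple, np.asarray(X)), y):
        cells[row].append(yi)
    score = sum(len(v) ** 2 * (np.mean(v) - ybar) ** 2 for v in cells.values())
    return score / (n * var)

rng = np.random.default_rng(0)
x_pred = rng.integers(0, 2, 500)    # truly predictive variable
x_noise = rng.integers(0, 2, 500)   # pure noise
y = x_pred + rng.normal(0, 0.5, 500)
print(i_score(x_pred[:, None], y), i_score(x_noise[:, None], y))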


Google Open Sources Data Visualization Tool for Machine Learning
eWeek (12/12/16) Jaikumar Vijayan

Google has open-sourced its Embedding Projector, a Web application that gives developers a way to visualize data that is being used to train their machine-learning systems. "With the widespread adoption of [machine-learning] systems, it is increasingly important for research scientists to be able to explore how the data is being interpreted by the models," says Google engineer Daniel Smilkov. The interactive visualization tool provides an efficient way to analyze machine-learning models that rely on "embeddings," which are mathematical vector representations of different facets of data such as images, words, and numerals. Developers will be able to navigate through three-dimensional and two-dimensional views of their data and ensure an embedding preserves the original meaning of the data. Embedding Projector is part of TensorFlow, the machine-learning technology behind some of Google's popular services. The tool is now available to anyone as a standalone Web application or integrated into the TensorFlow platform. The goal is to give developers a way to explore and refine machine-learning applications.
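As a minimal sketch of what a PCA view in such a tool computes (the Projector also offers other projections, such as t-SNE; this only covers the PCA case), the following Python code centers a set of embedding vectors and projects them onto their top two principal components:

# PCA-to-2D sketch: center high-dimensional embeddings and project them onto
# the top two principal components for plotting.
import numpy as np

def pca_2d(embeddings):
    X = embeddings - embeddings.mean(axis=0)      # center the vectors
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T                           # coordinates in the top-2 PCs

rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 128))  # stand-in for, e.g., 128-d word embeddings
print(pca_2d(emb).shape)            # (1000, 2), ready to scatter-plot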


Machine Learning Lets Computer Create Melodies to Fit Any Lyrics
New Scientist (12/09/16)

San Jose State University (SJSU) researchers have developed ALYSIA, a machine-learning system that turns poetry into a song by composing a pop music score to suit the lyrics it is given. The system processes short lines of text and associates each syllable with a musical note, then chooses the pairing based on certain features, such as the syllable's position in the word and how it will fit with the previous five notes. ALYSIA can write whole accompanying scores or provide musicians with a variety of melody options for each segment of lyrics. ALYSIA uses two models, one focused on rhythm and the other on pitch. Both of the models were trained on the melody line and lyrics of 24 different pop songs. University of California, Santa Cruz professor David Cope notes ALYSIA is unusual in taking lyrics as its starting point, and he says it is impressive that the system manages to match the meter of the melody with the lyrics. The researchers want to create a system that can compose all of the aspects of a song on its own. "We want to design a program able to generate the music, the lyrics, and ideally even the production and the singing by itself," says SJSU professor Margareta Ackerman.
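In the same spirit, here is a toy Python sketch of syllable-by-syllable melody generation; the hand-written scoring function is a stand-in for ALYSIA's trained rhythm and pitch models, and the features (stress, distance from the previous five notes) are assumptions modeled on the description above.

# Toy lyric-to-melody sketch: pick each syllable's note by scoring scale-tone
# candidates against syllable stress and the previous five notes.
import random

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major, as MIDI pitches

def score(candidate, prev_notes, stressed):
    accent = 2 if stressed and candidate >= 67 else 0  # lift stressed syllables
    if not prev_notes:
        return accent
    smooth = -abs(candidate - prev_notes[-1])          # prefer small leaps
    recent = prev_notes[-5:]                           # context: last five notes
    context = -abs(candidate - sum(recent) / len(recent))
    return smooth + 0.5 * context + accent

def melodize(syllables):
    """syllables: list of (text, is_stressed) pairs; returns one note per syllable."""
    notes = []
    for _, stressed in syllables:
        best = max(SCALE, key=lambda c: score(c, notes, stressed) + random.random())
        notes.append(best)
    return notes

random.seed(3)
print(melodize([("twin", True), ("kle", False), ("twin", True), ("kle", False)]))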


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]