Welcome to the December 12, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.
ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.
HEADLINES AT A GLANCE
Cray Sets Deep Learning Milestone
EE Times (12/09/16) R. Colin Johnson
Cray last week reported achieving a deep-learning milestone in partnership with the Swiss National Supercomputing Center (CSCS), using Microsoft's neural-network software on an XC50 supercomputer with 1,000 NVIDIA Tesla P100 graphics-processing units (GPUs). Cray says it can now execute deep-learning operations that previously took days in a matter of hours. Although most users of the center's supercomputer run conventional high-performance computing (HPC) tasks, a growing share of workloads involve artificial intelligence (AI) tasks. In response, CSCS opted to embed Microsoft's open source Cognitive Toolkit into its available programs and optimize it to leverage the NVIDIA GPU accelerators on the Cray XC50. "We hope to encourage the development of truly scalable algorithms for deep learning on supercomputing systems," says CSCS director Thomas Schulthess, recipient of the ACM Gordon Bell Prize in 2008 and 2009. "This is not just for AI, though. We are convinced that scientific computing in general will benefit from a convergence of HPC and data science on supercomputing systems." GPUs originally were engineered to perform graphics operations in parallel by splitting a display into adjacent areas, each handled by a different processing core running the same task. Since the emergence of general-purpose parallel processing, however, NVIDIA and its users have adapted GPUs to run many other kinds of parallel workloads.
Precisely Tuned Data-Intensive Algorithms Ascend the Graph 500
Inside HPC (12/11/16)
Khaled Ibrahim of the U.S. Department of Energy's Lawrence Berkeley National Laboratory Computational Research Division contributed two top-10 entries to the latest Graph 500 list, which ranks the performance of systems running emerging data-intensive workloads with graph analytics algorithms. Ibrahim says these workloads are often the most difficult to scale on high-performance computing systems. However, scaling up their performance can reduce the computational "expense," or the amount of computing time needed to solve a problem. "The goal is to get the best performance possible on a specific machine and we do this by creating the best algorithm to solve a specific problem on that machine," Ibrahim says. "On the algorithmic side, the challenge that graph analytic algorithms pose is that the data movements are naturally irregular and fine-grained." Ibrahim's algorithms rely on "precision runtime," which concentrates on improving the runtime element of critical computation kernels instead of the entire application. Using a segment of the Mira supercomputer at Argonne National Laboratory, Ibrahim achieved 3,556 giga-traversed edges per second (GTEPS) on eight of Mira's 48 racks, and 2,050 GTEPS on four racks. Ibrahim says the precision nature of his work means tailoring an application is only sensible for large-scale applications in which overall performance is essential to the science being conducted.
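The Graph 500 metric is traversed edges per second (TEPS) for a breadth-first search over a large graph. As a rough illustration only (a four-node toy graph, not the benchmark's generated Kronecker graphs or Ibrahim's optimized kernels), the measurement can be sketched as:

```python
import time
from collections import deque

def bfs_teps(adj, source):
    """Run a breadth-first search and report traversed edges per second,
    the metric the Graph 500 benchmark uses (at giga scale, GTEPS)."""
    visited = {source}
    frontier = deque([source])
    edges_traversed = 0
    start = time.perf_counter()
    while frontier:
        v = frontier.popleft()
        for w in adj[v]:
            edges_traversed += 1   # every edge examined counts as traversed
            if w not in visited:
                visited.add(w)
                frontier.append(w)
    elapsed = max(time.perf_counter() - start, 1e-9)
    return edges_traversed, edges_traversed / elapsed

# Toy undirected graph as adjacency lists: edges 0-1, 0-2, 1-3
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
edges, teps = bfs_teps(adj, 0)
```

The "irregular and fine-grained data movements" Ibrahim describes arise in the inner loop: each neighbor lookup can touch an arbitrary memory location, which is what makes this kernel hard to scale across racks.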
AI Begins to Understand the 3D World
Technology Review (12/09/16) Will Knight
Artificial intelligence (AI) researchers are constructing systems that can visualize the three-dimensional (3D) world and take action, with Massachusetts Institute of Technology professor Josh Tenenbaum citing this milestone as a key trend in learning-based vision systems. "That includes seeing objects in depth and modeling whole solid objects--not just recognizing that this pattern of pixels is a dog or a chair or table," he says. Tenenbaum and colleagues have employed a popular machine-learning method called generative adversarial modeling to enable a computer to learn about the characteristics of 3D space from examples so it can produce new objects that are realistic and physically accurate. The researchers presented the work last week at the Neural Information Processing Systems (NIPS 2016) conference in Barcelona, Spain. Tenenbaum says 3D perception should be essential for robots designed to engage with the physical world, including self-driving automobiles. Professor Nando de Freitas at the U.K.'s University of Oxford agrees that AI cannot progress without the ability to explore the real world. "The only way to figure out physics is to interact," de Freitas says. "Just learning from pixels isn't enough."
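Generative adversarial modeling pits a generator that fabricates samples against a discriminator that tries to distinguish them from real data. The toy script below shrinks that objective to one-dimensional scalar data so the adversarial mechanics are visible; it is an illustrative sketch, not the researchers' 3D voxel model, and every parameter in it is invented:

```python
import numpy as np

# Minimal 1-D GAN sketch: the same generator-vs.-discriminator objective the
# 3D work applies to voxel grids, shrunk to scalar "data." Real samples come
# from N(3, 1); the generator learns an affine map of Gaussian noise.
rng = np.random.default_rng(0)
w_d, b_d = 0.1, 0.0          # discriminator D(x) = sigmoid(w_d*x + b_d)
w_g, b_g = 1.0, 0.0          # generator G(z) = w_g*z + b_g
lr, real_mean = 0.05, 3.0

sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

for step in range(2000):
    x = rng.normal(real_mean, 1.0)   # one real sample
    z = rng.normal()                 # noise fed to the generator
    g = w_g * z + b_g                # one fake sample

    # Discriminator ascent: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w_d * x + b_d), sigmoid(w_d * g + b_d)
    w_d += lr * ((1 - d_real) * x - d_fake * g)
    b_d += lr * ((1 - d_real) - d_fake)

    # Generator ascent on log D(G(z)): make fakes look real to D.
    d_fake = sigmoid(w_d * g + b_d)
    grad_g = (1 - d_fake) * w_d      # d(log D(g))/dg
    w_g += lr * grad_g * z
    b_g += lr * grad_g

# After training, generated samples should have drifted toward the real mean.
```

The 3D version replaces the scalar generator and discriminator with convolutional networks over voxel grids, but the alternating min-max update is the same idea.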
Super-Flexible Liquid Crystal Device for Bendable and Rollable Displays
Tohoku University (11/30/16)
Researchers from Japan's Tohoku University say they have developed a super-flexible liquid crystal (LC) device, in which two ultra-thin plastic substrates are firmly bonded by polymer wall spacers. The researchers say the new organic materials should help make electronic displays and devices more flexible, increasing their portability and versatility. The team overcame challenges associated with previous efforts to create a flexible display using an organic light-emitting diode by making existing LC displays flexible with plastic substrates instead of with conventional thick glass substrates; the strategy worked because LC materials do not deteriorate even with the poor gas barrier of flexible substrates. Flexible LC displays have several advantages, such as established production methods for large-area displays. In addition, the material itself is inexpensive, can be mass-produced, and shows little quality degradation over time. The researchers developed the super-flexible LC device by bonding two ultra-thin transparent polyimide substrates together, using robust polymer wall spacers. They also demonstrated that device uniformity is maintained, with no broken spacers, even after a roll-up test to a curvature radius of 3 millimeters, confirming suitability for rollable and foldable applications. The researchers next want to form image pixels and soften the peripheral components of polarizing films, as well as to develop a thin light-guide sheet for backlighting.
Report Proposes Standards for Sharing Data and Code Used in Computational Studies
University of Illinois News Bureau (12/08/16)
University of Illinois (U of I) researchers have released a report recommending ways researchers, institutions, agencies, and journal publishers can collaborate to standardize the sharing of datasets and software code. "It is becoming increasingly accepted for researchers to value open data standards as an essential part of modern scholarship, but it is nearly impossible to reproduce results from original data without the authors' code," says National Academy of Sciences president Marcia McNutt. Sharing computational methods has been difficult for researchers because there are no standards or guides for reference. The new report makes seven specific recommendations, such as documenting digital objects and making them retrievable, open licensing, placing links to datasets and workflows in scientific articles, and reproducibility checks before publication in a scholarly journal. The U of I researchers say disclosing computational methods will enable other researchers to verify and reproduce results, and help build upon studies that already have been completed. "We know it's hard, but in this report we're trying to say in a very productive and positive way that data, code, and workflows need to be part of what gets disclosed as a scientific finding," says U of I professor Victoria Stodden.
AI Can Turn a Photo of Your Face Into an Uncanny 3D Model
Vocativ (12/08/16) Joshua Kopstein
Researchers at the University of Southern California have developed a learning algorithm that is capable of building a precise three-dimensional (3D) model of a person's head based on a single low-resolution photo of their face. The facial reconstructions are performed using an artificial neural network. The team trained the neural network to look at two-dimensional images and extrapolate a 3D texture map that approximates the subject's facial dimensions with a high degree of realism. The researchers demonstrated successful face reconstructions from a wide range of low-resolution input images, including those of historical figures. The team validated the realism of its results using a crowdsourced user study. The project has implications for the future of realistic avatars in virtual reality. "With virtual and augmented reality becoming the next generation platform for social interaction, compelling 3D avatars could be generated with minimal efforts and puppeteered through facial performances," the researchers say. "Within the context of cultural heritage, iconic and historical personalities could be restored to life in captivating 3D digital forms from archival photographs."
Nanoporous Stamps Print High-Resolution Flexible Electronics
MIT News (12/07/16) Jennifer Chu
Massachusetts Institute of Technology (MIT) engineers have developed an ultra-thin, high-resolution printing process that makes use of nanoporous stamps. The team has fabricated a stamp made from forests of carbon nanotubes that is able to print electronic inks onto rigid and flexible surfaces. The stamp is spongier than rubber, about the size of a pinky fingernail, and features patterns that are much smaller than the width of a human hair. The researchers note a solution of nanoparticles, or "ink," can flow through the stamp and onto the printing surface, and this design should enable the technique to achieve much higher resolution than conventional rubber stamp printing, also known as flexography. The team says its system could print at 200 millimeters per second, continuously, which is competitive with industrial printing technology, and notes the printed patterns are highly conductive. "There is a huge need for printing of electronic devices that are extremely inexpensive but provide simple computations and interactive functions," says MIT professor John Hart. "Our new printing process is an enabling technology for high-performance, fully printed electronics, including transistors, optically functional surfaces, and ubiquitous sensors."
Stanford Researchers Say School Kids Can Do Safe and Simple Biological Experiments Over the Internet
Stanford News (12/07/16) Andrew Myers
Researchers at Stanford University have developed an Internet-enabled biological laboratory that allows students to interact and experiment with living cells in real time. The Biology Cloud Lab gives students and teachers remote-control software to operate biotic processing units, which include a microfluidic chip containing communities of microorganisms. Around each chip, four user-controlled light-emitting diodes let students apply different types of light stimuli. The chip's content is live-streamed by a webcam microscope. The Biology Cloud Lab includes software designed to make it easy to analyze and visualize the data, test hypotheses, and program the system to run hundreds of experiments automatically. Students can control the lab from any Internet-enabled computer, tablet, or smartphone. The Stanford researchers say 250 biotic processing units could be installed in a single 100-square-meter room and networked with a 1-Gbps Internet connection to serve 1 million students each year, and each experiment would cost just one cent at that scale. "Labs in most schools are stuck in the 19th century, with cookbook-style experiments," says Stanford professor Paulo Blikstein. "Biology Cloud Labs could democratize real scientific investigation and change how kids learn science."
Google-Funded Flint Water App Helps Residents Find Lead Risk, Resources
University of Michigan News (12/08/16) Margory Raymer; Nicole Casal Moore
Computer science researchers at the University of Michigan (UM) have released a mobile application and website built for Flint, MI, to help the community manage its ongoing water crisis. Mywater-Flint, funded by a $150,000 grant from Google, enables residents to access a citywide map of where lead has been found in drinking water and where work is currently being done to repair the water main infrastructure. The app also determines the likelihood the water in a home is contaminated with lead and provides step-by-step instructions for water testing. The researchers say only a third of the city's residents have had their water tested. Although all Flint homes have some level of risk, the app can predict which ones are more likely to be contaminated based on factors such as the property's age, location, value, and size. "Our website and app makes it much easier for a resident to view the water test results for their home, business, church, etc.," says lead developer Miyako Jones. "Hopefully, it will inspire those who haven't tested their home to do so." The UM team also has created additional resources for city officials, including a website that shows how many water tests have been sent to different testing centers.
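As a purely hypothetical sketch of how such risk scoring might work (the feature set and coefficients below are invented for illustration, not the UM team's actual model), a logistic score over property features could look like:

```python
import math

# Hypothetical lead-risk score from property features. The weights are made
# up to show the shape of such a model: older, lower-value, smaller homes
# score higher in this toy, echoing the factors the article lists.
def lead_risk(age_years, value_usd, size_sqft):
    z = 0.03 * age_years - 0.00001 * value_usd - 0.0001 * size_sqft
    return 1.0 / (1.0 + math.exp(-z))   # logistic score in (0, 1)

old_cheap = lead_risk(90, 30_000, 1_000)
new_pricey = lead_risk(10, 200_000, 2_500)
```

A real model would be fit to the city's water-test results rather than hand-weighted, and would likely include location and infrastructure data as well.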
RoboVote Website Helping Shape Group Decisions
Pittsburgh Post-Gazette (12/07/16) Tim Grant
Carnegie Mellon University (CMU) researchers have developed RoboVote, a website driven by artificial intelligence that taps into research on how opinions, preferences, and interests are optimally integrated into a collective decision. "We have taken what years of research have proven to be the best algorithms for making collective decisions and made them available with an interface that anyone can use," says CMU professor Ariel Procaccia. Anybody can establish a poll on RoboVote by logging in, creating a poll question, noting what alternatives are available, and specifying the participants. The user then must choose whether the poll addresses subjective preferences or objective opinions, and an email is automatically sent to all participants with a link to the site where they can vote. RoboVote handles both subjective polls intended to satisfy the most people and objective polls designed to generate an answer as close to the truth as possible. Each voter ranks the alternatives, and for subjective polls the algorithm compares how different voters rank the same alternatives to infer each voter's hidden utilities, yielding an optimally ranked list reflecting the collective preference. For objective polls, the algorithm models the errors voters make, producing an outcome that is as close as possible to the truth.
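RoboVote's algorithms go well beyond any single scoring rule, but the basic idea of aggregating ranked ballots can be illustrated with the classic Borda count, a simple positional scoring rule (shown here as a sketch, not RoboVote's actual method):

```python
def borda(rankings):
    """Aggregate ranked ballots with the Borda rule: an alternative ranked
    i-th on a ballot of m alternatives earns m - 1 - i points, and
    alternatives are ordered by total points (ties broken alphabetically)."""
    scores = {}
    for ballot in rankings:
        m = len(ballot)
        for i, alt in enumerate(ballot):
            scores[alt] = scores.get(alt, 0) + (m - 1 - i)
    return sorted(scores, key=lambda a: (-scores[a], a))

ballots = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
# scores: a = 2+2+1 = 5, b = 1+0+2 = 3, c = 0+1+0 = 1
print(borda(ballots))  # → ['a', 'b', 'c']
```

Positional rules like this are one family the social-choice literature studies; RoboVote's maximum-likelihood approach for objective polls instead asks which outcome best explains the observed (noisy) rankings.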
Quantum Computers Ditch All the Lasers for Easier Engineering
New Scientist (12/07/16) Michael Brooks
Researchers from the University of Sussex in the U.K. have replaced the millions of precisely aligned lasers required by conventional large-scale quantum computing designs with several static magnets and a few electromagnetic fields. The new design led to a radical simplification of the engineering required to build a quantum system, which means researchers will now be able to construct a large-scale device, according to Sussex professor Winfried Hensinger. In the new system, each ion is trapped by four permanent magnets, with a controllable voltage across the trap. The entire device is bathed in a set of tuned microwave and radio-frequency electromagnetic fields. Changing the voltage shifts the ions to a different position in the magnetic field, changing their state. The researchers already have used the system to build and operate a quantum logic gate that entangles two ions. "It's a promising development, with good potential for scaling up," says National University of Singapore professor Manas Mukherjee, who was not involved in the project. In addition, because the device uses current technologies, there are no known obstacles to scaling up to create a useful quantum computer.
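Whatever hardware drives it (lasers, or here voltages and microwave fields), the target of a two-ion entangling gate is the same. The following sketch uses abstract gate matrices (plain linear algebra, not a model of the Sussex hardware) to show how an entangling operation takes two independent qubits to a maximally entangled Bell state:

```python
import numpy as np

# Abstract two-qubit entangling-gate sketch: a Hadamard on the first qubit
# followed by a CNOT takes |00> to the Bell state (|00> + |11>)/sqrt(2),
# a state that cannot be written as a product of two single-qubit states.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

state = np.zeros(4)
state[0] = 1.0                                 # start in |00>
state = CNOT @ np.kron(H, I) @ state           # entangle the two qubits
# amplitudes ~ [0.707, 0, 0, 0.707]: equal superposition of |00> and |11>
```

In trapped-ion hardware the entangling step is implemented by a collective interaction between the ions rather than a literal CNOT matrix, but the resulting entangled state is the resource both approaches aim for.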
NSF Awards $324,312 to U of A to Continue as Partner in Computing Network
University of Arkansas (12/05/16)
The University of Arkansas (U of A) will continue to serve as a partner institution in the U.S. National Science Foundation's (NSF) Extreme Science and Engineering Discovery Environment (XSEDE), which provides researchers with access to a network of supercomputers and high-end visualization and data analysis resources across the U.S. NSF has awarded U of A a five-year, $324,312 grant to support the efforts of Jeff Pummill, director of strategic initiatives and user services for the Arkansas High Performance Computing Center. Pummill serves as the coordinator of the Regional Campus Champions project for XSEDE. The Regional Campus Champions program supports campus representatives as a local knowledge source about high-performance computing resources. Pummill also serves on XSEDE's User Advisory Committee and User Requirements Evaluation and Prioritization Working Group. His involvement with XSEDE helped lead to the university's acquisition of the Trestles supercomputer, which more than doubled the computational capacity at the Arkansas High Performance Computing Center. The facility supports research in about 30 academic areas across campus. "Our role in the Campus Champions network has raised the profile of the University of Arkansas among the nation's high-performance computing community," Pummill says.
Abstract News © Copyright 2016 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: email@example.com