Association for Computing Machinery
Welcome to the November 13, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, as well as for iPhones and iPads.

HEADLINES AT A GLANCE


Bringing iPhone-Style Medical Research to the Android World
The New York Times (11/12/15) Steve Lohr

Apple's introduction of ResearchKit software for its iPhone platform in March prompted Weill Cornell Medical College professor Deborah Estrin to launch an effort to embed similar capability within Google's Android platform. She says the newly announced ResearchStack project seeks to introduce a modular, open-source software framework similar to ResearchKit. Estrin says ResearchStack is designed to interoperate seamlessly with initiatives that use ResearchKit. "Researchers can create a study that is independent of what smartphone is used, and they won't have to start from scratch," she says. Among the current projects ResearchStack will soon accommodate is Mole Mapper, an application for a melanoma study from the Oregon Health and Science University. The study entails people capturing photos of moles via smartphones to monitor their growth, with the goal of devising detection algorithms and helping people manage the health of their skin. Developing apps for individuals as well as scientists is essential for successful large-scale studies, according to Estrin. She says the personal health management enabled by these apps will help drive "the growing data-sharing movement," encouraging millions to "contribute to big data-derived discovery and understanding" in medicine.


Streamlining Mobile Image Processing
MIT News (11/13/15) Larry Hardesty

Researchers from the Massachusetts Institute of Technology, Stanford University, and Adobe Systems last week unveiled a system that streamlines mobile image processing at the ACM SIGGRAPH Asia conference in Kobe, Japan. The system transmits a highly compressed version of an image to a server, which sends back an even smaller file containing basic instructions for modifying the original image. The system can work with any alteration to the style of an image, which is uploaded as a low-quality JPEG to save bandwidth. Introducing high-frequency noise into the compressed image effectively boosts its resolution and keeps the system from leaning too heavily on color consistency within image regions when it defines its transformations. The server then applies the intended manipulation, and a machine-learning algorithm breaks the image into segments and describes the effects of the manipulation in terms of a few fundamental parameters, mainly related to variation in pixel luminance. Although some extra computation is needed to apply those changes to the original image on the phone, experiments yielded between 50 percent and 85 percent energy savings, while the process was 50 percent to 70 percent faster than downloading high-resolution files. The system also cut the bandwidth consumed by server-based image processing by up to 98.5 percent.
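
The split between phone and server can be pictured with a small sketch. The following is illustrative only, assuming a per-tile luminance "recipe" as the compact description sent back to the phone; the downsampling, tile size, and gain-based recipe are stand-ins, not the researchers' actual transform model.

    import numpy as np

    def make_proxy(image, factor=8):
        # Downsample by averaging factor x factor blocks (stand-in for a low-quality JPEG upload).
        h, w = image.shape
        h, w = h - h % factor, w - w % factor
        return image[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    def server_fit_recipe(proxy, edit):
        # Apply the edit to the small proxy and summarize it as one gain per tile.
        edited = edit(proxy)
        return (edited + 1e-6) / (proxy + 1e-6)   # the compact "recipe" sent back to the phone

    def client_apply_recipe(image, gains, factor=8):
        # Upsample the per-tile gains and apply them to the full-resolution original.
        full = np.kron(gains, np.ones((factor, factor)))
        h, w = full.shape
        return np.clip(image[:h, :w] * full, 0, 255)

    rng = np.random.default_rng(0)
    original = rng.uniform(20, 235, size=(512, 512))               # stand-in for a full-resolution photo
    recipe = server_fit_recipe(make_proxy(original), lambda im: 0.7 * im + 40)  # server-side "style" edit
    print(client_apply_recipe(original, recipe).shape)             # edited image, computed locally

The point of the design is that the expensive edit runs on the small proxy, while only a tiny description of the edit travels back over the network.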


U.S. Government Lab Dabbles in New Computer Designs
IDG News Service (11/11/15) Agam Shah

The Los Alamos National Laboratory (LANL) is seeking to replace conventional computer systems by developing and acquiring new computer designs, including the D-Wave 2X quantum computer. The acquisition aligns with the facility's goal of understanding new forms of computing and their relevance to different applications, according to LANL's John Sarrao. Lab researchers will use the D-Wave 2X to explore quantum computing and software applications, and the system must be programmed for specific tasks because it is not a general-purpose machine. Another focus of the lab is neuromorphic chips inspired by the operation of the human brain. Sarrao says LANL's initiatives to explore new computer designs will advance materials and physics research and help further develop supercomputing. He also notes the lab will still depend on its massive supercomputers for critical scientific research, but the research into new computers is growing in importance as Moore's Law approaches its limits. Sarrao says quantum computers could deliver a new model to replace systems such as Trinity, an LANL supercomputer slated to become operational in 2016.


Needed: More Women in Data Science
Stanford Report (11/12/15) Dan Stober

Stanford University recently hosted the inaugural Women in Data Science conference, which brought together about 400 women to discuss data science and promote greater gender diversity in the field. Attendees included students and professionals from the private sector, academia, and national laboratories. Speakers included Stanford professor Fei-Fei Li, who described her team's research into computer vision, which includes building a database of 15 million photos to train computer-vision systems. "This is a dream for us, to organize a conference like this," says Margot Gerritsen, director of Stanford's Institute for Computational & Mathematical Engineering, which was the main sponsor of the event. A major theme of the conference was the need to bring more women into the data science field. The demand for data scientists currently is extremely high, and "not tapping into 50 percent of our population of our talent would of course be a very silly thing to do," Gerritsen says. "When there is a difficult challenge to address, and our world is full of difficult challenges, we need a diversity of thought, a diversity of approaches, a diversity of styles to get to the solutions--and that's why we need diverse teams," says Persis Drell, dean of Stanford's School of Engineering.


Few Students Meet ACT's New Mark for College Readiness in STEM Fields
Education Week (11/11/15) Catherine Gewertz

Eighty percent of high school students who took the American College Testing (ACT) national college admission exam are not academically prepared for the first-year college courses they will likely have to take if they choose a science, technology, engineering, or math (STEM) major, according to the ACT's third "Condition of STEM" report. The report is the first to analyze student performance against a new "STEM benchmark" recently added to the test. Underlying the STEM benchmark is recent research suggesting higher-caliber performance in high school is required for good results in STEM-related college courses. ACT officials say the STEM college-readiness benchmark is very high, so "rates of attainment are extremely low," with only one out of five ACT-tested students in the class of 2015 meeting that mark. Officials say the results reflect the pressing need for K-12 educators to fortify students' STEM skills, since those skills open the door to fast-growing, well-paid career fields and are an important component of U.S. competitiveness. The study also found student interest in STEM has climbed 1 percent in the last four years. Moreover, over the same period the share of students interested in computer science and math majors, and in engineering and technology majors, has risen 2 percent, while the share interested in medical and health majors has fallen 3 percent.


New Algorithm Cracks Graph Problem
ScienceNews (11/11/15) Andrew Grant

The graph isomorphism problem can now be solved far more efficiently thanks to a new algorithm devised by computer scientist Laszlo Babai, who unveiled the algorithm Tuesday at a University of Chicago seminar. The problem asks a computer to determine whether two separate sets of interconnected points, or graphs, are connected in the same way even if the graphs appear very dissimilar. Babai says his breakthrough eliminates the possibility that extremely complex graphs would make the problem intractable. He says the algorithm assesses even the most complicated graphs in quasipolynomial time, in which the solving time still grows with the number of graph nodes, but much more gradually than exponential time. Stanford University researcher Ryan Williams thinks Babai's milestone may be the most significant theoretical computer science advance in more than 10 years. It could help scientists address the mystery of whether every problem whose solution can be easily checked can also be easily solved. Williams says Babai's work could improve the understanding of the border between problems solvable in polynomial time and those that are nondeterministic polynomial time-complete (NP-complete). Outside of computer science, the development could yield innovations such as a way for chemists to ascertain whether complex molecules have the same bonding structure.
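
To see what the problem asks, the brute-force check below illustrates the problem definition only (it is not Babai's algorithm): it tries every possible relabeling of one graph's nodes, which takes factorial time, exactly the kind of blow-up the new quasipolynomial-time algorithm avoids on hard instances.

    from itertools import permutations

    def are_isomorphic(edges_a, edges_b, n):
        # Return True if two n-node graphs (given as edge lists) are connected in the same way.
        a = {frozenset(e) for e in edges_a}
        b = {frozenset(e) for e in edges_b}
        if len(a) != len(b):
            return False
        for perm in permutations(range(n)):                 # every possible node relabeling
            mapped = {frozenset((perm[u], perm[v])) for u, v in a}
            if mapped == b:                                 # same edges under this relabeling
                return True
        return False

    # Two 4-node cycles drawn with different labels are still the "same" graph.
    square  = [(0, 1), (1, 2), (2, 3), (3, 0)]
    crossed = [(0, 2), (2, 1), (1, 3), (3, 0)]
    print(are_isomorphic(square, crossed, 4))  # True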


Star Wars Characters Will Now Teach Your Kids to Code
Wired (11/09/15) Issie Lapowsky

In an effort to bring coding to an ever-larger group of kids and students, Code.org partnered with Lucasfilm as part of its annual Hour of Code event. Code.org this week launched a free online tutorial featuring characters from the upcoming film "Star Wars: The Force Awakens" who prompt kids to learn to code and build their own games. In the tutorial, Princess Leia and Rey, the new film's female lead, guide students through various lessons and help them design games using other Star Wars characters, such as R2-D2 and C-3PO. Code.org founder Hadi Partovi says the decision to have female characters anchor the tutorial is part of Code.org's efforts to bring more girls and minorities into computer science. "One of the most important things for us is to make computer science more popular, to broaden participation, and get students of all ages and all backgrounds to give it a shot," Partovi says. The organization already has had considerable success, as 40 percent of the 5 million students currently registered on Code.org are girls and another 40 percent are black or Hispanic. Last year's tutorial was completed more than 13 million times, and Code.org expects nearly four times as many people to take part this year.


Microsoft Machine Learning Advances to Sensing Emotions
eWeek (11/11/15) Pedro Hernandez

Microsoft this week announced it was making major additions to its Project Oxford collection of machine-learning application programming interfaces (APIs). Foremost among the additions is the Emotion API, which can be used by systems to "recognize eight core emotional states--anger, contempt, fear, disgust, happiness, neutral, sadness, or surprise--based on universal facial expressions that reflect those feelings," says Microsoft Research's Allison Linn. Microsoft's Ryan Galgon envisions several commercial applications for the technology, including sensors that can help marketers gauge consumers' emotional reactions to products, and consumer tools such as messaging apps that offer up different options based on the emotions they read from a photo. Another new Project Oxford API is a spelling checker that can recognize slang terms, brand names, and common but difficult-to-spot errors such as confusing "for" and "four." Linn says the spelling checker also will be regularly updated to add new brand names and expressions. The technology behind Microsoft's How-Old.net age and gender identification tool also is being updated to give it greater capabilities. Finally, Linn announced a trio of upcoming Project Oxford betas focused on video analysis and editing, voice identification, and specialized voice recognition in challenging environments such as noisy rooms.
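
As a rough illustration of how a cloud API of this kind is typically consumed, the sketch below posts raw image bytes over HTTP and reads back per-face emotion scores. The endpoint URL, header name, and response shape shown here are assumptions for illustration; Microsoft's Project Oxford documentation is the authority on the actual contract.

    import requests

    ENDPOINT = "https://api.projectoxford.ai/emotion/v1.0/recognize"  # assumed endpoint
    API_KEY = "YOUR_SUBSCRIPTION_KEY"                                 # placeholder

    def recognize_emotions(image_path):
        # Post raw image bytes and return the service's per-face emotion scores.
        with open(image_path, "rb") as f:
            response = requests.post(
                ENDPOINT,
                headers={
                    "Ocp-Apim-Subscription-Key": API_KEY,     # assumed header name
                    "Content-Type": "application/octet-stream",
                },
                data=f.read(),
            )
        response.raise_for_status()
        return response.json()  # assumed: a list of faces, each with a 'scores' dictionary

    for face in recognize_emotions("photo.jpg"):
        scores = face.get("scores", {})
        # Report the strongest of the eight core emotions for each detected face.
        print(max(scores, key=scores.get))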


Robot Toddler Learns to Stand by "Imagining" How to Do It
Technology Review (11/06/15) Will Knight

The ability of robots to carry out tasks and move about their environment is increasingly impressive, but they still face major challenges. For example, during the recent U.S. Defense Advanced Research Projects Agency Robotics Challenge, several of the competing robots toppled over on unstable terrain or stumbled when asked to execute complex maneuvers. "Just even a little variability beyond what [the robot] was designed for makes it really hard to make it work," says University of California, Berkeley professor Pieter Abbeel. He and his team are working to address this issue of variability with a robot called Darwin. Using a high-level deep-learning network, Abbeel and his team are giving Darwin the ability to "imagine" its motions and how they might fail before making them. The group has previously used deep-learning networks to teach robots how to complete tasks such as fitting a shaped block into the appropriate hole by attempting the task multiple times. However, this technique is not suited to teaching a robot to perform an action such as standing up because of the wear and tear repeated physical attempts would put on the robot's joints. Instead, Darwin uses a two-tiered network--one tier simulates candidate motions, and the other executes a motion physically only once the system thinks it has a good solution.
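
The "imagine first, act second" loop can be sketched conceptually. The toy scoring function below is purely hypothetical and stands in for the learned network that predicts whether a candidate motion will succeed; it is not the Berkeley team's code.

    import random

    def imagined_outcome(motion):
        # Hypothetical stand-in for the learned model that predicts whether a motion keeps the robot upright.
        return 1.0 - abs(sum(motion)) / len(motion)

    def execute_on_robot(motion):
        # Placeholder for sending joint commands to the physical hardware.
        print("executing", [round(m, 2) for m in motion])

    candidates = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(100)]
    best = max(candidates, key=imagined_outcome)   # rehearse every candidate in "imagination" first
    execute_on_robot(best)                         # only the most promising motion is tried physically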


Google Offers Free Software in Bid to Gain an Edge in Machine Learning
The New York Times (11/09/15) Steve Lohr

Google on Monday announced it is open-sourcing the software behind its TensorFlow system to outside developers as part of a campaign to gain a competitive edge in the machine-learning field. Google says TensorFlow offers up to five times the speed of its old DistBelief system in building and training machine-learning models. "It may be useful wherever researchers are trying to make sense of very complex data--everything from protein folding to crunching astronomy data," Google says. Stanford University professor Christopher Manning says TensorFlow offers an improved, faster toolkit for deep learning, which he thinks will be widely adopted by scientists and students in both academia and industry. Google claims TensorFlow can operate on a single smartphone or across numerous computers in data centers. The first iteration will run on one machine, while its expansion to many computers will move forward in the coming months, according to Google. "The software itself is open source, but if [TensorFlow] is successful, it will feed Google's money-making machine," says Massachusetts Institute of Technology professor Michael A. Cusumano. "There are so many applications of machine learning to the bread and butter of what Google does."
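
Programs written against the newly released library follow a build-the-graph-then-run-it pattern. The minimal sketch below uses the session-based interface of the 2015-era releases, which differs from later versions of TensorFlow.

    import tensorflow as tf

    # Define the dataflow graph: a tiny linear model y = Wx + b.
    x = tf.placeholder(tf.float32, shape=[None, 3], name="x")
    W = tf.Variable(tf.zeros([3, 1]), name="W")
    b = tf.Variable(tf.zeros([1]), name="b")
    y = tf.matmul(x, W) + b

    # Nothing executes until the graph is launched in a session.
    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())   # variable-initialization op in the 2015-era API
        print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))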


Wi-FM Listens to FM Signals to Determine Best Times to Send and Receive Data
Northwestern University Newscenter (11/09/15) Amanda Morris

Northwestern University researchers have developed Wi-FM, a new technique that enables existing wireless networks to coordinate their transmissions by listening to ambient FM radio signals. The researchers chose FM radio because most smartphones and mobile devices already ship with an embedded FM chip. In addition, FM radio signals pass through walls and buildings without being obstructed, making them very reliable. Minor software upgrades would enable devices to take advantage of Wi-FM, according to the researchers. Wi-FM prevents a user's network data from interfering with a neighbor's data. Normally, when network data are sent at the same time, they collide, but Wi-FM enables a device to "listen" to the network and select the quietest time slots according to the FM radio signals. "It can send its data right away without running into someone else or spending any time backing off," says Northwestern doctoral student Marcel Flores. Wi-FM identifies the usage patterns of other networks in order to detect the times with the lightest and heaviest traffic, which helps harmonize Wi-Fi signals transmitting on the same channel, notes Northwestern professor Aleksandar Kuzmanovic.
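
The scheduling idea can be sketched in a few lines. The slot count and load bookkeeping below are illustrative stand-ins rather than the Wi-FM implementation, which derives its shared timing reference from the FM broadcast itself.

    from collections import defaultdict

    SLOTS = 4  # number of shared time slots per broadcast cycle (illustrative)

    def quietest_slot(observed_transmissions):
        # observed_transmissions: list of (timestamp, busy_fraction) samples of nearby traffic.
        load = defaultdict(float)
        for timestamp, busy in observed_transmissions:
            load[int(timestamp) % SLOTS] += busy             # attribute observed activity to its slot
        return min(range(SLOTS), key=lambda s: load[s])      # the least-contended slot wins

    samples = [(0, 0.9), (1, 0.1), (2, 0.8), (3, 0.2), (8, 0.7), (9, 0.0), (11, 0.3)]
    print("send in slot", quietest_slot(samples))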


National Labs Collaborate to Shape Development of Next-Generation Supercomputers
Los Alamos National Laboratory News (11/10/15) Kevin Roark

Los Alamos (LANL), Lawrence Berkeley (Berkeley Lab), and Sandia national laboratories have formed the Alliance for Application Performance at Extreme Scale (APEX), an effort focusing on the design, acquisition, and deployment of future advanced high-performance computing systems. "The supercomputers of the future will not only be extremely powerful, but will also need to be much more energy efficient and resilient than current designs," and APEX will help in achieving those goals, says LANL's Gary Grider. APEX will utilize two new advanced computing systems: "Crossroads" for the New Mexico Alliance for Computing at Extreme Scale at Los Alamos and Sandia, and "NERSC-9" for the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab. Both platforms will focus on providing increased application performance and capability, as well as the deployment of advanced technology concepts. The partnership will work to meet the needs of the Advanced Simulation and Computing Program and the Advanced Scientific Computing Research Program. Both programs will work together to leverage investments and increase the cost-effectiveness in the acquisition of yet-to-be-developed systems. "With the proven expertise of our three laboratories combined, we are confident that APEX will develop and deploy technologies that further [the U.S. Department of Energy's] mission in science and national security," says NERSC director Sudip Dosanjh.


Get Ready for Your Digital Model
The Wall Street Journal (11/12/15) Pedro Domingos

Within 10 years, people will entrust their data to machine-learning algorithms that build personal digital models of them, writes University of Washington professor Pedro Domingos. He predicts a new kind of company will be conceived to store, safeguard, and apply such data to the construction, maintenance, and interactions of these models. Domingos says such a company would record a customer's every digital interaction and feed it to the model in exchange for a subscription fee. He notes that, on the technical side, all this would require is a proxy server through which those interactions are routed and recorded. "Once a firm has your data in one place, it can create a complete model of you using one of the major machine-learning techniques: inducing rules, mimicking the way neurons in the brain learn, simulating evolution, probabilistically weighing the evidence for different hypotheses, or reasoning by analogy," Domingos says. He thinks these models could be duplicated almost infinitely to multitask, selecting the best options for the user based on accumulated behavior and preferences. "To offset organizations' data-gathering advantages, like-minded individuals will pool the data in their banks and use the models learned from that information," Domingos says. He predicts cyberspace will evolve into "a vast parallel world that selects only the most promising things to try out in the real one--the new, global subconscious of the human race."


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]