Association for Computing Machinery
Welcome to the March 25, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.


IU to Offer One of the First Data Science Courses to Use Real Clinical Trial Data
IU Newsroom (03/24/16)

Indiana University (IU) will partner with Eli Lilly to offer one of the first data science courses in the U.S. to use real-world clinical trial data. "Our goal is for students to gain a better understanding of the overall drug development process, and specifically the human clinical trial phases," says Eli Lilly clinical data associate Sara Bigelow. "This includes gaining knowledge on the data side of the process--where the data comes from, where it goes, how it's used, and why it's so important not only to clinical trial research but also the pharmaceutical industry as a whole. Another key takeaway will be awareness about the privacy process involved in working with patient data." The IU course will be offered as a four-week summer class starting May 2 via the data science master's degree program at the IU School of Informatics and Computing. The course will use anonymized data collected from human subjects during the safety testing of potential new pharmaceuticals. Students enrolled in the new course will have an opportunity to gain hands-on instruction in understanding, refining, and analyzing real-world data of the type drug companies use in making major business decisions on drug development.

Physicists Unleash AI to Devise Unthinkable Experiments
Scientific American (03/22/16) Charles Q. Choi

University of Vienna physicists led by Mario Krenn have developed a computer program that can automatically design new quantum experiments that were previously unthinkable. The basis of the MELVIN program's creation was the difficulty Krenn and colleagues encountered in attempting to generate a form of entanglement where three entities shared three properties known as Greenberger-Horne-Zeilinger (GHZ) states. MELVIN uses common building blocks of quantum experiments such as mirrors and holograms and virtually configures them to find non-intuitive arrangements that meet whatever goals researchers desire, such as a specific quantum state. Upon finding a working result, the software automatically simplifies the design and reports it to the researchers. "I started the program in the evening and by the next morning, after a few hundred thousand different trials, it found one correct solution [to the GHZ states problem]," Krenn reports. He says MELVIN incorporates experiential learning, so that "if it found one good solution, it stores the good solution and can use it for follow-up experiments." Krenn's team also determined MELVIN could take sets of entangled particles and revise them so they switched properties with one another in a cyclical manner, which could be helpful in nearly hack-proof quantum cryptography. Moreover, MELVIN arrived at unexpected solutions the researchers were unlikely to have thought up.
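MELVIN's search-and-simplify loop can be sketched with a toy toolbox. Everything below is illustrative: the elements, state representation, and target stand in for MELVIN's optical components and quantum states, which are far more involved.

```python
import random

# Toy toolbox of "optical elements," each a function on a small state tuple.
# These stand in for mirrors, beam splitters, and holograms; the element set,
# state representation, and target are invented for illustration.
TOOLBOX = {
    "swap":   lambda s: (s[1], s[0]),
    "negate": lambda s: (-s[0], s[1]),
    "shift":  lambda s: (s[0] + 1, s[1]),
}

def run(setup, state):
    """Apply a sequence of elements to a starting state."""
    for name in setup:
        state = TOOLBOX[name](state)
    return state

def simplify(setup, target, start):
    """Greedily drop elements that are not needed to reach the target."""
    i = 0
    while i < len(setup):
        trimmed = setup[:i] + setup[i + 1:]
        if run(trimmed, start) == target:
            setup = trimmed      # element i was redundant; keep the shorter setup
        else:
            i += 1
    return setup

def search(target, start=(0, 0), max_len=6, trials=100_000, seed=0):
    """Randomly assemble element sequences until one reaches the target,
    then simplify the first working design before reporting it."""
    rng = random.Random(seed)
    for _ in range(trials):
        setup = [rng.choice(list(TOOLBOX))
                 for _ in range(rng.randint(1, max_len))]
        if run(setup, start) == target:
            return simplify(setup, target, start)
    return None
```

The simplification pass mirrors the reported behavior of automatically shortening a working design before handing it to the researchers.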

Machines Just Got Better at Lip Reading
IEEE Spectrum (03/24/16) Prachi Patel

University of East Anglia researchers Helen Bear and Richard Harvey have developed a new lip-reading algorithm that enhances a computer's ability to distinguish between sounds that look similar on the lips. Their machine-learning algorithm more accurately maps a viseme--the shape a mouth assumes--to one specific phoneme. The algorithm is trained in two steps, by first learning to map a viseme to the multiple phonemes it can represent. The second step involves duplicating the viseme, with each copy trained on just one of the sounds. The data to train the algorithm was culled from audio and video recordings of 12 speakers speaking 200 sentences. Bear employed computer-vision algorithms to extract the shapes of the speakers' mouths, labeled the extracted data with the proper visemes and the audio data with phonemes, and fed it to the training algorithm. The algorithm identifies sounds correctly 25 percent of the time, which Bear says improves on earlier techniques. It also improves word recognition by 5 percent on average across all speakers, which Bear says is a substantial boost given the low accuracy of lip-reading systems developed thus far.
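A toy version of the two-step scheme can make the one-to-many problem concrete. The viseme and phoneme labels below are invented, and the "split" step is reduced to picking the most frequent candidate; the real system trains a separate model per phoneme copy.

```python
from collections import Counter, defaultdict

def train_viseme_model(aligned):
    """Step 1: from (viseme, phoneme) pairs taken from aligned audio/video
    data, learn the one-to-many viseme -> phoneme mapping."""
    model = defaultdict(Counter)
    for viseme, phoneme in aligned:
        model[viseme][phoneme] += 1
    return model

def decode(model, visemes):
    """Step 2 stand-in: each viseme class is 'split' into its candidate
    phonemes, and we pick the candidate seen most often in training."""
    return [model[v].most_common(1)[0][0] for v in visemes]

# Invented training data: one mouth shape can stand for several sounds.
aligned = [("V_bilabial", "p"), ("V_bilabial", "b"), ("V_bilabial", "b"),
           ("V_open", "a"), ("V_open", "a")]
model = train_viseme_model(aligned)
```

Calling `decode(model, ["V_bilabial", "V_open"])` resolves each ambiguous mouth shape to its best-supported phoneme.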

Crowd Control? Baidu Has an Algorithm for That
The Wall Street Journal (03/23/16) Alyssa Abkowitz

Baidu's Big Data Lab has devised an algorithm that can predict crowd formation, which researchers say could be used to warn authorities and individuals of unusually large crowds that threaten public safety. The algorithm correlates data from Baidu Map route searches with the crowd density of the places people search for to anticipate crowd formation at a certain place and time. "Our algorithm is able to use crowd data from Baidu maps to predict how many people will be [at a certain location] in the next two hours," reports Baidu researcher Wu Haishan. Crowd formation traditionally has been predicted with video sensors and computer-vision technology, and Wu says the algorithm raises no privacy complications because its use of aggregated data ensures individual users' anonymity. He also says the research could be shared with product development in the future or made available to local governments, authorities, and venue operators. In addition, it could be made accessible to general users in Baidu Maps, according to Wu. The algorithm illustrates how Baidu is employing its big data to analyze social and economic factors.
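The core idea, treating recent route searches for a destination as a leading indicator of arrivals, can be sketched as follows. The conversion rate, threshold, and data are invented parameters for illustration, not Baidu's model.

```python
from collections import Counter

def predict_arrivals(search_log, conversion_rate=0.3):
    """search_log: destination names from recent map route searches.
    Returns an estimated head count per destination, assuming a fixed
    (invented) fraction of searchers actually travel there."""
    counts = Counter(search_log)
    return {place: counts[place] * conversion_rate for place in counts}

def flag_risky(predictions, current_density, capacity, threshold=0.9):
    """Warn when predicted arrivals plus the current crowd approach a
    venue's capacity."""
    return [place for place, extra in predictions.items()
            if (current_density.get(place, 0) + extra) / capacity[place] > threshold]
```

A venue operator could run `flag_risky` every few minutes over a sliding window of searches to get the two-hour warning described above.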

The Long-Awaited Promise of a Programmable Quantum Computer
Technology Review (03/23/16)

University of Maryland, College Park physicists led by Shantanu Debnath have unveiled a five-quantum-bit (qubit) computer module that can be programmed to run any quantum algorithm and linked to other modules to perform powerful quantum computations involving large numbers of qubits. The new device is based on research conducted over the last 20 years on trapped-ion quantum computers. It comprises five ytterbium ions lined up and corralled in an electromagnetic field. The electronic state of each ion can be controlled by striking it with a laser, which enables each ion to store a bit of quantum information. The charged ions exert a force on each other and vibrate at controllable frequencies that also entangle the ions. By controlling these interactions, physicists can execute the quantum logic operations that form the basis of quantum algorithms. Debnath's team has constructed a self-contained module that can address each ion with a laser and read out the results of the qubit interactions. "As examples, we implement the Deutsch-Jozsa, Bernstein-Vazirani, and quantum Fourier transform algorithms," they note. The researchers also report the module is scalable, and several five-qubit modules can be connected into a much more powerful quantum computer.
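Deutsch-Jozsa, the first algorithm the team cites, decides with a single oracle query whether a function is constant or balanced. A plain state-vector simulation (no relation to the trapped-ion hardware) shows the logic:

```python
from math import sqrt

def hadamard(state):
    """Apply a Hadamard gate to every qubit (fast Walsh-Hadamard transform)."""
    state = list(state)
    h = 1
    while h < len(state):
        for i in range(0, len(state), h * 2):
            for j in range(i, i + h):
                a, b = state[j], state[j + h]
                state[j], state[j + h] = (a + b) / sqrt(2), (a - b) / sqrt(2)
        h *= 2
    return state

def deutsch_jozsa(f, n):
    """Decide whether f: {0,...,2^n - 1} -> {0,1} is constant or balanced
    using a single (simulated) phase-oracle query."""
    dim = 2 ** n
    state = [1.0] + [0.0] * (dim - 1)                        # start in |00...0>
    state = hadamard(state)                                  # uniform superposition
    state = [(-1) ** f(x) * a for x, a in enumerate(state)]  # phase oracle
    state = hadamard(state)                                  # interference
    # Constant f leaves all amplitude on |00...0>; balanced f leaves none there.
    return "constant" if state[0] ** 2 > 0.5 else "balanced"
```

A classical algorithm may need up to 2^(n-1) + 1 queries to decide the same question, which is why this is a standard first demonstration for small quantum computers.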

A Japanese AI Almost Won a Literary Prize
Motherboard (03/24/16) Emiko Jozuka

Hitoshi Matsubara from Japan's Future University announced this week that his research team's short-form novel, co-authored with an artificial intelligence (AI), successfully passed the initial screening of a domestic literary competition. The team provided a framework for the novel by selecting the gender of the characters and outlining the plot, while the AI program combined these elements, choosing from specific sentences and words the researchers had prepared in advance. The novel was entered in the Hoshi Shinichi Literary Award, which accepts creative works from both humans and machines. Science fiction writer Satoshi Hase reports Matsubara's novel was well structured, but the characters were not fully developed. "So far, AI programs have often been used to solve problems that have answers, such as Go and shogi," Matsubara says. "In the future, I'd like to expand AI's potential [so it resembles] human creativity."

Voice-Controlled Calorie Counter
MIT News (03/24/16) Larry Hardesty

Researchers from the Spoken Language Systems Group at the Massachusetts Institute of Technology's (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a Web-based prototype of a speech-controlled nutrition-logging system at this week's International Conference on Acoustics, Speech, and Signal Processing. The system lets users verbally describe the contents of a meal, and then parses the description and automatically retrieves the relevant nutritional data from an online U.S. Department of Agriculture (USDA) database. The data is displayed along with images of the corresponding foods and pull-down menus that enable users to tweak their descriptions, although those changes also can be made verbally. The CSAIL researchers' main focus was on identifying words' functional role and reconciling user phrasing with the entries in the USDA database. The first challenge was tackled via machine learning, using the Amazon Mechanical Turk crowdsourcing platform to assemble about 10,000 labeled meal descriptions and then applying algorithms to extract patterns in the syntactic relationships between words that identify their functional roles. For the second problem, the researchers used the open source Freebase database, which includes synonyms for common food items, to translate between users' descriptions and the labels in the USDA database. "A spoken-language system that you can use with your phone would allow people to log food wherever they are eating it, with less work," reports Tufts University's Susan Roberts.
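The two subproblems, tagging each word's functional role in a meal description and reconciling user phrasing with database entries via synonyms, can be sketched in miniature. The word lists, synonym table, and database entries below are invented; the real system learns its role tagger from thousands of labeled descriptions and draws synonyms from Freebase.

```python
# Invented stand-ins for the learned role tagger and the food databases.
QUANTITY_WORDS = {"a", "two", "bowl", "of", "cup", "slices"}
SYNONYMS = {"oats": "oatmeal", "porridge": "oatmeal", "oj": "orange juice"}
DATABASE = {"oatmeal": {"calories": 150}, "orange juice": {"calories": 110}}

def tag_roles(description):
    """Rough stand-in for the learned tagger: mark each word as naming a
    food or describing a quantity."""
    words = description.lower().split()
    return [(w, "quantity" if w in QUANTITY_WORDS else "food") for w in words]

def lookup(description):
    """Map the food words through the synonym table to canonical database
    labels and return their nutritional entries."""
    foods = [w for w, role in tag_roles(description) if role == "food"]
    canonical = [SYNONYMS.get(w, w) for w in foods]
    return {f: DATABASE[f] for f in canonical if f in DATABASE}
```

So a spoken "a bowl of oats" resolves to the database's "oatmeal" entry even though the user never said that word.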

Survey Finds Most Coders Are Self-Taught
Network World (03/21/16) Andy Patrizio

Most programmers are self-educated and have received little formal training, according to a new Stack Overflow survey of 50,000 coders. A majority--69.1 percent--were self-taught, while 43.9 percent received on-the-job training and 34.8 percent earned a bachelor's degree in computer science. The best-paid programmers were those with a Ph.D., yet they comprised only 2.1 percent of survey respondents. Full-stack Web developers accounted for 28 percent of respondents, and back-end Web developers constituted 12.2 percent. Coders of mobile apps made up 8.3 percent and desktop app developers totaled 6.9 percent, while only 2.2 percent of developers focused on DevOps. Developers under 30 years old made up 59 percent of all programmers, and 92.8 percent of respondents were male versus 5.8 percent female. Oddly, the proportion of female developers was highest in the 20-to-24 age group, at 7.2 percent, declined to 4.2 percent in the 35-to-39 group, and then rose again; coders older than 60 formed the second-largest female cohort at 7.1 percent.

A Computer With a Great Eye Is About to Transform Botany
Wired (03/17/16) Margaret Rhodes

Pennsylvania State University paleobotanist Peter Wilf and colleagues have developed new software for identifying families of leaves. Botanists currently rely on a manual method of identification that has not changed much since it was developed in the 1800s. The leaf architecture method is a painstaking process that involves using an unambiguous, standard set of terms from a large reference book to describe leaf form and venation, and correctly identifying a single leaf's taxonomy can take two hours. The new software, which Wilf developed with Brown University neuroscientist Thomas Serre, combines computer vision and machine-learning algorithms to identify families of leaves in milliseconds. Wilf views the tool as an assistant, given its 72-percent accuracy rate in identifying patterns in leaves and linking them to the families from which they potentially evolved. Wilf and Serre have fed the program 7,597 images of leaves that have been chemically bleached and then stained. Once the software processes these ghost images, it overlays a heat map on them, with red dots marking the importance of different codebook elements--tiny images illustrating some of the 50 different leaf characteristics. Together, the red dots highlight areas relevant to the family to which the leaf may belong. Wilf wants to feed the software tens of thousands of images of unidentified, fossilized plants.

Replacement for Silicon Devices Looms Big With ORNL Discovery
Oak Ridge National Laboratory (03/17/16)

The U.S. Department of Energy's Oak Ridge National Laboratory (ORNL) has developed a processing technique that moves two-dimensional (2D) electronic devices a step closer to delivering on their promise of low power, high efficiency, and mechanical flexibility. The ORNL team used a helium ion microscope, an atomic-scale "sandblaster," on the layered ferroelectric surface of bulk copper indium thiophosphate. The result was a material with tailored properties potentially useful for phones, photovoltaics, flexible electronics, and screens. The helium ion microscope is typically used to cut and shape matter, but the team showed it also can be used to control ferroelectric domain distribution, enhance conductivity, and grow nanostructures. The development could establish a path to replacing silicon as the semiconductor of choice in some applications, where lower power consumption could matter as much as improved battery performance. "Everyone is looking for the next material--the thing that will replace silicon for transistors," says Alex Belianinov of ORNL's Center for Nanophase Materials Sciences. "2D devices stand out as having low power consumption and being easier and less expensive to fabricate without requiring harsh chemicals that are potentially harmful to the environment."

Wrangler Supercomputer Speeds Through Big Data
University of Texas at Austin (03/17/16)

A new kind of supercomputer fills a gap in the supercomputing resources of the Extreme Science and Engineering Discovery Environment, which is supported by the U.S. National Science Foundation (NSF). In 2013, NSF awarded the Texas Advanced Computing Center (TACC) at the University of Texas at Austin and academic partners $11.2 million to build and operate Wrangler, a supercomputer designed for data-intensive high-performance computing. Wrangler works closely with TACC's flagship Stampede supercomputer, ranked 10th most powerful in the world on the twice-yearly Top500 list. Niall Gaffney, who leads TACC's Data Intensive Computing group, says Wrangler offers much of what is good about systems such as Stampede, with additions such as a very large flash storage system, a very large distributed spinning-disc storage system, and high-speed network access. At the heart of the system are 600 terabytes of flash memory shared via PCI (peripheral component interconnect) across Wrangler's more than 3,000 Haswell compute cores. "All parts of the system can access the same storage," Gaffney says. "They can work in parallel together on the data that are stored inside this high-speed storage system to get larger results they couldn't get otherwise."

Beyond Today's Crowdsourced Science to Tomorrow's Citizen Science Cyborgs
The Conversation (03/17/16) Kevin Schawinski

Millions of citizen scientists are pooling their time and brainpower to tackle big scientific problems, but the way in which they contribute to these efforts may be about to change, writes Swiss Federal Institute of Technology professor Kevin Schawinski. He believes machines are starting to become competitive with humans in terms of analyzing images, but they still need our help. "Envision a future where a smart system for analyzing large data sets diverts some small percentage of the data to human citizen scientists to help train the machines," Schawinski says. "The machines then go through the data, occasionally spinning off some more objects to the humans to improve machine performance as time goes on. If the machines then encounter something odd or unexpected, they pass it on to the citizen scientists for evaluation." Schawinski says humans excel at identifying the strange things out there that we do not know about yet, and could help machines in this area as well. "If a machine got confused by something, or just wanted some extra feedback, it could kick the object back to a human for help, and then update itself to deal with similar things in the future," he notes. Schawinski thinks many other fields of science could benefit from this approach.
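The routing loop Schawinski envisions is essentially active learning: the machine labels what it is confident about and passes uncertain or odd items to human volunteers, then learns from their answers. A minimal sketch, using a deliberately trivial nearest-mean classifier and an invented confidence heuristic:

```python
def classify(item, means):
    """Return (best_label, confidence) from distance to per-class means.
    Confidence is the margin between the two closest classes (an invented
    heuristic, not a calibrated probability)."""
    dists = {label: abs(item - m) for label, m in means.items()}
    best = min(dists, key=dists.get)
    others = [d for label, d in dists.items() if label != best]
    margin = min(others) - dists[best] if others else 1.0
    return best, margin

def triage(stream, means, ask_human, threshold=0.5):
    """Label a data stream, routing low-confidence items to human citizen
    scientists and folding their answers back into the model."""
    results = {}
    for item in stream:
        label, conf = classify(item, means)
        if conf < threshold:
            label = ask_human(item)              # route uncertain cases to people
            means[label] = (means[label] + item) / 2   # online update from the answer
        results[item] = label
    return results
```

The key property is that human effort concentrates on exactly the cases the machine finds hard, which is the division of labor the essay describes.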

PolyU Develops Integrated iWheelchair System
Hong Kong Polytechnic University (03/16/16)

A team from Hong Kong Polytechnic University has developed an intelligent system called the iWheelchair that promises to make life easier for users and caregivers. A tablet computer serves as the central operations platform, enabling users to control any home devices connected to the system, such as an electric bed, TV, and electric curtains, via simple touchscreen commands. Users with impairment of hand functions would use a fabric electronic switch that can convert thumb movements into touchscreen commands. The system also integrates safety, health, and hygiene monitoring with automated alert functions. A wheelchair sensor measures vital health signals and sends readings to the computer for instant display and future record, while a fall monitoring function can activate an alarm through the tablet when the user or the wheelchair falls. Fabric sensors monitor posture and alert users to prolonged inactivity that can lead to circulatory conditions such as bedsores, and a smart diaper will buzz when changing is needed. An optional setting is available for sending short-message-service and email alerts to caregivers and family members.
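The alert pipeline described above, sensor readings streaming to the tablet and out-of-range values or a fall triggering notifications, can be sketched as follows. The sensor names, thresholds, and channels are invented for illustration, not the iWheelchair's actual configuration.

```python
# Invented vital-sign ranges: (low, high) acceptable bounds per sensor.
THRESHOLDS = {"heart_rate": (50, 120), "inactivity_min": (0, 120)}

def in_range(name, value):
    low, high = THRESHOLDS[name]
    return low <= value <= high

def process(readings, notify):
    """readings: dict of sensor name -> latest value.
    notify: callable(channel, message) standing in for the tablet's
    alarm, SMS, and email channels. Returns the list of triggered alerts."""
    alerts = []
    for name, value in readings.items():
        if name in THRESHOLDS and not in_range(name, value):
            alerts.append(name)
            notify("caregiver", f"{name} out of range: {value}")
    if readings.get("fall_detected"):
        alerts.append("fall_detected")
        notify("alarm", "fall detected")
    return alerts
```

Each new batch of sensor readings would be passed through `process`, with `notify` fanned out to the caregiver contacts configured in the system.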

Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.

To submit feedback about ACM TechNews, contact:
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe