Association for Computing Machinery
Welcome to the December 16, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE


How Artificial Intelligence Could Change the Way We Watch Sports
The Washington Post (12/15/15) Dominic Basulto

Machine learning and computer vision are being combined to provide commentary on professional sporting events as they actually occur. For example, Indian researchers demonstrated that weakly supervised computers could reliably recognize what is happening in videos of cricket matches and then provide text-based commentary. Their research involved analyzing many hours of cricket videos, placing them in categories based on already available text descriptions, breaking the longer videos into smaller scenes so each video shot could be classified, and then using an algorithm to find matching commentary. Using visual-recognition methods, the algorithms were subsequently able to accurately label a batsman's cricketing shot, an action that sometimes lasted less than two seconds. The same research team also analyzed how a computer might deconstruct the action of a tennis match, and presented a study on how machine-learning algorithms could be used to provide text commentary on tennis tournaments. Machines' ability to process views of the action from many angles simultaneously also could enhance the assessment of game plays, although for now the technology is intended for training and coaching. The researchers note the video annotation enables computers to search across hundreds of hours of content for specific actions that last only seconds.
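To make the pipeline concrete, here is a minimal, hypothetical sketch in Python of the retrieve-and-label idea: classify a video shot from a feature vector, then pull a matching line of commentary from a pre-labeled corpus. The feature extraction (the computer-vision step), the action labels, and the commentary text are all stand-ins, not the researchers' actual system.

```python
# Hypothetical sketch: label a shot by nearest class centroid, then retrieve
# commentary text associated with that label. Features are assumed to come
# from an upstream vision model.
import numpy as np

actions = ["cover drive", "pull shot", "defensive block"]
centroids = np.random.rand(len(actions), 128)   # stand-in for learned per-class features

commentary = {
    "cover drive": "Beautifully driven through the covers for four!",
    "pull shot": "Short ball, pulled away with authority.",
    "defensive block": "Played back solidly to the bowler.",
}

def describe_shot(shot_features: np.ndarray) -> str:
    """Label a shot by nearest centroid and return a matching commentary line."""
    distances = np.linalg.norm(centroids - shot_features, axis=1)
    label = actions[int(np.argmin(distances))]
    return commentary[label]

print(describe_shot(np.random.rand(128)))
```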


Stephen Wolfram Aims to Democratize His Software
The New York Times (12/14/15) Steve Lohr

Software pioneer and Wolfram Research founder Stephen Wolfram wants to make his technology and software philosophy available to a wider audience, including computing novices such as students and children. To this end, Wolfram is offering a version of his Wolfram Language and development tools as a free cloud service. He also has published a free online book, called "An Elementary Introduction to the Wolfram Language," as part of his goal to "make what can be done with computation as broadly accessible as possible," he says. "You want the human to have to specify as little as possible, by putting as much intelligence into the language as possible." Wolfram hopes the free cloud offering will someday enable "random kids [to] build things that only people with the fanciest tools could in the past." For researchers, Wolfram Language's automation enables them to concentrate on the scientific challenges they want to tackle, without having to perform a lot of boilerplate programming, according to Raspberry Pi CEO Eben Upton. Wolfram Language is one of five programming languages distributed with the Raspberry Pi computer, along with Python, Scratch, C++, and Java.


Quantum Computing May Be Moving Out of Science Fiction
Computerworld (12/15/15) Sharon Gaudin

True quantum computing could be realized relatively soon, according to some industry analysts. The U.S. National Aeronautics and Space Administration (NASA) and Google recently announced their D-Wave 2X quantum computer solved an optimization problem 100 million times faster than a conventional computer running a single-core processor. The research is part of their agenda to advance artificial intelligence and machine learning with the D-Wave computers. "D-Wave is at the head of the pack because they actually have a computer built," notes IDC analyst Steve Conway. "Some say it's not a real quantum computer, but Google and NASA think it's something worth testing." In another sign of progress, last week IBM announced the U.S. Intelligence Advanced Research Projects Activity program awarded it a multi-year research grant to further its push to construct a quantum system. Nevertheless, Conway expects a long time to pass before quantum computing is commercialized, while Pund-IT analyst Charles King thinks serious advances may arrive within five to 10 years, partly thanks to IBM's work. "It seems to me that with more energy and funding behind quantum projects than ever before, there's a real chance of building sustainable momentum around the technology," King says. "If that occurs, in a few years we'll be talking about systems that are the stuff of science fiction."


XPRIZE Offers $7M Purse to Unlock Mysteries of the Sea
National Geographic News (12/15/15) Andrew Kornblatt

XPRIZE has launched a three-year global competition focused on ocean mapping. The competition will use a $7-million prize purse, in addition to the title of XPRIZE Winner, to attract marine scientists, geologists, and hobbyists to drive ocean exploration. "This competition is technically challenging, but it is also very interdisciplinary," says XPRIZE senior director Jyotika Virmani. "It involves underwater robotics, it involves computer science, there is a digital imagery component to it. We expect a number of different approaches to this." Competition participants must complete a series of tasks via devices that must be launched from the shore or air, and can operate at a depth of up to 4,000 meters. Specific tasks include making a high-resolution map of the sea floor, taking high-definition images of objects, and identifying key features in a type of "treasure hunt." In addition, there is a bonus $1-million challenge for technology that can monitor chemical and biological compounds in the water column. The mapping challenge is the third in a series of five multi-million-dollar prizes the XPRIZE foundation has promised to launch by 2020 to address critical ocean challenges and inspire innovation.


Algorithm Set to Save Millions in Energy Costs
The Stack (UK) (12/14/15) Alice MacGregor

Nanyang Technological University (NTU) researchers have developed an algorithm that could help companies cut their energy bills by as much as 10 percent. The algorithm is designed to access data from existing sensors in computer chips, servers, air conditioning systems, and industrial equipment, which means organizations will not need to update their information technology hardware to benefit from the cost savings. Once connected to the sensors, the algorithm can analyze their operational data and recommend energy-saving solutions. "We can find out exactly how much cooling a room needs, whether there is an oversupply of cooling, and [how to] adjust the air flow and temperature to achieve the best balance," says NTU researcher Ted Chen. He also notes the innovation could help some companies reduce their carbon footprint and energy consumption. "With NTU's new analytic engine...large semiconductor factories and campuses could save up to [$700,000] a year without a need to change much of their hardware," Chen says.
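As a rough illustration of what such an analysis might look like (an assumption-laden sketch, not NTU's analytic engine), the following Python snippet compares sensor temperature readings against a target band and flags rooms that are overcooled or undercooled:

```python
# Hypothetical setpoint and tolerance; readings stand in for data pulled from
# sensors already present in servers and HVAC equipment.
room_target_c = 24.0
tolerance_c = 1.0

readings = {"server_room_A": 20.5, "server_room_B": 23.8, "office_3F": 26.1}

for room, temp_c in readings.items():
    if temp_c < room_target_c - tolerance_c:
        print(f"{room}: overcooled by {room_target_c - temp_c:.1f} C -> reduce cooling")
    elif temp_c > room_target_c + tolerance_c:
        print(f"{room}: undercooled by {temp_c - room_target_c:.1f} C -> increase cooling")
    else:
        print(f"{room}: within target band")
```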


User Error Compromises Many Encrypted Communication Apps
Technology Review (12/14/15) Rachel Metz

University of Alabama at Birmingham researchers recently conducted a study testing how well smartphone apps that aim to ensure secure communication actually do their jobs. Such apps may ask people calling or texting each other to verbally compare a short string of words they see on their screens to make sure a new communication session has not been breached by an intruder. The researchers asked participants to use a Web browser to place a call to an online server, listen to a random two- or four-word sequence, and determine whether it matched the words shown on the computer screen in front of them. The participants also were asked to verify whether the voice they heard was the same voice they had previously heard reading a short story. The researchers found the participants frequently accepted calls even when they heard the wrong sequence of words, and often rejected calls when the sequence was spoken correctly. Unexpectedly, the researchers found that using a four-word sequence rather than a two-word sequence decreased security.
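The general mechanism under study is a short authentication string: both ends derive a few words from the session's cryptographic material, and the users compare them aloud. The sketch below is a generic illustration with a made-up word list and derivation, not any particular app's scheme.

```python
# Generic short-authentication-string sketch (hypothetical word list and
# derivation): hash the session key and map digest bytes to words.
import hashlib

WORDS = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot", "golf", "hotel"]

def checksum_words(session_key: bytes, n_words: int = 2) -> list:
    digest = hashlib.sha256(session_key).digest()
    # Each digest byte indexes into the word list.
    return [WORDS[digest[i] % len(WORDS)] for i in range(n_words)]

# Both parties should see the same words; a man-in-the-middle holding a
# different key would produce a mismatch the users are supposed to notice.
print(checksum_words(b"example-session-key"))
print(checksum_words(b"attacker-session-key"))
```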


Landmark Algorithm Breaks 30-Year Impasse
Quanta Magazine (12/14/15) Erica Klarreich

Computer scientists are calling a new algorithm a breakthrough in mapping how hard computational problems are to solve. Developed by University of Chicago theoretical computer scientist Laszlo Babai, the new algorithm for the "graph isomorphism" problem is significantly more efficient than the previous best offering, which held the record for more than 30 years. The graph isomorphism question asks whether two networks that look different are really the same. The problem is easy to state, but tricky to solve, because even small graphs can be made to look very different just by moving their nodes around. Announced in November, the algorithm showed that highly symmetrical "Johnson graphs" were the only case its painting scheme did not cover. The new algorithm moves graph isomorphism much closer to P than ever before, according to Babai. He says it is quasi-polynomial, which means for a graph with n nodes, the algorithm's running time is comparable to n raised not to a constant power, but to a power that grows very slowly. Babai has submitted a paper on his work to ACM's 48th Symposium on Theory of Computing (STOC 2016), which takes place June 18-21, 2016, in Cambridge, MA.
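For readers unfamiliar with the problem, the brute-force check below (factorial time, nothing like Babai's quasi-polynomial method) makes the question concrete: two graphs are isomorphic if some relabeling of the nodes turns one edge set into the other.

```python
# Brute-force graph isomorphism check: try every relabeling of n nodes.
from itertools import permutations

def isomorphic(edges_a, edges_b, n):
    """Graphs are given as lists of undirected edges over nodes 0..n-1."""
    a = {frozenset(e) for e in edges_a}
    b = {frozenset(e) for e in edges_b}
    if len(a) != len(b):
        return False
    for perm in permutations(range(n)):
        relabeled = {frozenset((perm[u], perm[v])) for u, v in a}
        if relabeled == b:
            return True
    return False

# A 4-node path drawn two different ways is still the same graph.
print(isomorphic([(0, 1), (1, 2), (2, 3)], [(2, 0), (0, 3), (3, 1)], 4))  # True
```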


Georgia Tech Researchers Demonstrate How the Brain Can Handle So Much Data
Georgia Tech News Center (12/15/15) Tara La Bouff

Researchers at the Georgia Institute of Technology (Georgia Tech) have found people can categorize data using less than 1 percent of the original information, validating a commonly used machine-learning technique. The algorithmic theory of random projection is widely used in machine-learning applications designed to handle large amounts of diverse data. A new study from the Georgia Tech researchers is the first to examine random projection with human subjects. For the study, human subjects were presented with 150-by-150-pixel abstract images, and then shown very small "random sketches" of these images and asked to identify them. The researchers found the human subjects were able to identify the images in these random projection tests when presented with only 0.15 percent of the total data. The researchers then subjected a computational algorithm running on an artificial neural network to the same tests and found it performed as well as the human subjects. Georgia Tech professor Santosh Vempala says the team was surprised at how closely the algorithm's performance mirrored that of the human subjects. "The design of neural networks was inspired by how we think humans learn, but it's a weak inspiration," Vempala says. "To find that it matches human performance is quite a surprise."
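A minimal sketch of the random-projection idea, with illustrative dimensions and data rather than the study's actual stimuli: compress a 150-by-150-pixel image (22,500 values) down to a few dozen random linear measurements, then identify a noisy query image by nearest neighbor in the compressed space.

```python
# Random projection sketch: identify images from ~0.15% of the original data.
import numpy as np

rng = np.random.default_rng(0)
d_full = 150 * 150          # original dimensionality
d_sketch = 34               # roughly 0.15 percent of the original data

projection = rng.standard_normal((d_sketch, d_full)) / np.sqrt(d_sketch)

# Three "known" abstract images and one noisy query derived from the first.
images = rng.standard_normal((3, d_full))
query = images[0] + 0.1 * rng.standard_normal(d_full)

sketches = images @ projection.T            # compressed representations
query_sketch = query @ projection.T

nearest = int(np.argmin(np.linalg.norm(sketches - query_sketch, axis=1)))
print("query matched image", nearest)       # expected: 0
```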


Stunning Diversity of Gut Bacteria Uncovered by New Approach to Gene Sequencing
Stanford University (12/14/15) Jennie Dusheck

Computer scientists and geneticists at Stanford University have teamed up to create a gene-sequencing technique that combines new computational approaches with "long-read" DNA sequencing. Current technology examines very short snippets of DNA sequences, and assembling the snippets into an entire genome has been compared to assembling a jigsaw puzzle. The new informatics approach can assemble snippets from a mass of different bacteria, which the researchers say is akin to assembling 100 jigsaw puzzles from a single pile of pieces from all 100 puzzles jumbled together. The Stanford team tested its algorithm on a standardized sample of bacteria and then on the gut contents of a human male. The approach revealed a far more diverse community of bacteria in the gut than the researchers had anticipated. The researchers say the technology should make it easier to construct the evolutionary history of strains of infectious bacteria or viruses. Stanford professor Michael Snyder compares using long reads to watching an IMAX movie, and current approaches to an old black-and-white TV.


What the World Will Be Like in 30 Years, According to the U.S. Government's Top Scientists
Tech Insider (12/10/15) Paul Szoldra

Three U.S. Defense Advanced Research Projects Agency (DARPA) researchers predict major technological advancements during the next 30 years in a new video series. Machine operation via thought control is one such advance anticipated by Justin Sanchez with DARPA's Biological Technologies Office. "Think about controlling different aspects of your home just using your brain signals, or maybe communicating with your friends and your family just using neural activity from your brain," he speculates. Among the innovations DARPA is developing in this area are neurotechnologies such as brain implants controlling prosthetic limbs. Meanwhile, Stefanie Tompkins with DARPA's Defense Sciences Office envisions the construction of extremely strong but lightweight structures. The third scientist, Pam Melroy with DARPA's Tactical Technologies Office, expects a transformation in human-machine interaction by 2045. "I think that we will begin to see a time when we're able to simply just talk or even press a button" to engage with a machine to execute tasks more intelligently, instead of using keyboards or primitive voice-recognition systems, she predicts. "Our world will be full of those kinds of examples where we can communicate directly our intent and have very complex outcomes by working together," Melroy says.


MIT's Amazing New App Lets You Program Any Object
Fast Company (12/10/15) John Brownlee

Valentin Heun from the Massachusetts Institute of Technology's Fluid Interfaces Lab envisions the Internet of Things empowering people to have more control over the world around them. Heun has developed an augmented reality app, the Reality Editor, that enables smart objects to talk to each other and perform multiple tasks. For example, the app's "Minority Report"-style overlay would enable users to raise the heat in their house when they get out of bed in the morning: they would trace a finger from a virtual circuit that raises the temperature of the smart thermostat to a circuit on their bed that detects when they climb in or out. When an object does not offer the functionality users want, they can link it to another object that does; a similar pairing would let them turn down the volume on the TV when they dim the lights. Although the smartphone app is already available for download, it faces a lack of support, as no consumer products yet work with Open Hybrid, its free associated developer platform.


ESnet at 30: Evolving Toward Exascale and Raising Expectations
HPC Wire (12/10/15) Tiffany Trader

Next year will mark the 30th anniversary of the U.S. Department of Energy's mission network, the Energy Sciences Network (ESnet). The network supplies high-bandwidth, reliable links that connect researchers at national laboratories, universities, and other research institutions so they can collaborate on key scientific challenges in the fields of energy, climate science, and the origins of the universe. "The great vision that we have for networks is not only as a scientific instrument in their own right, but that they can glue together big scientific instruments like a particle accelerator or a light source and a computational facility," says ESnet director Greg Bell. Keeping pace with the growth of data is the network's most fundamental challenge: over the last 25 years ESnet's average traffic has expanded 10-fold every 47 months, and the network moved 36 petabytes of traffic in November alone. Bell notes the data's source has shifted from mainly very large experiments to smaller and less-expensive projects. "It actually is a tremendous challenge to engineer the network so it can grow cost-effectively," he says. Nevertheless, ESnet has kept pace with this exponential data growth. Its current iteration is a 100-Gbps transcontinental and transatlantic network, and Bell and his staff are eyeing software-defined networking to improve network utilization, along with networking-layer consolidation, for incorporation into the next iteration.
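As a back-of-the-envelope check on the growth figure quoted above, a 10-fold increase every 47 months compounds to a few-million-fold increase over 25 years:

```python
# Compound the quoted growth rate over 25 years.
months = 25 * 12
growth = 10 ** (months / 47)
print(f"{months} months at 10x per 47 months -> about {growth:,.0f}x total growth")
```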


Quantum Computers Entice Wall Street Vowing Higher Returns
Bloomberg (12/09/15) Jack Clark

Wall Street is interested in quantum computers, with Goldman Sachs, Royal Bank of Scotland, and other firms currently assessing the technology. If successful, these machines could give money managers and banks an edge in a highly competitive market by helping them better allocate money across a wide range of assets, find new ways to reap profits from differences in prices across markets, and value complex derivatives structures, according to Guggenheim Partners' Marcos Lopez de Prado. "The quantum computer is very good at a few things that happen to be very, very hard for traditional computers," he notes. "It can solve complex problems in exponentially less time." For example, Google engineer Hartmut Neven reports a new algorithm developed by his company can crunch through very complex problems in seconds, while traditional computers would need 10,000 years to solve the same problems. De Prado estimates quantum computers will cost at least $10 million each. He and other researchers noted in a recent study that quantum computing could potentially help asset managers bet on a series of assets over a time horizon divided into multiple steps. "Quantum technology can evaluate all possible scenarios in order to produce an optimized portfolio that can deliver the best risk-adjusted return," de Prado says.
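The "evaluate all possible scenarios" framing can be illustrated with a toy brute-force search over asset combinations (a deliberately simplified sketch with made-up numbers, not Guggenheim's or Google's method); quantum annealers are pitched at much larger versions of exactly this kind of combinatorial selection problem.

```python
# Toy exhaustive search: score every non-empty subset of a few assets by a
# crude risk-adjusted-return measure and keep the best one.
from itertools import combinations

# Hypothetical (expected return, risk) pairs for four assets.
assets = {"A": (0.08, 0.12), "B": (0.05, 0.04), "C": (0.12, 0.25), "D": (0.07, 0.09)}

best = None
for r in range(1, len(assets) + 1):
    for combo in combinations(assets, r):
        ret = sum(assets[a][0] for a in combo) / len(combo)
        risk = sum(assets[a][1] for a in combo) / len(combo)
        score = ret / risk                       # crude risk-adjusted return
        if best is None or score > best[0]:
            best = (score, combo)

print("best combination:", best[1], "score:", round(best[0], 2))
```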


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe