Association for Computing Machinery
Welcome to the July 13, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE


China Retains Supercomputing Crown in Latest Top 500 Ranking
IDG News Service (07/13/15) Martyn Williams

China's 33,863-teraflop Tianhe-2 supercomputer retained the number-one position it has held for more than two years on the latest Top 500 ranking of supercomputers. The list, compiled by researchers at the University of Mannheim, the University of Tennessee, and Lawrence Berkeley National Laboratory, was released today and saw the number of U.S. supercomputers dip close to an all-time low. The Cray Titan at Oak Ridge National Laboratory and the IBM Sequoia at Lawrence Livermore National Laboratory came in second and third place, respectively. The only new machine in the top 10 is Saudi Arabia's Shaheen II supercomputer, ranked seventh. The Top 500 list is published twice annually; the U.S. has 231 machines on the latest list, near the all-time low of 226 it reached in mid-2002. The aggregate computing power of the listed machines is 361 petaflops, up 31 percent from the same time last year, although growth in total computing power continues to slow. Another trend evident in the new list is the increasing use of graphics-processing-unit-based computing, although Intel's Xeon series of chips remains dominant and is used in the vast majority of listed systems.


The Real Threat Posed by Powerful Computers
The New York Times (07/11/15) Quentin Hardy

Computer specialists are less concerned about the threat of computers becoming intelligent enough to decide to do away with the human race than about the threat of programs rapidly overdoing a single task, with no context, at a global level. "What you should fear is a computer that is competent in one very narrow area, to a bad degree," warns Massachusetts Institute of Technology professor Max Tegmark. To prevent this, the Future of Life Institute has disbursed funding from entrepreneur Elon Musk into research on preventing autonomous systems from going rogue. Allen Institute for Artificial Intelligence CEO Oren Etzioni says most perceived doomsday scenarios assume computers achieve human-like consciousness, when in fact they are much more literal-minded. He says this confusion is rooted in the persistent popularization of artificial intelligence (AI) dating back to the 1950s, when people thought thinking machines "were around the corner." The work of DeepMind, a Google subsidiary, focuses on deep learning, a form of machine learning that entails identifying patterns, suggesting actions, and making predictions. However, it is still automation, not human-like thinking. "People in AI know that a chess-playing computer still doesn't yearn to capture a queen," notes University of California, Berkeley professor Stuart Russell.


UNC Explores Future of Transportation
WNCN.com (07/08/15) Mike Lamia

University of North Carolina (UNC) researchers are working with General Motors (GM) to create data for a more reliable, safer, and less expensive autonomous car. The UNC researchers used $300,000 in funding from GM and $1 million from the U.S. National Science Foundation to determine how to fit more computing power into a car in a realistic setting. The team's goal is to give an autonomous car the reliability of a human's split-second thinking. "If you think about the amount of information that comes to the human brain, the vast majority of it is visual information and a big chunk of our brain is dedicated to this," says UNC professor Alexander Berg. Google has been experimenting with self-driving cars since 2009. Although Google's autonomous vehicles have driven 1.8 million miles since then, they also have been involved in several accidents; Google says all of them were the result of human error, and notes more research is needed. Still, most experts support the development of autonomous vehicle technology and believe the future of ground transportation does not have humans behind the wheel. "Folks may look back at us today and wonder, 'Why did these people ever try to drive these cars themselves?'" says UNC professor Jim Anderson.


Internet Voting Not Ready Yet, but Can Be Made More Secure
IDG News Service (07/10/15) Grant Gross

Online voting systems still lack sufficient security to ensure accurate vote counts, although election officials could take steps to improve such systems' security and transparency, according to a new report commissioned by the U.S. Vote Foundation. The study emphasizes the need for officials considering Internet voting to adopt an end-to-end verifiable Internet voting system (E2E-VIV), which would enable voters to check that the system recorded their votes correctly and included them in the final count, and to double-check the announced electoral outcome. The report stresses the need for online voting systems to be transparent, secure, and usable. Such systems also must ensure the integrity of election data, protect voters' personal information, guarantee voter privacy, make sure only eligible voters vote, and "resist large-scale coordinated attacks, both on its own infrastructure and on individual voters' computers." Report co-author Joseph Kiniry notes security researchers' attitudes about Internet voting are shifting from resistance toward acceptance of the need to correct its problems. He speculates an online voting system featuring E2E-VIV could be ready for use in small-scale elections within five years, while a national U.S. implementation will require the resolution of several issues, including large-scale security, cost, and access to computers for low-income voters.
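The "check that the system recorded their votes" property can be illustrated with a minimal hash-commitment sketch. All names and the bulletin-board structure here are hypothetical illustrations, not the report's design; real E2E-VIV schemes rely on far stronger cryptography (homomorphic tallies, zero-knowledge proofs):

```python
import hashlib
import secrets

def commit_ballot(ballot: str) -> tuple[str, str]:
    """Voter side: commit to a ballot with a random nonce.
    Returns (nonce, receipt); the receipt alone reveals nothing about the vote."""
    nonce = secrets.token_hex(16)
    receipt = hashlib.sha256((nonce + ballot).encode()).hexdigest()
    return nonce, receipt

def verify_inclusion(ballot: str, nonce: str, published: list[str]) -> bool:
    """Voter side: recompute the receipt and check it appears on the
    public bulletin board of recorded ballots."""
    receipt = hashlib.sha256((nonce + ballot).encode()).hexdigest()
    return receipt in published

# The election authority publishes every recorded receipt after the election.
nonce, receipt = commit_ballot("candidate-A")
bulletin_board = [receipt]

print(verify_inclusion("candidate-A", nonce, bulletin_board))  # True
print(verify_inclusion("candidate-B", nonce, bulletin_board))  # False
```

A voter who keeps the nonce can later prove whether the system recorded the ballot as cast, which is the core of the "recorded correctly" check the report describes.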


And Now, the Hopping Robot
Harvard Gazette (07/09/15) Leah Burrows

Using three-dimensional printing, engineers at Harvard University have developed a robot that integrates rigid and soft materials and can move autonomously. The robot's body can transition from soft to hard, which reduces stress where electronic components join the body and increases the robot's resiliency. The body's design was created in one continuous print job using several materials, while an absence of sliding parts or traditional joints means the robot will be less affected by dirt or debris than conventional robots. "The robot's stiffness gradient allows it to withstand the impact of dozens of landings and to survive the combustion event required for jumping," says Harvard graduate student Nicholas Bartlett. The robot has a soft, plunger-like body with three pneumatic legs and a rigid core module that contains power and control components. To initiate movement, the robot inflates its pneumatic legs to tilt its body in the direction it wants to go. Butane and oxygen are mixed and ignited, catapulting the robot into the air up to six times its body height. The robot's jumping ability and soft body are expected to make it well suited to harsh, unpredictable environments and disaster situations.


Collaborative Photography App Allows Smartphones to Record 'Bullet Time'
Technology Review (07/08/15)

Anyone can now create "bullet-time" movies using collaborative photography techniques on their smartphones, making it appear a person is dodging bullets in slow motion. Columbia University researcher Yan Wang and colleagues have developed a mobile app called CamSwarm that enables users to record bullet-time action. The app uses a local Wi-Fi network to coordinate multiple cameras. Each smartphone in the "swarm" must run the app, with one designated as a leader. The app generates a Quick Response (QR) code that others photograph to join the group. The video from all participants is then streamed to a cloud server, which stores the data and helps direct the swarm. The app shows each user their own footage alongside that of the cameras next to them in space, and provides on-screen guidance so users can adjust their spacing and orientation until each camera has a slightly overlapping field of view. It then replays all the footage, letting users select which sequence of camera angles to use to create the bullet-time effect.


Artificial Intelligence Can Improve Radioactive Particles' Detection
University of Stirling (07/07/15) Esther Hutcheson

New research indicates radioactive particles can be detected deeper in the ground using artificial intelligence technology. Researchers at the University of Stirling have developed an algorithm that can more effectively separate potentially dangerous "hot particle" signals from benign, natural ones. Data-analysis techniques enabled the maximum amount of information possible to be extracted from handheld and mobile detection systems. "Particles can be detected on average 10 cm deeper into the ground when compared to conventional systems," says Stirling researcher Adam Varley. "The unique algorithm can, at existing depths, also pick up hazardous particles on lower levels on the radioactive scale than previously possible, enabling more to be identified." The researchers note their breakthrough was developed using background readings taken at Dalgety Bay in Fife, Scotland. The research was funded by the Natural Environment Research Council and conducted in partnership with the Scottish Environment Protection Agency (SEPA) and Nuvia. "Dalgety Bay has brought to the fore the need for alternative techniques to identify radioactive contamination which is buried at depth," says SEPA's Paul Dale. "Any improvements to the detection capability should therefore be welcomed, as it will ultimately reduce the hazards posed to the public."


A Beautiful Algorithm? The Risks of Automating Online Transactions
UVA Today (07/07/15) Caroline Newman

New research into the automation of online advertising has implications for almost any sector using automated algorithms to negotiate an optimum. In theory, automated auctions should arrive at an optimal price via John Nash's equilibrium principle. However, University of Virginia professor Denis Nekipelov and colleagues have found the Nash equilibrium is unrealistic for today's market because bidding strategies cannot remain static long enough to achieve equilibrium. The researchers studied a new approach using learning algorithms--equations that make real-time adjustments based on the results of each auction--and found most advertisers are bidding at 60 percent of their value. The team extrapolated this inefficiency to scenarios in which machines operate with little human intervention; in automated driving, for example, algorithms unable to negotiate an optimum could snarl traffic. Analyzing data from Microsoft's Bing search engine, Nekipelov and his collaborators could predict how different learning algorithms would behave over time, creating a roadmap that will enable people to forecast and even reverse-engineer how a market will behave. "The current results are pretty promising," Nekipelov says. "If the algorithms are reasonable from a computer science standpoint, we can predict what the outcomes will be."
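The bid-shading behavior described above can be sketched with a toy learning bidder. This is a hypothetical illustration, not the team's model: a full-information multiplicative-weights learner repeatedly choosing what fraction of its value to bid in a first-price auction against a uniformly random rival. The learner settles on a bid well below its true value:

```python
import random

def learned_bid_fraction(rounds: int = 10000, eta: float = 0.1) -> float:
    """Multiplicative-weights learner picking what fraction of its (unit)
    value to bid in a repeated first-price auction against a rival whose
    bid is uniform on [0, 1]. Returns the fraction with the most weight."""
    value = 1.0
    fractions = [i / 10 for i in range(1, 11)]  # candidate bid fractions 0.1..1.0
    weights = [1.0] * len(fractions)
    rng = random.Random(0)
    for _ in range(rounds):
        rival = rng.random()
        for i, f in enumerate(fractions):
            bid = f * value
            utility = (value - bid) if bid > rival else 0.0
            weights[i] *= 1.0 + eta * utility  # reward fractions that would have won cheaply
        total = sum(weights)
        weights = [w / total for w in weights]  # normalize to keep weights bounded
    return fractions[max(range(len(fractions)), key=weights.__getitem__)]

print(learned_bid_fraction())  # settles around half the true value
```

Against a uniform rival, expected utility of fraction f is f*(1 - f), maximized at 0.5, so the learner converges to shading its bid to roughly half its value; the point, as in the study, is that per-round learning dynamics, not a static Nash equilibrium, determine where bids end up.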


Researchers Build First Working Memcomputer Prototype
Phys.org (07/06/15) Bob Yirka

Researchers from the University of California, San Diego and Italy's Politecnico di Torino say they have built the first working prototype of a memory-crunching computer (memcomputer). Modern computers solve problems by crunching numbers and storing results in completely separate processes, but a memcomputer performs these functions simultaneously, and thus works more like the human brain. The team developed a new computer design based on what they call memprocessors, which have the ability to change their own properties--such as electrical resistance--depending on factors such as how much energy is passing through. As a result, the processors are able to store information in a new way, even as they continue processing. The team used the prototype to solve the NP-complete version of the subset sum problem, proving that it works. The researchers note the prototype suffers from noise, which prevents it from being scaled up. However, they say improvements could lead to dedicated machines capable of performing individual tasks exceptionally well.
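For context, the subset sum problem the prototype tackled asks whether some subset of a list of numbers adds up exactly to a target. A conventional sequential computer solves it with dynamic programming, whose cost grows with the number of reachable sums (and exponentially with input size in the worst case); that sequential cost is what the memcomputer design aims to sidestep. A standard sketch:

```python
def subset_sum(nums: list[int], target: int) -> bool:
    """Classical dynamic-programming check: does some subset of `nums`
    (non-negative ints) sum exactly to `target`?"""
    reachable = {0}  # sums achievable with the items seen so far
    for n in nums:
        reachable |= {s + n for s in reachable if s + n <= target}
    return target in reachable

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```

The memcomputer's claim is that its memprocessors explore these candidate sums collectively in one step, rather than building the reachable set item by item.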


Study: Smartphone Use May Be Detrimental to Learning
Rice University (07/07/15) Amy McCaig

A year-long study of first-time smartphone users by Rice University and the U.S. Air Force found users believed the devices were detrimental to their ability to learn. The study focused on the self-rated impact of smartphones among 24 first-time users at a major research university in Texas. The participants were given no training on smartphone use and were asked to rate statements on how they thought a smartphone would impact their school-related tasks. The students then received iPhones, and their phone use was monitored during the following year. At the end of the study, the students rated such statements as "My iPhone will help/helped me get better grades." The average answer was 3.71 in 2010, but declined to 1.54 in 2011, where 1 represents "strongly disagree" and 5 represents "strongly agree."  When asked to rate the statement, "My iPhone will distract/distracted me from school-related tasks," the average answer in 2010 was 1.91, but increased to 4.03 in 2011.  "Our research clearly demonstrates that simply providing access to a smartphone, without specific directed learning activities, may actually be detrimental to the overall learning process," says Rice professor Philip Kortum.


Cutting Big Data Down to a Usable Size
University of Illinois at Urbana-Champaign (07/06/15) Claudia Lutz

A recent $1.3-million award from the U.S. National Institutes of Health will enable researchers at the University of Illinois (UI) and Stanford University to develop novel data-compression strategies. UI professor Olgica Milenkovic and Stanford professor Tsachy Weissman are co-principal investigators on the project. Milenkovic says their goal is "to develop a suite of software solutions for the next generation of biological data repositories and labs, which are currently facing enormous challenges with data storage, transfer, visualization, and wrangling." The resulting suite of data compression software will handle several types of genomic data, including DNA sequence data, metagenomic data, quality scores for sequences, and data from gene functional analyses, such as RNA-Seq. Some genomic data sets are linked to a reference such as a genome sequence from a similar species to which DNA or RNA sequences can be compared.  A reference-based algorithm can be used to encode the differences between these sequences and the reference to significantly reduce the size of the data set.  The researchers will explore strategies that combine compression algorithms.  A key part of the project will be developing parallel-processing strategies to decrease the wait time for users of the resulting software.
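The reference-based idea described above can be illustrated with a toy substitution-only encoder. The function names are hypothetical, and real genomic compressors also handle insertions and deletions and entropy-code the differences; the sketch only shows why storing differences against a reference shrinks the data:

```python
def encode_diffs(reference: str, sequence: str) -> list[tuple[int, str]]:
    """Record only the positions where `sequence` differs from the reference.
    Assumes equal-length, substitution-only differences."""
    return [(i, b) for i, (a, b) in enumerate(zip(reference, sequence)) if a != b]

def decode_diffs(reference: str, diffs: list[tuple[int, str]]) -> str:
    """Reconstruct the original sequence from the reference plus the diffs."""
    seq = list(reference)
    for i, base in diffs:
        seq[i] = base
    return "".join(seq)

ref = "ACGTACGTACGT"
seq = "ACGTACCTACGA"
diffs = encode_diffs(ref, seq)
print(diffs)  # [(6, 'C'), (11, 'A')] -- two entries instead of twelve bases
assert decode_diffs(ref, diffs) == seq
```

Because closely related genomes differ in only a small fraction of positions, the diff list is far smaller than the sequence itself, which is the core saving a reference-based algorithm exploits.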


Smart Innovations to Enhance Citizens' Health Care, Quality of Life
Government Technology (07/02/15) Nicola Davies

Experts at the 2015 Fourth International SMART Conference presented 19 technical papers on the use of smart technology, sensors, and other specialized technologies, which increasingly are being used to improve people's health and well-being. The technologies range from increasingly ubiquitous fitness trackers to more specialized devices and solutions. Some of the solutions include the use of personal mobility vehicles, driving controls, geolocators for people with mental health issues, and transportation enhancements for the visually impaired. The constant stream of new medical smart devices is being driven by an influx of people into cities and the growing population of older people, many of whom are living with chronic medical conditions that smart technology can help to manage. Although many people also are living longer with once-fatal conditions, there are still major limitations with medical technology. One of these, according to researcher Renata Souza, is that many medical systems lack the ability to talk to one another. For example, Souza says this makes entering medical test data into a patient's electronic medical record more difficult than it needs to be. There also are security fears surrounding smart health care devices. "The acceptability of smart technology hinges upon the assurance that the security and privacy of health information gathered through the use of these smart devices will be respected and protected," Souza says.


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe