Association for Computing Machinery
Welcome to the December 2, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

ACM TechNews mobile apps are available for Android phones and tablets, as well as for iPhones and iPads.

HEADLINES AT A GLANCE


Six Seconds to Hack a Credit Card
Newcastle University (UK) (12/02/16)

New research by a team from the U.K.'s Newcastle University demonstrates that hackers can compromise the credentials of any Visa debit or credit card in as little as six seconds through guesswork alone. The Distributed Guessing Attack automatically and systematically generates different variations of a card's security data and spreads the guesses across multiple websites. Newcastle postdoctoral student Mohammed Ali says the attack exploits the fact that the current online payment system does not spot multiple invalid payment requests arriving from different websites. Moreover, different sites ask for different variations of the card data fields to confirm an online purchase, making it easier to accumulate this data and assemble the correct card details one field at a time. The Newcastle team says such hacks are "frighteningly easy if you have a laptop and an Internet connection." Ali notes attackers use online payment websites to guess the data, and the reply to each transaction confirms whether a guess is correct. The researchers also learned that only the Visa network is susceptible to this form of attack; MasterCard's centralized network can identify such exploits after fewer than 10 attempts.
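The power of confirming one field at a time can be shown with simple arithmetic. The sketch below is illustrative only, not the researchers' code, and the field sizes are assumptions: roughly 60 plausible expiry month/year combinations and 1,000 possible three-digit CVV codes.

```python
# Illustrative arithmetic only (not the researchers' code): guessing card
# fields one at a time is additive, while guessing them jointly is
# multiplicative. Field sizes are assumptions (~60 expiry options,
# 1,000 three-digit CVV codes).

def guesses_needed(expiry_options=60, cvv_options=1000):
    """Worst-case guess counts when fields are confirmed separately vs. jointly."""
    one_field_at_a_time = expiry_options + cvv_options
    all_fields_jointly = expiry_options * cvv_options
    return one_field_at_a_time, all_fields_jointly
```

At worst about 1,060 sequential guesses instead of 60,000 joint ones, which a few dozen payment sites each tolerating a handful of attempts can absorb in seconds.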


Big Data Analytics--Nostradamus of the 21st Century
Griffith University (11/30/16) Stephanie Bedo

Researchers from Australia's Griffith University accurately predicted who would win 49 of the 50 states in the U.S. presidential election using social media comments and data. The researchers note the massive amount of information online concerning the election provided a rich source of data about what people were thinking and feeling about the election. "My algorithms showed clearly to me that based on past patterns and sentiment in social media that [Donald] Trump, by Nov. 8, would take over the lead, despite only having a 10-percent chance to win according to all polls at that time," says Griffith professor Bela Stantic, director of the Big Data and Smart Analytics lab within the university's Institute for Integrated and Intelligent Systems. The day before the election, Stantic correctly predicted Trump would win the swing states of Florida, North Carolina, and Pennsylvania, even though polls indicated Hillary Clinton was an 84-percent favorite. Stantic says people are likely to be more honest when telling friends rather than answering polls. "It is scary how accurate prediction can be done by analyzing social media," he notes. The researchers want to improve the predictive power of big data analytics by developing smarter and faster deep-learning algorithms to analyze large volumes of data drawn from diverse sources.
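As a toy illustration of the general approach (this is not Stantic's algorithm; the word lists and posts below are invented), sentiment can be tallied per candidate by scoring each post's words against positive and negative lexicons:

```python
# Minimal per-candidate sentiment tally; lexicon and data are invented.
POSITIVE = {"win", "great", "support", "love"}
NEGATIVE = {"lose", "bad", "against", "hate"}

def sentiment_scores(posts):
    """Map each candidate to a net sentiment score from (candidate, text) pairs."""
    scores = {}
    for candidate, text in posts:
        words = text.lower().split()
        net = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        scores[candidate] = scores.get(candidate, 0) + net
    return scores
```

A real system layers deep-learning models, sarcasm handling, and bot filtering on top, but the aggregation step looks broadly like this.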


New Supercomputer Will Unite x86, Power9, and ARM Chips
IDG News Service (11/30/16) Agam Shah

Spain's Barcelona Supercomputing Center (BSC) is building the MareNostrum 4, an experimental high-performance computer that will integrate x86, ARM, and Power9 chips in three clusters bound together to deliver a maximum of 13.7 petaflops. Linux's support of the three chip architectures makes it possible to write cross-architectural applications. In addition, new networking and throughput interfaces such as Gen-Z and OpenCAPI will enable the installation of multi-architecture servers in a single data center. Scott Tease, executive director of Lenovo's HyperScale and High-Performance Computing group, says MareNostrum 4 will share common networking and storage components. The system will incorporate Lenovo server cabinets with Intel's current Xeon Phi supercomputing chip code-named Knights Landing, and a forthcoming chip code-named Knights Hill. It also will be equipped with racks of computing nodes with IBM Power9 chips scheduled to ship in 2017. ARM's contribution to the system will be a new high-performance computing chip with vector processing capabilities. The supercomputer, which will have a storage capacity of 24 petabytes and will be implemented in phases, will replace the MareNostrum 3.


Why a Hacker Is Giving Away a Special Code That Turns Cars Into Self-Driving Machines
The Washington Post (11/30/16) Elizabeth Dwoskin

The founder of startup Comma.ai has released a free software kit to the developer community in an effort to accelerate autonomous vehicle technology without running afoul of regulators. Comma.ai founder and hacker George Hotz shifted strategy after his original plan to sell the do-it-yourself self-driving hardware/software kit was complicated by the U.S. National Highway Traffic Safety Administration (NHTSA) demanding details about the product's safety. "We want to be the Android operating system for self-driving cars," Hotz says. The code, available on GitHub, enables anyone to build a 3D-printable, dashboard-mounted camera device that plugs into their car's controller area network. When activated, the vehicle enters an autopilot mode, enabling the driver to take their hands off the wheel and the accelerator. The car also will stay in its lane and brake by itself, with the camera scanning the road. Despite warnings from NHTSA and other watchdogs, Hotz says the kit is exempt from self-driving regulations, and he dismisses claims the product could enable hackers to break into cars. "This [tool] is for tinkerers," he says. "These are people who, if they wanted to do bad things, they already could."


Optical Addressing Method for Full-Color 3D Display
SPIE Newsroom (11/25/16) Ryuji Hirayama; Atsushi Shiraki; Takashi Kakue; et al.

A new optical addressing technique enables full-color, three-dimensional (3D) images to be displayed via position- and color-selective transformations in the volume of photochromic materials (PMs). The system is based on PMs that are initially transparent because of their molecular structure. Irradiating the molecules with ultraviolet (UV) light alters this structure and its absorption properties to introduce color, and the PMs can be reverted to their initial transparent state via irradiation with visible light (Vis) of the appropriate wavelength. The volume space in which the 3D images are generated combines multiple PMs that assume different colors--cyan, magenta, and yellow--when struck with UV light. Position-selective coloration of the PMs can be induced at any depth by guiding the UV-irradiation pattern incident on the material, while color selectivity is determined by spectrum-dependent decoloration via Vis. A proposed volumetric display system incorporating the demonstrated PM principles has UV/Vis irradiation synchronously controlled by two spatial light modulators (SLMs). The UV SLM sweeps a two-dimensional image rendered by the Vis SLM along the depth direction; voxels irradiated by white light, or not irradiated by UV light, remain colorless, while the color of the other voxels is determined by the intensity ratio of the three primary colors in the Vis.


Four Million Commutes Reveal New U.S. 'Megaregions'
National Geographic News (11/30/16) Betsy Mason

Geographers from Dartmouth College and the U.K.'s University of Sheffield are providing new insight into the geography of commuter megaregions in the U.S. by combining visual interpretations and algorithmic analysis. Megaregions, or clusters of interconnected cities, map out complex networks in which economies and infrastructure are linked. Understanding geographic connections and boundaries is vital for lawmakers, economists, and urban planners developing policy and managing transportation. The study examined 4 million commutes using census data, and a visual approach filtered commutes based on distance, with 50 miles as the top threshold. The visual data provided a depiction of centers of employment and suburban commutes, but it was difficult to determine the boundaries of a commuter region or identify statistically significant connections. Researchers then turned to an algorithm-based tool designed by the Massachusetts Institute of Technology's Senseable City Lab. The algorithm relies on the strength of the connectivity between nodes or communities, and does not take into consideration geographic proximity. The researchers eliminated outliers from the algorithmic analysis, such as nodes with very weak connections and long commutes between New York City and Los Angeles; the two approaches then were combined to create the final boundaries around the U.S.'s 50 megaregions.
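The 50-mile filtering step described above can be sketched as follows (a simplified illustration, not the study's code; the coordinates in the usage note are invented):

```python
import math

def haversine_miles(p, q):
    """Great-circle distance between two (lat, lon) points, in miles."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3959 * math.asin(math.sqrt(h))  # Earth radius ~3,959 miles

def filter_commutes(commutes, max_miles=50):
    """Keep only (home, work) coordinate pairs within the distance threshold."""
    return [(home, work) for home, work in commutes
            if haversine_miles(home, work) <= max_miles]
```

A cross-country "commute" such as New York to Los Angeles (roughly 2,400 miles) is dropped by this filter, matching the outlier removal the researchers describe.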


The Biggest Challenge to Diversifying Tech Talent
CNet (11/30/16) Erin Carson

Progress in diversifying the technology industry has been steady but sluggish, and it is a challenge to maintain the movement's traction, according to experts. An Accenture report estimated women alone could miss out on $299 billion by 2025 if more serious diversity measures are not taken, while the American Institute for Economic Research found salaries for Asian, Black, and Hispanic tech workers trail those of their white peers. "People shouldn't have the expectation that next year it's going to be parity," says the Anita Borg Institute's Elizabeth Ames. She looks for signs of progress in the issuance of corporate diversity reports, concentrating on new hires, employee retention, and the number of women and minorities in leadership positions. However, Harvey Mudd College president Maria Klawe, a former president of ACM, says a 1-percent increase in female or minority headcount at tech companies usually reflects a lack of serious effort, which leads to demoralization. Still, for businesses with a large existing workforce, 1 percent represents a significant gain, says Intel's Danielle Brown. National Center for Women and Information Technology researcher Catherine Ashcraft says strategic planning--which entails funding, buy-in from the executive level, and the ability to measure and adjust progress--is essential to successful diversity initiatives. Brown says culture is another factor that must be considered to encourage retention.


An AI Ophthalmologist Shows How Machine Learning May Transform Medicine
Technology Review (11/29/16) Will Knight

Google computer scientists and medical researchers from the U.S. and India have demonstrated that an algorithm can detect diabetic retinopathy as well as a highly trained ophthalmologist can. Unlike existing ophthalmology software, the algorithm was not explicitly programmed to recognize features in images that might indicate the common eye disease. The team designed the algorithm to look at thousands of healthy and diseased eyes, and then determine for itself how to spot the condition. The researchers trained the algorithm on a set of 128,000 retinal images classified by at least three ophthalmologists. The team then tested its ability to identify the condition and grade the severity on 12,000 images. The researchers found the algorithm matched or exceeded the work of experts in the performance of such tasks. "One of the most intriguing things about this machine-learning approach is that it has potential to improve the objectivity and ultimately the accuracy and quality of medical care," says professor Michael Chiang at Oregon Health & Science University's Casey Eye Institute. University of Toronto professor Brendan Frey says researchers will need to develop machine-learning systems that can explain how they reached a particular conclusion.
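How such an algorithm is scored against expert labels can be sketched with the standard sensitivity and specificity metrics (illustrative only; the example data in the test is invented, not the study's results):

```python
def sensitivity_specificity(predicted, actual):
    """Compute (sensitivity, specificity) from boolean labels,
    where True means diabetic retinopathy is present."""
    tp = sum(p and a for p, a in zip(predicted, actual))          # true positives
    tn = sum(not p and not a for p, a in zip(predicted, actual))  # true negatives
    fp = sum(p and not a for p, a in zip(predicted, actual))      # false positives
    fn = sum(not p and a for p, a in zip(predicted, actual))      # false negatives
    return tp / (tp + fn), tn / (tn + fp)
```

Sensitivity (the share of diseased eyes caught) and specificity (the share of healthy eyes correctly cleared) are the usual axes on which such an algorithm is compared with ophthalmologists.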


It's No Christmas No. 1, but AI-Generated Song Brings Festive Cheer to Researchers
The Guardian (11/29/16) Ian Sample

Researchers from Canada's University of Toronto have developed "neural karaoke," a system that can take any digital photograph and transform it into a computer-generated song. Neural karaoke is based on a broader research effort to use software to make music, write lyrics, and generate dance routines. The researchers trained a neural network on 100 hours of online music, after which it was able to analyze a musical scale and melodic profile, and produce a simple 120-beats-per-minute melody before adding chords and drums. The researchers also trained the system on an hour of footage from the video game "Just Dance." The network tracked human poses and learned to connect moves with music. The system also combines the two programs to create a digital stick figure that can dance to the neural karaoke song. An additional hour of training on "Just Dance" and 50 hours of song lyrics from the Internet helped teach the program how to put words to music. The program developed a vocabulary of 3,390 words, which the computer could string together at a rate of one word per beat. The program also was trained on a collection of pictures and their captions to learn how specific words can be linked to visual patterns and objects.
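The "one word per beat" pairing at 120 beats per minute reduces to simple timing arithmetic (a sketch, not the researchers' code):

```python
def words_to_beats(lyrics, bpm=120):
    """Assign each word a start time in seconds, one word per beat."""
    seconds_per_beat = 60.0 / bpm  # 0.5 s per beat at 120 BPM
    return [(i * seconds_per_beat, word)
            for i, word in enumerate(lyrics.split())]
```

The learned part of the system is choosing which 3,390-word-vocabulary words to emit for a given image; placing them on the beat grid is mechanical.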


Stanford Engineers Create Prototype Chip Just Three Atoms Thick
Stanford News (11/29/16) Tom Abate

A team of Stanford University engineers led by professor Eric Pop has demonstrated the possibility of mass-producing a three-atom-thick semiconductor of molybdenum disulfide as an alternative to silicon. By refining the process of chemical vapor deposition, the team fabricated a crystalline sheet 25 million times wider than it is thick. Among the potential electronic advances this process could realize are windows that also function as TVs, or heads-up displays on car windshields, says Stanford graduate student Kirby Smithe. Once the sheet was created, the researchers had to pattern the material into electrical switches. They found very clean deposition conditions can give rise to solid metallic contacts with the molybdenum disulfide layers. The team has created precise computer simulations of the new materials and has started predicting how they behave as circuit elements. The researchers also successfully etched the Stanford logo into the prototype chip with standard tools, as well as carving the likenesses of both 2016 U.S. presidential candidates into the three-atom-thick sheet. "We have a lot of work ahead to scale this process into circuits with larger scales and better performance," Pop says. "But we now have all the building blocks."


Creating Videos of the Future
MIT News (11/28/16) Adam Conner-Simons; Rachel Gordon

Researchers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a deep-learning algorithm that can take a still image of a scene and generate a video simulating that scene's future. The algorithm was trained on 2 million unlabeled videos compiling a year's worth of footage, and it produced videos that human subjects deemed 20-percent more realistic than those of a baseline model. CSAIL postdoctoral student Carl Vondrick says the algorithm can help machines identify human activities without costly annotations. The researchers taught the model to create multiple frames by generating the foreground separately from the background, and then positioning objects in the scene so the model can differentiate animate from inanimate objects. The team used adversarial learning, in which two competing neural networks are trained: one network generates video while the other discriminates between real and simulated videos. Vondrick says that over time the generator learns to deceive the discriminator. "In the future, this will let us scale up vision systems to recognize objects and scenes without any supervision, simply by training them on video," he says.
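The adversarial objective behind the two competing networks can be sketched as follows (not CSAIL's code; a generic formulation where the discriminator D outputs a probability that a video is real):

```python
import math

def bce(prob, target):
    """Binary cross-entropy for a single predicted probability."""
    eps = 1e-12  # guard against log(0)
    return -(target * math.log(prob + eps) + (1 - target) * math.log(1 - prob + eps))

def discriminator_loss(d_on_real, d_on_fake):
    # D wants real videos scored near 1 and generated videos near 0.
    return bce(d_on_real, 1.0) + bce(d_on_fake, 0.0)

def generator_loss(d_on_fake):
    # G wants its videos mistaken for real (scored near 1 by D).
    return bce(d_on_fake, 1.0)
```

Training alternates between lowering the discriminator's loss and lowering the generator's; as the generator improves, the discriminator's scores on fake video rise, which is what "learning to deceive the discriminator" means.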


Self-Learning Software That Builds Itself
Government Computer News (11/28/16) Patrick Marshall

A machine-learning system developed by researchers at the U.K.'s Lancaster University can assemble code components into a program to meet the goals set by a human developer. The resulting software can continue to learn and reconfigure itself to adapt to changing conditions without human intervention. The team tested the runtime emergent software (REx) on a Web server and found it performed efficiently and even offered some better solutions. REx consists of a component-based programming language called Dana that enables the system to find, select, and adapt building blocks of function-specific code, and a perception, assembly, and learning framework (PAL) that configures the components and measures their behaviors. REx's third layer is a learning module that uses the collected information to determine the best configuration of the blocks of code. The team now is focused on eliminating the need for human coders altogether. They also are looking for ways to set REx's program development goals using natural language instead of high-level coding. Self-assembling and self-learning software could make an impact in robotics and many other areas, notes Lancaster lecturer Barry Porter. "Machines will need to draw on a large range of possible software behaviors, and their combinations, to achieve goals in the environments that they encounter," Porter says.
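The measure-and-adapt loop described above can be caricatured in a few lines (a hypothetical sketch; REx's actual Dana/PAL machinery is far richer, and the configuration names in the usage note are invented):

```python
import random

def learn_best(configs, measure, trials=30, seed=0):
    """Sample assembled configurations, record their measured
    performance, and return the best-performing one."""
    rng = random.Random(seed)
    scores = {c: [] for c in configs}
    for _ in range(trials):
        c = rng.choice(configs)       # try a configuration
        scores[c].append(measure(c))  # measure its behavior
    return max(configs,
               key=lambda c: sum(scores[c]) / max(len(scores[c]), 1))
```

For example, a Web server might choose between a caching and a non-caching request handler by measured throughput, which mirrors the Web-server experiment reported above.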


Intelligence Rethought: AIs Know Us, but Don't Think Like Us
New Scientist (11/23/16) Nello Cristianini

Machine-learning artificial intelligences (AIs) become more capable with experience, but the trade-off is a lack of understanding about the nature of their intelligence, writes Nello Cristianini, a professor of artificial intelligence at the University of Bristol in the U.K. Cristianini says modern AI systems can imitate complex human behaviors that cannot be fully modeled, but in a manner dissimilar from what people do. For example, automated customer-service agents adapt their behavior to various signals collated from customers' actions so they can constantly learn and monitor their preferences. To contend with novel situations, the AIs must be able to generalize, using data from similar customers or products in a form of pattern recognition. A key challenge in machine learning is selecting the right features to correctly recognize patterns, and engineers are using deep-learning techniques instead of programming this ability directly into computing. The concurrent and ongoing application of these mechanisms on a massive scale induces highly adaptive behavior that appears intelligent, yet AI systems do not need the type of self-awareness that humans consider the mark of actual intelligence. One example is machine-translation systems, which use statistics to translate instead of following linguistic rules. Cristianini says the results are computers capable of translating accurately without revealing how humans derive meaning from sentences.


Abstract News © Copyright 2016 INFORMATION, INC.


To submit feedback about ACM TechNews, contact: [email protected]