Association for Computing Machinery
Welcome to the July 17, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE


Data Miners Dig for Answers About Harper Lee, Truman Capote, and 'Go Set a Watchman'
The Wall Street Journal (07/15/15) Ellen Gamerman

The release of Harper Lee's new novel, "Go Set a Watchman," has been a sensation in the literary world, the first new book from the author since her classic "To Kill a Mockingbird" was published more than half a century ago. A pair of literature researchers took the new book's release as an opportunity to use data science to investigate long-standing debates about Lee and her work. Many have long theorized that substantial portions of "Mockingbird" were written not by Lee, but by her childhood friend and colleague Truman Capote, and many have wondered how authentic the new book is. Researchers Jan Rybicki and Maciej Eder fed "Watchman," "Mockingbird," and two of Capote's books into software that analyzes word-usage patterns and found that "Watchman" shows more of Lee's authorial voice than "Mockingbird." The research supports the publisher's claims that "Watchman" is a lightly edited version of the book Lee originally presented to her publisher two years before the release of "Mockingbird" and that it served as that book's inspiration. Rybicki and Eder's analysis also found that sections of "Mockingbird," especially the book's climactic scene, seem more similar to Capote's style than Lee's, although this could mean that Lee heavily rewrote those sections, not that they were written by Capote.
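
The kind of word-usage analysis behind this finding can be sketched in a few lines: represent each text by the relative frequencies of its most common words and compare those frequency profiles with a simple distance. The sketch below only illustrates that idea; it is not Rybicki and Eder's actual pipeline (which uses far more sophisticated stylometric methods), and the snippets standing in for the novels are placeholders.

```python
# Minimal stylometry sketch: compare texts by the relative frequencies of
# their most common words, a rough stand-in for the word-usage analysis
# described above. The texts here are tiny placeholders, not the novels.
import re
from collections import Counter

def word_freqs(text):
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def distance(freqs_a, freqs_b, vocabulary):
    # Mean absolute difference in relative frequency over a shared word list.
    return sum(abs(freqs_a.get(w, 0.0) - freqs_b.get(w, 0.0))
               for w in vocabulary) / len(vocabulary)

# Placeholder excerpts; in practice these would be the full books.
texts = {
    "watchman": "maycomb was a tired old town when i first knew it",
    "mockingbird": "when he was nearly thirteen my brother jem got his arm broken",
    "capote": "imagine a morning in late november a coming of winter morning",
}

freqs = {name: word_freqs(t) for name, t in texts.items()}

# Use the most frequent words across all texts as the comparison vocabulary.
combined = Counter()
for t in texts.values():
    combined.update(re.findall(r"[a-z']+", t.lower()))
vocabulary = [w for w, _ in combined.most_common(100)]

for name in ("mockingbird", "capote"):
    print(name, distance(freqs["watchman"], freqs[name], vocabulary))
```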


RoboCup World Championship: UNSW Student Engineers Take Robots to China to Defend Title
Australian Broadcasting Corporation (07/16/15) Lindy Kerin

A University of New South Wales (UNSW) team of student engineers is in Hefei, China, to defend its Standard Platform League title at the RoboCup World Championships. Each team has been given the same waist-high robots to play on a nine-meter-long field, but each team must design its own software to control the robots. "In our competition it's a standard platform, so everyone purchases the same robots and then it's all about the artificial intelligence and the programming and the smarts that you give the robot," says UNSW team leader and Ph.D. student Sean Harris. This year's competition has been changed so the game more closely resembles a real soccer match, making the competition much more difficult. For example, the goal posts used to be yellow, but "now they're white so they just look like everything else on the field--all the robots are white and all the lines are white--it's a very common color and it's very hard for the robots to see the goal posts," Harris says. In addition, this year's games start with a whistle, so the robots also have to be able to listen for a whistle sound.
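
Whistle detection of the kind Harris describes is often done by checking whether a short audio frame's spectral energy is concentrated in a narrow high-frequency band. The sketch below illustrates that approach only; it is not the UNSW team's code, and the sample rate, whistle band, and threshold are assumed values.

```python
# Toy whistle detector: flag a frame as a "whistle" when most of its spectral
# energy falls in a narrow high-frequency band. Illustrative sketch only;
# sample rate, band, and threshold are assumptions.
import numpy as np

SAMPLE_RATE = 48000          # Hz, assumed microphone rate
WHISTLE_BAND = (2000, 4000)  # Hz, assumed whistle frequency range
THRESHOLD = 0.6              # fraction of energy that must fall in the band

def is_whistle(frame):
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    in_band = (freqs >= WHISTLE_BAND[0]) & (freqs <= WHISTLE_BAND[1])
    total = spectrum.sum()
    return total > 0 and spectrum[in_band].sum() / total > THRESHOLD

# Synthetic test: a 3 kHz tone with a little noise should trigger the detector.
t = np.arange(2048) / SAMPLE_RATE
tone = np.sin(2 * np.pi * 3000 * t) + 0.1 * np.random.randn(len(t))
print(is_whistle(tone))                    # expected: True
print(is_whistle(np.random.randn(2048)))   # broadband noise: expected False
```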


Virginia Tech Scientist Develops Model for Robots With Bacterial Brains
Virginia Tech News (07/16/15) Amy Loeffler

A Virginia Polytechnic Institute and State University scientist demonstrated that bacteria can control the behavior of an inanimate device such as a robot. Professor Warren Ruder used a mathematical model that described engineered gene circuits in E. coli, microfluidic bioreactors, and robot movement. "Basically, we were trying to find out from the mathematical model if we could build a living microbiome on a nonliving host and control the host through the microbiome," Ruder says. The finding suggests robots may be able to function with a bacterial brain. The bacteria in the experiment exhibited their genetic circuitry by turning either green or red, according to what they ingested. In the mathematical model, the theoretical robot was outfitted with sensors and a miniature microscope to measure the color of the bacteria, which told it where and how fast to go depending on the pigment and intensity of the color. Ruder says the model revealed unique decision-making behavior and surprising high-order functions by a bacteria-robot system. He says biochemical sensing between organisms could have a big impact on ecology, biology, and robotics. In agriculture, bacteria-robot model systems could greatly improve research into the interactions between soil bacteria and livestock. For future experiments, Ruder is building real-world robots that will respond to bacteria engineered in his lab.
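
The feedback loop described above can be sketched as a simple control rule in which the sensed reporter color chooses the robot's turning direction and the color intensity sets its speed. This is a toy illustration of the idea, not Ruder's actual model; the mapping and constants are assumptions.

```python
# Toy sketch of a microbiome-steered robot: the bacteria's reporter color
# (green or red) picks the turning direction, and the color intensity sets
# the speed. Illustrates the idea only; constants and mapping are assumed.
import math

MAX_SPEED = 0.5          # m/s, assumed robot top speed
TURN_RATE = math.pi / 4  # rad/s, assumed turn rate

def control(color, intensity):
    """color: 'green' or 'red'; intensity: 0.0-1.0 reporter signal."""
    speed = MAX_SPEED * intensity          # brighter reporter -> faster robot
    turn = TURN_RATE if color == "green" else -TURN_RATE
    return speed, turn

# Simulate a few control steps with made-up sensor readings.
x, y, heading = 0.0, 0.0, 0.0
readings = [("green", 0.8), ("green", 0.6), ("red", 0.9), ("red", 0.4)]
dt = 1.0  # seconds per step
for color, intensity in readings:
    speed, turn = control(color, intensity)
    heading += turn * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    print(f"{color:5s} intensity={intensity:.1f} -> pos=({x:.2f}, {y:.2f})")
```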


Once-Theoretical Crypto Attack Against HTTPS Now Verges on Practicality
Ars Technica (07/15/15) Dan Goodin

Research that will be presented at the 24th USENIX Security Symposium next month in Washington, D.C., demonstrates an increasingly effective method of attacking the cryptographic cipher known as RC4, which is used in almost a third of the Web's encrypted connections. Researchers have refined an attack on RC4 that was first demonstrated in 2013 and exploits a weakness in the cipher to guess the contents of data encrypted using it. The original attack could correctly guess the contents of a typical authentication cookie in about 2,000 hours, but the refined attack cuts that to 75 hours with 94-percent accuracy. A similar attack against the Wi-Fi Protected Access Temporal Key Integrity Protocol (WPA-TKIP) was able to crack a Wi-Fi network in about an hour. The researchers say their ability to improve the attack is "very worrisome," and note there is likely even more room for improving these attacks. As a result, they recommend engineers migrate away from using RC4 completely, a shift that is already underway. Although 30 percent of HTTPS sessions are estimated to rely on RC4 today, that is down from about half of HTTPS sessions in 2013.
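
The underlying weakness is statistical: RC4's keystream bytes are not uniformly distributed, and an attacker who sees the same secret (such as a cookie) encrypted many times can combine those biases to guess it. The sketch below only demonstrates one classic bias (the second keystream byte is zero about twice as often as chance, per Mantin and Shamir); it is not the researchers' attack.

```python
# Measure a well-known RC4 keystream bias: the second output byte is 0 with
# probability about 2/256 rather than 1/256. Biases like this are what
# cookie-recovery attacks accumulate over many sessions; this sketch only
# demonstrates the weakness, not any attack.
import os

def rc4_keystream(key, n):
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = []
    for _ in range(n):                        # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

trials = 100_000
zeros_at_2 = sum(rc4_keystream(os.urandom(16), 2)[1] == 0 for _ in range(trials))
print("P[second byte = 0] =", zeros_at_2 / trials)   # ~2/256, about 0.0078
print("uniform would give =", 1 / 256)               # about 0.0039
```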


NASA Algorithms Keep Unmanned Aircraft Away From Commercial Aviation
Network World (07/14/15) Michael Cooney

New algorithms developed at the U.S. National Aeronautics and Space Administration's (NASA) Langley Research Center could enable large unmanned aircraft to remain "well clear" of commercial airliners in flight and prevent a disaster. Unmanned systems lack the onboard technology of commercial airliners and many larger private planes, as well as their air traffic controllers and live pilots. NASA has developed detect-and-avoid algorithms and is testing the technology in multiple research experiments. One system, known as Detect and Avoid Alerting Logic for Unmanned Systems (DAIDALUS), uses algorithms to process incoming traffic surveillance sensor data that some larger unmanned aircraft have onboard. DAIDALUS provides alerts and even maneuver guidance for the unmanned system pilot on the ground. The system uses algorithms to compute the time interval of well-clear violation, ranges of speed maneuvers, as well as ranges of horizontal and vertical maneuvers to assist pilots. DAIDALUS is essentially designed to "see" safe paths out of potentially dangerous situations, according to NASA. "In the case of a predicted well-clear violation, DAIDALUS also provides an algorithm that computes the time interval of well-clear violation," note NASA researchers. "Furthermore, DAIDALUS implements algorithms for computing prevention bands, assuming a simple kinematic trajectory model."
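
The "time interval of well-clear violation" can be illustrated with a simple constant-velocity model: given the intruder's relative position and velocity, solve for the times at which horizontal separation falls below a protection distance. The sketch below shows only that concept; it is not NASA's DAIDALUS code, and the one-nautical-mile threshold in the example is an assumed value.

```python
# Sketch of computing a "well-clear violation" time interval under a simple
# constant-velocity model: find the times at which horizontal separation
# between ownship and intruder drops below a protection distance. This
# illustrates the concept only; it is not DAIDALUS, and the 1,852 m
# (1 nautical mile) threshold is an assumed value.
import math

def violation_interval(rel_pos, rel_vel, dist):
    """Return (t_in, t_out) when |rel_pos + t*rel_vel| < dist, or None."""
    px, py = rel_pos
    vx, vy = rel_vel
    a = vx * vx + vy * vy
    b = 2 * (px * vx + py * vy)
    c = px * px + py * py - dist * dist
    if a == 0:                       # no relative motion
        return (0.0, math.inf) if c < 0 else None
    disc = b * b - 4 * a * c
    if disc <= 0:                    # separation never drops below dist
        return None
    root = math.sqrt(disc)
    t_in, t_out = (-b - root) / (2 * a), (-b + root) / (2 * a)
    if t_out < 0:                    # violation lies entirely in the past
        return None
    return (max(t_in, 0.0), t_out)

# Intruder 10 km east, closing at 100 m/s; protect a 1,852 m radius.
print(violation_interval((10_000.0, 0.0), (-100.0, 0.0), 1852.0))
# -> roughly (81.5, 118.5) seconds of predicted violation
```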


I am Woman, Hear Me Code
Federal Computer Week (07/14/15) Bianca Spinosa

The first all-woman hackathon in Washington, D.C., was held in December 2013, when 100 women filled all the sessions of the Tech Lady Hackathon to capacity. The following year's event, held on July 26 at Google's D.C. office, was attended by more than 150 women. With a few weeks left before the third annual Tech Lady Hackathon, to be held Aug. 8 at Impact Hub DC, there are already 1,000 women on the event's listserv. The hackathon caters to developers of all skill levels and includes workshops about learning the basics of various programming languages. The Tech Lady Hackathon is the brainchild of Leah Bannon, a product manager at the U.S. General Services Administration's 18F digital services lab. Bannon started out as a social media specialist and began teaching herself how to code after she started managing a website. She also learned programming skills at small coding sessions organized by developer Shannon Turner, who grew these sessions into her successful Hear Me Code series of free programming courses for women. Bannon says both experiences informed the creation of the Tech Lady Hackathon, and she is proud that some of the event's novice attendees have gone on to jobs as developers.


IU Researcher Devises Method to Untangle, Analyze 'Controlled Chaos'
IU Bloomington Newsroom (07/13/15) Kevin Fryling

Indiana University professor Filippo Radicchi has developed a mathematical framework to more effectively analyze controlled chaos, or how interactions among highly complex systems affect their operation and vulnerability. Radicchi says the method could be used to improve the resilience of complex critical systems, or to slow the spread of threats across large networks. "By providing reliable results in a rapid manner, these equations allow for the creation of algorithms that optimize the resilience of real interdependent networks," he says. The equations work by providing a new method to untangle multiple complex systems. The equations pull apart each network, or graph, for individual analysis, and then reconstruct an overall picture. "By unraveling multiple graphs, we're able to analyze each in isolation, providing a more complete picture of their interdependence and interaction," Radicchi says. He notes the equations are not dependent on the use of large-scale simulations, and they are able to quickly and accurately measure "percolation" in a system, a term that describes the amount of disruption caused by small breakdowns in a large system. Radicchi says the equations could be used to detect vulnerabilities in a transportation network, to create plans to reduce construction costs, or to better understand other complex systems that remain resistant to breakdown.
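
Percolation in this sense can be illustrated with a brute-force simulation: remove a random fraction of a network's nodes and measure how much of its largest connected component survives. The sketch below demonstrates only that notion; it is not Radicchi's analytical equations for interdependent networks, and the network size and parameters are arbitrary.

```python
# Toy percolation experiment: remove a random fraction of nodes and measure
# the surviving fraction of the largest connected component. This is a
# brute-force simulation of what "percolation" measures, not Radicchi's
# equations for interdependent networks.
import random
import networkx as nx

def largest_component_fraction(graph, removal_fraction):
    g = graph.copy()
    doomed = random.sample(list(g.nodes()),
                           int(removal_fraction * g.number_of_nodes()))
    g.remove_nodes_from(doomed)
    if g.number_of_nodes() == 0:
        return 0.0
    biggest = max(nx.connected_components(g), key=len)
    return len(biggest) / graph.number_of_nodes()

network = nx.erdos_renyi_graph(n=2000, p=0.002)  # average degree ~4
for f in (0.1, 0.3, 0.5, 0.7, 0.9):
    survived = sum(largest_component_fraction(network, f) for _ in range(10)) / 10
    print(f"remove {f:.0%} of nodes -> {survived:.0%} still in giant component")
```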


Firing Squad Synchronization, Computer Science's Most Macabre-Sounding Problem
Motherboard (07/14/15) Ben Richmond

Getting a firing squad to fire in sync is a puzzle studied in computer science's early days, because it was vital to automata theory. California State University professor Darin Goldstein says programming a computer to solve the problem must allow for synchronization without counting or even knowing the number of soldiers in the firing squad. The solution to the problem, worked out by computer science pioneers John McCarthy (an ACM A.M. Turing Award recipient) and Marvin Minsky in the early 1960s, was to send out multiple messages at differing speeds, one traveling three times faster than the other, enabling the faster message to reach the other side of the line, bounce back, and meet the slower message right at the line's midpoint. Goldstein says when the messages intersect, the soldier in the middle becomes another general, creating two lines. "And then as soon as you have that, you go again, you keep splitting the line in two over and over and eventually every soldier will consider himself a general," he notes. "And as soon as they all know the guy to the left and right is a general, they fire." The problem also generalizes beyond a line of soldiers to grids and even three-dimensional arrangements, Goldstein says. "The most general problem is the strongly-connected directed graph and that was solved multiple times," he points out.
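
The two-speed signal trick can be simulated directly: a fast signal advances one cell per step, a slow signal advances one cell every three steps, and after the fast signal reflects off the far end, the two meet at the middle of the line. The sketch below traces only that signal geometry, not the full finite-state construction of the firing squad solution.

```python
# Simulate the midpoint-finding trick behind the firing squad solution:
# the general sends a fast signal (1 cell per step) and a slow signal
# (1 cell every 3 steps). The fast signal reflects off the far end and
# meets the slow signal at the middle of the line, which becomes the new
# "general" that splits the line in two. Sketches the signal geometry only.
def find_midpoint(n):
    fast, slow, direction = 0, 0, +1
    t = 0
    while True:
        t += 1
        fast += direction
        if fast == n - 1:
            direction = -1            # reflect off the far end of the line
        if t % 3 == 0:
            slow += 1                 # slow signal advances every third step
        if direction == -1 and fast <= slow:
            return slow               # the signals have met (or crossed)

for n in (5, 10, 13, 100):
    print(n, "soldiers -> new general at index", find_midpoint(n))
```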


An Algorithmic Sense of Humor? Not Yet.
Technology Review (07/13/15)

Major gains continue to be made in artificial intelligence (AI), yet there are still some human faculties that remain beyond the ability of AI to emulate, one of the more prominent being humor; this is largely because humor is a very subjective phenomenon. What makes one person laugh may not make another person laugh. However, many linguists and psychologists say there are common properties that underlie good jokes, suggesting there might be some way to write a reliable humor algorithm. Nevertheless, even the latest efforts to systematically analyze humor have failed to offer anything quite that compelling. A recent study by researchers at the University of Michigan in Ann Arbor, Yahoo Labs, and Columbia University, working with the "New Yorker" magazine, sought to uncover the recipe for a funny cartoon caption by analyzing thousands of reader-submitted captions. The analysis broke down the captions on multiple levels, examining how human-focused they were and whether they expressed positive or negative sentiments. The researchers found the funniest captions tended to express negative sentiments, were human-centered, and showed "lexical centrality." However, none of the results brought the researchers closer to creating an automated captioning system.
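
Two of the features the study examined can be approximated with toy scores: a crude negativity count and a "lexical centrality" measure of how much a caption's vocabulary overlaps with the other submissions. The sketch below is an illustrative stand-in, not the study's method; the word list and captions are made up.

```python
# Toy versions of two caption features: a negative-sentiment word count and
# a "lexical centrality" score (average vocabulary overlap with the other
# submitted captions). Illustrative stand-ins only; the word list and
# captions are placeholders, not the study's data or measures.
NEGATIVE_WORDS = {"never", "hate", "worst", "dead", "no", "wrong"}

def tokens(caption):
    return set(caption.lower().split())

def negativity(caption):
    return len(tokens(caption) & NEGATIVE_WORDS)

def lexical_centrality(caption, others):
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    t = tokens(caption)
    return sum(jaccard(t, tokens(o)) for o in others) / len(others)

captions = [
    "i told you the corner office was a mistake",
    "this is the worst staff meeting ever",
    "no the corner office was never a mistake",
]
for c in captions:
    others = [o for o in captions if o != c]
    print(f"neg={negativity(c)} centrality={lexical_centrality(c, others):.2f}  {c}")
```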


Neuroscience-Based Algorithms Make for Better Networks
Carnegie Mellon University (07/09/15)

The human brain's pruning process could be beneficial for engineering networks, according to a group of computer scientists from Carnegie Mellon University (CMU) and the Salk Institute for Biological Studies. The team studied the way neurons create networks and found the brain rapidly prunes synapses early in development and slows the rate of pruning as time progresses. Engineers take the opposite approach to building distributed networks of computers and sensors, as networks initially contain a small number of connections and more are added as needed. The researchers designed an algorithm based on the brain's approach, and simulation and theoretical analysis revealed it is much more efficient and robust than engineering methods. The researchers say the flow of information is more direct, and it provides multiple paths for information to reach the same endpoint, minimizing the risk of network failure. "We took this high-level algorithm that explains how neural structures are built during development and used that to inspire an algorithm for an engineered network," says CMU professor Alison Barth. "It turns out that this neuroscience-based approach could offer something new for computer scientists and engineers to think about as they build networks."
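
The decreasing-rate pruning idea can be sketched as follows: start with a densely connected random network, score each edge by how often it lies on shortest paths between random node pairs, and remove the least-used edges in rounds of shrinking size while never disconnecting the network. This is a toy illustration of that strategy, not the CMU/Salk algorithm; the network and round sizes are arbitrary.

```python
# Toy version of "prune aggressively early, then slow down": start dense,
# score edges by how often they sit on shortest paths between random node
# pairs, and remove the least-used edges in rounds of shrinking size,
# skipping any removal that would disconnect the network. Illustrative only.
import random
import networkx as nx

def edge_usage(g, samples=300):
    usage = {tuple(sorted(e)): 0 for e in g.edges()}
    nodes = list(g.nodes())
    for _ in range(samples):
        s, t = random.sample(nodes, 2)
        path = nx.shortest_path(g, s, t)
        for u, v in zip(path, path[1:]):
            usage[tuple(sorted((u, v)))] += 1
    return usage

g = nx.erdos_renyi_graph(n=60, p=0.5, seed=1)   # dense starting network
round_sizes = [300, 150, 75, 40, 20]            # decreasing pruning rate
for size in round_sizes:
    usage = edge_usage(g)
    for edge in sorted(usage, key=usage.get)[:size]:
        g.remove_edge(*edge)
        if not nx.is_connected(g):              # never disconnect the network
            g.add_edge(*edge)
    print(f"pruned round of {size}: {g.number_of_edges()} edges left, "
          f"avg path length {nx.average_shortest_path_length(g):.2f}")
```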


Robotics and the Law: When Software Can Harm You
UW Today (07/13/15) Peter Kelley

University of Washington School of Law professor Ryan Calo emphasizes the importance of the law developing some way to effectively contend with the advent of robots and artificial intelligence (AI) in an article published in the California Law Review. "The widespread distribution of robotics in society will, like the Internet, create deep social, cultural, economic, and of course legal tensions," he predicts. Calo says robotics will raise different cyberlaw issues than the Internet, given the essential difference between the two. He notes robotics mixes for the first time the abundance of data with the capability of inflicting physical harm. "Robotic systems accomplish tasks in ways that cannot be anticipated in advance, and robots increasingly blur the line between person and instrument," Calo warns. In 2014, he urged the establishment of a federal robotics commission, and his latest conclusion is that AI and robotics possess basically different qualities than the law has yet confronted. "Cyberlaw will have to engage, to a far greater degree, with the prospect of data causing physical harm, and to the line between speech and action," Calo says. "Rather than think of how code controls people, cyberlaw will think of what people can do to control code."


U-M Will Test 3D-Printed, Autonomous 'SmartCarts'
University of Michigan News Service (07/09/15) Gabe Cherry

University of Michigan (U-M) researchers are working on the SmartCarts project, which aims to understand the challenges of a transportation-on-demand system built around autonomous cars. Over the next year, U-M researchers will develop autonomy capabilities for the 3D-printed vehicles and build a mobile-device interface that can be used to request a ride. The researchers will test the vehicle at Mcity, an autonomous and connected vehicle test site operated by the Mobility Transformation Center, a public-private partnership headquartered at U-M. "On this project, we're deliberately 'cheating' on the autonomy as much as we can--not because we can't build autonomous cars, but because we need a working test bed now so that we can begin to look at all of the other challenges of an on-demand system," says U-M professor Edwin Olson. The researchers will face challenges including understanding passengers' preferences and expectations, coordinating the routes of a fleet of vehicles, and determining how to balance supply and demand. "These factors--not just the self-driving technology--are critical to the economic viability and social acceptance of a full-scale transportation service," Olson says. The researchers initially will focus on finding methods to simplify the autonomy challenges that take advantage of the smaller scale of a college campus.


The Hard Disk of the Future Will Be Ten Thousand Times Faster, Researchers Say
Forbes (07/10/15) Federico Guerrini

It is widely believed that the switching times of traditional magnetic hard disks have hit a limit and cannot be pushed below about 1 nanosecond. However, this limit has inspired numerous researchers to investigate magnetic phenomena below the nanosecond timescale, creating a new field of research known as femtomagnetism. Now researchers in this field believe they have made a breakthrough that could yield hard disks with read and write speeds thousands of times faster than what is available today. Researchers at La Sapienza University in Rome, along with collaborators at the Polytechnic of Milan and Radboud University Nijmegen, have used ultra-fast laser pulses to directly modify the magnetic interactions between atoms. "Our method shows that it's possible to use ultra-fast laser pulses to read and write data without causing the thermal effects which would inevitably slow down the process," says La Sapienza's Tullio Scopigno. The laser pulses occur on a timescale of less than 100 femtoseconds, which means that, in theory, this method could be used to create hard disks with read and write speeds up to 10,000 times faster than those currently available.
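
The 10,000-fold figure follows directly from the ratio of the two timescales, roughly 1 nanosecond for conventional magnetic switching versus roughly 100 femtoseconds for the laser pulses:

```python
# The claimed speedup is simply the ratio of the two timescales:
# ~1 nanosecond (current magnetic switching limit) vs ~100 femtoseconds.
print(1e-9 / 100e-15)   # -> 10000.0
```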


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]