Association for Computing Machinery
Welcome to the November 10, 2014 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, as well as for iPhones and iPads.

HEADLINES AT A GLANCE


API Copyrights a "Threat" to Tech Sector, Scientists Tell Supreme Court
Ars Technica (11/09/14) David Kravets

Dozens of computer scientists, including former ACM president Vint Cerf, are urging the U.S. Supreme Court to overturn a May federal appeals court decision that said application programming interfaces (APIs) are subject to copyright protections. APIs are "specifications that allow programs to communicate with each other," according to the Electronic Frontier Foundation, the organization representing the scientists. The court battle started when Google copied certain elements of the Java API, including names, declarations, and header lines, into Android, and Oracle sued. In 2012, a federal judge largely sided with Google, saying the code in question could not be copyrighted. However, the federal appeals court reversed the decision, ruling the "declaring code and the structure, sequence, and organization of the API packages are entitled to copyright protection." The decision was "a win for the entire software industry that relies on copyright protection to fuel innovation," according to Oracle. Still, analysts say the dispute is far from over, even if the Supreme Court refuses to hear the case, as the appellate court's ruling did not necessarily hold Google monetarily liable for infringement. The appeals court returned the case to the lower courts to determine whether Google had "fair use" rights to the APIs.


U.S., European Authorities Strike Against Internet's Black Markets
The Washington Post (11/07/14) Craig Timberg; Ellen Nakashima

U.S. and European law enforcement agencies last week launched a massive, coordinated strike on the so-called Dark Web, taking down hundreds of illicit websites selling goods ranging from drugs to explosives. The strike began Wednesday with the arrest in San Francisco of Blake Benthall, who is alleged to be behind Silk Road 2.0. On Thursday and Friday, authorities in the U.S. and 16 European nations followed up with a coordinated shutdown of 410 websites operating on the Tor anonymous browsing network, many of which carried out transactions using hard-to-trace virtual currencies. Seized sites were operated out of numerous European countries, including England, Germany, and France. Known as Operation Onymous (the opposite of anonymous), the operation was two years in the making, and it remains unknown exactly how the law enforcement agencies were able to bypass Tor and identify their targets. Speculation ranges from the use of informants to a massive de-anonymization of Tor user traffic, something that leaks by Edward Snowden suggest the U.S. National Security Agency has been working on for some time. "There are no guarantees of anonymity," says Columbia University professor Steve Bellovin. "It's clear that buying [illicit goods] on something like Tor is not as safe as people thought a year ago."


Google's New Open Source Privacy Effort Looks Back to the '60s
The Wall Street Journal (11/07/14) Elizabeth Dwoskin

Privacy has long been a major concern for proponents of big data analytics, who want to make massive data sets containing potentially identifiable information available to companies, government agencies, and others without compromising people's privacy. Last week, Google announced RAPPOR, a new open source tool it hopes will solve this privacy problem for massive data sets. RAPPOR is based on randomized response, a survey technique developed in the 1960s by researchers seeking a way to gather information about sensitive topics, such as sexually transmitted diseases, without violating people's privacy. In its original form, the technique involved asking people to answer a yes/no question after flipping a coin: if it came up heads, they answered yes; if tails, they answered truthfully. The technique enabled the researchers to statistically estimate the rate of yes answers without knowing who actually answered yes or no. RAPPOR applies that technique to large data sets, starting with Google user statistics. Normally Google would have to examine data tied to specific users to determine, for example, how many users block tracking cookies on its Chrome browser. RAPPOR enables Google to make that estimate without being able to identify the preferences of any given individual.
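
For illustration, the coin-flip scheme can be simulated in a few lines of Python. This is a minimal sketch of classic randomized response, not Google's RAPPOR implementation; the 30-percent "true" rate and population size are invented values:

    import random

    def randomized_response(truth: bool) -> bool:
        # Flip a coin: heads -> always answer "yes"; tails -> answer truthfully.
        return True if random.random() < 0.5 else truth

    def estimate_true_rate(reports):
        # The observed "yes" fraction is 0.5 + 0.5 * p, so solve for p.
        observed = sum(reports) / len(reports)
        return max(0.0, 2 * observed - 1)

    # Simulate 100,000 respondents, 30 percent of whom would truthfully say "yes."
    true_answers = [random.random() < 0.30 for _ in range(100_000)]
    reports = [randomized_response(t) for t in true_answers]
    print(round(estimate_true_rate(reports), 3))  # close to 0.30, yet no single report is trustworthy

Each individual report is plausibly deniable, but the aggregate rate is still recoverable, which is the property RAPPOR extends to richer statistics.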


Next for DARPA: 'Autocomplete' for Programmers
Rice University (11/05/14) Jade Boyd

Rice University researchers have launched an $11-million initiative, funded by the U.S. Defense Advanced Research Projects Agency (DARPA), to create PLINY, a tool that will both autocomplete and autocorrect code for programmers. The PLINY project will involve more than two dozen computer scientists from Rice, the University of Texas-Austin, and the University of Wisconsin-Madison. "This is a dream team that combines Rice's traditional strengths in programming language research with our new capabilities in big-data analytics," says Rice professor Vivek Sarkar, the project's principal investigator. PLINY is part of DARPA's Mining and Understanding Software Enclaves program, an initiative that seeks to gather hundreds of billions of lines of publicly available open source computer code and mine it to generate a searchable database of properties, behaviors, and vulnerabilities. "We envision a system where the programmer writes a few lines of code, hits a button, and the rest of the code appears," says Rice professor Swarat Chaudhuri. The PLINY system will be based on a data-mining engine that continuously scans the massive repository of open source code. "Much like today's spell-correction algorithms, it will deliver the most probable solution first, but programmers will be able to cycle through possible solutions if the first answer is incorrect," says Rice professor Chris Jermaine.
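
The "most probable solution first" behavior can be pictured as a frequency-ranked lookup over a mined corpus. The Python sketch below is only a toy illustration; PLINY's actual engine and data structures are not described in the article, and the tiny corpus here is invented:

    from collections import Counter

    # Toy "mined corpus": next lines observed after a given code prefix.
    # A real system would mine these from billions of lines of open source code.
    mined_corpus = {
        "file = open(path)": [
            "data = file.read()", "data = file.read()", "for line in file:",
            "data = file.read()", "file.close()",
        ],
    }

    def suggest(prefix, k=3):
        # Rank candidate completions by how often they followed this prefix.
        counts = Counter(mined_corpus.get(prefix, []))
        return [line for line, _ in counts.most_common(k)]

    print(suggest("file = open(path)"))  # the most frequent completion comes first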


What Georgia Tech's Online Degree in Computer Science Means for Low-Cost Programs
The Chronicle of Higher Education (11/06/14) Steve Kolowich

The online master's program in computer science offered by the Georgia Institute of Technology (Georgia Tech) has been held up as an example of successfully using the techniques of massive open online courses (MOOCs) to create low-cost, highly accessible education. The program consists of MOOC-like video courses and assessments paired with human course assistants who work directly with students. It was a nearly instant success and now enrolls as many people as Georgia Tech's traditional program, but at a fraction of the price: $7,000 for the three-year program. To better understand the success of the program, researchers at Harvard University and Georgia Tech examined the students enrolled in the program to get a better idea of who they were. They found the students were mostly older, working American men. The average age of online students was 35, compared to 24 for the traditional program, and they are much more likely to report they are working rather than studying full time. Eighty percent are from the U.S., compared to the traditional program, in which 75 percent are foreign students, largely from India and China. The online course also was highly male-dominated, with only 14 percent female students, compared to 25 percent in the traditional program. The students also were high achievers, with an average GPA of 3.58.


Can an Algorithm Tell Us Who Influenced an Artist?
The Washington Post (11/09/14) Mohana Ravindranath

Rutgers University researchers are training a computer to analyze thousands of paintings to understand which artists influenced others. The software scans digital images of paintings looking for common features, such as composition, color, line, and the objects shown in the piece. When two paintings share visual elements, the software flags them, suggesting the artist of the earlier painting may have influenced the artist of the later one. The software found some connections art historians had not made, according to Rutgers professor Ahmed Elgammal. "The advantage is it can easily mine thousands and millions of art works in a very [efficient] way," Elgammal says. Although detecting similarities between paintings can help art historians discover possible influences, the software cannot definitively establish a connection between two artists. "Our final goal is not to get a final answer," Elgammal says. Instead, he says it will "be a tool to art historians, so it can help them do their job." The project is part of a broader effort at Rutgers to apply computer science techniques to the humanities. The art program is one of the first projects of Rutgers' Digital Humanities Lab, which the university established this year in its Computational Biomedicine Imaging and Modeling Center.
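
The general approach, comparing feature vectors extracted from each painting and proposing influence links that point from the earlier work to the later one, can be sketched as follows. The paintings, dates, and four-number feature vectors are invented placeholders, and this is not the Rutgers team's actual model:

    import math

    # Toy feature vectors (e.g., scores for composition, color, line, and objects).
    paintings = {
        # name: (year, feature vector)
        "Painting A": (1875, [0.8, 0.2, 0.5, 0.1]),
        "Painting B": (1900, [0.7, 0.3, 0.5, 0.2]),
        "Painting C": (1950, [0.1, 0.9, 0.2, 0.8]),
    }

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

    def possible_influences(threshold=0.95):
        # Pair up visually similar works and point the link from the earlier one.
        names = list(paintings)
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                (ya, fa), (yb, fb) = paintings[a], paintings[b]
                if cosine(fa, fb) >= threshold:
                    earlier, later = (a, b) if ya <= yb else (b, a)
                    yield earlier, later

    for earlier, later in possible_influences():
        print(f"{earlier} may have influenced {later}")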


Diagnostic Exhalations
MIT News (11/06/14) Larry Hardesty

Massachusetts Institute of Technology researchers say they have developed an algorithm that can accurately determine whether a patient is suffering from emphysema or heart failure based on readings from a capnograph (a machine that measures the concentration of carbon dioxide in a patient's exhalations). The researchers first identified features of the capnographic signal that appeared to vary between populations. For example, the crests of the waves in healthy subjects' capnograms seemed to plateau at a maximum concentration, while those in sick patients did not. After identifying about a dozen features, the researchers developed a machine-learning algorithm that would look for patterns in the features that seemed to correlate with patients' ultimate diagnoses. However, the algorithm is somewhat unconventional in that the training data was split into 50 subsets. Each subset consisted of a random selection of about 70 percent of the data, meaning there was significant overlap between subsets, but no two subsets were identical. The researchers then used those subsets to train 50 different classifiers, and the algorithm's ultimate output was the result of a vote by the 50 classifiers. During testing, the researchers found their algorithm for distinguishing healthy subjects from those with emphysema yielded an area under the curve of 0.98, where 1.0 is a perfect score. The algorithm that distinguished emphysema patients from those with congestive heart failure scored 0.89.
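
The voting scheme described here, 50 classifiers trained on overlapping random 70-percent subsets, resembles bagging and can be sketched with scikit-learn. The synthetic data and the choice of logistic regression below are illustrative assumptions, not the MIT group's actual capnogram features or model:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Synthetic stand-in for a dozen capnogram-derived features per patient.
    X = rng.normal(size=(1000, 12))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
    X_train, y_train, X_test, y_test = X[:700], y[:700], X[700:], y[700:]

    # Train 50 classifiers, each on a random ~70 percent subset (subsets overlap).
    classifiers = []
    for _ in range(50):
        idx = rng.choice(len(X_train), size=int(0.7 * len(X_train)), replace=False)
        classifiers.append(LogisticRegression(max_iter=1000).fit(X_train[idx], y_train[idx]))

    # The ensemble's output is a vote: average the 50 predictions.
    votes = np.mean([clf.predict(X_test) for clf in classifiers], axis=0)
    print("AUC:", round(roc_auc_score(y_test, votes), 3))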


GTRI's Dynamic Graph Analytics Tackle Social Media and Other Big Data
Georgia Tech Research Institute (11/07/14) John Toon

Researchers at the Georgia Tech Research Institute (GTRI) are developing technology designed to help investigate social networks, surveillance intelligence, computer-network functionality, and industrial-control systems. "Our first task is to look at the interesting properties of a graph--to find the important questions we can ask of that graph," says GTRI researcher Dan Campbell. "The second task is to find the answers as quickly as possible, and then put them to practical use." The researchers utilized STINGER, a graph-analysis framework built specifically to tackle dynamic, ever-changing applications such as social networks and Internet traffic. STINGER helps support GTRI's concentration on streaming or dynamic-graph technology, which can store very large databases and then update them in real time as new data come in. "Unlike traditional graph databases, STINGER's streaming-graph technology lets us store very big graphs and analyze them at high speed using fairly modest computing capability," says GTRI's Jason Poovey. "In half a terabyte of main memory--a pretty reasonable size today--we can handle billions of nodes and edges. Our benchmark tests show we can represent, update, and analyze a graph in real time that's essentially the size of all the data in Twitter."
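
STINGER itself is a C framework tuned for billions of nodes and edges; the short Python sketch below only illustrates the general idea of a streaming graph that applies edge updates as they arrive while keeping a simple metric (vertex degree) current. It does not use STINGER's API, and the sample stream is invented:

    from collections import defaultdict

    class StreamingGraph:
        """Toy dynamic graph: apply edge updates as they arrive, keep degrees current."""

        def __init__(self):
            self.adj = defaultdict(set)

        def insert_edge(self, u, v):
            self.adj[u].add(v)
            self.adj[v].add(u)

        def remove_edge(self, u, v):
            self.adj[u].discard(v)
            self.adj[v].discard(u)

        def degree(self, u):
            return len(self.adj[u])

    g = StreamingGraph()
    stream = [("insert", "alice", "bob"), ("insert", "bob", "carol"), ("remove", "alice", "bob")]
    for op, u, v in stream:
        g.insert_edge(u, v) if op == "insert" else g.remove_edge(u, v)
    print(g.degree("bob"))  # 1 after the stream above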


Cockroach Cyborgs Use Microphones to Detect, Trace Sounds
NCSU News (11/06/14) Matt Shipman

North Carolina State University (NCSU) researchers say they have developed technology that enables robotic cockroaches, or biobots, to pick up sounds with small microphones and seek out the source of the sound. The technology is designed to help emergency personnel find and rescue survivors following a disaster. The biobots are equipped with one of two kinds of electronic backpacks that control the cockroach's movements. One type of biobot has a single microphone that can capture relatively high-resolution sound from any direction and transmit it wirelessly to first responders; the second type is equipped with an array of three directional microphones to detect the direction of a sound. The researchers also developed algorithms that analyze the sound from the microphone array to localize its source and steer the biobot in that direction. "The goal is to use the biobots with high-resolution microphones to differentiate between sounds that matter--like people calling for help--from sounds that don't matter--like a leaking pipe," says NCSU professor Alper Bozkurt. The researchers also demonstrated technology that creates an invisible fence for keeping biobots in a defined area, which can be used to keep biobots at a disaster site and within range of each other so they can serve as a reliable mobile wireless network.
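
A common way to estimate a sound's direction from a small microphone array is to measure the time difference of arrival between microphones. The sketch below demonstrates that general technique on synthetic signals; the sample rate, microphone spacing, and delay are invented values, and this is not the NCSU team's published algorithm:

    import numpy as np

    FS = 48_000          # sample rate (Hz), illustrative
    MIC_SPACING = 0.02   # distance between two microphones (m), illustrative
    SPEED_OF_SOUND = 343.0

    # Synthetic source signal arriving at mic 2 two samples later than at mic 1.
    rng = np.random.default_rng(1)
    signal = rng.normal(size=2048)
    mic1 = signal
    mic2 = np.roll(signal, 2)

    # Cross-correlate to find the delay that best aligns the two recordings.
    corr = np.correlate(mic2, mic1, mode="full")
    lag = np.argmax(corr) - (len(mic1) - 1)     # delay in samples
    tau = lag / FS                              # delay in seconds

    # Convert the time difference of arrival to a bearing relative to the mic axis.
    angle = np.degrees(np.arcsin(np.clip(tau * SPEED_OF_SOUND / MIC_SPACING, -1, 1)))
    print(f"estimated delay: {lag} samples, bearing ~ {angle:.1f} degrees")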


Testing the 'Safety Alarm 2.0'
SINTEF (11/06/14)

Researchers at the Norwegian University of Science and Technology and SINTEF have developed a belt-based system designed for the elderly that triggers an alarm if the user falls down. The system, now undergoing testing, consists of a mobile phone attached to a hip belt and programmed with a fall-detection algorithm. In the future, the researchers hope to refine the system so it is more practical and functional than a mobile phone sewn into a belt. "Now we can test how our fall calculations relate to the real world, as opposed to the data set used today, which is based on fall simulations," says SINTEF researcher Yngve Dahl. "Thus, a key part of this test is collecting realistic data on what happens when an elderly person falls in the real world. This work is being undertaken in close collaboration with the Institute of Movement Science at the Norwegian University of Science and Technology." The testing data will be used to fine-tune the algorithms, which work by measuring movement and changes in speed. Dahl says the most important aspect of the research is to develop systems that not only work from a technical perspective, but that elderly people also are willing to use in practice.
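
Fall-detection algorithms of this kind typically watch accelerometer readings for a sudden impact followed by a period of stillness. The sketch below illustrates that pattern; the thresholds and sample values are invented and are not the ones used in the SINTEF/NTNU system:

    import math

    # Accelerometer samples (x, y, z) in g, e.g., from the phone in the hip belt.
    # Roughly: normal movement, a sharp impact spike, then lying still.
    samples = [(0.0, 0.0, 1.0), (0.1, 0.2, 1.1), (1.8, 2.0, 2.5),
               (0.0, 0.0, 0.1), (0.0, 0.0, 0.1), (0.0, 0.0, 0.1)]

    IMPACT_THRESHOLD = 2.5   # g, illustrative
    STILL_THRESHOLD = 0.3    # g, illustrative
    STILL_SAMPLES = 3        # consecutive low-motion samples after the impact

    def magnitude(sample):
        return math.sqrt(sum(axis * axis for axis in sample))

    def detect_fall(samples):
        mags = [magnitude(s) for s in samples]
        for i, m in enumerate(mags):
            if m > IMPACT_THRESHOLD:
                after = mags[i + 1:i + 1 + STILL_SAMPLES]
                if len(after) == STILL_SAMPLES and all(a < STILL_THRESHOLD for a in after):
                    return True  # impact followed by stillness: raise the alarm
        return False

    print(detect_fall(samples))  # True for the sequence above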


Can Quantum Speed Code-Breaking Tech?
Government Computer News (11/05/14)

The U.S. Commerce Department, the U.S. National Institute of Standards and Technology (NIST), and the University of Maryland (UMD) recently announced the creation of the Joint Center for Quantum Information and Computer Science (QuICS). The center will act as a "venue for groundbreaking basic research to build our capacity for quantum research," says NIST acting director Willie May. The center's researchers will conduct basic research to understand how quantum systems can best be used to store, transport, and process information. QuICS researchers will focus on understanding how quantum mechanics informs computation and communication theories, and on determining what insights computer science can offer about quantum computing. In addition, the researchers will examine the consequences of quantum information theory for fundamental physics, and develop practical applications for theoretical advances in quantum computation and communication. "The capabilities of today's embedded and high-performance computer architectures have limited advances in critical areas, such as modeling the physical world, improving sensors, and securing communications," say UMD professor Dianne O'Leary and NIST physicist Jacob Taylor, who will serve as co-chairs of QuICS.


Microsoft Headset to Help Blind People Navigate Cities
BBC News (11/05/14)

Microsoft is collaborating with a British charity to develop a headset that could help blind and visually impaired people navigate urban locations. Jenny Cook, head of strategy and research for Guide Dogs, the group working with Microsoft on the project, says those with sight loss often do not leave the house with any frequency, in part because navigating unfamiliar areas independently can be daunting. The new headset is meant to remedy that by using information gathered from global-positioning systems and local beacons to determine where the user is and to help them navigate a given route with a blend of audio signals and direct audible commands such as "turn right." The headset is adapted from an existing model designed for cyclists, but it sits in front of the ears, rather than over them, so it does not drown out surrounding sounds. Of eight visually impaired people who tested the headset, six said wearing it made them feel more confident while navigating an unfamiliar area. One testing participant said it could help the visually impaired go out even when they have not made arrangements for transportation or accompaniment.
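
Turning a known position and a planned route into a spoken instruction can be illustrated by comparing the user's heading with the compass bearing to the next waypoint. The coordinates and thresholds below are made-up examples, not Microsoft's implementation:

    import math

    def bearing(lat1, lon1, lat2, lon2):
        # Initial compass bearing in degrees from point 1 to point 2.
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        x = math.sin(dlon) * math.cos(phi2)
        y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        return math.degrees(math.atan2(x, y)) % 360

    def instruction(heading, target_bearing):
        # Positive difference means the waypoint is to the right, negative to the left.
        diff = (target_bearing - heading + 180) % 360 - 180
        if abs(diff) < 20:
            return "continue straight"
        return "turn right" if diff > 0 else "turn left"

    # User facing north, next waypoint roughly to the east (illustrative coordinates).
    user, waypoint, heading = (51.5007, -0.1246), (51.5007, -0.1230), 0.0
    print(instruction(heading, bearing(*user, *waypoint)))  # "turn right"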


Abstract News © Copyright 2014 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]