Welcome to the March 4, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.
Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets (click here) and for iPhones (click here) and iPads (click here).
HEADLINES AT A GLANCE
'FREAK' Flaw Undermines Security for Apple and Google Users, Researchers Discover
The Washington Post (03/03/15) Craig Timberg
Companies and government agencies are scrambling to correct a major security flaw revealed this week that has left users of Apple and Google devices, along with millions of websites, vulnerable to man-in-the-middle attacks for more than a decade. Dubbed FREAK, the vulnerability is the result of 1990s-era government policy that restricted the export of strong encryption techniques, which resulted in now-weak 512-bit encryption being coded into numerous software products that have since proliferated around the world. The flaw was discovered by researchers at the French computer science lab INRIA during tests of encryption systems, and it took many by surprise because 512-bit encryption has been considered obsolete for more than a decade. University of Pennsylvania cryptographer Nadia Heninger was able to crack the vulnerable encryption in about seven hours by renting time on Amazon Web Services servers. Hackers could exploit this method to steal passwords and personal information and potentially launch broader attacks on affected websites. The University of Michigan estimates almost a third of all "secure" websites are affected by FREAK, with about 5 million encrypted websites still vulnerable as of Tuesday morning. Governments and businesses were working behind the scenes to address FREAK before it became public knowledge on Monday, and both Apple and Google are working on patches for computers and mobile devices.
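The core weakness is that a 512-bit RSA modulus is small enough to factor with modest computing resources. The toy sketch below illustrates the principle on a deliberately tiny modulus using Pollard's rho algorithm (this is an illustration of why short keys fail, not the researchers' actual attack, which targeted full 512-bit keys on rented cloud hardware):

```python
import math

def pollard_rho(n: int) -> int:
    """Find a nontrivial factor of n via Pollard's rho with Floyd cycle
    detection, retrying with different polynomial constants if needed."""
    if n % 2 == 0:
        return 2
    for c in range(1, 20):
        x = y = 2
        d = 1
        while d == 1:
            x = (x * x + c) % n          # tortoise: one step
            y = (y * y + c) % n          # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                        # found a proper factor
            return d
    return n

# A toy 27-bit modulus; a real export-grade modulus is 512 bits, which
# took about seven hours on rented cloud servers per the article.
n = 100160063  # = 10007 * 10009
p = pollard_rho(n)
print(p, n // p)
```

Factoring the modulus recovers the RSA private key, which is what lets an attacker impersonate the server in a man-in-the-middle position.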
Dozens of Tech, Education, and Nonprofit Execs Urge Passage of Washington Computer Science Bill
GeekWire (03/04/15) Frank Catalano
More than 50 business and education leaders have signed a strongly worded appeal to the Washington state House of Representatives, urging them to vote for a bill that would expand computer science education in the state's schools. The letter was sent by Code.org and Washington STEM on March 3 and asks state legislators to support House Bill 1813. The bill would establish a grant program with matching private funds that would both train educators in computer science, as well as provide funds for new equipment. H.B. 1813 already has cleared the House committees on education and appropriations and will go to a vote before the whole House soon. The letter points out that although there are about 20,000 open computer jobs available in Washington and such positions are growing at a much faster rate than the state average, only 1,200 state students graduated with degrees in computer science in 2014 and computer science courses are only offered in 7 percent of the state's high schools. Among the 53 signers of the letter are former Microsoft CEO Steve Ballmer, Starbucks president Kevin Johnson, and University of Washington Provost Ana Mari Cauce, as well as Code.org and Washington STEM CEOs Hadi Partovi and Patrick D'Amelio.
Google, Stanford Use Machine Learning on 37.8M Data Points for Drug Discovery
CIO Australia (03/03/15) Rebecca Merrett
Researchers at Stanford University and Google have used machine-learning techniques, including deep learning and multitask networks, to find effective drug treatments for a variety of diseases. The researchers worked with 259 publicly available datasets on biological processes, containing 37.8 million data points for 1.6 million compounds. "Because of our large scale, we were able to carefully probe the sensitivity of these models to a variety of changes in model structure and input data," the researchers wrote on the Google Research Blog. The goal was to quantify how the amount and diversity of screening data from a variety of diseases with very different biological processes can be used to improve the virtual drug-screening predictions. "Our models are able to utilize data from many different experiments to increase prediction accuracy across many diseases," the researchers wrote. The learning models were evaluated using "area under the receiver operating characteristic curve," a measure for classification accuracy. The researchers noted a key finding of their work was that multitask networks allow for significantly more accurate predictions than single-task methods, and their predictive capability improves as more tasks and data are added to the models.
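The evaluation metric mentioned above, area under the ROC curve (AUC), equals the probability that a randomly chosen active compound is scored above a randomly chosen inactive one. A minimal pure-Python sketch of that computation (the labels and scores here are made-up examples, not the study's data):

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive outscores the
    negative, counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1, 0]            # 1 = active compound, 0 = inactive
scores = [0.9, 0.8, 0.7, 0.3, 0.6, 0.2]  # model's predicted activity
print(roc_auc(labels, scores))         # 8/9: one negative outranks one positive
```

An AUC of 0.5 means the model is no better than chance; 1.0 means perfect ranking, which is why the metric suits virtual screening, where ranking compounds matters more than hard classification.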
Mobile Phone App to Identify Premature Babies in the Developing World
University of Nottingham (United Kingdom) (03/03/15) Emma Thorne
University of Nottingham researchers, with support from the Bill & Melinda Gates Foundation, are developing a mobile app that will identify babies born prematurely in the developing world. The technology will rely on distinctive features on the feet, face, and ears of newborns to more accurately estimate the gestation of babies and to identify those who may need urgent medical care. "This could be a potentially transformative technology for the developing world where the majority of women do not benefit from specialist antenatal services during pregnancy and higher risks of infection and illness means premature births are commonplace," says Nottingham professor Don Sharkey. The project was one of 60 bids chosen from 1,700 applications worldwide for funding from the Gates Foundation's Grand Challenges Explorations Grant program. The app combines simple measurements with aspects of the Ballard test, which is used by healthcare professionals to estimate gestation and examines developmental characteristics. The app will use the mobile phone's camera to take images of the foot, face, and ears of the baby and upload them to a database where they will be compared to pictures of other babies at various known gestational ages. The researchers also plan to study the potential ethnic differences in gestational development between babies born in different parts of the world.
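Comparing a new baby's measurements against a database of babies of known gestational age is, at its core, a nearest-neighbor problem. The sketch below illustrates that idea with a toy 1-nearest-neighbor estimator; the feature vectors and reference values are invented placeholders, not the app's actual measurements or method:

```python
def estimate_gestation(features, reference):
    """Toy nearest-neighbor estimate: return the known gestational age
    (in weeks) of the reference baby whose feature vector is closest
    in Euclidean distance to the new measurements."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(reference, key=lambda r: dist(features, r[0]))[1]

# (feature_vector, gestational_age_weeks) for previously imaged babies;
# the two numbers stand in for whatever the app extracts from
# foot/face/ear images.
reference = [((7.1, 3.0), 40), ((5.2, 2.1), 32), ((6.0, 2.5), 36)]
print(estimate_gestation((5.4, 2.2), reference))  # closest to the 32-week baby
```

In practice the researchers would combine image-derived features with the Ballard-style developmental measurements described above, but the matching-against-known-ages structure is the same.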
Why Computers Still Struggle to Tell the Time
PC World (03/02/15) Joab Jackson
Despite the extreme precision of most modern computer systems, software engineer George Neville-Neil says it remains surprisingly difficult for them to accurately tell time. Speaking at ACM's Applicative conference in New York City last week, Neville-Neil said the problem stems largely from the hardware most computers use to tell time: often inexpensive crystal oscillators that lose precision over time. He says the average computer, smartphone, or server is able to tell time as accurately as a mechanical pocket watch, which is adequate for the average user but not precise enough for many fields. Energy and telecom companies need nanosecond-level precision, as do cloud-service providers, high-frequency traders, and many robotic systems. The current solution is the use of protocols that regularly query more accurate timekeepers, such as the U.S. Naval Observatory or the National Institute of Standards and Technology. The most commonly used protocol, Network Time Protocol, queries a master timekeeper once every 15 to 64 seconds and uses the answer to synchronize timekeeping across a network. However, even this process can prove to be insufficient, with many cloud providers struggling to accurately keep time. Neville-Neil is working on a more accurate approach based on the Precision Time Protocol, which largely relies on querying the master timekeeper more frequently, although this requires more bandwidth.
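The synchronization math behind an NTP query is compact: from four timestamps (client send, server receive, server send, client receive), the client derives its clock offset and the network round-trip delay, assuming the path delay is roughly symmetric. A minimal sketch of that calculation:

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Standard NTP clock math from the four exchange timestamps:
    t1 = client send, t2 = server receive,
    t3 = server send,  t4 = client receive.
    Assumes the outbound and return network delays are symmetric."""
    offset = ((t2 - t1) + (t3 - t4)) / 2   # how far the client clock lags
    delay = (t4 - t1) - (t3 - t2)          # round trip minus server processing
    return offset, delay

# Client clock is 100 s slow; each one-way trip takes 10 s; the server
# spends 10 s before replying (all values in seconds for readability).
offset, delay = ntp_offset_delay(t1=0, t2=110, t3=120, t4=30)
print(offset, delay)  # client should step its clock forward by `offset`
```

The asymmetric-delay assumption is exactly where accuracy erodes on congested networks, which is why hardware-timestamped protocols like the Precision Time Protocol can do better.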
Army Opts for Openness With New Computer Security Tool
Baltimore Sun (03/01/15) Ian Duncan
Researchers at the U.S. Army Research Lab in December released a new network threat visualization tool on open source website GitHub in the latest and most significant step in a growing movement toward open source among military and government developers. "The Army is open and willing to collaborate," says project leader William Glodek. "Hopefully, we can attract some bright talent to contribute to the project." Even as skepticism about the government and its lack of transparency has grown in the tech sector in recent years, coders in the military and intelligence communities increasingly are pursuing open source initiatives and putting their code online for others to tweak and improve. However, the Glodek team's decision to post its code on GitHub was noteworthy because it was one of the largest projects to be put on such a mainstream venue to date. Dan Guido, founder of security firm Trail of Bits, called the move "really amazing." GitHub's Ben Balter says he hopes the move encourages other military and government agencies to put their code on the website. Despite growing interest in open source among government coders, Balter says there is still significant institutional and bureaucratic inertia that has to be overcome before the government fully embraces open source.
Cockroach Robots? Not Nightmare Fantasy but Science Lab Reality
The Guardian (03/03/15) Ian Sample
Texas A&M University researchers have fused a computer onto the back of a live cockroach in order to control the insect. At the push of a button, wires connected to the cockroach's nervous system control which way it moves. The researchers made tiny backpacks containing a computer chip that sends signals down a pair of wires into nerves that control legs on either side of the cockroach. The researchers demonstrated how they could remotely control the direction in which the cockroach walked by stimulating nerves on either side of its body. Testing showed that when the robotic cockroaches were held on leashes, the insects could be controlled about 70 percent of the time. However, when the insects were allowed to roam free, the remote control worked only about 60 percent of the time. The researchers now are studying how to make the cockroaches respond to directions more reliably. "Insects can do things a robot cannot. They can go into small places, sense the environment, and if there's movement, from a predator say, they can escape much better than a system designed by a human," says Hong Liang, who led the research. "We wanted to find ways to work with them."
Google Wants to Rank Websites Based on Facts Not Links
New Scientist (02/28/15) Hal Hodson
Google's search engine currently uses the number of incoming links to a Web page to determine where it appears in search results. However, Google researchers are experimenting with a new system to rank pages based on their trustworthiness instead of on their reputation across the Web. The new ranking system counts the number of incorrect facts within a page, and not the number of incoming links. "A source that has few false facts is considered to be trustworthy," the researchers say. The new method produces a score for each page known as its Knowledge-Based Trust score. The software, which is not yet live, works by accessing the Knowledge Vault, a vast database of facts Google has collected from the Internet; facts the Web unanimously agrees on are treated as a reasonable proxy for truth. Web pages that contradict those facts would be moved down in the rankings.
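A toy illustration of the scoring idea: treat the Knowledge Vault as a reference table of (subject, predicate) → object facts, extract a page's claims, and score the page by the fraction of its checkable claims that agree. This is a deliberately simplified sketch (the vault entries and page triples below are invented), not Google's actual extraction or probabilistic model:

```python
# A toy "Knowledge Vault": reference (subject, predicate) -> object facts.
VAULT = {
    ("Eiffel Tower", "city"): "Paris",
    ("Nile", "continent"): "Africa",
    ("Mercury", "orbits"): "Sun",
}

def trust_score(extracted_facts):
    """Fraction of a page's extracted facts that agree with the vault.
    Facts the vault knows nothing about are ignored."""
    checked = [(k, v) for k, v in extracted_facts if k in VAULT]
    if not checked:
        return None  # nothing to judge the page on
    correct = sum(1 for k, v in checked if VAULT[k] == v)
    return correct / len(checked)

page = [(("Eiffel Tower", "city"), "Paris"),
        (("Nile", "continent"), "Asia"),     # a false claim
        (("Mercury", "orbits"), "Sun")]
print(trust_score(page))  # 2 of 3 checkable claims are correct
```

A page with few false facts scores near 1.0 and would rank well under the proposed scheme even if few other sites link to it.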
Panoramas for Your Tablet
Panoramic video could soon pop up on the screens of smart TVs, smartphones, and tablets as researchers in Germany are looking to use the technology that inspired the "Star Trek" holodeck to recreate a similar effect in the real world. Christian Weissig from the Fraunhofer Institute for Telecommunications' Heinrich-Hertz-Institut (HHI) and his team have developed Ultra-HD-Zoom, a prototype that enables users to select and navigate around high-resolution segments of panoramic images. HHI's OmniCam system also is capable of creating 360-degree panoramic images in real time, enabling the technology to be used to cover live events. Using currently available LTE networks, it is possible to transmit individual segments of the panorama. The approach makes it technically feasible for a very large group of people to use a panoramic image at the same time. "It's another step towards personalized television: users taking advantage of the 'second screen' to become their own cameraman and take over the footage, maybe by zooming in to a specific point within their chosen segment," Weissig says.
NIH Dives Into Cyber-Physical Systems Research
Government Computer News (02/27/15) Mark Pomerleau
The U.S. National Institutes of Health (NIH) and several other agencies have announced funding and grant opportunities for cyber-physical systems (CPS), a new generation of embedded systems with integrated computational and physical capabilities. "The ability to interact with and expand the capabilities of the physical world through computation, communication, and control is a key enabler for future technology developments," according to a recent IEEE paper. NIH wants to study ways CPS technology can mitigate errors in intensive care units, exploring the development of CPS for artificial organs, as well as developing hospital-wide applications to reduce fragmentation and contain costs by tracking medical assets. The funding program aims to develop the core system science needed to engineer complex cyber-physical systems, and to foster a research community committed to advancing research and education in CPS and transitioning CPS science and technology into engineering practices. "CPS technology will transform the way people interact with engineered systems--just as the Internet has transformed the way people interact with information," NIH says. Grant applicants should describe how the ideas being proposed will address healthcare needs of end-users, including healthy individuals, patient populations with specific targeted diseases, persons with disability, and health disparity populations.
New App Monitors Net Neutrality in Mobile Networks
News@Northeastern (03/02/15) Jason Kornwitz
Northeastern University researchers looking to improve the transparency of mobile systems have developed an app for detecting traffic differentiation in mobile networks. The release of the Differentiation Detector comes on the heels of the recent U.S. Federal Communications Commission vote to pass new net neutrality rules. The app will enable users to test whether Internet service providers (ISPs) are violating the law by blocking or throttling broadband access. Mobile system expert David Choffnes and colleagues say the app utilizes a virtual private network (VPN) proxy to record traffic generated by arbitrary applications, such as YouTube or Netflix, and replays the traffic both with and without the VPN in order to identify differentiation. The free Android app also will help consumers choose the best service provider. Over the next several months, the researchers plan to collect differentiation data from tens of thousands of users and develop a website to make their results public. "Giving users the chance to participate in revealing ISP practices will help us influence policy and help users make informed choices when selecting mobile providers," Choffnes says.
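The record-and-replay comparison boils down to: replay the same traffic in the clear (where the ISP can classify it) and inside the VPN tunnel (where it cannot), then compare performance. A simplified sketch of that final comparison step, using a made-up threshold and sample throughput numbers rather than the app's actual statistical test:

```python
from statistics import median

def differentiation_suspected(clear_kbps, tunneled_kbps, threshold=0.2):
    """Compare median throughput of identical traffic replayed in the
    clear vs. through the VPN tunnel; flag the ISP if the clear replay
    is more than `threshold` (fractional) slower, suggesting the
    recognizable traffic is being throttled."""
    clear, tunneled = median(clear_kbps), median(tunneled_kbps)
    return (tunneled - clear) / tunneled > threshold

# Replaying recorded "video" traffic: noticeably slower outside the tunnel.
clear = [480, 500, 510, 470, 495]      # ISP can see and classify this
tunneled = [900, 950, 940, 920, 930]   # same bytes, hidden inside the VPN
print(differentiation_suspected(clear, tunneled))
```

Because both replays carry identical bytes over the same network, any large, consistent throughput gap points at content-based differentiation rather than ordinary congestion.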
Disney's Computer-Assisted Authoring Tools Help to Create Complex Interactive Narratives
EurekAlert (02/27/15) Jennifer Liu
Disney researchers have developed a new design paradigm called interactive behavior trees (IBTs), a graphical modeling language that accommodates multiple story arcs. The researchers also have created authoring tools that can automatically detect and resolve narrative inconsistencies that arise as story arcs play out or when users interact in unexpected ways. "We want interactive narratives to be an immersive experience in which users can influence the action or even create a storyline, but the complexity of the authoring task has worked against our ambitions," says Rutgers University assistant professor Mubbasir Kapadia, who previously worked at Disney Research. "Our method of modeling multiple story arcs and resolving conflicts in the storylines makes it feasible to author interactive experiences that are free form, rather than constricted." IBTs have a hierarchical structure, which enables each story arc to be defined as its own subtree, while user interactions are monitored independently, as are those interactions that trigger new story arcs. "With this structure, increased user interaction does not make the author's task more complex, so we can now imagine ways of giving the user more freedom to interact freely with the virtual world," Kapadia says.
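Interactive behavior trees build on the standard behavior-tree formalism, in which sequence nodes run children until one fails and selector nodes try children until one succeeds. The sketch below shows that conventional structure with one story arc as its own subtree; it is a generic illustration of the formalism, not Disney's IBT implementation:

```python
SUCCESS, FAILURE = "success", "failure"

class Sequence:
    """Runs children in order; fails at the first failing child."""
    def __init__(self, *children): self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Tries children in order; succeeds at the first succeeding child."""
    def __init__(self, *children): self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == SUCCESS:
                return SUCCESS
        return FAILURE

class Action:
    """Leaf node wrapping a function of the shared story state."""
    def __init__(self, fn): self.fn = fn
    def tick(self, state): return self.fn(state)

# One story arc as its own subtree: require the user to be present,
# then either react to a question or fall back to a default line.
arc = Sequence(
    Action(lambda s: SUCCESS if s.get("user_present") else FAILURE),
    Selector(
        Action(lambda s: SUCCESS if s.get("question") else FAILURE),
        Action(lambda s: (s.setdefault("log", []).append("default line"),
                          SUCCESS)[1]),
    ),
)
print(arc.tick({"user_present": True}))  # prints "success"
```

The IBT contribution described above layers onto this structure: each arc is an independent subtree, user interactions are monitored separately, and the authoring tools reconcile conflicts between arcs automatically.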
'Slow Motion at the Speed of Light'
UA News (AZ) (02/27/15) Daniel Stolte
Researchers at the University of Arizona (UA) and the University of California, Los Angeles (UCLA) have developed technology that provides real-time monitoring of streaming video to optimize network traffic. The researchers say their system has achieved real-time data acquisition and processing at a record 1.2 terabits per second, which is about 10 times faster than conventional technology. The time-stretch accelerated processor utilizes photonic time-stretch enhanced recorder technology to create an optical slow motion to slow down the fast data so it can be digitized and processed. "The system takes in the data as it's coming in at high speed and slows it down while the information still is encoded in the form of laser light," says UCLA researcher Bahram Jalali. The researchers demonstrated in-service optical performance monitoring of 10 gigabit per second streaming video packets transmitted through a commercial networking platform. "This is a very important achievement by our [Center for Integrated Access Network] research team, as it is the first demonstration of real-time optoelectronics performance network monitoring of high-bandwidth streaming video," says UA researcher Nasser Peyghambarian.
Abstract News © Copyright 2015 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.