Association for Computing Machinery
Welcome to the August 24, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, iPhones, and iPads.



Cybersecurity Boot Camp Draws Congressional Staffers to Stanford
Stanford Report (08/24/15) Steve Fyffe

A bipartisan group of Capitol Hill staffers last week attended Stanford University's second Congressional Cyber Boot Camp to gain insights into pressing cybersecurity threats and ways to mitigate them. FireEye president Kevin Mandia warned of the growing prevalence of state-sponsored cyberattacks, with China and Russia among the main culprits. Mandia characterized the shortcomings of the U.S. cybersecurity effort this way: "we've spent billions of dollars on defense, but I don't think we've raised the cost of offense a dollar." Hoover Institution research fellow Herb Lin said cyberattackers have a clear edge over defenders, for whom pinpointing flaws in commonly used software is akin to searching for a needle in a haystack. "Each one of those lines of logic might have a flaw that can be exploited by an attacker to break in," noted Symantec's Carey Nachenberg. Meanwhile, University of California, San Diego professor Stefan Savage said there is an overemphasis on laptops and servers as hackers' primary targets, when everyday objects such as cars are vulnerable because most processors "look nothing like computers." Center for International Security and Cooperation co-director Amy Zegart said she wants to expose congressional staffers to various legal, technical, and policy specialists in order to expedite the learning process and enable cross-disciplinary collaboration.


Crash-Tolerant Data Storage
MIT News (08/24/15) Larry Hardesty

Massachusetts Institute of Technology (MIT) researchers will detail the first computer file system that mathematically guarantees no loss of data when the system crashes at the ACM Symposium on Operating Systems Principles in October. MIT professors Nickolai Zeldovich, Frans Kaashoek, and Adam Chlipala, along with students Haogang Chen and Daniel Ziegler, used formal verification to establish the file system's reliability. Formal verification entails mathematically defining acceptable bounds of operation for a computer program and then proving the program will never exceed them. The MIT researchers' work is unique in that they prove properties of the file system's final code rather than of a high-level schema, using a proof assistant called Coq to supply a formal language for describing the elements of a computer system and the relationships between them. Chlipala notes the file system is implemented in the same language used to write the proofs, so the proofs are checked against the actual implementation. The researchers also had to formally describe how the different system components behave, and relate to one another, under crash conditions. Although the file system was rewritten numerous times, Kaashoek says the researchers spent 90 percent of their time on these definitions and relationships and on the proofs, rather than on writing the file system itself.
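
The article does not include the system's code, but write-ahead logging is one standard mechanism for the kind of crash-safety guarantee being proved here. The following minimal Python sketch (invented file names, no checksumming) illustrates the invariant at stake: after a crash at any point, an update is either fully applied or not applied at all.

```python
import json, os

LOG, DATA = "wal.log", "data.json"   # hypothetical paths

def commit(update: dict) -> None:
    """Atomically apply `update` to DATA, surviving a crash at any point."""
    # 1. Durably record the intent before touching the data file.
    #    (A real system would also checksum the log to detect torn writes.)
    with open(LOG, "w") as f:
        json.dump(update, f)
        f.flush()
        os.fsync(f.fileno())
    # 2. Apply the update; a crash here is fine, since the log lets us redo it.
    _apply(update)
    # 3. Only now discard the log entry.
    os.remove(LOG)

def _apply(update: dict) -> None:
    state = {}
    if os.path.exists(DATA):
        with open(DATA) as f:
            state = json.load(f)
    state.update(update)
    tmp = DATA + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, DATA)   # atomic rename: readers see old or new, never a mix

def recover() -> None:
    """Run at startup: redo any update whose commit was interrupted."""
    if os.path.exists(LOG):
        with open(LOG) as f:
            _apply(json.load(f))   # replaying is idempotent
        os.remove(LOG)
```

The MIT proof covers far subtler territory, including crashes that occur during recovery itself, which is exactly why machine-checked verification is valuable here.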


NSA Preps Quantum-Resistant Algorithms to Head Off Crypto-Apocalypse
Ars Technica (08/21/15) Dan Goodin

The U.S. National Security Agency (NSA) last week updated the guidance it provides to agencies and contractors regarding the use of cryptography to warn about the possible effects quantum computers could have on modern cryptographic keys and algorithms. Quantum computers hold the potential to overturn contemporary cryptography by factoring the large numbers underlying cryptographic keys almost instantaneously, although most experts believe researchers are a decade, if not decades, away from a functioning quantum computer. Nevertheless, NSA considers the threat imminent enough to start the shift toward what it calls quantum-resistant crypto. In its new guidance, NSA recommends agencies and contractors already using the cryptographic algorithms and key sizes it currently recommends continue to do so, but advises those that have not yet adopted NSA-approved crypto to hold off until the new quantum-resistant standards are ready. "For those partners and vendors that have not yet made the transition to Suite B algorithms, we recommend not making a significant expenditure to do so at this point, but instead to prepare for the upcoming quantum-resistant algorithm transition," NSA says in its guidance. The agency did not say how long it will take for quantum-resistant algorithms to be published, but if it follows the gradual process of previous cryptographic roll-outs, it could be decades before quantum-resistant crypto is widely implemented.
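
To see why large-scale factoring would be catastrophic, consider RSA: the private key follows immediately from the factors of the public modulus. The toy Python sketch below uses deliberately tiny numbers; a quantum computer running Shor's algorithm would supply the factors p and q for real key sizes.

```python
# Toy illustration only -- real RSA moduli are 2,048+ bits. Python 3.8+.
p, q = 61, 53              # secret primes (what a quantum factorer recovers)
n = p * q                  # public modulus
e = 17                     # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)        # the private exponent falls out immediately

message = 42
cipher = pow(message, e, n)          # encrypt with the public key
assert pow(cipher, d, n) == message  # decrypt with the derived private key
```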


At Microsoft, Software-Defined Networking Takes Cloudy Turn
eWeek (08/20/15) Pedro Hernandez

Microsoft has thrown its support behind software-defined networking (SDN), making it a major part of its presentation last week at ACM's SIGCOMM 2015 conference in London. Microsoft offered attendees a glimpse of how SDN is enabling cloud-based workloads at a scale that would overwhelm legacy network architectures. Microsoft believes SDN technologies will help meet the needs of enterprise customers and data center operators seeking more flexible, application-aware means of managing their networks, and it has developed new SDN capabilities, including a scalable network controller and a software load balancer. Microsoft already has embraced SDN at its own facilities: the technology is a major enabler of its Azure cloud services platform. One area of SDN receiving considerable attention from Microsoft is the open Switch Abstraction Interface (SAI) specification, which also is supported by Dell, Big Switch Networks, and Mellanox. In addition, Microsoft demonstrated its Azure Cloud Switch switching software, which runs atop SAI, at SIGCOMM. Microsoft also previewed its Azure SmartNIC technology, which uses field-programmable gate arrays (FPGAs) to make network hardware reconfigurable.
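
The article stays at the announcement level, but the core job of a software load balancer is easy to sketch. The snippet below (hypothetical addresses, not Azure's implementation) hashes each connection's 5-tuple so that every packet of a flow reaches the same backend server:

```python
import hashlib

backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical server pool

def pick_backend(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
    """Map a flow's 5-tuple to a backend; same flow, same backend."""
    flow = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    digest = hashlib.sha256(flow).digest()
    return backends[int.from_bytes(digest[:8], "big") % len(backends)]

# Every packet of this flow maps to one backend, so connections stay intact.
assert pick_backend("1.2.3.4", 5555, "9.9.9.9", 443) == \
       pick_backend("1.2.3.4", 5555, "9.9.9.9", 443)
```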


Machine Learning Selects World's Next Top Models
iTnews Australia (08/20/15) Ry Crozier

Indiana University researchers have trained machine-learning algorithms to accurately predict the next batch of successful female fashion models, with success measured by the number of runway shows a new model walked in a fashion season. Using training data from the previous season only, the algorithms correctly predicted six of the eight fashion models who became popular during the season. The algorithms also successfully identified six of the seven fashion models who did not perform in any top event, according to the researchers. The past-season data was collected from several fashion databases that track models' identities and attributes, their agency representation, and the shows in which they appeared. The researchers also examined which of the "new faces" from the fashion databases had Instagram accounts, and tracked their activity for any correlation between social media presence and runway success. They found Instagram data does have an effect on new models' success, but it does not tell the whole story. "As the impact of social media--especially Instagram--becomes significant in the fashion industry, predictive methods have the potential to leverage collective attention and the wisdom of the broader user population, which reflect some of the popularity of fashion models, to predict their career success," the researchers note.
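
The article does not name the learning method, so the following scikit-learn sketch is only a plausible reconstruction of the setup, with feature names and numbers invented for illustration: train a classifier on last season's records, then score this season's new faces.

```python
from sklearn.ensemble import RandomForestClassifier

# One row per last-season "new face": [agency_prestige, runway_walks,
# instagram_followers, instagram_posts_per_week] -- all hypothetical.
X_train = [
    [0.9, 12, 25_000, 4.0],
    [0.4,  2,  1_200, 0.5],
    [0.7,  8,  9_000, 2.0],
    [0.2,  0,    300, 0.2],
]
y_train = [1, 0, 1, 0]   # 1 = walked a top-event runway that season

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

new_faces = [[0.8, 5, 15_000, 3.0]]      # this season's candidates
print(model.predict_proba(new_faces))    # estimated chance of success
```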


Team Designs Robots to Build Things in Messy, Unpredictable Situations
Technology Review (08/20/15) Julia Sklar

Researchers at Harvard University and the State University of New York at Buffalo are developing robots that can function outside of ideal, predictable environments. The new robots will be able to work in places where predictive algorithms cannot be used to plan several thousand steps ahead. The researchers note these "builder bots" are designed to be disaster-relief agents. One robot deposits expandable, self-hardening foam, while another drags and piles up sandbags. The materials are useful in a range of real-world environments, but they are highly unpredictable. The researchers dealt with this unpredictability by equipping the robots with an infrared sensor that scans and assesses the environment. The robots rely on an algorithm that functions as a loop so they can build as they go, accounting for any changes in the environment as well as changes to the material. The researchers currently are focusing on using their adaptable robotic system to build ramps. Going forward, they plan to develop robots that can build in situations in which they do not know what materials will be available.
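
The article describes the control strategy only at a high level; the Python sketch below is an invented, minimal rendering of such a sense-and-respond loop, with the ramp profile, sensor noise, and deposit model all chosen for illustration.

```python
import random

GOAL_HEIGHT = [1, 2, 3, 4]          # target ramp profile, arbitrary units

def scan(structure):
    """Stand-in for the infrared scan: a noisy reading of current heights."""
    return [h + random.uniform(-0.1, 0.1) for h in structure]

structure = [0.0, 0.0, 0.0, 0.0]
while True:
    observed = scan(structure)
    # React to what the sensor sees *now* rather than following a plan
    # computed thousands of steps in advance.
    deficits = [goal - seen for goal, seen in zip(GOAL_HEIGHT, observed)]
    worst = max(range(len(deficits)), key=lambda i: deficits[i])
    if deficits[worst] <= 0.2:      # close enough everywhere: done
        break
    # Deposits are unpredictable, like expanding foam or settling sandbags.
    structure[worst] += random.uniform(0.5, 1.5)

print("built ramp:", [round(h, 2) for h in structure])
```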


You'd Never Know It Wasn't Bach (or Even Human)
Yale News (08/20/15) Eric Gershon; Jim Shelton

Yale University computer scientist Donya Quick is refining Kulitta, a computer program she developed to produce music that, in tests, fooled more than 200 human assessors into thinking it was created by a person. Kulitta can learn musical attributes from a body of existing compositions, and it can write music via a "top-down" approach that composes by eliminating elements it does not want to use. The program differentiates itself from other composition systems by its versatility: it can employ the structures of different musical forms and blend them to create music with distinctive novelty. Kulitta's core is a four-module process, with the first module establishing musical properties and the second generating an abstract musical structure. The third module produces musical chords, and the fourth arranges everything into a specific framework. Quick says the program can create compositions in seconds that would take a human at least a day to compose. She envisions Kulitta and similar programs as "another tool in the toolbox" for musicians, and thinks they could help people without advanced musical skills engage in high-level composition. Although Yale professor Konrad Kaczmarek acknowledges technologies such as Kulitta will likely change how composers write, perform, and listen to music, he says the basic, algorithmic elements of composition will remain unchanged.
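
Kulitta's internals are only outlined in the article, but a toy Python pipeline can illustrate the four-module flow, with every musical choice below invented for illustration:

```python
import random

def choose_properties():
    # Module 1: establish global musical properties.
    return {"key": "C", "sections": 4}

def abstract_structure(props):
    # Module 2: generate an abstract structure, e.g., a form label per section.
    return [random.choice(["A", "B"]) for _ in range(props["sections"])]

def generate_chords(structure):
    # Module 3: realize each abstract section as a chord progression.
    palette = {"A": ["C", "F", "G", "C"], "B": ["Am", "Dm", "G", "C"]}
    return [palette[label] for label in structure]

def arrange(progressions):
    # Module 4: flatten everything into a concrete, playable framework.
    return [f"{chord} (1 bar)" for prog in progressions for chord in prog]

props = choose_properties()
print(arrange(generate_chords(abstract_structure(props))))
```

(The real Kulitta is a Haskell system built around probabilistic grammars; the point here is only the staged, top-down pipeline.)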


Cellphones Help Track Flu on Campus
Duke Today (08/18/15)

Wearable devices or smartphone applications could help identify college students who are at risk of catching the flu, according to researchers at Duke University and the University of North Carolina-Chapel Hill. The team used a mobile app to track students' interactions, then developed a model that enabled them to predict the spread of influenza from one person to the next over time. For 10 weeks during the 2013 flu season, students used Android smartphones running built-in software called iEpi, which used Wi-Fi, Bluetooth, and Global Positioning System (GPS) technology to monitor where they went and with whom they came in contact from moment to moment. The students also recorded their symptoms online every week. The model determined how likely each student was to spread or contract the flu on a given day. "Smartphones and wearable health and fitness devices allow us to collect information like a person's heart rate, blood pressure, social interactions, and activity levels with much more regularity and more accurately than was possible before," notes Duke researcher Katherine Heller. "You can keep a continuous logbook. We want to leverage that data to predict what people's individual risk factors are, and give them advice to help them reduce their chances of getting sick."
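
The article does not specify the model's form, so the sketch below is just one plausible rendition of the core idea: combine a contact log like iEpi's with self-reported symptoms to score each student's daily infection risk. The transmission probability and contact data are invented.

```python
PER_CONTACT_RISK = 0.05   # hypothetical chance of transmission per contact

# contacts[day] = (person_a, person_b) pairs observed by the app that day
contacts = {1: [("ana", "bo"), ("bo", "cy")],
            2: [("ana", "cy"), ("bo", "cy")]}
infectious = {"ana"}      # students reporting flu symptoms this week

for day, pairs in sorted(contacts.items()):
    risk = {}
    for a, b in pairs:
        for person, other in ((a, b), (b, a)):
            if other in infectious and person not in infectious:
                # P(infected) = 1 - P(escaping every risky contact that day)
                escaped = (1 - risk.get(person, 0.0)) * (1 - PER_CONTACT_RISK)
                risk[person] = 1 - escaped
    print(f"day {day}:", {p: round(r, 3) for p, r in risk.items()})
```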


Computer Game for Chameleons Tests Their Telescopic Eyes
CNet (08/20/15) Michelle Starr

A computer game designed by Ehud Rivlin, a computer science professor at the Technion-Israel Institute of Technology, and Gadi Katzir, a biology professor at the University of Haifa, tests whether chameleons' eyes can work together. The game shows a chameleon a virtual insect moving across a screen, and the researchers learned the reptile first used one eye to focus on the insect while the other eye continued scanning elsewhere. But once the chameleon decided to catch the insect, it focused both eyes on it a second before lashing out with its tongue. When two insects moving in different directions were displayed, the chameleon initially exhibited confusion, but followed the same eye-movement pattern once it had decided which insect to catch. "If the eyes were truly independent, one would not expect one eye to stay put and then have the other eye converge," Rivlin notes. "But we found that once the chameleon made its decision about which target to fire on, it swiveled the second eye around to focus on the same simulated fly the first eye was locked on." The researchers say this phenomenon suggests the chameleon's eye movements are disconjugate during scanning, conjugate during binocular tracking, and disconjugate but coordinated during monocular tracking.


Robots Discover How Cooperative Behavior Evolved in Insects
IEEE Spectrum (08/19/15) Evan Ackerman

Simulated robots have evolved a specialized division of labor entirely on their own, in a manner similar to social insects. The robots' mechanism, like that of insects, entails splitting large, complex tasks into smaller chunks in which individuals can specialize. The researchers modeled a scenario containing a sun to provide light, a nest, a tree, and a ramp with a slope of eight degrees to simulate a tree trunk. The virtual robots were programmed to collect leaves and avoid obstacles. Their behavioral repertoire was very basic: moving toward or away from light, moving randomly, picking up objects, and dropping objects. These behaviors initially were mixed randomly into "genes" and allocated to groups of four virtual robots. One hundred of the robot groups were simultaneously unleashed on the virtual tree for 5,000 seconds, and each group was assessed three times. Individual and group performance dictated which genes underwent crossbreeding and mutation to produce the next robot generation, and this process was repeated 2,000 times. A majority of the evolved robot controllers shifted from generalists to specialists, with selection driven mainly by the slope.
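
The group size, population, and generation count below come from the article; everything else (the gene encoding, the fitness stand-in, the mutation rate) is invented to illustrate the evolutionary loop in Python:

```python
import random

BEHAVIORS = ["to_light", "from_light", "random_walk", "pick_up", "drop"]
GROUP_SIZE, N_GROUPS, GENERATIONS = 4, 100, 2000   # from the article
GENE_LEN = 8                                       # invented: slots per gene

def random_group():
    return [[random.choice(BEHAVIORS) for _ in range(GENE_LEN)]
            for _ in range(GROUP_SIZE)]

def fitness(group):
    # Invented stand-in for "leaves collected in 5,000 simulated seconds";
    # the real score comes from running the simulation three times.
    return sum(gene.count("pick_up") for gene in group) + random.random()

def next_generation(groups):
    survivors = sorted(groups, key=fitness, reverse=True)[: N_GROUPS // 2]
    children = []
    while len(survivors) + len(children) < N_GROUPS:
        a, b = random.sample(survivors, 2)
        child = []
        for ga, gb in zip(random.choice(a), random.choice(b)):
            slot = ga if random.random() < 0.5 else gb   # crossbreeding
            if random.random() < 0.05:                   # mutation
                slot = random.choice(BEHAVIORS)
            child.append(slot)
        children.append([list(child) for _ in range(GROUP_SIZE)])
    return survivors + children

groups = [random_group() for _ in range(N_GROUPS)]
for _ in range(GENERATIONS):
    groups = next_generation(groups)
```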


Algorithm Interprets Breathing Difficulties to Aid in Medical Care
NCSU News (08/19/15) Matt Shipman

North Carolina State University (NCSU) researchers, working as part of a larger project to develop wearable smart medical sensors, say they have developed an efficient algorithm that can interpret patients' breathing patterns to give medical providers information about what is happening in their lungs. "Now we've developed an algorithm that can assess the onset time, pitch, and magnitude (or volume) of wheezing sounds to give healthcare professionals information about the condition of the lungs," says NCSU professor Hamid Krim, adding the information can help doctors make more informed decisions about diagnosis and treatment. The algorithm was developed to work with wearable technology, and the researchers hope it can ultimately be used to continuously assess the sound of a patient's breathing over time. In a potential future system, sensors monitoring breathing would transmit their readings to a smart device, which would feed the data into the algorithm, determine whether there is a breathing problem, and, if so, notify the patient and their medical provider. "We're currently weighing whether to modify the sensors so that they can run the algorithm and transmit only if there is a problem, or to maintain the current approach of having the sensor transmit all of the data so that the smart device runs the algorithm," Krim says.
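
The article gives only the algorithm's outputs (onset time, pitch, and magnitude), so the NumPy/SciPy sketch below is a generic reconstruction rather than the NCSU method: wheezes show up as narrowband ridges on a spectrogram, and the sample rate, synthetic signal, and detection threshold here are all invented.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 4000                                  # assumed sensor sample rate (Hz)
t = np.arange(0, 5, 1 / fs)
breath = 0.01 * np.random.randn(t.size)   # synthetic broadband breath noise
breath[t > 3.5] += 0.05 * np.sin(2 * np.pi * 400 * t[t > 3.5])  # 400-Hz wheeze

f, times, Sxx = spectrogram(breath, fs=fs, nperseg=256)
band = (f > 100) & (f < 1000)              # wheezes are roughly tonal here
peak_power = Sxx[band].max(axis=0)         # strongest tone in each time slice
threshold = 10 * np.median(peak_power)     # invented detection rule

wheezing = peak_power > threshold
if wheezing.any():
    onset = times[wheezing.argmax()]                               # onset time
    pitch = f[band][Sxx[band][:, wheezing].mean(axis=1).argmax()]  # pitch
    print(f"wheeze from {onset:.2f}s at ~{pitch:.0f} Hz, "
          f"peak power {peak_power.max():.2e}")                    # magnitude
```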


New Internet Routing Method Allows Users to Avoid Sending Data Through Undesired Geographic Regions
University of Maryland (08/18/15) Melissa Brachfeld

University of Maryland researchers have developed Alibi Routing, a method for providing concrete proof to Internet users that their information did not cross through certain geographic regions. The system is ready to deploy and does not require modifications to the Internet's routing hardware or policies. The researchers evaluated Alibi Routing by simulating a network with 20,000 participants, selecting as forbidden regions countries known as "Enemies of the Internet," including China, Syria, North Korea, and Saudi Arabia, as well as Japan and the U.S., which had the highest numbers of Internet users at the time. Alibi Routing works by searching a peer-to-peer network to locate other users running the software who can relay the initial user's packets to their ultimate destination while avoiding specified forbidden regions. The peer, dubbed an alibi, provides proof that at a particular time, a packet was at a specific geographic location far enough away from the forbidden regions that the data could not have entered them. Alibi Routing's success rate depends on how close the source and destination are to the forbidden region and how central the forbidden region is to Internet routing. The simulation showed the system successfully found an alibi more than 85 percent of the time, and when a small safety parameter was added, the success rate rose to 95 percent.
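
The proof rests on a speed-of-light argument: if a round trip through the alibi completes faster than any route that also visits the forbidden region possibly could, the packets cannot have gone there. The Python sketch below illustrates that check with invented coordinates and timings; the actual system also folds in the safety parameter mentioned above.

```python
from math import radians, sin, cos, asin, sqrt

KM_PER_MS = 200   # light in fiber covers roughly 200 km per millisecond

def km(a, b):
    """Great-circle (haversine) distance in km between (lat, lon) points."""
    la1, lo1, la2, lo2 = map(radians, (*a, *b))
    h = sin((la2 - la1) / 2) ** 2 + \
        cos(la1) * cos(la2) * sin((lo2 - lo1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

# Invented example: Washington -> Mexico City (alibi) -> Sao Paulo, with
# Beijing standing in for a point inside a forbidden region.
src, alibi, dst = (38.9, -77.0), (19.4, -99.1), (-23.5, -46.6)
forbidden = (39.9, 116.4)

# Fastest conceivable one-way trip that also visits the forbidden point,
# detouring on either the first or the second leg (in ms).
fastest_via_forbidden = min(
    km(src, forbidden) + km(forbidden, alibi) + km(alibi, dst),
    km(src, alibi) + km(alibi, forbidden) + km(forbidden, dst),
) / KM_PER_MS

measured_one_way = 70.0   # invented measurement: half the observed RTT, ms

if measured_one_way < fastest_via_forbidden:
    print("alibi holds: packets could not have entered the forbidden region")
```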


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe