Association for Computing Machinery
Welcome to the August 26, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, iPhones, and iPads.

HEADLINES AT A GLANCE


XSEDE 2.0 Reorganizes Into 5 Goal-Driven Areas
HPC Wire (08/24/16) John Russell

With a new five-year charter and a $110-million grant from the U.S. National Science Foundation announced this week, the XSEDE project, which provides advanced cyberinfrastructure resources and services to the nation's scientists and engineers, is planning to restructure into five areas of concentration. XSEDE executive director John Towns says the reorganization "will provide a more agile and responsive program designed to accelerate progress toward the strategic goals." The Resource Allocation Service will continue to manage the receipt, review, and awarding of proposals for computational resources, serving as a neutral arbiter in allocating resources from the service-provider ecosystem to the research community. Another focus area is the revised Community Infrastructure service, which will identify, evaluate, and open up new software capabilities, while Community Engagement & Enrichment intends to engage a new generation of computational researchers and help connect them to local and national resources. Meanwhile, the Extended Collaborative Support Service is designed to maximize the efficacy of advanced computing infrastructure via computational experts who participate directly in research teams. The fifth unit, XSEDE Operations, will maintain and mature an integrated advanced computing infrastructure on a national scale.


Self-Driving Cars Reach a Fork in the Road, and Automakers Take Different Routes
The Washington Post (08/24/16) Ashley Halsey III; Michael Laris

Automakers are taking divergent approaches to the development and deployment of driverless vehicles, with Ford supporting Google's plan to move directly to a fully autonomous car without traditional controls, while other major manufacturers push for more incremental adoption. The latter course is the more prudent one for Carnegie Mellon University professor Raj Rajkumar, who says human drivers' tendency toward distraction and a lack of maturity in the automated systems pose too great a danger. Audi's Brad Stertz agrees, noting, "we don't think it's wise to throw drivers into an environment they don't completely understand or trust. That just invites misuse." Stertz says Audi plans to initially keep steering wheels in its semi-automated vehicles, while a "driver-availability system" will monitor motorists for signs of distraction. Audi's cars also will only work hands-free on controlled-access highways, handing control back to the driver when the vehicle's speed reaches 35 m.p.h. Meanwhile, General Motors' Kevin Kelly says the company is looking into an on-demand, autonomous ride-sharing network in which cars will have steering wheels, acceleration and brake pedals, and a safety driver or pilot, to make consumers more comfortable. However, there is disagreement over how comfortable consumers are with driverless vehicles; some surveys have found that consumers with concerns about the technology tend to skew older.


Researchers Find Vulnerabilities in iPhone, iPad Operating System
NCSU News (08/25/16) William Enck; Matt Shipman

An international team of researchers led by North Carolina State University (NCSU) has identified security vulnerabilities in Apple's iOS operating system. "Our goal was to identify any potential problems before they became real-world problems," says NCSU professor William Enck. The researchers focused on iOS' "sandbox," which serves as the interface between applications and the operating system. Enck says the sandbox uses a set "profile" for every third-party app, which controls the information the app has access to and governs which actions the app can execute. The researchers extracted the compiled binary code of the sandbox profile in order to see whether it contained vulnerabilities that could be exploited by third-party apps. They then decompiled the code so it could be read by humans, used the decompiled code to make a model of the profile, and ran a series of automated tests in that model to identify potential vulnerabilities. The researchers found vulnerabilities that would enable them to launch different types of attacks via third-party apps. "We are already discussing these vulnerabilities with Apple," Enck says. "They're working on fixing the security flaws, and on policing any apps that might try to take advantage of them." The researchers will present their work at the ACM Conference on Computer and Communications Security (CCS 2016), to be held Oct. 24-28 in Vienna, Austria.
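
As a rough, hypothetical illustration of that testing approach (not the team's actual tooling), the Python sketch below models a sandbox profile as explicit allow/deny rules and probes it for operations a third-party app should not be able to perform; the rules, paths, and operation names are invented for this sketch:

# Model a sandbox profile as explicit rules, then probe the model with
# operations a third-party app is expected to be denied.
PROFILE_RULES = [  # (operation, path prefix, allowed)
    ("file-read",  "/var/mobile/Containers/", True),   # apps may read their own containers
    ("file-read",  "/private/var/", True),             # deliberately over-broad rule for the demo
    ("file-write", "/var/mobile/Containers/", True),
]

def is_allowed(operation, path):
    """Return the first matching rule's decision; deny if nothing matches."""
    for op, prefix, allowed in PROFILE_RULES:
        if op == operation and path.startswith(prefix):
            return allowed
    return False

# Automated probes: operations the profile should deny.
PROBES = [
    ("file-read",  "/private/var/mobile/Library/Preferences/com.example.plist"),
    ("file-write", "/etc/hosts"),
]

for operation, path in PROBES:
    if is_allowed(operation, path):
        print("potential vulnerability:", operation, path)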


Programmable Network Routers
MIT News (08/23/16) Larry Hardesty

Researchers at the Massachusetts Institute of Technology's (MIT) Computer Science and Artificial Intelligence Laboratory and five other organizations have designed programmable routers that can keep pace with the speeds of modern data networks, as detailed in two papers being presented this week at the ACM Special Interest Group on Data Communication (SIGCOMM 2016) conference in Brazil. The researchers set out to find a set of simple computing elements that could be configured to deploy diverse traffic-management schemes, without impacting the operating speeds of modern routers and without taking up too much space on the chip. They built a compiler to compile seven experimental traffic-management algorithms onto their proposed circuit elements to test their designs, and the specifications for seven circuit types derived from those tests are included in one of their papers. The second paper describes the design of their scheduler. "Previously, programmability was achievable, but nobody would use it in production, because it was a factor of 10 or even 100 slower," says MIT professor Hari Balakrishnan. "You need to have the ability for researchers and engineers to try out thousands of ideas. With this platform, you become constrained not by hardware or technological limitations, but by your creativity."
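
As a toy illustration of what "programmable" scheduling means here (a software sketch only, not the hardware design from the papers), the following Python model lets a small user-supplied function assign each packet a rank, with packets departing in rank order:

import heapq

class ProgrammableScheduler:
    """Toy model: a user-supplied rank function programs the departure order."""
    def __init__(self, rank_fn):
        self.rank_fn = rank_fn
        self.queue = []
        self.seq = 0  # tie-breaker: keep insertion order among equal ranks

    def enqueue(self, packet):
        heapq.heappush(self.queue, (self.rank_fn(packet), self.seq, packet))
        self.seq += 1

    def dequeue(self):
        return heapq.heappop(self.queue)[2]

# Example policy: strict priority, where a lower 'prio' field departs first.
sched = ProgrammableScheduler(lambda pkt: pkt["prio"])
for pkt in [{"id": 1, "prio": 2}, {"id": 2, "prio": 0}, {"id": 3, "prio": 1}]:
    sched.enqueue(pkt)
print([sched.dequeue()["id"] for _ in range(3)])  # -> [2, 3, 1]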


Computers Can Sense Sarcasm? Yeah, Right
Scientific American (08/26/16) Jesse Emspak

Researchers at the University of Turin and Yahoo! have developed software that can identify the expression of sarcasm on social media and the Web. Turin professor Rossano Schifanella says the project began with a crowdsourcing program asking people from several English-speaking countries to tag social media posts as sardonic or not. They first evaluated text-only statements, then statements with images. In most cases, the presence of a visual image helped identify a sarcastic message, while linguistic cues that expressed sarcasm to the participants included wordplay as well as punctuation. The next step was to write an algorithm that mathematically represented the knowledge gained from the crowdsourcing program. Schifanella says this enabled a machine to tap that baseline data to examine new posts and decide whether they were sarcastic. The computer picked up on the sarcasm 80 percent to 89 percent of the time, and the results sometimes varied according to the platform and the type of features used to detect the sarcasm. Grammarly research director Joel Tetreault, formerly with Yahoo!, says advances in computer-processing power and large social networks have facilitated this type of neural network-based machine learning. Other researchers say this research constitutes a key step toward computerized natural-language comprehension.
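
A minimal sketch of such a pipeline, assuming crowdsourced labels and using off-the-shelf scikit-learn components rather than the study's actual models (which also exploit image features), might look like this; the tiny inline dataset is invented:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Oh great, another Monday. Just what I needed!!!",
    "Wow, I *love* being stuck in traffic for two hours.",
    "Had a wonderful weekend hiking with friends.",
    "The new library downtown is really useful.",
]
labels = [1, 1, 0, 0]  # 1 = tagged sarcastic by crowd workers, 0 = sincere

# Word and word-pair features capture some of the wordplay and
# punctuation cues the participants relied on.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)
print(model.predict(["Sure, because waiting in line is SO much fun..."]))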


Facebook Is Giving Away the Software It Uses to Understand Objects in Photos
The Verge (08/25/16) Nick Statt

Facebook has open sourced its DeepMask, SharpMask, and MultiPathNet computer-vision software tools, which work together to identify both the variety and the shape of objects within photos. The tools use machine learning to perform recognition tasks that have traditionally required human vision. The highly experimental DeepMask and SharpMask tools concentrate on segmentation, the process by which a neural network identifies the objects in an image, according to the Facebook Artificial Intelligence Research (FAIR) team. DeepMask asks the computer a series of yes/no questions about an image as it tries to classify its contents, while SharpMask is designed to refine object selection for better accuracy. "DeepMask knows nothing about specific object types, so while it can delineate both a dog and a sheep, it can't tell them apart," notes FAIR researcher Piotr Dollar. Adding MultiPathNet, which builds on foundational object-recognition methods developed by FAIR's Ross Girshick, enables the toolset to distinguish and categorize the objects it finds. Dollar says the technology could be of service to disabled users, enabling them "to 'see' a photo by swiping their finger across an image and having the system describe the content they're touching." Further out, Dollar says FAIR researchers envision addressing the challenge of identifying scenes, objects, and actions in video over space and time.
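
The released tools are trained neural-network models; the Python stub below is only a conceptual sketch of how the three stages hand off to one another, with invented stand-in functions in place of the real models:

import numpy as np

def deepmask_propose(image):
    """Stand-in: return coarse, class-agnostic masks for object-like regions."""
    return [image > image.mean()]  # one crude 'mask' for the demo

def sharpmask_refine(image, mask):
    """Stand-in: refine mask boundaries (a no-op placeholder here)."""
    return mask

def multipathnet_classify(image, mask):
    """Stand-in: assign a category label to a masked region."""
    return "object"

image = np.random.rand(64, 64)
for mask in deepmask_propose(image):           # DeepMask: find object shapes
    mask = sharpmask_refine(image, mask)       # SharpMask: sharpen the selection
    label = multipathnet_classify(image, mask) # MultiPathNet: name the object
    print(label, int(mask.sum()), "pixels")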


Computer Science Team to Solve Large-Scale Network Problems
Purdue University News (08/25/16) Alex Pothen

Purdue University researchers are working to solve problems associated with massive networks with billions of nodes and links. Intel provided the researchers with two years of funding to design new algorithms and software for massive networks. "The good news about this gift from Intel is not only the recurring funding, which will be very helpful, but we will have access to the new processors that Intel develops every year and its supercomputers," says Purdue professor Alex Pothen. He says current state-of-the-art algorithms can take several days to process such networks, and the researchers will work on solutions that cut that time to a few hours or less. "We will work on matching and edge cover problems in algorithmic computer science, and they have applications in many fields such as network science, computational science and engineering, and data science," Pothen says. The research could apply in many domains; in medicine, for example, the algorithms could match medical students with hospitals so the students get their best possible choices. Pothen says similar problems arise in matching organ donors to recipients, and in matching advertisers to search results on the Web.
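
As a small, hypothetical example of such a matching problem, the Python sketch below assigns students to hospitals from made-up preference scores using SciPy's assignment solver, a stand-in for the specialized large-scale algorithms the team is developing:

import numpy as np
from scipy.optimize import linear_sum_assignment

students = ["Ava", "Ben", "Chloe"]
hospitals = ["General", "Mercy", "St. Luke's"]
# preference[i][j]: how well student i and hospital j suit each other (higher is better)
preference = np.array([
    [9, 4, 2],
    [6, 8, 3],
    [5, 7, 9],
])

# Find the one-to-one matching that maximizes total preference.
rows, cols = linear_sum_assignment(preference, maximize=True)
for i, j in zip(rows, cols):
    print(students[i], "->", hospitals[j])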


Using Data to Better Understand Climate Change
National Science Foundation (08/23/16) Aaron Dubrow

A University of Minnesota-led research team is using data-driven approaches to better understand the environmental and social impacts of climate change. The U.S. National Science Foundation in 2010 awarded a $10-million Expeditions in Computing grant to the research team to address key challenges in climate change science. As part of the project, the team developed methods that use climate and ecosystem data from a range of sources to refine predictions and identify changes in the climate. For example, the researchers built a system to monitor the dynamics of global surface water bodies using data from the U.S. National Aeronautics and Space Administration's Earth observation satellites. The system was able to identify a range of hydrological changes, from the meanderings of rivers to the reduction and growth of water bodies due to droughts, melting glaciers, and dam construction. "These innovative approaches are helping to provide a new understanding of the complex nature of the Earth system and the mechanisms contributing to the adverse consequences of climate change," says University of Minnesota professor Vipin Kumar, who received the ACM SIGKDD Innovation Award in 2012. Kumar discussed some of the team's machine learning and data mining advances during a keynote speech at the 2016 ACM SIGIR Conference on Research and Development in Information Retrieval in Italy.
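
A toy sketch of the change-detection idea, using invented binary water masks in place of real satellite-derived data, could look like this in Python:

import numpy as np

# Binary masks (True = water) derived from imagery at two dates;
# these 4x4 grids are made up purely for the demo.
water_2005 = np.array([[0, 1, 1, 0],
                       [0, 1, 1, 0],
                       [0, 0, 1, 0],
                       [0, 0, 0, 0]], dtype=bool)
water_2015 = np.array([[0, 1, 0, 0],
                       [0, 1, 0, 0],
                       [0, 1, 0, 0],
                       [0, 1, 0, 0]], dtype=bool)

lost = water_2005 & ~water_2015   # shrinkage (e.g., drought)
gained = water_2015 & ~water_2005 # growth (e.g., dam construction)
print("pixels lost:", lost.sum(), "| pixels gained:", gained.sum())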


New Project Helps K-12 Students Become Fluent With Data and Technology
CMU News (08/17/16) Byron Spice

The Carnegie Mellon University (CMU) Robotics Institute's Community Robotics, Education, and Technology Empowerment (CREATE) Lab aims to help students in grades 1-12 improve their ability to think critically and creatively manipulate technology, media, and data. "Our vision is that students will be using technology, multimedia, and data as raw materials for supporting their decisions or expressing their creativity," says CREATE Lab project manager Jessica Pachuta. "While schools have concentrated on technical and data literacy, we want students to achieve fluency." Twelve teachers from eight Pittsburgh-area school systems examined how to apply the concept of data and technology fluency in their schools during workshops this summer with CMU researchers. The teachers will develop tools and methods that enable students to use data and technology for asking questions and exploring their environment, for telling cohesive stories, and for articulating opinions and arguments. The pilot program will be implemented in the coming school year, with West Liberty University serving as a thought partner. The CREATE Lab will provide access to visualization software for understanding large datasets, to virtual reality tools, and to other technologies and expertise.


Google Is Using AI to Compress Photos, Just Like on HBO's Silicon Valley
Quartz (08/23/16) Dave Gershgorn

Researchers at Google are using neural networks to make picture files smaller without sacrificing quality. The team is teaching neural networks how to scrimp and save data by looking at examples of how standard compression works on random images from the Internet. The researchers' paper shows the neural networks can beat standard JPEG compression on benchmark tests. The network is trained by breaking 6 million randomly selected, previously compressed photos into tiny 32×32-pixel pieces, and then selecting the 100 pieces with the least effective compression from which to learn. Effectiveness is gauged by how little a piece shrinks when compressed into a PNG file; the pieces that resist compression the most are the hardest examples. The researchers theorize that training on these tougher problems prepares the network to handle the easier patches. The trained network predicts how an image will look after compression, and then generates that image. The team says the neural networks decide the best way to variably compress separate patches of a given photo and how those patches fit together, rather than treating the whole image as one big piece. The method is not limited by the size of the file.
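
A minimal Python sketch of that hard-example selection, assuming a placeholder image file and using Pillow, might look like this:

import io
from PIL import Image

def png_size(patch):
    """Return the PNG-compressed size of a patch in bytes."""
    buf = io.BytesIO()
    patch.save(buf, format="PNG")
    return buf.tell()

image = Image.open("photo.jpg").convert("RGB")  # placeholder file name
patches = []
for top in range(0, image.height - 31, 32):
    for left in range(0, image.width - 31, 32):
        patch = image.crop((left, top, left + 32, top + 32))
        raw = 32 * 32 * 3  # uncompressed RGB bytes
        # Higher ratio = less effective compression = harder example.
        patches.append((png_size(patch) / raw, patch))

hardest = sorted(patches, key=lambda p: p[0], reverse=True)[:100]
print("kept", len(hardest), "least-compressible patches for training")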


Chaos Could Provide the Key to Enhanced Wireless Communications
Phys.org (08/23/16)

Researchers at China's Xi'an University of Technology and Britain's University of Aberdeen have shown that adding chaos to wireless communication can improve its reliability, efficiency, and security. Rapid, low-error transmission of information is hampered by the physical constraints of wireless media. "We showed that the information transmitted over a wireless channel in a chaotic signal is unaltered even though the received chaotic communication signal is severely distorted by the wireless channel constraints," says Xi'an University's H.P. Ren. "We also demonstrated that it can be decoded to provide an efficient framework for the modern communication systems." Because chaotic systems depend extremely sensitively on initial conditions, even very small errors in the control instrumentation, when the circuit is controlled to generate an encoding wave signal, can drive the circuit to states in which the signal fails to carry the information for transmission. The researchers demonstrated how to control the circuit via an optimal set of perturbations that minimally disrupt the natural dynamics of the circuit, but produce the desired encoding signal. The team also found the chaotic signal they used as a platform for their communication system can encode any binary source of information in an energy-efficient way.
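
One classic way to encode bits in a chaotic signal's symbolic dynamics can be sketched in a few lines of Python; this toy doubling-map example only illustrates the general idea, not the paper's circuit-based scheme:

def encode(bits):
    """Pick an initial state whose binary expansion is the message, then
    iterate the chaotic doubling map x -> 2x mod 1; each sample of the
    trajectory reveals one message bit."""
    x = sum(b / 2 ** (i + 1) for i, b in enumerate(bits))
    signal = []
    for _ in bits:
        signal.append(x)      # transmit the chaotic waveform sample
        x = (2 * x) % 1.0
    return signal

def decode(signal):
    """The receiver needs only a one-bit threshold per sample, which is
    what makes the scheme tolerant of waveform distortion."""
    return [1 if x >= 0.5 else 0 for x in signal]

message = [1, 0, 1, 1, 0, 0, 1]
assert decode(encode(message)) == message
print(decode(encode(message)))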


World's Most Efficient AES Crypto Processing Technology for IoT Devices Developed
Tohoku University (08/22/16)

Researchers at Tohoku University say they have developed a technique for compressing the computations of encryption and decryption operations, known as Galois field arithmetic operations, and used it to create the world's most efficient Advanced Encryption Standard (AES) cryptographic processing circuit. The researchers say the breakthrough makes it possible to include encryption technology in information and communication technology devices that have tight energy constraints, greatly enhancing the safety of next-generation Internet of Things (IoT) devices. The technique involves representing the AES encryption algorithm as calculations over a Galois field. The researchers transformed the input numerical representation into a different one that allows multiple computations to be performed in a single step, significantly reduces the number of required circuit elements, and lets the original output be easily recovered by an inverse transformation. In addition, the researchers developed a computational method for inserting the transformation and the inverse transformation before and after the computations, and for carrying out the computations internally using the transformed numerical representation. Moreover, the researchers designed an AES cryptographic processing circuit based on the new method and confirmed a single cycle can be carried out with about 45 percent of the energy of the previous best circuit.
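
For reference, AES's byte-level operations are multiplications and additions in the Galois field GF(2^8); the standard software version of that multiplication is shown below (it implies nothing about the structure of the new circuit):

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) with the AES reduction polynomial
    x^8 + x^4 + x^3 + x + 1 (0x11B)."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B  # reduce by the AES polynomial
        b >>= 1
    return result

# Sanity check: {57} * {83} = {c1}, the worked example in the AES spec.
print(hex(gf_mul(0x57, 0x83)))  # -> 0xc1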


How Cooperative Behavior Could Make Artificial Intelligence More Human
The Conversation (08/25/16) Roger Whitaker

Computer science can play an important role in determining the conditions in nature that give rise to the form of cooperation known as indirect reciprocity, or donation to others, writes Cardiff University professor Roger Whitaker. "Using software, we can simulate simplified groups of humans in which individuals choose to help each other with different donation strategies," he says. "This allows us to study the evolution of donation behavior by creating subsequent generations of the simplified group. Evolution can be observed by allowing the more successful donation strategies to have a greater chance of existing in the next generation of the group." Whitaker says such insights will be crucial for imbuing intelligent and autonomous devices with cooperative decision-making for when they engage with each other or with humans. "This can allow the development of intelligence which can help autonomous technology decide how generous to be in any given situation," he notes. Whitaker says Cardiff researchers ran computer-simulated "donation games" between randomly paired virtual players to model how cooperation evolves in social groups when players compare their own reputation with that of others. The simulations showed that evolution favors donating to those who are at least as reputable as oneself.
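
A minimal sketch of such a donation-game simulation, with invented parameter values and a deliberately simplified strategy space, might look like this in Python:

import random

POP, GENERATIONS, ROUNDS = 50, 20, 200
COST, BENEFIT = 1, 3

def run():
    # Strategy True: donate if the partner's reputation is at least your own.
    # Strategy False: never donate.
    strategies = [random.random() < 0.5 for _ in range(POP)]
    for _ in range(GENERATIONS):
        payoff = [0.0] * POP
        reputation = [0] * POP
        for _ in range(ROUNDS):
            a, b = random.sample(range(POP), 2)
            if strategies[a] and reputation[b] >= reputation[a]:
                payoff[a] -= COST       # donating costs the donor...
                payoff[b] += BENEFIT    # ...and benefits the recipient
                reputation[a] += 1      # donation raises the donor's reputation
        # More successful strategies spread to the next generation
        # (tournament selection as a simple stand-in for fitness-proportional
        # reproduction).
        strategies = [
            strategies[max(random.sample(range(POP), 2), key=lambda i: payoff[i])]
            for _ in range(POP)
        ]
    return sum(strategies) / POP

print("share of reputation-comparing donors after evolution:", run())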


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]

Unsubscribe