Welcome to the September 14, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.
Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.
HEADLINES AT A GLANCE
Machine Learning Techniques Aim to Reduce Traffic
Engineering.com (09/13/16) Michael Alba
Researchers from Tsinghua University in China performed a study on how traffic signaling can be optimized using deep reinforcement learning, with implications for reducing traffic congestion. Improving traffic efficiency is problematic due to the challenging tasks of producing a useful traffic flow model and then optimizing it. The Tsinghua team accomplished the first task with a simplified model of an eight-lane intersection that had only red and green lights and only permitted straight-through traffic. They then deployed reinforcement learning algorithms to determine signaling actions that yielded the most systematic benefits, and they assessed them by quantifying the queuing length of traffic in both directions. The algorithms attempted to minimize the length of traffic lines and decrease driver wait times, and their subsequent combination with deep-learning algorithms significantly shortened the computation time for finding optimized solutions. The researchers say the resulting deep reinforcement learning algorithms show potential, as they can significantly outperform conventional reinforcement learning algorithms. During the course of a full day's simulation, more than 1,000 fewer vehicles came to a full stop with deep reinforcement learning, and they spent an average of 13 seconds less in traffic during peak hours.
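The control loop described above can be sketched with tabular Q-learning, the simplest form of reinforcement learning. Everything in this toy model (the queue cap, the two-cars-per-green service rate, the one-car-per-step arrivals, and the negative-total-queue reward) is an assumption for illustration, not the Tsinghua team's model:

```python
import random

CAP = 5          # longest queue tracked per approach
SERVE = 2        # cars that clear during one green phase
ACTIONS = ("NS", "EW")   # which approach gets the green light

def step(state, action):
    """One signal cycle: serve the chosen approach, then one car
    arrives on each approach (a deliberately crude arrival model)."""
    qns, qew = state
    if action == "NS":
        qns = max(0, qns - SERVE)
    else:
        qew = max(0, qew - SERVE)
    qns, qew = min(CAP, qns + 1), min(CAP, qew + 1)
    return (qns, qew), -(qns + qew)   # reward: shorter queues are better

def train(episodes=4000, steps=25, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {}
    for _ in range(episodes):
        state = (rng.randint(0, CAP), rng.randint(0, CAP))
        for _ in range(steps):
            # epsilon-greedy exploration
            action = (rng.choice(ACTIONS) if rng.random() < eps
                      else max(ACTIONS, key=lambda a: q.get((state, a), 0.0)))
            nxt, reward = step(state, action)
            best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q

def policy(q, state):
    """Greedy policy derived from the learned Q-values."""
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
```

After training, the greedy policy gives the green light to a clearly longer queue, which is exactly the behavior the negative-queue-length reward encourages.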
Connecting the Jungle and Other Remote Parts of the World
CORDIS News (09/12/16)
The European Union's TUCAN3G project is bringing 3G wireless service to previously unconnected regions of the world. TUCAN3G utilizes new wireless technologies to create access networks based on 3G femtocells, which are small, low-power cellular base stations that function as repeaters and can boost Internet signals. The TUCAN3G researchers say the advantage of femtocells is that they can run on solar energy, eliminating the need for traditional energy infrastructure, which is often infeasible in remote areas. For example, they say installing a classical access station could cost 40,000 euros (more than $45,000), whereas a femtocell can be bought for 500 euros (about $560). In addition, the researchers say femtocells are easy to install and can be maintained remotely with a simple reconfiguration. The TUCAN3G project set up a demonstration platform in a remote region of the Amazon rainforest, where locals used the femtocells to communicate with relatives, coordinate healthcare services, and negotiate prices for the crops they were selling. In addition to connecting scattered villages, TUCAN3G also persuaded local governments to support the development of small, mobile rural operators linked to the Telefonica backbone, ensuring continuous connectivity for the villagers.
More States Mandate High Schools Count Computer Science as Math or Science
Education Week (09/13/16) Liana Heitin
Twenty U.S. states now require that high school students be allowed to count a computer science course as a math or science credit toward graduation, according to a new Education Commission of the States (ECS) report. In Georgia and Utah, computer science can count only as a science credit; nine other states categorize it as a math credit; and the remaining nine states let it fulfill either requirement. In Texas, computer science also can fulfill a foreign language requirement. In addition, Arizona, California, and Colorado leave the decision about whether computer science can fulfill a math or science graduation requirement up to local districts. Meanwhile, Code.org has identified eight states that have allowed computer science to fulfill a math or science credit via "non-policy means," such as board resolutions or public announcements: Alabama, Indiana, Kentucky, New York, Oregon, Rhode Island, Tennessee, and Vermont; the District of Columbia has done the same. State policies regarding credit are only a first step in getting more students to take computer science courses, according to report author Jennifer Dounay Zinth.
Cybathlon: World's First 'Bionic Olympics' Gears Up
BBC News (09/12/16) Melanie Abbott
Switzerland in October will host the world's inaugural Cybathlon, with 50 teams participating in sporting events to showcase bionic technologies. "Cybathlon brings together the best of prosthetic technology from around the world with innovative ideas enabling us to be more independent and productive, making it a competition against companies and research labs too," says Kevin Evison, who will be competing with a myoelectric prosthetic arm as part of a team from Imperial College London in the U.K. Evison's event is a race to see how well artificial limbs can perform six tasks, such as cutting bread or changing a light bulb, in the shortest amount of time. Meanwhile, the University of Essex team's David Rose will compete in an event in which he will try to control a computer game by thought using an electrode-studded cap that feeds his brain's impulses to a computer. ETH Zurich, the Swiss Federal Institute of Technology, organized the Cybathlon to encourage engineers to create assistive technologies that are more appropriate for disabled people, as "most of the research is with able-bodied users," says University of Essex team leader Ana Matran-Fernandez. Imperial College's Ian Radcliffe says his project's emphasis is on developing inexpensive, less error-prone prosthetics using skin-reading sensors.
Research Aims to Show How Plastic Surgery Will Really Look
University of Western Australia (09/14/16) David Stacey
University of Western Australia (UWA) researchers say they have developed a three-dimensional (3D) imaging system that will provide patients considering facial cosmetic procedures with an accurate prediction of the results. The researchers say the new system could replace the misleading and unreliable two-dimensional before-and-after photos currently used by most health practitioners performing cosmetic work. "What we're working on is a 3D system that compares two overlaid images to produce a single and precise evaluation of the actual effects of a cosmetic procedure," says UWA professor Mohammed Bennamoun. He notes the system indicates where the changes have occurred and by how much, in association with a probability-based predictive modeling system to help the patient understand the potential changes before treatment. The researchers currently are running a trial of the first working prototype, which demonstrates changes in pre- and post-treatment 3D facial scans. They hope the new system will meet a rising demand for subtle and "natural" enhancement of personal appearance through cosmetic medical procedures.
AI Can Recognize Your Face Even When You're Pixelated
Wired (09/12/16) Lily Hay Newman
Researchers at the University of Texas at Austin and Cornell Tech have used mainstream machine-learning methods to train software that can see through content-masking techniques such as blurring and pixelation. The researchers successfully defeated three privacy protection technologies: YouTube's blur tool, pixelation, and Privacy Preserving Photo Sharing. They first trained neural networks to perform image recognition on four large, well-known image datasets; the more words, faces, or objects a network is exposed to, the more proficient it becomes at identifying those targets. Once the networks achieved about 90-percent accuracy or higher at spotting relevant objects in the training sets, the researchers obfuscated the images using the three privacy tools and then further trained the networks to read blurred and pixelated images based on knowledge of the originals. Finally, the team employed obfuscated test images the networks had not yet seen, and for some datasets and masking techniques the networks' success rates reached as high as 90 percent. Cornell Tech professor Vitaly Shmatikov warns these methods are so widely known that a malefactor could penetrate such privacy safeguards with only a baseline of technical knowledge.
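The attack works because blurring and pixelation are deterministic transforms: a model trained on obfuscated versions of known images can still match new obfuscated inputs. A minimal sketch with a nearest-neighbor "attacker" and invented 4x4 "faces" (all names and images below are made up for illustration, and stand in for the neural networks the researchers used):

```python
def pixelate(img, block=2):
    """Replace each block x block tile with its average (mosaic effect)."""
    n = len(img)
    out = [[0.0] * n for _ in range(n)]
    for bi in range(0, n, block):
        for bj in range(0, n, block):
            tile = [img[i][j] for i in range(bi, bi + block)
                              for j in range(bj, bj + block)]
            avg = sum(tile) / len(tile)
            for i in range(bi, bi + block):
                for j in range(bj, bj + block):
                    out[i][j] = avg
    return out

def dist(a, b):
    """Squared Euclidean distance between two images."""
    return sum((x - y) ** 2
               for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def identify(query, gallery):
    """Nearest-neighbor 'attacker' trained on pixelated versions."""
    return min(gallery, key=lambda name: dist(query, gallery[name]))

# Tiny invented "faces"; the attacker sees only their pixelated forms.
faces = {
    "alice": [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
    "bob":   [[0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 0, 0]],
}
gallery = {name: pixelate(img) for name, img in faces.items()}
print(identify(pixelate(faces["alice"]), gallery))  # -> alice
```

The pixelated versions remain distinct enough to match, which is the core reason the obfuscation fails against a model trained on obfuscated examples.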
Stanford Engineers Propose a Technology to Break the Net Neutrality Deadlock
Stanford News (09/13/16) Tom Abate; Glen Martin
Stanford University engineers could significantly impact the network neutrality debate with a new technology that enables Internet users to request preferential delivery from any network or content provider, thus preserving an open Internet. "We think the best way to ensure that [Internet service providers] and content providers don't make decisions that conflict with the interests of users is to let users decide how to configure their own traffic," says Stanford professor Sachin Katti. Katti and fellow professor Nick McKeown and postdoctoral researcher Yiannis Yiakoumis say their Network Cookies solution permits users to select which home or mobile traffic should get preferential delivery, while putting network operators and content providers on an equitable level in satisfying these preferences. McKeown says Network Cookies can enable users to fast-lane or zero-rate traffic from any desired application or website, and they can be implemented without inundating the user or overburdening user devices and network operators. McKeown also stresses the tool's practical benefits for regulators, since they can help them design simple and clear policies and then assess how well different parties comply with them. "If users can pick their favorite content for favorable delivery, it's easier to ensure that user choice is respected and companies compete fairly for users' attention," Katti notes.
Faster Parallel Computing
MIT News (09/13/16) Larry Hardesty
Researchers from the Massachusetts Institute of Technology's (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL) this week are presenting Milk, a new programming language, at the 25th International Conference on Parallel Architectures and Compilation Techniques in Haifa, Israel. With Milk, application developers can handle memory more efficiently in programs that manage scattered datapoints in large datasets. Tests on several common algorithms showed programs written in Milk ran four times faster than those written in existing languages, and the CSAIL researchers think additional work will boost speeds even higher. MIT professor Saman Amarasinghe says existing memory management methods run into problems with big datasets because with big data, the scale of the solution does not necessarily rise in proportion to the scale of the problem. Amarasinghe also notes modern computer chips are not optimized for this "sparse data," with cores designed to retrieve an entire block of data from main memory based on locality, instead of individually retrieving a single data item. With Milk, a coder inserts a few additional lines of code around any command that iterates through a large dataset looking for a comparatively small number of items. The researchers say Milk's compiler then determines how to manage memory accordingly.
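Milk's annotations are handled by its compiler, so the effect is invisible at the source level. As a language-neutral sketch of the idea it exploits, the code below groups scattered accesses by the block of memory they fall in, so each fetched block is reused rather than fetched repeatedly; the payoff is in cache behavior, which a high-level sketch can only hint at by showing the reordering itself (the block size of 64 is an assumption):

```python
def gather(data, indices):
    """Naive gather: visit the scattered indices in the given order,
    so consecutive accesses may land in unrelated blocks of memory."""
    return [data[i] for i in indices]

def batched_gather(data, indices, block=64):
    """Visit indices grouped by the 'cache block' they fall in, the way
    a locality-aware compiler can cluster accesses to the same region.
    The caller still receives results in the original order."""
    order = sorted(range(len(indices)), key=lambda k: indices[k] // block)
    result = [None] * len(indices)
    for k in order:
        result[k] = data[indices[k]]
    return result
```

Both functions return identical results; the batched version simply performs the scattered reads in an order that keeps nearby addresses together, which is the kind of transformation the text says Milk's compiler derives from the programmer's annotations.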
Newly Funded Project Sets Stage for Next Generation of Supercomputers
Georgia Tech News Center (09/07/16) Devin M. Young
Georgia Institute of Technology (Georgia Tech) researchers are leading a $2.4-million project to develop new computer algorithms for solving linear and nonlinear equations that could lead to next-generation supercomputers. The project, which is funded by the U.S. Department of Energy, also includes researchers from Sandia National Laboratories, Temple University, and the University of Tennessee. The researchers want to replace the current generation of mathematical tools used to determine solutions to particular problems, which are being impeded by "synchronous operations." These operations also create a bottleneck due to the sequence in which processors must perform their calculations. The new approaches propose "asynchronous" techniques that enable each processor to operate independently, proceeding with the most recently available data, instead of waiting to sync with the remaining processors. "We've brought together the top people in the U.S. with expertise in asynchronous techniques as well as experience needed to develop, test, and deploy this research in scientific and engineering applications," says Georgia Tech professor Edmond Chow. He says the three-year project is part of the U.S. government's initiative to build an exascale supercomputer by 2023.
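The synchronous-versus-asynchronous distinction can be illustrated with a classic iterative solver. In the sketch below (a simplification, not the project's methods), the "asynchronous" variant updates each component in place with whatever values are currently available, a Gauss-Seidel-style stand-in for processors that proceed with the most recent data instead of waiting to sync:

```python
def jacobi_sync(A, b, iters=50):
    """Synchronous sweeps: every component of the new x is computed
    from the previous iterate, mimicking processors that wait for a
    barrier before exchanging results."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

def jacobi_async(A, b, iters=50):
    """'Asynchronous' sweeps: each component is overwritten in place,
    so later updates immediately see the freshest available values."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

# Diagonally dominant example with exact solution (1, 2).
A = [[4.0, 1.0], [2.0, 5.0]]
b = [6.0, 12.0]
```

For diagonally dominant systems like this one both variants converge to the same answer; the research challenge the project targets is proving and exploiting such convergence when thousands of processors update genuinely out of step.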
A Fractional Micro-Macro Model for Crowds of Pedestrians Based on Fractional Mean Field Games
Phys.org (09/12/16) Yan Ou
Although classical physics and calculus have traditionally been used to analyze big crowds in emergency situations, researchers in China found that a branch of mathematics called fractional calculus could offer a more realistic picture of crowd dynamics. An intuitive way of modeling crowds is to think of each person as an individual particle, an approach that enables researchers to use the language of Newtonian physics and differential and integral calculus to describe how people behave in clusters. However, people have unique thoughts and feelings that influence how they behave in large groups, and fractional calculus addresses this by accounting for long-range interactions among particles. In fractional calculus, each object in a fractional order model is assigned a memory that persists much longer than the short-lived interactions among particles. Fractional calculus therefore provides a much more realistic picture of crowd behavior. The researchers modeled people as unique, cost-minimizing agents who navigate cooperatively or competitively, depending on the situation. The researchers found a crowd in a confined area tends to diffuse and fill the space faster in the fractional framework than in the traditional framework. However, in an emergency situation involving only a few people, pedestrians tend to interact with one another to reach a consensus before splitting up toward the exit.
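The "memory" that fractional calculus assigns to each agent can be made concrete with the Grunwald-Letnikov discretization of a fractional derivative (a standard textbook formulation, assumed here, not necessarily the paper's exact model). For order alpha = 1 the weights vanish after one step, recovering the ordinary derivative, while for fractional alpha every past sample keeps a nonzero weight:

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k), built by
    the recurrence w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def frac_derivative(f, alpha, h):
    """Approximate D^alpha f at the last sample of the series f,
    taken at uniform spacing h: a weighted sum over the WHOLE history."""
    n = len(f)
    w = gl_weights(alpha, n)
    return sum(w[k] * f[n - 1 - k] for k in range(n)) / h ** alpha
```

With alpha = 1 this collapses to the ordinary backward difference (f[n-1] - f[n-2]) / h, but with, say, alpha = 0.5 the weights on older samples never reach zero, which is the long-range memory the article credits for the more realistic crowd dynamics.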
The Extraordinary Link Between Deep Neural Networks and the Nature of the Universe
Technology Review (09/09/16)
Researchers at Harvard University and the Massachusetts Institute of Technology (MIT) say the reason layered deep neural networks excel at complex tasks such as face and object recognition can be found in the realm of physics and not mathematics. The mathematical concept of neural-network function entails the approximation of complex math operations with simpler ones, but the problem is there are orders of magnitude more mathematical functions than possible networks to approximate them. Harvard researcher Henry Lin and MIT professor Max Tegmark contend the reason deep neural networks get the correct answer in such scenarios is because the universe is governed by a small subset of all possible functions. This implies, they say, that when physical laws are written down mathematically, they can all be characterized by functions with a remarkably simple set of properties. Deep neural networks do not have to approximate any possible math function, but only a small subset of them, according to Lin and Tegmark. They also note neural networks leverage another universal physical law, that of complex structures often being formed from a sequence of simpler steps. Therefore, Lin and Tegmark say, the network's layers can approximate each step in the causal sequence, with each layer containing progressively more data.
A Computer Simulation to Spare Children From Heart Surgery
Fraunhofer-Gesellschaft (09/01/16)
Researchers at Germany's Fraunhofer Institute for Medical Image Computing (MEVIS) have developed software to model and compare various pediatric heart surgery interventions in advance, as part of the European Union's CARDIOPROOF project. The research team says the tool could improve treatment quality and help determine the necessity of surgery. The simulation is derived from images of a patient's heart obtained with a magnetic resonance imaging scanner, detailing the shape of blood vessels and blood flow. "Our algorithms can detect which blood pressure conditions are found in the vessels," says Fraunhofer MEVIS' Anja Hennemuth. "Important is the degree to which the blood pressure differs before and after a vascular constriction." Based on a blood flow model, specialists can duplicate and estimate various interventions by computer and monitor their impact on blood flow and pressure. "With the help of our software, clinicians can make informed decisions about which type of intervention is most appropriate and whether the intervention can be delayed or even forgone," Hennemuth says. CARDIOPROOF's organizers say the goal of the project is to develop a system for daily use in real-world clinical settings.
Q+A With the First Female Director of MIT's Largest Research Lab
Forbes (09/12/16) Peter High
In an interview, Daniela Rus discusses her objectives and the digital revolution as the first female director of the Massachusetts Institute of Technology's (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL). Rus describes CSAIL's work as pushing the envelope of computing for more than five decades, achieving pioneering milestones in the digital revolution. "Our goal is to invent the future of computing," with an overarching push to advance the science of autonomy, Rus says. She notes CSAIL projects are prioritized based on how well they could address real-world challenges, and all lab initiatives are linked by the common theme of "discovering new ways to make computers smarter, easier to use, more secure, and more efficient." Recent project areas Rus cites include big data and cybersecurity, and she says CSAIL is intensely focused on collaborating with other labs on topics MIT has prioritized, such as computation and healthcare. Rus stresses her conviction that everyone should have the ability to apply computing to problem solving, and she notes robots in particular can entice children to develop an interest in math and science. Rus argues computational thinking should be a mandatory educational component in all grades including kindergarten, with an emphasis on hands-on projects.
Abstract News © Copyright 2016 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]