Welcome to the January 9, 2017 edition of ACM TechNews, providing timely information for IT professionals three times a week.
ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.
HEADLINES AT A GLANCE
Back to Self-Driving School: The Simulator Teaching Vehicle AIs Road Sense
ZDNet (01/09/17) Anna Solana
Researchers at the Autonomous University of Barcelona's (UAB) Computer Vision Center in Spain have developed Synthia, a simulator employing convolutional neural networks and deep learning to improve how vehicle artificial intelligence (AI) systems handle environmental factors such as obstacles and unexpected situations. UAB professor Antonio Lopez says the project originally focused on detecting pedestrians, using imagery drawn from commercial video games. "Now with the sensors we use, we can see what the content of each pixel in an image is," Lopez notes. "We also know how far these objects are from the camera, which is crucial information for vision systems." Vehicle AIs are being trained on a massive image dataset to recognize various elements and, for example, differentiate between key objects despite poor visibility; the software utilizes this labeled information to interpret input from the vehicle's cameras and formulate a response. "We've modeled an autonomous car within Synthia so we can make tests and be sure the vehicle does execute the orders it's receiving," Lopez says. He sees the "complex and uncontrollable" urban environment as the main challenge for self-driving cars, but still envisions a partial rollout of such vehicles within a decade. Lopez's team plans to further augment Synthia to manage more data and different types of situations.
UA-Developed Avatar Is Helping to Screen New Arrivals at Bucharest Airport
UA News (AZ) (01/09/17)
University of Arizona (UA) researchers have developed the Automated Virtual Agent for Truth Assessments in Real-Time (AVATAR) system, which uses a virtual border agent to question international travelers and then flag those who seem suspicious. AVATAR has been installed in a kiosk at Henri Coanda International Airport in Bucharest, Romania. The avatar conducts brief interviews with travelers after they disembark from flights into Bucharest, monitoring their body language and verbal replies to identify irregular behavior that warrants further investigation. The avatar speaks to travelers in their native language and asks country-specific visa questions while measuring behavior, physiology, and verbal responses. Its screening technology could be used at land ports of entry, airports, detention centers, and visa processing offices. Border security experts from Romania and European Union member states also are involved in the field test, as are students from the Alexandru Ioan Cuza Police Academy and researchers from several European universities. "We are thrilled to get the AVATAR into a real-world testing scenario and to see how people interact with the technology in an airport setting," says Jay Nunamaker, director of UA's National Center for Border Security and Immigration and principal investigator for AVATAR.
Complexity Theory Problem Strikes Back
Quanta Magazine (01/05/17) Erica Klarreich
University of Chicago professor Laszlo Babai has retracted a claim made last year that he had produced a "quasi-polynomial" algorithm for graph isomorphism because of a subtle error within its core argument. The graph isomorphism problem requires an algorithm that can detect whether two graphs--networks of nodes and edges--are the same graph in disguise. After more than a year of close analysis by other computer scientists, Babai disclosed that although his algorithm is still viable with a few modifications, it does not run as rapidly as he originally thought. He said the nature of the algorithm is "sub-exponential," meaning the graph isomorphism challenge is still a "hard" computer science problem from the perspective of computational efficiency. Nevertheless, Babai's algorithm is substantially faster than the previous best algorithm for graph isomorphism, which had gone unchallenged for more than three decades. "It's still a massive improvement over the previous state of the art," says University of Texas at Austin professor Scott Aaronson. He predicts computer scientists familiar with Babai's approach will attempt to determine whether further improvements can be derived from it.
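To make the problem concrete, here is a minimal brute-force checker: it illustrates only the problem statement, not Babai's method, and its factorial running time is exactly the cost that faster algorithms aim to beat.

```python
from itertools import permutations

def isomorphic(edges_a, edges_b, n):
    """Decide graph isomorphism by trying every relabeling of the n nodes.
    This brute force runs in O(n!) time; Babai's (corrected) algorithm
    answers the same question far faster, in sub-exponential time."""
    a = {frozenset(e) for e in edges_a}
    b = {frozenset(e) for e in edges_b}
    if len(a) != len(b):
        return False  # different edge counts can never match
    # Try every permutation of node labels and test whether it maps A onto B.
    return any({frozenset((p[u], p[v])) for u, v in a} == b
               for p in permutations(range(n)))

# A triangle matches any relabeled triangle...
print(isomorphic([(0, 1), (1, 2), (2, 0)], [(1, 0), (0, 2), (2, 1)], 3))  # True
# ...but not a three-node path.
print(isomorphic([(0, 1), (1, 2), (2, 0)], [(0, 1), (1, 2)], 3))  # False
```

Even on graphs of a few dozen nodes this search becomes infeasible, which is why the gap between "exponential" and "quasi-polynomial" matters so much here.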
Most Computer Science Majors in the U.S. Are Men. Not So at Harvey Mudd
Los Angeles Times (01/04/17) Rosanna Xia
More than 84 percent of U.S. undergraduates who major in computer science are men, according to the Computing Research Association (CRA). However, at Harvey Mudd College, 55 percent of the most recent class of computer science graduates were women, compared with only 10 percent a decade ago. Harvey Mudd bucked the national trend by changing how its professors taught computer science, making quizzes more fun and creating homework assignments designed to bring groups of students together to solve problems. The revamped curriculum has since been adopted by other schools, which are trying to broaden the subject's appeal. Starting in 2005, Harvey Mudd's computer science professors identified three key reasons female students did not major in computer science: they did not think they would be good at it, they could not imagine fitting into the culture, and they just did not think it was interesting. To help solve these problems, the professors asked more advanced students to let others answer questions and participate in class. They also divided the introductory computer science course into sections based on prior experience. "Building confidence and a sense of belonging and a sense of community among these women makes such a huge difference," says Harvey Mudd president Maria Klawe, a former president of ACM.
Model Sheds Light on Purpose of Inhibitory Neurons
MIT News (01/09/17) Larry Hardesty
A new computational model of a brain's neural circuits is helping researchers at the Massachusetts Institute of Technology's (MIT) Computer Science and Artificial Intelligence Laboratory understand the biological role of inhibitory neurons. The team designed an artificial neural network that includes nodes mimicking the function of inhibitory neurons, which keep other neurons from firing in the brain. The model describes a neural circuit consisting of input neurons and an equivalent number of output neurons. The new circuit enables signals to pass between inhibitory and output neurons, and then performs a "winner-take-all" operation, in which signals from multiple input neurons select a single output neuron to activate. The researchers found two inhibitory neurons are sufficient to induce this operation: one acts as a convergence neuron, sending a strong inhibitory signal if more than one output neuron is firing, while the other, a stability neuron, sends a much weaker signal to prevent additional output neurons from becoming active once the convergence neuron has been deactivated. The model lends itself to further research into the usefulness of computational analysis in neuroscience. "There's a close correspondence between the behavior of networks of computers or other devices like mobile phones and that of biological systems," says MIT professor Nancy Lynch.
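The winner-take-all dynamic described above can be sketched in a few lines. The toy discrete-time simulation below is an illustrative assumption throughout: the inhibition strengths and the random tie-breaking rule are invented for the sketch and are not the MIT team's actual model.

```python
import random

def winner_take_all(inputs, steps=100, seed=0):
    """Toy sketch of a two-inhibitor winner-take-all circuit.
    inputs is a list of 0/1 input spikes; returns the final output pattern."""
    rng = random.Random(seed)
    out = list(inputs)  # output neurons initially mirror their inputs
    for _ in range(steps):
        firing = sum(out)
        convergence = firing > 1   # strong inhibitor: active while several outputs fire
        stability = firing >= 1    # weak inhibitor: active while any output fires
        inhibition = (2.0 if convergence else 0.0) + (1.0 if stability else 0.0)
        new_out = []
        for i, x in enumerate(inputs):
            if out[i]:
                # The strong convergence signal randomly silences current
                # winners, breaking the tie between them.
                new_out.append(1 if (not convergence or rng.random() < 0.5) else 0)
            else:
                # A silent neuron fires only if its input beats all inhibition,
                # so the weak stability signal keeps the losers suppressed.
                new_out.append(1 if x > inhibition else 0)
        out = new_out
        if sum(out) == 1:
            break  # exactly one winner remains; the circuit has converged
    return out

print(winner_take_all([1, 1, 0, 1]))  # exactly one of the active inputs survives
```

The division of labor matches the article's description: the strong inhibitor whittles multiple winners down to one, and the weak inhibitor keeps the losers from reactivating afterward.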
'Transfer Learning' Jump-Starts New AI Projects
InfoWorld (01/09/17) James Kobielus
Abstracting and reusing knowledge gleaned from a machine-learning application in other, newer apps--or "transfer learning"--is supplementing other learning methods that constitute the backbone of most data science practices. Among the technique's practical uses is accelerating modeling productivity, which is viable when prior work can be reused without extensive revision in order to speed up time to insight. Another transfer-learning application involves helping scientists produce machine-learning models that exploit relevant training data from prior modeling projects. This technique is particularly appropriate for projects in which prior training data can easily become obsolete, a problem that frequently occurs in dynamic problem domains. A third area of data science in which transfer learning could yield benefits is risk mitigation. In this situation, transfer learning can help scientists leverage subsets of training data and feature models from related domains when the underlying conditions of the modeled phenomenon have radically changed. This can help researchers ameliorate the risk of machine-learning-driven predictions in any problem domain vulnerable to extremely improbable events. Transfer learning also is critical to data scientists' efforts to create "master learning algorithms" that automatically obtain and apply fresh contextual knowledge via deep neural networks and other forms of artificial intelligence.
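As a concrete, deliberately tiny sketch of the reuse pattern the article describes: the "feature extractor" below is fit once on plentiful source-task data, then frozen, and only a small output layer is trained on a handful of target-task examples. The function names, the choice of per-feature normalization as the reused component, and the logistic head are all illustrative assumptions, not details from the article.

```python
import math

def learn_scaler(data):
    """The reusable 'lower layer', learned once on the abundant source task:
    per-feature mean and standard deviation."""
    cols = list(zip(*data))
    means = [sum(c) / len(c) for c in cols]
    stds = [max((sum((v - m) ** 2 for v in c) / len(c)) ** 0.5, 1e-9)
            for c, m in zip(cols, means)]
    return means, stds

def apply_scaler(scaler, data):
    means, stds = scaler
    return [[(v - m) / s for v, m, s in zip(row, means, stds)] for row in data]

def fit_head(features, labels, lr=0.1, epochs=200):
    """Train only the small 'head' (one logistic unit) on scarce target data."""
    w, b = [0.0] * len(features[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            w = [wi - lr * (p - y) * xi for wi, xi in zip(w, x)]
            b -= lr * (p - y)
    return w, b

# Plentiful source-task data fixes the reusable feature statistics...
scaler = learn_scaler([[i, 2 * i] for i in range(100)])

# ...so only four labeled target examples are needed to train the head.
target_x = [[10, 20], [90, 180], [20, 40], [80, 160]]
w, b = fit_head(apply_scaler(scaler, target_x), [0, 1, 0, 1])

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, apply_scaler(scaler, [x])[0])) + b
    return 1 if z > 0 else 0

print(predict([15, 30]), predict([85, 170]))  # small inputs -> 0, large -> 1
```

The design choice is the point: the expensive, data-hungry part is amortized across projects, while the part trained per project is kept small enough to fit from very little data.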
Ultrasound Tracking Could Be Used to Deanonymize Tor Users
BleepingComputer (01/03/17) Catalin Cimpanu
Transistor Stretchier Than Skin for Ultra-Flexible Wearable Tech
New Scientist (01/06/17) Timothy Revell
Stanford University researchers have developed a new transistor that can be stretched to twice its length without losing conductivity, making it well-suited for use in small devices worn on the body. "In the near future we will be able to make wearable electronics that are stretchable and able to conform to the human body," says Stanford professor Zhenan Bao. The new transistors were developed by confining conductors inside a very thin and flexible polymer material. Bao notes that after 100 stretches the transistors showed no signs of cracking and their conductivity decreased only slightly. The researchers demonstrated the technology by creating a simple electronic device that is worn around the knuckle of a finger and turns a small light-emitting diode on and off. "There have been other attempts at creating stretchy transistors, but this team has managed to make them in a cheap and easily replicated way," says Niko Munzenrieder at the University of Sussex in the U.K. Munzenrieder says although the advantages of the Stanford-developed transistor come at the cost of some electrical performance, it will still be good enough for a range of applications.
UCI Introduces iRain Smartphone App
UCI News (01/05/17) Brian Bell
Researchers at the University of California, Irvine (UCI) have developed iRain, a new application that uses the university's weather tracking and analysis system to provide smartphone users with satellite rain data. The app features a tool that displays the top 50 current extreme weather events around the world, animations that show varying levels of rainfall intensity and movement, a function to choose different time zones, and a tool to zoom into a local area. Users also can enter their rain or snowfall observations to join a globe-spanning cadre of citizen hydrologists. "The beauty of iRain is that it's an access point for an entire system that detects, tracks, and studies precipitation on our planet," says UCI professor Phu Nguyen. Nearly 20 years ago, Kuo-lin Hsu developed an algorithm for the retrieval of rainfall data from satellite images. With the launch of iRain, UCI demonstrated it is possible to reduce the wait time between retrieval of data, processing, and distribution through government servers to about an hour. The free app is available for iPhone and Android devices, and users from more than 180 countries are now accessing the rain data.
Turning Your Living Room Into a Wireless Charging Station
Duke University News (01/04/17) Ken Kingery
Engineers at Duke University, the University of Washington (UW), and Intellectual Ventures' Invention Science Fund have proposed a new wireless system that can automatically and continuously charge any device anywhere within a room. The system would operate in the Fresnel zone, a region of an electromagnetic field that can be focused, enabling power density to reach levels sufficient to charge many devices with high efficiency. The technology would be built with metamaterials, synthetic materials composed of many individual, engineered cells that together produce properties not found in nature. A device no bigger than a typical flat-screen television should be able to focus beams of microwave energy down to a spot about the size of a cellphone within a distance of up to 10 meters, and simultaneously power more than one device. However, a powerful, low-cost, and efficient electromagnetic energy source still must be developed, and the system would have to automatically shut off if a person or a pet were to walk into the focused electromagnetic beam. The team says such issues are not roadblocks, as the system could be embedded in the ceiling. "The ability to safely direct focused beams of microwave energy to charge specific devices, while avoiding unwanted exposure to people, pets, and other objects is a game-changer for wireless power," says UW professor Matt Reynolds.
Asian Scientist (01/03/17)
Social robots designed to interact with people could act as caregivers for the elderly and children, but current models struggle to understand and mimic the subtleties of human interaction. Nadine, a humanoid robot created at Nanyang Technological University (NTU) in Singapore, can remember names and past conversations and initiate conversations with people. Although such robots can be programmed to identify people, objects, and words, they are largely unable to detect a multitude of nuances, such as sarcasm, body language, facial expression, and tone of voice. "When there are many people around Nadine, she has to decide who to look at, who to listen to and, if there's a discussion, when to speak and why," says NTU professor Nadia Thalmann. "Research on multi-party interactions is very complex." One popular method used to address such challenges involves collecting massive datasets to teach the robots. Xiaoice, an artificial intelligence (AI) designed by Microsoft's Applications and Services Group East Asia, has had more than 10 billion online chats with people, many of whom initially did not realize they were talking to a robot. Microsoft launched a similar chatbot called Tay in March 2016, but the AI was taken down after it started repeating racist and hateful language learned from its analysis of Twitter data.
Attackers Can Make It Impossible to Dial 911
The Conversation (01/04/17) Mordechai Guri; Yisroel Mirsky; Yuval Elovici
A new cyberattack strategy can block access to 911 emergency services by exploiting vulnerabilities in the system, according to researchers at Ben-Gurion University of the Negev in Israel. The method involves tying up all available phone line connections with malicious traffic, making it impossible for legitimate calls to get through. The most common vector for such an attack is the spread of malware to computers, including smartphones, so they can be remotely hijacked. Attackers can then tell the commandeered devices to flood a particular site or phone number with traffic. To test a 911 denial-of-service attack scenario, the researchers set up simulations of North Carolina's 911 infrastructure and of the U.S. emergency-call system. The team was able to significantly curtail 911 service in North Carolina with only 6,000 infected mobile phones, making it possible to block 911 calls from 20 percent of state landline callers and 50 percent of mobile customers. Meanwhile, the researchers say only 200,000 infected smartphones could wreak comparable havoc on a national scale. Moreover, the researchers warn a federal mandate that mobile phone companies forward all 911 calls directly to emergency dispatchers creates a serious vulnerability that hackers could exploit to launch 911 denial-of-service attacks.
Abstract News © Copyright 2017 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: firstname.lastname@example.org