Welcome to the November 20, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.
Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets (click here) and for iPhones (click here) and iPads (click here).
HEADLINES AT A GLANCE
Blue Sky Ideas Conference Track Held at ACM SIGSPATIAL 2015
CCC Blog (11/19/15) Helen Wright
The recent ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems 2015 in Seattle, WA, concentrated on advanced geospatial data research, and three papers won awards under the Computing Research Association's Computing Community Consortium's Blue Sky Ideas Conference Track. The first-prize winner was University of Southern California (USC) professor Yao-Yi Chiang's "Querying Historical Maps as a Unified, Structured, and Linked Spatiotemporal Source." Chiang presented a strong argument for developing methods for automatically analyzing historical maps, using a comparison between maps of a Bristol neighborhood separated by about a century to show that residents are oblivious to historical environmental pollution and the possible health risks they face every day. The second prize was awarded to "Future Connected Vehicles: Challenges and Opportunities for Spatio-temporal Computing" by the University of Minnesota's Reem Y. Ali and colleagues, which made a case for how on-board diagnostic data streams from smart connected vehicles can lead to lower emissions, fuel savings, and accident avoidance, while also reducing traffic congestion. Third prize was given to USC's Liyue Fan and colleagues for "Privacy-Preserving Inference of Social Relationships from Location Data: A Vision Paper," which discussed building useful social networks from location data while keeping location privacy constraints in mind. Fan envisions a framework that balances both privacy and utility.
What Are Your Apps Hiding?
MIT News (11/19/15) Larry Hardesty
Massachusetts Institute of Technology (MIT) researchers have found the user experience is largely unaffected by much of the data transferred between the 500 most popular free Google Android cellphone applications. MIT postdoctoral researcher Julia Rubin says about 50 percent of these "covert" communications seem to be triggered by standard Android analytics packages, which report statistics on usage patterns and program performance and are designed to help developers improve their apps. Rubin notes although the other half are not analytics-related, there still could be a good reason for covert communications; however, she says users should be notified. The MIT team's analytic tools plot out all possible ways data can flow through an app, to determine whether a given command to open a communication channel will result in a control signal that is routed to either the display or the speaker. An analysis of data traffic from some of the more popular apps uncovered some insight into the possible goals of their covert communications. "Where there's an element of surprise--and promise--is in the fact that you can't really localize all these covert channels to advertising and analytics, which is what one would intuitively expect," says the IBM T.J. Watson Research Center's Omer Tripp. The research was presented last week at the IEEE/ACM International Conference on Automated Software Engineering (ASE 2015) in Lincoln, NE.
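The core question the MIT tools answer--can a given communication channel influence anything the user sees or hears?--can be illustrated with a toy reachability check over an app's flow graph. The graph and node names below are hypothetical; the actual analysis operates on real app bytecode and is far more sophisticated.

```python
from collections import deque

# Toy flow graph: edges from each operation to the operations its data can reach.
# A real tool would derive this graph automatically from the app's code.
flow_graph = {
    "open_socket": ["send_analytics", "fetch_ad"],
    "fetch_ad": ["render_banner"],   # reaches the display
    "send_analytics": [],            # reaches nothing user-visible
    "render_banner": [],
}

USER_VISIBLE = {"render_banner", "play_sound"}

def is_covert(channel, graph, visible=USER_VISIBLE):
    """Return True if no path from `channel` reaches a user-visible sink."""
    seen, queue = set(), deque([channel])
    while queue:
        node = queue.popleft()
        if node in visible:
            return False
        if node in seen:
            continue
        seen.add(node)
        queue.extend(graph.get(node, []))
    return True

print(is_covert("send_analytics", flow_graph))  # True: affects nothing the user perceives
print(is_covert("fetch_ad", flow_graph))        # False: its data feeds the banner
```

A channel flagged True here corresponds to the "covert" communications in the study: traffic whose removal would leave the user experience unchanged.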
The Machine-Vision Algorithm for Analyzing Children's Drawings
Technology Review (11/19/15)
Researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL) have developed a machine-vision algorithm that can objectively analyze children's drawings. The researchers used the algorithm to create an "average" of children's drawings, which enabled them to examine which parts of the page children prefer to draw on, the colors they use, the color intensity, and the complexity of the drawings. In addition, the new method enables the team to see how these aspects of children's drawings vary by age and location. The researchers focused on the way children draw God. The database consists of 2,389 drawings of God by children between the ages of 5 and 15, and from a variety of cultural backgrounds and religions. The researchers note the most impressive result is a clear demonstration that the complexity of the drawing changes as children get older. The researchers also found, using the machine-vision analysis, the average images from some parts of the world tend to be above the midline of a piece of paper, indicating the children consider God to be unworldly, while images from other regions are more centered. The researchers also found children tend to use yellow in the center of the images while green appears at the bottom, and they suggest green represents earthly objects and yellow represents supernatural objects.
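At its core, the "average drawing" described above is a per-pixel mean over aligned images. A minimal sketch on tiny synthetic arrays (not EPFL's actual pipeline, which works on scanned, normalized drawings):

```python
import numpy as np

# Three tiny synthetic "drawings" as grayscale arrays (0 = blank, 255 = dense ink).
# Real drawings would first be scanned and aligned to a common size.
drawings = np.array([
    [[0, 200], [50, 0]],
    [[0, 100], [150, 0]],
    [[0, 150], [100, 0]],
], dtype=float)

average = drawings.mean(axis=0)  # per-pixel mean across all drawings
print(average)  # darker cells mark regions children draw on most often
```

Averages computed this way make positional tendencies visible, such as whether most of the ink falls above or below the midline of the page.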
New Advanced Computing Systems
Charles III University of Madrid (Spain) (11/17/15)
Researchers at the Charles III University of Madrid (UC3M) are participating in an effort to improve the development of advanced computing systems for parallel heterogeneous architectures. Under the auspices of the RePhrase project, the UC3M team is focusing on solving specific problems when creating applications in parallel computers. "Parallel heterogeneous architectures are those that are used in machines that combine different computing devices, such as the familiar multi-core processors and graphics cards used to make computations," says UC3M's Jose Daniel Garcia. The researchers are paying special attention to the use of the C++ programming language, which is considered an excellent alternative for these types of devices. New software development methods are needed because the next generation of computers will have a greater number of processors with divergent features. RePhrase, which is part of the European Union's Horizon 2020 program, could lead to faster applications that consume less energy. The research could lead to applications in different fields, such as in industrial manufacturing processes, monitoring railway traffic, and diagnosing mental illness.
Researchers Develop Software for Finding Tipping Points and Critical Network Structures
An international team led by researchers at Germany's Potsdam Institute for Climate Impact Research (PIK) has developed a new software tool designed to make it easier for researchers to grapple with multiple large data sets. Called pyunicorn because it was created using the Python programming language, the tool unifies complex network theory, which relates to networks with irregular but not completely random patterns; and nonlinear time series analysis, which is used to study complex systems that play out over time in a chaotic manner, such as weather or financial markets. "Pyunicorn works like a macroscope [which], if used the right way, allows [us] to distill the essence of information from a network or time series data," says Jonathan Donges, one of the PIK researchers. Pyunicorn's main application is the analysis of data from observations, experiments, and model systems via graphs and time series. It has been used to study historical climate and fossil data to identify tipping points in the Paleolithic era, and to study the emergence of the severe pregnancy complication preeclampsia. Researchers from other institutions in Germany, Sweden, the Netherlands, the U.K., and Russia also contributed to developing the new tool.
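The bridge pyunicorn builds between the two fields can be illustrated without the library itself: a recurrence matrix turns a time series into a network whose nodes are time points and whose edges link similar system states. This is a conceptual sketch of that idea, not pyunicorn's actual API.

```python
import numpy as np

def recurrence_matrix(series, threshold):
    """Adjacency matrix linking time points whose states lie within `threshold`."""
    x = np.asarray(series, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])  # pairwise distances between states
    adj = (dist <= threshold).astype(int)
    np.fill_diagonal(adj, 0)                # no self-loops
    return adj

# A short oscillation as a stand-in for climate or market data.
t = np.linspace(0, 4 * np.pi, 50)
series = np.sin(t)
adj = recurrence_matrix(series, threshold=0.1)

# Node degree hints at how often the system revisits each state;
# structural changes in this network can signal approaching tipping points.
print(adj.sum(axis=1).mean())
```

Once the series is a network, the full toolbox of complex network theory (degree, clustering, transitivity) applies, which is the unification the article describes.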
Pioneering Research Boosts Graphene Revolution
University of Exeter (11/17/15)
University of Exeter physicists say they are collaborating with the ICFO Institute in Barcelona on pioneering new research that could help accelerate the "graphene revolution." The scientists have developed a technique to trap light at the surface of graphene using only pulses of light. When trapped, the light converts into a quasi-particle called a "surface plasmon," a mixture of both light and the graphene's electrons. The researchers say they were able to steer this trapped light across the surface of the graphene without the need for any nanoscale devices. They say the research could lead to new insights about graphene and how it interacts with light. A simple device for scanning a piece of graphene and learning about its properties could be an early commercial application, but the research could pave the way for miniaturized optical circuits and faster Internet speeds. "Computers that can use light as part of their infrastructure have the potential to show significant improvement," says the University of Exeter's Tom Constant. "Any advance that reveals more about light's interaction with graphene-based electronics will surely benefit the computers or smartphones of the future."
Carnegie Mellon Building Educational Software to Teach Children Basic Skills Without a Teacher
Carnegie Mellon News (PA) (11/17/15) Byron Spice
Carnegie Mellon University's (CMU) "RoboTutor" team is participating in the $15 million Global Learning XPRIZE competition, which aims to develop a way to teach children to read, write, and do basic arithmetic without a teacher or classroom, relying only on tablet computers, each other, and some intelligent software. The CMU researchers note that in many parts of the developing world, there are not enough teachers or classrooms. "If we can develop educational technology to fill that gap, we can significantly improve the lives of the 250 million children who today can't read, write, or do basic math," says CMU professor Jack Mostow. The Global Learning XPRIZE will award a grand prize of $10 million to the team whose open source software proves best able to help children learn basic literacy and numeracy skills during a field test in East Africa. To date, nearly 200 teams from 40 countries have registered. The teams must develop their solutions by November 2016, and an expert panel will select five finalists in 2017, each of which will receive $1 million as they prepare for the field test in at least 100 African villages in 2017 and 2018.
Bringing More Memory to Quantum Communication
Yale News (11/17/15)
Researchers at Yale University and the University of Erlangen-Nuremberg have developed a method for fabricating longer-lasting quantum memories, building on their work last year creating a magnon gradient memory. "In our previous study, we realized the strong coupling between the magnon and the microwave photon," says Yale Ph.D. student Xufeng Zhang. "We demonstrated that the information could transfer between the two systems multiple times before it decayed. In this system, we looked at coupling to store information from microwave to magnon." The system employs a series of yttrium iron garnet spheres in a three-dimensional microwave cavity, with a slightly different magnetic field applied to each sphere so its magnons resonate at a slightly different frequency while remaining close to the cavity's resonance frequency. Superposition of these magnon modes creates a "bright mode," accessible by the cavity, as well as a series of "dark modes" partitioned from the outside world. "Information can couple to the bright mode and stay as the dark modes for a period of time before it evolves back to the bright mode and gets retrieved," says Yale postdoctoral fellow Chang-Ling Zou. The decoupled dark modes can prevent information decay, enabling information storage, says Yale professor Liang Jiang.
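In the standard notation for N identical modes coupled to one cavity, the bright/dark structure described above looks roughly as follows (an idealized sketch; the deliberate detunings between the YIG spheres make the actual picture richer):

```latex
\begin{align}
|B\rangle &= \frac{1}{\sqrt{N}} \sum_{k=1}^{N} |m_k\rangle
  && \text{bright mode: couples to the cavity with enhanced strength } g\sqrt{N},\\
|D_j\rangle &\perp |B\rangle, \quad j = 1,\dots,N-1
  && \text{dark modes: decoupled from the cavity field.}
\end{align}
```

Information written in through the bright mode and parked in the dark modes is shielded from cavity losses, which is the mechanism behind the longer storage times.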
Learning From Distributed Data
National Science Foundation (11/17/15) Marlene Cimons; Aaron Dubrow
University of New Mexico professor Trilce Estrada-Piedra is developing software that will enable researchers to collaborate with each other and make use of decentralized data without jeopardizing privacy or raising infrastructure concerns. Estrada-Piedra notes the large data sets researchers use often contain personal information, such as the data found in medical records, and finding a way to reliably project and anonymize that data is key to making it useful to researchers. Similarly, such data sets are often not housed geographically close to researchers and are too large to transfer, which means methods are needed to abstract and transmit relevant data points to researchers without having to copy or move the entire data set. "This is a way of enabling science, meaning that researchers will more easily be able to analyze larger datasets, especially those that, for some reason, cannot be centralized," Estrada-Piedra says. The research is supported by a five-year, $412,969 grant from the U.S. National Science Foundation's Faculty Early Career Development (CAREER) award program, which Estrada-Piedra received earlier this year. As part of the award, Estrada-Piedra will adapt the middleware she uses in her research, Andromeda, into a distributed science application that can be used in educational settings.
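One standard way to analyze data that cannot be centralized is to ship only small summaries from each site and combine them. A minimal sketch of computing a global mean this way (illustrative of the general idea only, with hypothetical data; not Andromeda's actual protocol):

```python
# Each site reduces its private records to two numbers: a sum and a count.
# Only these summaries leave the site; the raw records never move.
def local_summary(records):
    return sum(records), len(records)

def global_mean(summaries):
    total = sum(s for s, _ in summaries)
    count = sum(n for _, n in summaries)
    return total / count

site_a = [4.0, 6.0]        # hypothetical records held at site A
site_b = [5.0, 7.0, 8.0]   # hypothetical records held at site B

summaries = [local_summary(site_a), local_summary(site_b)]
print(global_mean(summaries))  # 6.0, identical to the centralized mean
```

The same pattern extends to richer statistics and models: each site transmits an abstraction of its data rather than the data itself, addressing both the privacy and the transfer-size concerns the article raises.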
Iran Demonstrates New Humanoid Robot Surena III
IEEE Spectrum (11/17/15) Erico Guizzo
Researchers at Iran's University of Tehran on Monday unveiled Surena III, their latest-generation humanoid robot, which can walk, mimic a person's arm gestures, and stand on one foot while bending backwards. Surena III will be used as a platform for investigating bipedal locomotion, human-robot interaction, and other robotic challenges, according to University of Tehran professor Aghil Yousefi-Koma. He thinks one of Surena III's most useful potential applications could be in disaster conditions. The 1.9-meter-tall, 216-pound robot features light-emitting diode eyes, a Kinect-based three-dimensional vision module, and 31 servomotors powering its joints, with human operators supervising its functions with software based on the Robot Operating System (ROS). Surena III's vision system enables the robot to detect faces and objects and track a person's motions, and its speech system can recognize certain predefined sentences in Persian. Yousefi-Koma says the ROS addition "enables the robot to simultaneously communicate with the environment, manage its behaviors, monitor its sensors, and detect unwanted faults in the system." The third-generation machine can walk at a pace nearly 10 times that of its original iteration, at 0.2 meters/second. Other advancements Yousefi-Koma cites include Surena III's ramp- and stair-climbing ability, as well as its object-grasping proficiency and ability to adapt to uneven terrain.
Bumblebees Are Teaching Smart Cars How to Drive
Motherboard (11/15/15) Melissa Cronin
The U.S. National Science Foundation has awarded a $300,000 research grant to a Worcester Polytechnic Institute (WPI) project that will study bumblebee navigation and use that information to help improve the safe operation of smart cars. The project seeks to enable "connected vehicles" that communicate with each other over some form of channel to exchange data about their velocity, direction, and intentions, boosting both driver and pedestrian safety. Bumblebees are being studied as a model for this navigational method because they share information with each other and then act on it individually. The project is a collaboration between WPI researcher Alexander Wyglinski and biologist Robert Gegear: Gegear will study bumblebee foraging behavior, and Wyglinski will feed the resulting data into models for connected vehicles. "Evolution has primed these types of insects to survive in [the] real world," Wyglinski says. "We're just borrowing what mother nature has polished." Wyglinski expects the initiative will let him deploy and test a connected vehicle network based on the bumblebee data prior to a real-world implementation. He anticipates cars being equipped with connected vehicle systems within five years.
Angelica Lim: Flutist. Global Roboticist. Proud Master of a Robot Dalmatian Named Sparky.
TechRepublic (11/18/15) Hope Reese
Angelica Lim, who works as a "developer on emotion recognition" on the Pepper robot at Aldebaran Robotics in Paris, specializes in building robots that can identify and express emotions. "Our machines, our phones...they can't empathize," Lim says. "I think that's the biggest hurdle. We want a robot to be compassionate." Lim sees such machines helping to coach or encourage people to seek out social contact, citing as an example robots designed to detect sadness so they can cheer people up. Lim also says the idea is not to replace people with robots, noting they could be very useful as companions for the elderly at times when they are alone, to name one example. "There's a field of robotics called 'developmental robotics' that says that robots will become intelligent by learning expression like children do," Lim says. "The idea is that expressions are developed through time and interactions." Lim envisions robots learning to recognize emotions and respond appropriately in a similar manner. She says the biggest misconception people have about robots and emotions stems from what the word "emotional" suggests. "People think a robot will 'get emotional,' which has a negative connotation," Lim says. "It's important to think about the compassionate part. Being able to share emotions."
Abstract News © Copyright 2015 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.