Association for Computing Machinery
Welcome to the October 26, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets, and for iPhones and iPads.

HEADLINES AT A GLANCE


NSF Grants Bring Together Computer, Political Scientists for International Conflict Projects
UT Dallas News Center (10/23/15) Kim Horner

The University of Texas at Dallas (UT Dallas) has received nearly $2 million from the U.S. National Science Foundation for two projects focused on international conflict. The first grant provides $1.5 million to create a research tool that uses big data to deliver up-to-date information on civil protests, unrest, and international conflicts. The researchers will create a platform that can mine news feeds in multiple languages for political conflict and cooperation events, and code the locations of those developments. The platform aims to inform decisions about foreign policy, international relations, civil war prevention, and human rights policies. "We will also include the latest techniques from computer science to improve how we detect new political interactions, actors, and locations of their activities," says UT Dallas professor Patrick T. Brandt. The second grant will help researchers study Colombia's efforts to protect its critical infrastructure from physical assaults and cyberattacks. The researchers will focus on a series of technologies, best practices, and emergency-response principles for protecting against and reacting quickly to attacks. "We want to characterize the consequences of attacks in order to identify the right amount of resources to protect and respond to emergencies," says UT Dallas professor Alvaro Cardenas.
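
The article does not describe how the UT Dallas platform is implemented. Purely as an illustration of what dictionary-based coding of a political event and its location from a news sentence can look like, the following Python sketch matches hypothetical actor, event, and place dictionaries against a sample sentence; all names, labels, and coordinates are assumptions, not part of the actual system.

```python
# Illustrative sketch only: dictionary-based coding of a conflict/cooperation event
# and its location from one news sentence. The dictionaries, the sample sentence,
# and the event labels are assumptions; the UT Dallas platform's actual pipeline
# is not described in the article.
ACTORS = {"government": "GOV", "protesters": "CVL", "rebels": "REB"}
EVENTS = {"clashed with": "CONFLICT", "signed an agreement with": "COOPERATION"}
PLACES = {"bogota": ("Bogota", 4.71, -74.07), "cairo": ("Cairo", 30.04, 31.24)}

def code_event(sentence):
    """Return the actors, event type, and location found in a sentence, if any."""
    text = sentence.lower()
    actors = [code for word, code in ACTORS.items() if word in text]
    event = next((label for phrase, label in EVENTS.items() if phrase in text), None)
    place = next((loc for name, loc in PLACES.items() if name in text), None)
    return {"actors": actors, "event": event, "location": place}

print(code_event("Protesters clashed with the government in Bogota on Tuesday."))
```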


Graphene Key to High-Density, Energy-Efficient Memory Chips, Stanford Engineers Say
Stanford Report (10/23/15) Ramin Skibba

In three new papers, Stanford University researchers describe how graphene can be used to create memory technologies with better data density and energy efficiency than existing silicon-based memory chips. Led by Stanford professors Eric Pop and H.-S. Philip Wong, an international team of collaborators used graphene to develop alternative memory technologies. In the first paper, the researchers used graphene to create resistive random-access memory (RRAM), which has the speed of modern volatile memory technologies but can retain its stored data when powered off. The other two papers detail using graphene to create phase-change memory, in which electrical pulses are used to change the atomic structure of a special alloy of germanium, antimony, and tellurium. In both projects, the researchers say they were able to create memory that was more energy efficient than silicon chips. "With these new storage technologies, it would be conceivable to design a smartphone that could store 10 times as much data, using less battery power, than the memory we use today," Pop says. He notes the technology also could help to dramatically cut the amount of electricity used to power data centers.


UAB Research Studies Cyberattacks Through the Lens of EEG and Eye Tracking
UAB News (10/22/15) Katherine Shonesy

Researchers from the University of Alabama at Birmingham (UAB) presented a study at the recent 2015 ACM Conference on Computer and Communications Security on users' susceptibility to, and ability to detect, certain cyberattacks. The researchers sought to better understand how users respond when trying to detect malware and phishing attacks by monitoring their neural activity with electroencephalograms (EEGs), cognitive metrics, and eye-tracking technology. Nitesh Saxena, director of UAB's Security and Privacy In Emerging computing and networking Systems (SPIES) lab, says the research found users did not spend enough time analyzing phishing indicators and often failed to detect phishing attacks, even though they seemed to be able to subconsciously tell the difference between real and fake sites. The opposite was true for malware, with users paying close attention to malware indicators. Co-author Ajaya Neupane says the study found that during the malware tests users were working hard, engaged with the warnings, and heeded them the majority of the time. Users' natural attention control, considered a personality trait, was shown to be highly correlated with their ability to spot phishing messages. The researchers say their study could help other researchers develop new mechanisms to evaluate whether or not users' responses to malware and phishing warnings are likely to be reliable.


How Your Device Knows Your Life Through Images
Technology Review (10/23/15) Graham Templeton

Georgia Institute of Technology (Georgia Tech) researchers have designed an artificial neural network to identify scenes in photographs taken from the point of view of people using wearable cameras or mobile phones. They have trained the network on a set of about 40,000 images taken over a six-month period by a single individual who manually associated each image with a basic activity, such as driving, watching TV, family time, and hygiene. A separate learning algorithm enables the network to learn common associations between activities and make predictions about the user's upcoming schedule. "It can leverage deep learning, and the basic contextual information on daily activities," says Georgia Tech graduate student Steven Hickson. The team reports the computer model achieved about 83-percent accuracy in identifying activities. The researchers say the technology has the potential to track daily activities more accurately than current apps and offer more insightful services. For example, an app could use the technology to monitor eating or exercise habits and suggest possible adjustments. The technology also can learn schedules and make intelligent suggestions on the fly.
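
The summary above does not specify how the schedule-prediction algorithm works. As a minimal sketch of one way upcoming activities could be predicted from a labeled activity log, the snippet below builds a first-order transition model; the activity names, the toy data, and the modeling choice are assumptions, not the Georgia Tech method.

```python
# Minimal sketch, not the Georgia Tech implementation: predict a likely next
# activity from a chronological log of image-derived activity labels using a
# first-order transition model. Activity names and data are assumptions.
from collections import Counter, defaultdict

def train_transitions(activity_log):
    """Count how often each activity follows another in a chronological log."""
    transitions = defaultdict(Counter)
    for prev, nxt in zip(activity_log, activity_log[1:]):
        transitions[prev][nxt] += 1
    return transitions

def predict_next(transitions, current_activity):
    """Return the most frequent follow-on activity, or None if unseen."""
    followers = transitions.get(current_activity)
    return followers.most_common(1)[0][0] if followers else None

if __name__ == "__main__":
    log = ["hygiene", "driving", "work", "driving", "family time",
           "watching TV", "hygiene", "driving", "work"]
    model = train_transitions(log)
    print(predict_next(model, "driving"))  # "work" in this toy log
```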


3D Map of the Brain
UNews (UT) (10/22/15) Vincent Horiuchi

University of Utah researchers have developed software that maps out a monkey's brain and creates a three-dimensional (3D) model, which they say provides a more complete picture of how the brain is wired. The researchers say the technology could help medical researchers understand how the brain's connectivity is disrupted in abnormal mental and neurological conditions such as schizophrenia, depression, anxiety, and autism. The team started with an existing software platform called Visualization Streams for Ultimate Scalability (VISUS) and adapted it to assemble high-resolution images of different sections of the brain into a 3D model. The researchers then created images of a brain using the CLARITY method, which makes brain tissue transparent by immersing it in special hydrogels. Hundreds of 3D blocks of the brain are scanned with a two-photon microscope, and the software enables scientists to view the scans immediately. "It really unleashes a different level of understanding of the data itself--being able to look at something fully in 3D and to rotate and look at in front and in back," says Utah professor Valerio Pascucci. He notes the software helps researchers monitor the brain-scanning process to make sure no bad images are created.


How Emojis Find Their Way to Phones
The New York Times (10/20/15) Jonah Bromwich

The Unicode Consortium, which was founded in the late 1980s to create a standardized code for text characters, is attracting interest as the arbiter of new emojis. Emojis, the pictographs used mainly on mobile devices to convey thoughts, moods, and other messages, have drawn controversy for various reasons, including some people's perception of them as a language. "It's not a language, but conceivably, it could develop into one, like Chinese did," says Unicode president Mark Davis. "Pictures can acquire a particular meaning in a particular culture." Linguist Tyler Schnoebelen acknowledges the symbols can serve as a kind of functional equivalent of body language. The consortium will meet in May to decide whether to officially expand the emoji vocabulary with 67 new symbols. The group says compatibility and frequency of use are among the factors it weighs when voting on which emojis to induct; another is "completeness." Potential additions Unicode is considering include sports icons chosen to accommodate people who will text during the next Olympics. Davis notes that once the expansion is approved, it is up to mobile device manufacturers to add the new emojis to their phones.


New UW Model Helps Zero in on Harmful Genetic Mutations
University of Washington News and Information (10/22/15) Jennifer Langston

University of Washington (UW) researchers have developed a model they say can predict which genetic mutations significantly change how genes splice. The researchers say the model is the first to train a machine-learning algorithm on vast amounts of genetic data created with synthetic biology techniques. "This model can help you narrow down the universe--hugely--of the mutations that might be most likely to cause disease," says UW doctoral student Alexander Rosenberg. The team tested the model on several well-understood mutations, such as those in the BRCA2 gene that have been linked to breast and ovarian cancer. The researchers say that compared with previously published models, the approach is three times more accurate in predicting how a mutation will cause genetic material to be included or excluded in the protein-making process. Using common molecular biology methods, the UW team created a library of more than 2 million synthetic "mini-genes" containing random DNA sequences, and then determined how each random sequence element affected where genes spliced and what types of RNA were produced. UW professor Georg Seelig says the larger library of synthetic data makes the model smarter. The researchers have made a Web tool available to the public, and they plan to expand the approach beyond alternative splicing to other processes that determine how genes are expressed.
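
The article does not detail the UW model's features or learning method. Only as an illustration of the training setup it describes, the sketch below fits a generic regression from k-mer counts of synthetic "mini-gene" sequences to a splicing inclusion fraction, then scores a mutation by the change in the prediction; the toy data, the k-mer features, and the use of scikit-learn are all assumptions, not the published model.

```python
# Illustrative sketch only, not the UW model: fit a generic regression from k-mer
# counts of synthetic "mini-gene" sequences to a splicing inclusion fraction, then
# score a mutation by the change in the prediction. The toy data, the k-mer
# features, and the use of scikit-learn are assumptions.
import random
from itertools import product
from sklearn.linear_model import Ridge

KMERS = ["".join(p) for p in product("ACGT", repeat=3)]

def kmer_counts(seq, k=3):
    counts = dict.fromkeys(KMERS, 0)
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    return [counts[m] for m in KMERS]

def toy_library(n=2000, length=25, seed=0):
    """Random inserts with a made-up splicing signal tied to 'GT' content."""
    rng = random.Random(seed)
    seqs = ["".join(rng.choice("ACGT") for _ in range(length)) for _ in range(n)]
    labels = [min(1.0, s.count("GT") / 5.0) for s in seqs]  # fake inclusion fraction
    return seqs, labels

seqs, labels = toy_library()
model = Ridge().fit([kmer_counts(s) for s in seqs], labels)

# Score a "mutation": compare predictions for a reference and a variant sequence.
ref = "ACGTACGTACGTACGTACGTACGTA"
var = ref[:10] + "T" + ref[11:]  # single-base substitution at position 10
delta = model.predict([kmer_counts(var)])[0] - model.predict([kmer_counts(ref)])[0]
print(delta)
```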


Image Too Good to Be True? DARPA Program Targets Image Doctoring
Network World (10/21/15) Michael Cooney

The U.S. Defense Advanced Research Projects Agency (DARPA) wants to develop an easy-to-use toolset for detecting altered images. The agency says current media forensic tools lack depth and, as a consequence, the process of media authentication "is typically performed manually using a variety of ad hoc methods that are often more art than science." To make the process more rigorous and consistent, DARPA has launched the Media Forensics (MediFor) program. According to DARPA, MediFor will have three central elements. The first is looking for digital integrity indicators, elements within a digital image that could indicate it has been altered; these include edge discontinuities, blurred pixels, and repeated image regions. The second element is physical integrity indicators, aspects of the scene depicted in the image that violate the laws of physics, indicating the image has been manipulated. The technology would involve examining characteristics such as reflections and shadows in still images and kinematics in videos to see if they are consistent. Finally, semantic integrity indicators would look for other inconsistencies in an image, which might indicate, for example, that the date, time, or location of the image is not correct.
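
As a hedged illustration of one digital integrity indicator named above, repeated image regions (commonly called copy-move forgery), the following sketch hashes fixed-size pixel blocks to flag byte-identical duplicates. It is not part of MediFor or any DARPA tool; the block size, the grayscale input, and the test image are assumptions.

```python
# Hedged sketch of one digital integrity indicator -- repeated image regions
# (copy-move forgery) -- found by hashing fixed-size pixel blocks. This is not
# DARPA's MediFor toolset; the block size and grayscale input are assumptions.
import hashlib
from collections import defaultdict
import numpy as np

def repeated_blocks(gray, block=16):
    """Return groups of (row, col) offsets whose pixel blocks are byte-identical."""
    seen = defaultdict(list)
    rows, cols = gray.shape
    for r in range(0, rows - block + 1, block):
        for c in range(0, cols - block + 1, block):
            digest = hashlib.sha1(gray[r:r + block, c:c + block].tobytes()).hexdigest()
            seen[digest].append((r, c))
    return [locs for locs in seen.values() if len(locs) > 1]

if __name__ == "__main__":
    img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
    img[64:96, 64:96] = img[0:32, 0:32]   # simulate a cloned (pasted) region
    print(repeated_blocks(img))           # cloned blocks appear in more than one place
```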


Social-Network Analysis Reveals Animal Bonding Behaviors
UIC News Center (10/21/15) Jeanne Galatzer-Levy

To study the bonding behaviors of two related species, the Grevy's zebra of Africa and the onager, a wild ass native to Asia, researchers relied on a new, dynamic social-network analysis tool. University of Illinois at Chicago computational ecologist Tanya Berger-Wolf led a multidisciplinary team that created CommDy, a dynamic network computational framework. Berger-Wolf says that in fission/fusion communities, individuals meet and spend time with others in different groups at different times, and that the two species' communities look similar when analyzed with traditional static social-network methods. However, the zebras are few in number, limited in range, and face large predators, while onagers are relatively abundant and widespread, with no major predators and more reliable access to water. To observe the daily interactions within each of the two animal communities, researchers from the Mpala Research Center in Kenya drove repeatedly along the same route through the animals' territory to record the size, duration, and membership of different groups. The software enabled the researchers to contextualize the observed interactions. They found Grevy's zebras lived in large, stable groups, with loyalty rewarded and visiting other groups discouraged. Onagers formed smaller, less cohesive groups, with individuals able to change circles with little social cost.
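
CommDy itself is not described in implementation detail here. The sketch below only illustrates, on assumed observation data, how repeated group-membership sightings can be turned into a weighted co-membership network and a simple per-individual "loyalty" score; both the data and the scoring rule are assumptions for demonstration, not the published framework.

```python
# Illustrative sketch, not CommDy itself: turn repeated group-membership sightings
# into a weighted co-membership network and a simple per-individual "loyalty" score
# (how often an animal keeps the same companions between consecutive observation
# days). The observation data and the scoring rule are assumptions.
from collections import defaultdict
from itertools import combinations

# Each observation day is a list of groups (sets of individual IDs).
observations = [
    [{"z1", "z2", "z3"}, {"z4", "z5"}],
    [{"z1", "z2", "z3"}, {"z4", "z5"}],
    [{"z1", "z2"}, {"z3", "z4", "z5"}],
]

# Weighted edges: how many days each pair was seen in the same group.
co_membership = defaultdict(int)
for day in observations:
    for group in day:
        for a, b in combinations(sorted(group), 2):
            co_membership[(a, b)] += 1

def groupmates(day, individual):
    """The set of companions an individual was observed with on a given day."""
    for group in day:
        if individual in group:
            return group - {individual}
    return set()

individuals = set().union(*(g for day in observations for g in day))
loyalty = {
    ind: sum(groupmates(d1, ind) == groupmates(d2, ind)
             for d1, d2 in zip(observations, observations[1:])) / (len(observations) - 1)
    for ind in individuals
}

print(dict(co_membership))  # pairwise co-membership counts
print(loyalty)              # fraction of consecutive days with unchanged companions
```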


New Dartmouth-Disney Device Improves Full-Color Image Projection
Dartmouth College (10/21/15)

Researchers at Disney Research and Dartmouth College say they have developed a way to display full-color images using only two black patterns printed on transparencies attached to two sides of a prism. When light passes through the prism via the first pattern, it creates a repetition of rainbows that are then filtered by the second pattern to produce a chosen full-color image. The research won the "Best Paper Award" at this month's Pacific Graphics 2015 conference in China. "In the future, this technique could allow for projectors and displays with better color fidelity or even displays, which could dynamically trade off light efficiency, color fidelity, and resolution," says Dartmouth professor Wojciech Jarosz, previously a senior research scientist at Disney Research Zurich. He notes light consists of different wavelength components, which are decomposed by the prism through dispersion and filtration. Jarosz's research is concerned with capturing, simulating, manipulating, and physically realizing complex visual appearances. His research has been incorporated into production rendering systems and has been used in the making of such feature films as Disney's "Tangled" and "Big Hero 6."


Researchers at Johns Hopkins Study Crickets' Aerial Acrobatics in Hopes of Building Better Robots
Hub (10/20/15) Phil Sneiderman

Johns Hopkins University (JHU) researchers say they have spent more than eight months studying spider crickets in an effort to develop a new generation of small but skillful jumping robots. The researchers used high-speed video cameras to determine how the crickets can leap a distance of about 60 times their body length. "We're looking at the way the spider crickets move their bodies and move their limbs to stabilize their posture during a jump," says JHU researcher Emily Palmer. The research could contribute to the design of tiny, high-jumping robots able to navigate rugged, uneven ground. These types of robots would use a more efficient, and probably less expensive, form of locomotion than flying robots or humans on foot, according to Palmer. The researchers created detailed three-dimensional models depicting how each insect's body parts move during a leap and a landing. JHU professor Rajat Mittal says a new generation of jumping micro-robots modeled on these crickets might someday be able to help look for victims after a powerful earthquake or carry out other tasks without putting human searchers at risk.


What It Will Take to Make Computer Science Education Available in All Schools
The Conversation (10/22/15) Marie desJardins

With student interest in computer science (CS) on the rise, Marie desJardins, associate dean for engineering and information technology and professor of CS at the University of Maryland, Baltimore County, sees a great need to expand CS education in the K-12 grades. "Often students who want to major in computer science...do not have the computational thinking or mathematical preparation to succeed in college-level coursework," desJardins writes. She also cites a lack of sufficient effort among educators to widen interest in computing. "The vast majority of students in the U.S. do not take even a single computer science course throughout their K-12 education," she notes. DesJardins partially attributes this to the fact that no state has designated a CS class as a graduation requirement. Although many states are now striving to more deeply embed computing instruction within K-12 education, desJardins observes that universal statewide K-12 CS education remains nonexistent. "Moreover, the standards that have been adopted by states focus more on low-level skills than on abstract computational concepts, and therefore do not prepare students well for more advanced college-level computing courses," she warns. DesJardins points to a lack of qualified CS teachers as one limiting factor, while inconsistency in CS education standards across states is another.


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe