Association for Computing Machinery
Welcome to the August 10, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.

HEADLINES AT A GLANCE


University Technology Program Launched to Give Peace a Chance
IDG News Service (08/07/15) Grant Gross

Drexel University and the Washington, D.C.-based PeaceTech Lab have formed an alliance called the Young Engineers Program to give computer science and engineering students and researchers opportunities to apply their skills in conflict zones to prevent violence, according to PeaceTech Lab CEO Sheldon Himelfarb. He says most of the participating students will be digitally savvy, noting "we've really learned that having the eyes and ears of this younger generation of technologists and engineers is crucial." Himelfarb reports the lab is seeking new concepts for using social media and data-crunching tools to track and resolve outbreaks of violence. He says the pervasiveness of mobile phones in conflict zones means mobile applications could be used to aid resolution efforts. Drexel sophomore Dagmawi Mulugeta sees the Young Engineers Program as a chance for him to serve his native country of Ethiopia, and dean of the Drexel College of Engineering Joseph Hughes says the program will deliver most of its classes over the Internet so people outside Drexel's main campus can participate. Hughes also notes the initiative is in line with the school's mission of being the "most civically engaged university in the country."


3D Cursors Sculpt at SIGGRAPH
EE Times (08/09/15) R. Colin Johnson

At the SIGGRAPH 2015 conference on Sunday in Los Angeles, University of Montreal researchers demonstrated a system that uses a tablet to control a three-dimensional (3D) cursor for drawing and manipulating objects in 3D simulations. The technique moves the cursor through a 3D environment via back-and-forth and up-and-down motions of the tablet, while also incorporating pinching and other gestures. The cursor works in conjunction with the Hyve-3D (hybrid virtual environment in 3D) simulator, which also was developed by the University of Montreal team. The immersive images from Hyve-3D are projected onto a 16-foot spherically concave fabric screen, although its simulations also can be viewed and manipulated on a two-dimensional screen. Multiple users can combine Hyve-3D and the 3D cursor to simultaneously sketch, select, edit, and manipulate design elements in the same simulation, either in a shared window or in personal windows. The technology is expected to be used by architectural designers, medical imaging groups, and computer gamers. "The techniques we're unveiling today involve using a tablet to control the cursor, but as it does not necessarily rely on external tracking of the user's movements, eventually other devices could be used, such as smartphones or watches," notes University of Montreal professor Tomas Dorta.
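The Hyve-3D mapping itself is not spelled out in the article, but the core idea, translating planar tablet drags and pinch gestures into 3D cursor motion relative to the tablet's current orientation, can be sketched as follows; the class, axis vectors, and gesture names are illustrative assumptions rather than the Montreal team's implementation.

    import numpy as np

    class Cursor3D:
        """Toy 3D cursor driven by 2D tablet drags plus a pinch gesture.

        The tablet's current orientation is given as two unit vectors
        spanning its screen plane; drags move the cursor within that
        plane, and pinching moves it along the plane's normal.
        """
        def __init__(self):
            self.position = np.zeros(3)

        def drag(self, dx, dy, right, up):
            # Map a 2D screen-space drag onto the tablet's plane in 3D space.
            self.position += dx * np.asarray(right) + dy * np.asarray(up)

        def pinch(self, scale, right, up):
            # Pinching in or out pushes the cursor along the plane's normal.
            normal = np.cross(right, up)
            self.position += (scale - 1.0) * normal

    cursor = Cursor3D()
    cursor.drag(0.2, 0.1, right=[1, 0, 0], up=[0, 1, 0])   # slide within the XY plane
    cursor.pinch(1.5, right=[1, 0, 0], up=[0, 1, 0])        # push along +Z
    print(cursor.position)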


Teaching Machines to Understand Us
Technology Review (08/06/15) Tom Simonite

Facebook hired Yann LeCun, a pioneer of deep-learning research, to head its new artificial intelligence (AI) laboratory and develop software with the language skills and common sense to engage people in basic conversation. LeCun thinks deep learning can create software that understands people's sentences and can respond with appropriate answers, clarifying questions, or suggestions of its own. He speculates deep-learning systems based on neural networking could become sufficiently familiar with humans "to understand not just what people would be entertained by, but what they need to see regardless of whether they will enjoy it." A neural network can "learn" words by mining text and calculating how each word it encounters could have been predicted from the words preceding or following it. The software learns to represent every word as a vector indicating its relationship to other words. The same strategy is applicable for whole sentences, and LeCun's group is working on imbuing computers with common sense via the development of a deep-learning memory network. The network has a memory bank to store learned facts, and the software was trained to answer questions about a simple text. Some experts doubt LeCun's work will meet the AI conversation challenge, as the software would need to accommodate the tendency for words and sentences' meanings to shift depending on context, to name one example.
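The word-vector idea LeCun describes, representing each word so that its neighbors in text could have predicted it, can be illustrated with a minimal co-occurrence-and-factorization embedding; the tiny corpus, window size, and vector dimension below are placeholders, not Facebook's training setup.

    import numpy as np

    corpus = [
        "the cat sat on the mat",
        "the dog sat on the rug",
        "a king rules the kingdom",
        "a queen rules the kingdom",
    ]
    sentences = [s.split() for s in corpus]
    vocab = sorted({w for s in sentences for w in s})
    index = {w: i for i, w in enumerate(vocab)}

    # Count how often each word appears near each other word (window of 2).
    counts = np.zeros((len(vocab), len(vocab)))
    for words in sentences:
        for i, w in enumerate(words):
            for j in range(max(0, i - 2), min(len(words), i + 3)):
                if i != j:
                    counts[index[w], index[words[j]]] += 1

    # Factor the co-occurrence matrix; rows of U become the word vectors.
    U, S, _ = np.linalg.svd(counts, full_matrices=False)
    vectors = U[:, :3] * S[:3]

    def similarity(a, b):
        va, vb = vectors[index[a]], vectors[index[b]]
        return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9))

    # Words that appear in similar contexts end up with similar vectors.
    print(similarity("cat", "dog"), similarity("cat", "kingdom"))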


UAH Developing Architecture to Build Design-Phase Cybersecurity Into Systems
UAH News (08/06/15) Jim Steele

University of Alabama in Huntsville (UAH) researchers are developing Dielectric, a lightweight virtualization architecture that can be used to build cybersecurity into systems used in the Internet of Things. The UAH team is funded by a one-year, $299,622 U.S. National Security Agency grant. The researchers say Dielectric will move the inclusion of cybersecurity features earlier in development, into the product's design phase. The work also will build on prior research on safety-critical systems. "While finding flaws and repairing them will continue to be an important focus in cybersecurity research, this proposal focuses on an architectural approach to building security into the system at the outset," says UAH professor David Coe. UAH professor Letha Etzkorn notes embedded systems are expected to connect to the cloud, and she says her team is "examining security methodologies that can apply both at the embedded systems level and the cloud level." The researchers also will study how Dielectric can be used in automobiles. "We have already together submitted multiple additional proposals to government agencies to support other aspects of this research and we intend to continue this effort in searching for additional funds," Etzkorn says.


Q&A: San Francisco Expands Computer Science Classes
Education Week (08/04/15) Liana Heitin

The San Francisco school district last month announced a plan to bring computer science instruction to all of its students at all grade levels. The plan is one of the most ambitious such initiatives in the U.S., with the goal of making computer science education available to even prekindergarten students. The initiative is being funded by the school district, industry partners, and a $5-million deal with the Salesforce.com Foundation. In an interview, James Ryan, the district's executive director for science, technology, engineering, and mathematics, says the plan is an effort to not only arm the district's students with what he says is "now becoming a basic skill," but also to broaden the demographics of people studying computer science. Initially, pre-K through 5th grade students will receive 20 hours a year of computer science instruction, or one class a week for a semester. Middle school students will receive 45 hours a year, or a quarter-long course. Computer science education will not be mandatory in high school, but the initiative aims to ensure that robust computer science classes are available at all of the district's high schools. Ryan expects the new program to be fully underway within three to five years.


Penn Research Helps Develop Algorithm Aimed at Combating Science's Reproducibility Problem
Penn News (08/06/15) Evan Lerner

University of Pennsylvania researchers are developing new data-mining tools designed to make it easier to know which information is relevant, and when a correlation that seems to have predictive value actually does not because it results only from random chance. The researchers say the tools provide a method for repeatedly testing hypotheses on the same data without compromising the statistical assurances that the conclusions are valid. The method also could increase the power of analyses done on smaller datasets by identifying the ways researchers can be led to a "false discovery." "One thing you could do is get a totally new set of data for every time you test a hypothesis that is based on something you've tested in the past, but that means the amount of data you need to conduct your analysis is going to grow proportionally to the number of hypotheses you are testing," says Penn professor Aaron Roth. The researchers developed a "reusable holdout" tool that lets scientists query a holdout dataset only through a "differentially private" algorithm, instead of testing a hypothesis on the holdout set directly. Using the tool means any findings that rely on idiosyncratic outliers of a given dataset disappear when the data are viewed through a differentially private lens.
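The article does not reproduce the algorithm, but the reusable-holdout idea can be sketched as a noisy comparison between training and holdout estimates, so the analyst never sees a raw holdout statistic; the threshold, noise scale, and correlation query below are illustrative assumptions, not the published parameters.

    import numpy as np

    rng = np.random.default_rng(0)

    def reusable_holdout(query, train, holdout, threshold=0.04, sigma=0.01):
        """Answer an analyst's query without exposing the raw holdout value.

        `query` maps a dataset to a number (e.g., a correlation).  If the
        training and holdout answers agree to within a noisy threshold,
        only the training answer is released; otherwise a noise-perturbed
        holdout answer is returned.
        """
        t = query(train)
        h = query(holdout)
        if abs(t - h) < threshold + rng.laplace(scale=sigma):
            return t                         # the holdout stays untouched
        return h + rng.laplace(scale=sigma)  # release only a noisy value

    # Example: try to "discover" which purely random features predict a random label.
    n, d = 2000, 50
    X_train, X_hold = rng.normal(size=(n, d)), rng.normal(size=(n, d))
    y_train, y_hold = rng.choice([-1, 1], n), rng.choice([-1, 1], n)

    for k in range(d):
        corr = reusable_holdout(
            lambda data: abs(np.corrcoef(data[0][:, k], data[1])[0, 1]),
            (X_train, y_train), (X_hold, y_hold))
        if corr > 0.05:
            print("feature", k, "looks predictive:", round(corr, 3))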


Software Checks If Your Brain Is Busy Before It Interrupts You
New Scientist (08/05/15) Aviva Rutkin

A Tufts University software project called Phylter assesses a person's mental state before letting emails or text messages through to their device. The software screens out such low-priority distractions when it senses a person is focused on a more important task. The researchers say they hope Phylter can help people avoid multitasking and focus better on the task at hand. Tufts professor Robert Jacob says his team wants Phylter to act as a "little dial and you can tell it, 'Now I'm kind of busy, so leave me alone,'" without the user having to do it themselves. Phylter relies on functional near-infrared spectroscopy to read brain activity: a band worn around the forehead beams light into the head and measures what is reflected. The data reveal changes in blood flow in the prefrontal cortex, indicating whether a person is occupied with something or staring into space. A machine-learning algorithm calibrates the system to the wearer's brain. Jacob's team linked Phylter to Google Glass and invited people to wear the device while playing a computer game. When fake notifications came in, players had to decide whether to accept them, and their input helped teach the system whether something was too important to screen out or okay to ignore.
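Phylter's actual model is not described in detail, but the gating idea, learning from the wearer's feedback whether a notification should get through given current brain-activity features, can be sketched with a generic classifier; the feature names, training data, and thresholds below are invented for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    # Hypothetical fNIRS-derived features: [prefrontal oxygenation change, signal variance].
    # Label 1 means "the wearer was too busy; suppress the notification."
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=200) > 0).astype(int)

    workload_model = LogisticRegression().fit(X, y)

    def deliver_notification(features, importance, busy_threshold=0.7):
        """Let a message through only if the wearer seems free or it is urgent."""
        p_busy = workload_model.predict_proba([features])[0, 1]
        return importance > p_busy or p_busy < busy_threshold

    print(deliver_notification([1.2, 0.4], importance=0.2))   # likely suppressed
    print(deliver_notification([-1.0, 0.1], importance=0.2))  # likely delivered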


Artificial Intelligence Decodes Islamic State Strategy
BBC News (08/06/15) Chris Baraniuk

Computational analysis has identified some patterns in the complex and dynamic strategy of the Islamic State militant group. Researchers in the U.S. say they have used an algorithmic system to analyze 2,200 recorded incidents of the extremist group's activity in the second half of 2014. The artificial intelligence (AI) analysis revealed the jihadists shifted away from large infantry-style operations to roadside bomb attacks when hit with air strikes. The militants also stepped up vehicle-borne bombings prior to large infantry operations. "We believe this relationship is because they want to prevent reinforcements from the Iraqi army getting out of Baghdad," says Arizona State University's Paulo Shakarian. The analysis also revealed a sharp increase in arrests by the jihadists following air strikes, which might be a retaliatory attempt to weed out Syrian intelligence agents. The research could be useful to forces targeting the militants, according to industry experts.
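Shakarian's algorithm is not detailed in the article, but the kind of relationship it surfaces, one event type becoming more likely in the window after another, can be illustrated with a simple temporal-association score over a toy event log; the events, window, and scoring here are generic stand-ins, not the researchers' method or data.

    from datetime import date, timedelta

    # Toy event log of (day, event type); real data would be the 2,200 recorded incidents.
    events = [
        (date(2014, 7, 1), "air_strike"), (date(2014, 7, 2), "ied_attack"),
        (date(2014, 7, 3), "ied_attack"), (date(2014, 7, 8), "vehicle_bomb"),
        (date(2014, 7, 10), "infantry_operation"), (date(2014, 7, 20), "air_strike"),
        (date(2014, 7, 21), "arrests"),
    ]

    def lift(cause, effect, window_days=3):
        """How much more often does `effect` follow `cause` within the window,
        compared with the overall rate of `effect`?"""
        all_days = {d for d, _ in events}
        effect_days = {d for d, e in events if e == effect}
        cause_days = [d for d, e in events if e == cause]
        followed = sum(any(d < f <= d + timedelta(days=window_days) for f in effect_days)
                       for d in cause_days)
        base_rate = len(effect_days) / len(all_days)
        return (followed / len(cause_days)) / base_rate if cause_days else 0.0

    print(lift("air_strike", "ied_attack"))
    print(lift("vehicle_bomb", "infantry_operation"))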


UMD Gets Grant to Study Cryptocurrency
Diamondback (MD) (08/05/15) Samantha Reilly

The University of Maryland (UMD), Cornell University, and the University of California, Berkeley will share a $1,935,783 grant from the U.S. National Science Foundation (NSF) to pursue research on cryptocurrencies, which are digital, encrypted systems of money. Researchers and faculty will collaborate as part of the Initiative for Cryptocurrency and Contracts, says NSF's Andrew Dubrow. The program promotes further research on security and development in the field, which continues to grow despite lingering questions about widespread implementation. "One thing that's nice about [the grant] is that it brings together people with different expertise to work together on this problem," says UMD professor and co-principal investigator for the project Jonathan Katz. The most widely used cryptocurrency is Bitcoin, which has grown in popularity globally since its anonymous creation in 2009. UMD researchers intend to examine the security, protocol, use, and potential improvements of Bitcoin and other cryptocurrencies. "If we don't fully understand the fundamental security properties that it achieves, we could have some potential problems down the road," Katz says.


Catching a Cold: Researchers Identify New Techniques for Diagnosing Respiratory Abnormalities
UGA Today (08/05/15)

A new technique could help clinicians connect the flu to a rare genetic disorder related to cilia, the small hairs that protrude from cells throughout the human body. Ciliary dyskinesia prevents the cilia from moving properly and clearing out mucus, which can cause flu-like symptoms and other respiratory problems. Researchers currently analyze ciliary motion with video microscopy and make determinations based on their own training and experience, a process that is subjective, laborious, and error-prone. The new techniques from University of Georgia computer science professor Shannon Quinn and colleagues automate the final step in the process of diagnosing ciliary dyskinesia. "It provides a quantitative definition that is relevant across clinics, across research institutions, and it's all automated so that we have a direct comparison between motion types," Quinn says. She believes the technique will lead to more accurate early diagnosis of ciliary dyskinesia. "To be able to attach numbers to the motion introduces a higher degree of certainty in diagnosing the abnormalities," Quinn notes. "There is no cross-institutional commonality for making the diagnoses. So our goal was to provide a quantitative baseline for that particular step in the diagnostic process."
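Quinn's published pipeline is not reproduced here, but one common way to "attach numbers to the motion" in microscopy video is to estimate a dominant beat frequency from each pixel's intensity over time; the synthetic clip and frame rate below are placeholders for real recordings.

    import numpy as np

    rng = np.random.default_rng(2)
    fps = 200                       # hypothetical camera frame rate (Hz)
    frames, h, w = 400, 16, 16      # tiny synthetic video

    # Synthetic clip: pixels oscillate at ~12 Hz (a plausible ciliary beat) plus noise.
    t = np.arange(frames) / fps
    video = (np.sin(2 * np.pi * 12 * t)[:, None, None]
             + rng.normal(scale=0.5, size=(frames, h, w)))

    # Per-pixel dominant frequency via an FFT of the temporal signal.
    spectrum = np.abs(np.fft.rfft(video, axis=0))
    freqs = np.fft.rfftfreq(frames, d=1 / fps)
    dominant = freqs[spectrum[1:].argmax(axis=0) + 1]   # skip the DC component

    print("median beat frequency estimate: %.1f Hz" % np.median(dominant))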


Artificial Intelligence Improves Fine Wine Price Prediction
University College London (08/05/15) Bex Caygill

University College London (UCL) researchers have developed an artificial intelligence approach that can more accurately predict the price fluctuations of fine wines. The study found more complex machine-learning methods outperformed the simpler processes commonly used for financial predictions. The researchers applied the new approach to 100 of the most sought-after fine wines from the Liv-ex 100 wine index, and found it was able to predict prices with greater accuracy than existing methods by learning which information among the data was important. "We've created intelligent software that searches the data for useful information, which is then extracted and used, in this case for predicting the values of wines," says UCL professor John Shawe-Taylor. The researchers tested Gaussian process regression and multi-task feature learning, two forms of machine learning that are able to extract the most relevant information from a variety of sources. The study found machine-learning methods based on Gaussian process regression can be applied to all the wines in the Liv-ex 100 with an improvement in average predictive accuracy of 15 percent relative to the most effective of the existing methods. Meanwhile, when multi-task feature learning was applied, the accuracy of predictions increased by 98 percent relative to more standard benchmarks.
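The UCL models themselves are not given in this summary, but Gaussian process regression on a price series looks roughly like the sketch below; the synthetic monthly prices and kernel settings are illustrative, not the Liv-ex 100 data or the study's configuration.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(3)

    # Synthetic stand-in for one wine's monthly price history (36 months).
    months = np.arange(36, dtype=float).reshape(-1, 1)
    prices = (100 + 0.8 * months.ravel() + 5 * np.sin(months.ravel() / 4)
              + rng.normal(scale=2, size=36))

    # Fit a GP on the first 30 months, then predict the last 6 with uncertainty.
    kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=1.0)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(months[:30], prices[:30])

    mean, std = gp.predict(months[30:], return_std=True)
    for m, mu, s in zip(months[30:].ravel(), mean, std):
        print("month %2d: predicted %.1f +/- %.1f" % (m, mu, s))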


Artificial Intelligence Is Already Weirdly Inhuman
Nautilus (08/06/15) David Berreby

Artificial intelligence (AI) systems such as neural networks are capable of incomprehensible behavior, and not knowing why they behave in such a manner is a challenge that must be resolved if AI is to be rendered predictable, especially in how it fails. For example, a neural net may identify two pictures of the same subject that differ only slightly as two different subjects. The problem is that, unlike with humans, it is not yet possible to determine why AIs make such errors, so researchers cannot reverse-engineer the process. "We need to be prepared to accept that computers, even though they're performing tasks that we perform, are performing them in ways that are very different," says Solon Barocas, a postdoctoral research associate at the Center for Information Technology Policy at Princeton University. The layered architecture of a neural net, which is trained by human programmers, enables processing that can detect patterns in vast volumes of data and match those patterns to the right images. However, this architecture means errors or misidentifications cannot be explained, because humans cannot yet determine what computer-created rules or criteria the AI is following. Various research teams have created methods to make neural nets reveal what their architectural layers and even individual neurons are doing when performing operations. University of Wyoming professor Jeff Clune thinks the quirks of neural-net cognition can lead to fascinating insights on how computers think.
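The "two nearly identical pictures, two different answers" failure can be reproduced in miniature even without a deep network: nudge an input a small distance along the direction a model is most sensitive to and its label flips while the input barely changes. The toy linear classifier and synthetic "images" below illustrate the phenomenon; they are not the fooling-image research itself.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)

    # Toy "images": 2,000-pixel vectors whose two classes differ only slightly per pixel.
    d, n = 2000, 500
    X = np.vstack([rng.normal(0.00, 1.0, size=(n, d)),
                   rng.normal(0.05, 1.0, size=(n, d))])
    y = np.array([0] * n + [1] * n)
    model = LogisticRegression(max_iter=2000).fit(X, y)

    # Take a correctly classified class-0 example and nudge it just across
    # the decision boundary along the model's most sensitive direction.
    i = int(np.flatnonzero((y == 0) & (model.predict(X) == 0))[0])
    x = X[i]
    w = model.coef_[0]
    logit = model.decision_function([x])[0]
    x_adv = x - 1.1 * logit / (w @ w) * w

    print("original prediction: ", model.predict([x])[0])
    print("perturbed prediction:", model.predict([x_adv])[0])
    print("mean |pixel change|: %.3f (pixel noise std is 1.0)" % np.abs(x_adv - x).mean())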


Abstract News © Copyright 2015 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe