ACM TechNews


Welcome to the November 25, 2020 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Please note: In observance of the U.S. Thanksgiving holiday, TechNews will not publish on Friday, Nov. 27. Publication will resume on Monday, Nov. 30.

ACM TechNews mobile apps are available for Android phones and tablets (click here) and for iPhones (click here) and iPads (click here).

To view "Headlines At A Glance," hit the link labeled "Click here to view this online" found at the top of the page in the html version. The online version now has a button at the top labeled "Show Headlines."

Dr. Marianne S. Goodman, a psychiatrist at the Veterans Integrated Service Network in the Bronx, NY.
Can an Algorithm Prevent Suicide?
The New York Times
Benedict Carey
November 23, 2020


The U.S. Department of Veterans Affairs (VA) is using machine learning algorithms to help identify veterans at risk of suicide. The VA's Reach Vet algorithm is the first such program used in daily clinical practice, and is designed to generate a new list of high-risk veterans every month. When someone is flagged as at risk, their name appears on the computer dashboard of the local clinic's Reach Vet coordinator, who contacts them to set up a meeting. The algorithm is based on analysis of thousands of veteran suicides; it considers numerous factors in veteran medical records to focus on those with the strongest cumulative association with suicide risk. Initial results indicate that over six months, high-risk veterans more than doubled their use of VA services and had a lower mortality rate with Reach Vet in place, compared to a control group.

Full Article
*May Require Paid Registration

Gov. Tim Walz unveiling the COVIDawareMN mobile tracking app to Minnesotans.
Minnesota's Covid-19 Tracking App Will Notify Contacts Anonymously
Minneapolis Star Tribune
Jeremy Olson
November 24, 2020


Minnesota Gov. Tim Walz this week unveiled COVIDawareMN, a Bluetooth-enabled mobile app that will allow state residents who test positive for Covid-19 to anonymously notify close contacts who might have been exposed to the novel coronavirus. The technology has been available for months on Apple and Google mobile platforms, but state leaders indicated that privacy concerns made them hesitant to use it. Said the University of Minnesota's Shashi Shekhar, "It's a pull-in app, which means the data can come to your smartphone but nothing of value leaves your smartphone. The only thing that leaves your phone is a random number, which cannot be traced back to the phone that generated it."
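Shekhar's description matches the decentralized exposure-notification design used on the Apple and Google platforms. The sketch below is a simplified illustration of that idea only; the class and variable names are invented, and real apps rotate cryptographically derived identifiers and exchange them over Bluetooth.

```python
import secrets

class Phone:
    """Toy model of a phone in a decentralized exposure-notification scheme."""

    def __init__(self, name):
        self.name = name
        self.broadcast_ids = []   # random tokens this phone has emitted
        self.heard_ids = set()    # tokens overheard from nearby phones

    def new_broadcast_id(self):
        # A fresh random token; it cannot be traced back to this phone.
        token = secrets.token_hex(16)
        self.broadcast_ids.append(token)
        return token

    def hear(self, token):
        self.heard_ids.add(token)

    def check_exposure(self, published_positive_ids):
        # Matching happens on-device; nothing identifying leaves the phone.
        return bool(self.heard_ids & set(published_positive_ids))

alice, bob, carol = Phone("alice"), Phone("bob"), Phone("carol")
bob.hear(alice.new_broadcast_id())     # Bob was near Alice
carol.hear(bob.new_broadcast_id())     # Carol was near Bob, not Alice

# Alice tests positive and publishes only her random tokens.
published = alice.broadcast_ids
print(bob.check_exposure(published))    # True: Bob may have been exposed
print(carol.check_exposure(published))  # False: Carol was not near Alice
```

Because only random tokens are published, even the server operating the app cannot learn who was near whom.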

Full Article
Computer Model to Optimize Vaccinations
Washington State University
Tina Hilding
November 24, 2020


Researchers at Washington State University, the University of Virginia, and the U.S. Department of Energy’s Pacific Northwest National Laboratory (PNNL) have developed a fast, scalable algorithm to optimize vaccine distribution in an epidemic network, potentially reducing infections by up to seven-fold. The team used a simulated social contact network for residents of Portland, OR, and tested the algorithm by comparing its advised vaccination strategy with a scenario in which doctors randomly vaccinated the same number of people. The researchers applied network and graph analytics to map the problem, and identified an optimal set of vaccination nodes to minimize the effective number of network infections. PNNL's Mahantesh Halappanavar said, "Speed is a critical factor, and we now have the ability to not just compute the largest number of possible solutions but also compute them quickly and accurately, so that critical problems are addressed in as close to real time as possible."
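As a toy illustration of the general idea (not the team's algorithm), the sketch below shows why choosing vaccination nodes by network structure can beat vaccinating the same number of people at random: removing a highly connected "hub" from a contact graph sharply limits how far an outbreak can spread.

```python
from collections import deque

def outbreak_size(adj, seed, vaccinated):
    """Count people reachable from the seed through unvaccinated contacts."""
    if seed in vaccinated:
        return 0
    seen, queue = {seed}, deque([seed])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen and v not in vaccinated:
                seen.add(v)
                queue.append(v)
    return len(seen)

# Hub-and-spoke contact network: person 0 is in contact with everyone else.
adj = {0: list(range(1, 10))}
for i in range(1, 10):
    adj[i] = [0]
adj[1].append(2)
adj[2].append(1)

hub = max(adj, key=lambda n: len(adj[n]))  # highest-degree node (0)

print(outbreak_size(adj, seed=1, vaccinated=set()))   # 10: everyone infected
print(outbreak_size(adj, seed=1, vaccinated={hub}))   # 2: outbreak contained
print(outbreak_size(adj, seed=1, vaccinated={5}))     # 9: a random pick helps little
```

Scaling this selection problem to realistic city-sized networks, quickly and near-optimally, is what the researchers' graph-analytics approach addresses.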

Full Article
World's Biggest Computer Chip Simulates the Future 'Faster Than the Laws of Physics'
The Independent (UK)
Anthony Cuthbertson
November 23, 2020


Researchers claim the 1.2-trillion-transistor Cerebras CS-1 computer chip, the largest in the world, can predict future events "faster than the laws of physics produce the same result." In a recent test, the chip outperformed a supercomputer in simulating combustion within a power plant, analyzing more than 1 million variables to beat real-time forecasts. Developed by the Cerebras computer systems company in collaboration with the U.S. Department of Energy's National Energy Technology Laboratory, the CS-1's massive computing capacity will be used to train neural networks, and to execute high-fidelity simulations of real-world scenarios.

Full Article
Quantum Memory Milestone Boosts Quantum Internet Future
IEEE Spectrum
Jeremy Hsu
November 23, 2020


At France's Sorbonne University, researchers have demonstrated record storage-and-retrieval efficiency for quantum memory in a 2.5-centimeter-long cesium atom array, a step toward assembling continent-wide quantum communication networks. Earlier innovations achieved 25% efficiency at most, but the Sorbonne's Julien Laurat and colleagues realized 85% to 90% efficiency. The team showed how laser-cooled cesium atoms can store and retrieve single-photon entanglement from entangled light beams. The University of Chicago's Filip Rozpedek said, "In future large-scale quantum networks, multiple such entangled links will need to be generated at the same time in order to be able to later connect them into an end-to-end long-distance link."

Full Article

The prototype technology brings together imaging, processing, machine learning, and memory in one electronic chip powered by light.
Chip Delivers Smarter, Light-Powered AI
Royal Melbourne Institute of Technology
November 18, 2020


A team of Australian, American, and Chinese researchers led by Australia's Royal Melbourne Institute of Technology (RMIT) has developed light-powered artificial intelligence (AI) in a single electronic chip. The nanoscale solution mimics how the human brain processes visual information by integrating core software with image-capturing hardware to facilitate fast on-site decisions. The chip has built-in features for capturing and automatically enhancing images, classifying numbers, and recognizing patterns and images with more than 90% accuracy. RMIT's Taimur Ahmed said, "By packing so much core functionality into one compact nanoscale device, we can broaden the horizons for machine learning and AI to be integrated into smaller applications."

Full Article

A new form of wearable infant-friendly brain mapping technology.
Wearable Imaging Cap Provides Window Into Babies' Brains
University College London
November 18, 2020


Researchers at the U.K.'s University College London (UCL), Cambridge University, the Rosie Hospital, and startup Gowerlabs have demonstrated wearable brain-mapping technology for infants. Their high-density diffuse optical tomography solution for six-month-olds is a cap that delivers harmless doses of red and near-infrared light to generate three-dimensional images of brain activity, so doctors and neuroscientists do not have to put the infants through magnetic resonance imaging. UCL's Elisabetta Maria Frijia said, "The approach ... is safe, silent, and wearable, and can produce images of brain function with better spatial resolution than any other comparable technology. Our hope is that this new generation of technologies will allow researchers from a whole range of fields to learn more about how the healthy infant brain develops and establish new ways of diagnosing, monitoring, and ultimately treating neurological conditions like autism and cerebral palsy."

Full Article
Algorithm Identifies Compound Potentially Useful for Photonic Devices, Biologically Inspired Computers
National Institute of Standards and Technology
November 24, 2020


An artificial intelligence (AI) algorithm developed by a multi-institutional research team identified a potentially useful new material without requiring additional training. The self-learning CAMEO (Closed-Loop Autonomous System for Materials Exploration and Optimization) machine learning algorithm could help reduce the number of coordinated experiments and theoretical searches performed by scientists in search of new materials. It operates in a closed loop, determining which experiment to run, conducting the experiment, collecting the data, then determining which experiment to run next. Given 177 potential materials to investigate, CAMEO performed 19 different experimental cycles in 10 hours, resulting in its discovery of the new material.
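The closed loop described above can be caricatured in a few lines of code. Everything in this sketch is invented for illustration, including the stand-in "measurement" and the simple pure-exploration selection rule; the real system uses more sophisticated machine learning to decide which experiment is most informative next.

```python
def measure(x):
    # Stand-in for running a physical experiment on candidate material x;
    # the peak near 0.7 represents the best (unknown) composition.
    return -(x - 0.7) ** 2

candidates = [i / 176 for i in range(177)]   # 177 candidate materials
results = {}

for cycle in range(19):                      # 19 experimental cycles
    unmeasured = [c for c in candidates if c not in results]
    if not results:
        nxt = unmeasured[0]
    else:
        # Decide which experiment to run next: here, the candidate
        # farthest from anything measured so far (pure exploration).
        nxt = max(unmeasured,
                  key=lambda c: min(abs(c - m) for m in results))
    results[nxt] = measure(nxt)              # run it, collect the data

best = max(results, key=results.get)
print(round(best, 4))                        # best composition found so far
```

After 19 of a possible 177 experiments, the loop has already homed in close to the optimum, which is the point of closing the loop: each result informs the choice of the next experiment.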

Full Article

A GMC car dealership in New Jersey.
GM to Sell Car Insurance, Using Data on Your Driving to Set Prices
The Wall Street Journal
Mike Colias
November 18, 2020


General Motors (GM) is launching a car insurance business that will set insurance rates according to data on driver behavior, collected remotely through the OnStar service installed in all GM vehicles sold in North America. Enrollees will agree to have their driving habits tracked, with less-expensive rates allocated to those who comply with speed limits, avoid sudden stops, and practice other good driving habits. OnStar Insurance Services' Andrew Rose said GM this week initiated a pilot of the program for its Arizona employees, and plans a nationwide rollout sometime next year. Said Rose, “Who knows more about your vehicle than the people who manufactured it?”

Full Article
*May Require Paid Registration
Computer Vision Can Estimate Calorie Content of Food at a Glance
New Scientist
Chris Stokel-Walker
November 20, 2020


Researchers at Germany's Karlsruhe Institute of Technology (KIT) have developed a computer-vision technique for estimating the caloric content of meals from photos. The KIT team tapped the DenseNet neural network to cross-reference images of food with a database of 308,000 photos from 70,000 recipes on a German cooking website; the method predicts meals' macronutrients based on their ingredients. KIT's Robin Ruede said, "We assume they cooked the recipe correctly, take the nutritional values, and make the model learn the correlation between the nutritional information and that image." The model's caloric estimates are off by an average of 32.6% for previously unseen images, but Ruede said it can differentiate between categories of high-calorie and low-calorie foods.

Full Article
DeepER Tool Uses Deep Learning to Better Allocate Emergency Services
Binghamton University News
Chris Kocher
November 17, 2020


Binghamton University researchers used deep learning methods to analyze the amount of time it takes for emergency services personnel to deem an incident resolved, and to suggest better resource allocation when needed. Their DeepER model uses an encoder-decoder sequence-to-sequence architecture built on recurrent neural networks. The Binghamton team used 10 years of publicly available data from New York City, segmented to reflect emergency type and duration. Binghamton's Anand Seetharam said, "Multiple events can occur at the same time, and we would expect the timetable to resolve those incidents to be longer because the personnel, resources, and equipment are going to be shared across the incident sites. That is reflected in the resolution times. Then we use that to predict what's going to happen in the future."

Full Article
Google Launches AI Platform to Help Cities Plant More Trees
Express Computer (India)
November 23, 2020


Google has launched an artificial intelligence (AI) platform that integrates AI with aerial imagery in order to help cities view their current tree canopy coverage and plan future tree-planting initiatives. Los Angeles is the first testbed for Tree Canopy Lab, a component of the Environmental Insights Explorer platform, which was created to ease cities' ability to measure, plan, and cut carbon emissions and pollution. The platform utilizes a specialized tree-detection AI that automatically scans aerial images, identifies the presence of trees, and generates a map that details the density of tree cover. The search engine giant said Tree Canopy Lab will be made available to hundreds of cities in the future.

Full Article

Showing Robots How to Drive a Car ... in Just a Few Easy Lessons
University of Southern California
Caitlin Dawson
November 18, 2020


Researchers at the University of Southern California (USC) Viterbi School of Engineering have developed a system that enables robots to autonomously learn complex tasks from a handful of demonstrations, even imperfect ones. The system assesses the quality of each demonstration, learning from observed mistakes and successes. The USC Viterbi team applied signal temporal logic to evaluate each demonstration's quality and automatically rank them, creating innate rewards. USC Viterbi's Stefanos Nikolaidis said this common-sense approach helps robots understand which parts of the demonstration are positive and negative, as humans do. Said Nikolaidis, "If we want robots to be good teammates and help people, first they need to learn and adapt to human preference very efficiently. Our method provides that."

Full Article
Making Databases Work: The Pragmatic Wisdom of Michael Stonebraker
ACM Queue Case Studies

Association for Computing Machinery

1601 Broadway, 10th Floor
New York, NY 10019-7434
1-800-342-6626
(U.S./Canada)



ACM Media Sales

If you are interested in advertising in ACM TechNews or other ACM publications, please contact ACM Media Sales or (212) 626-0686, or visit ACM Media for more information.

To submit feedback about ACM TechNews, contact: technews@hq.acm.org

Unsubscribe

About ACM | Contact us | Boards & Committees | Press Room | Membership | Privacy Policy | Code of Ethics | System Availability | Copyright © 2020, ACM, Inc.