Welcome to the November 20, 2020 edition of ACM TechNews, providing timely information for IT professionals three times a week.

ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.

2020 ACM Gordon Bell Prize Awarded for ML Method That Achieves Record Molecular Dynamics Simulation
Association for Computing Machinery
November 19, 2020


ACM named a U.S.-Chinese team of researchers recipients of the 2020 ACM Gordon Bell Prize for Deep Potential Molecular Dynamics (DPMD), a machine learning (ML)-based protocol that can simulate a trajectory of more than 1 nanosecond per day for a system of over 100 million atoms. The team claimed its method realizes the first efficient MD simulation of 100 million atoms with ab initio accuracy. The researchers developed a highly optimized code (GPU Deep MD-Kit), which they ran on Oak Ridge National Laboratory's Summit supercomputer. GPU Deep MD-Kit scaled efficiently to the full Summit system, achieving 91 petaflops in double precision and 162/275 petaflops in mixed-single/half precision. The authors said, "The great accomplishment of this work is that it opens the door to simulating unprecedented size and time scales with ab initio accuracy. It also poses new challenges to the next-generation supercomputer for a better integration of machine learning and physical modeling."
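
For readers unfamiliar with molecular dynamics, the workload being accelerated is, at its core, a timestep integration loop. A minimal Python sketch of the standard velocity-Verlet step follows; the force_fn callable is a hypothetical stand-in for the machine-learned Deep Potential, which in DPMD replaces a hand-written force field and is evaluated across thousands of GPUs.

    import numpy as np

    def velocity_verlet_step(pos, vel, forces, masses, dt, force_fn):
        # One classical MD timestep; force_fn maps atom positions to forces
        # (a stand-in here for the machine-learned potential).
        acc = forces / masses[:, None]
        pos = pos + vel * dt + 0.5 * acc * dt**2        # advance positions
        new_forces = force_fn(pos)                      # re-evaluate forces
        vel = vel + 0.5 * (acc + new_forces / masses[:, None]) * dt
        return pos, vel, new_forces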

Full Article

UC San Diego Leads Research That Earns Gordon Bell Special Prize
University of California, San Diego
November 19, 2020


ACM named a team led by the University of California, San Diego's Rommie Amaro and Argonne National Laboratory's Arvind Ramanathan as recipients of its Gordon Bell Special Prize for High Performance Computing-Based Covid-19 Research. The authors built an artificial intelligence (AI)-based workflow to more efficiently model the SARS-CoV-2 spike protein, and scaled it up to run on Oak Ridge National Laboratory's Summit supercomputer. The team initially optimized the atomic-movement modeling codes Nanoscale Molecular Dynamics and Visual Molecular Dynamics on smaller cluster systems, then ran them on Summit. Layering and combining the experimental and simulation data through the AI-based protocol modeled the virus and its mechanisms in unprecedented detail. "Our methods of computing allow us to ... see detailed intricacies of this virus that are useful for understanding not only how it behaves," Amaro said, "but also its vulnerabilities, from a vaccine development standpoint, and a drug targeting perspective."

Full Article

Cybathlon Tournament Showcases Life-Changing Tech for People with Disabilities
CNN
Aaliyah Harris
November 18, 2020


The Swiss Federal Institute of Technology, Zurich's Cybathlon global championship has competitors with physical disabilities use state-of-the-art assistive technologies to perform everyday tasks. The second Cybathlon went virtual because of the pandemic, with academic and private-sector teams competing in six disciplines. Events included the Powered Wheelchair Race, which featured the low-cost, semi-autonomous A.EYE.Drive wheelchair from the U.K.'s Imperial College London (ICL); the eye movement-driven chair can detect surrounding objects and map routes around static obstacles. Another ICL team entered a bionic arm prosthesis that uses sensory feedback to recognize certain rough-feeling objects, with intuitive hand and wrist control achieved via sensors that record muscle activity. Cybathlon's Annegret Kern said, "We use the competition for technology developments, to showcase what is needed for people with disabilities and to promote inclusion."

Full Article

System Brings Deep Learning to Internet of Things Devices
MIT News
Daniel Ackerman
November 13, 2020


Massachusetts Institute of Technology researchers have developed a system that could implement deep learning within Internet of Things (IoT) devices. The MCUNet system designs compact neural networks that supply unprecedented speed and accuracy amid memory and processing constraints. MCUNet features two critical co-designed elements for running neural networks on microcontrollers—TinyEngine, an inference engine that directs resource management; and TinyNAS, a neural architecture search algorithm that produces custom-sized networks. The University of California at Berkeley's Kurt Keutzer said this development "extends the frontier of deep neural network design even farther into the computational domain of small energy-efficient microcontrollers." He added that MCUNet could "bring intelligent computer-vision capabilities to even the simplest kitchen appliances, or enable more intelligent motion sensors."
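
As a loose illustration of the constraint TinyNAS optimizes under, the Python sketch below prunes a space of candidate network configurations by a microcontroller's memory budget before any accuracy search; the names, numbers, and memory proxy are hypothetical, not MCUNet's actual procedure.

    # Hypothetical sketch: prune an architecture search space by SRAM budget.
    SRAM_BUDGET_KB = 320  # e.g., a mid-range Cortex-M class microcontroller

    def peak_activation_kb(width_mult, resolution, base_channels=16):
        # Crude proxy: size of the first block's activation map, int8 values.
        channels = int(base_channels * width_mult)
        return resolution * resolution * channels / 1024

    candidates = [(w, r) for w in (0.35, 0.5, 0.75, 1.0)
                  for r in (96, 128, 160, 224)]
    feasible = [c for c in candidates
                if peak_activation_kb(*c) <= SRAM_BUDGET_KB]
    # A second stage would then search layer-level choices within this space.
    print(feasible)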

Full Article

An Angel on the Shoulder of Your Teenage Driver (or at Least a Snitch)
The New York Times
Paul Stenquist
November 19, 2020


Many automakers now offer driving monitors as optional or standard equipment that warns parents when their teenagers drive poorly. Since introducing a monitor on its 2016 Malibu, General Motors has extended the equipment to many of its vehicles. The 2021 Trailblazer's monitor is activated by navigating through the dashboard display's menu to the "teen driver" section, with parental controls accessed and set via a personal identification number (PIN). The system records data (speed, distance, collision alerts, and more) on a report card that can be viewed only by entering the PIN after the trip ends. Ford's MyKey system offers similar functionality along with a low-fuel reminder, while Hyundai's Blue Link Vehicle Safeguards Alerts lets parents limit vehicle speed, hours of operation, and range.

Full Article
*May Require Paid Registration
3D Bioprinted Heart Provides Tool for Surgeons
Carnegie Mellon University
Dan Carroll
November 18, 2020


Carnegie Mellon University (CMU) researchers have fabricated the first full-size three-dimensionally (3D) bioprinted human heart model from magnetic resonance imaging data, using the Freeform Reversible Embedding of Suspended Hydrogels (FRESH) method. FRESH 3D printing injects bioink into a bath of hydrogel, which supports the object as it prints; gentle heating afterward melts away the support gel, leaving only the bioprinted object. Bioprinting a full-scale human heart required a new 3D printer tailored to hold a sizable gel support bath, along with minor software modifications to maintain the speed and fidelity of the print. CMU's Adam Feinberg said, "We can now build a model that not only allows for visual planning, but allows for physical practice. The surgeon can manipulate it and have it actually respond like real tissue, so that when they get into the operating site they've got an additional layer of realistic practice in that setting."

Full Article

AI Makes 'Smart' Apps Faster, More Efficient
University of Saskatchewan
Federica Giannelli
November 12, 2020


An artificial intelligence (AI) computer model developed by Hao Zhang at Canada's University of Saskatchewan (USask) could potentially make "smart" applications safer, faster, and more efficient. Zhang said his model segments AI computational processes into smaller "chunks," in order to help run apps locally on the phone rather than on external servers. He ran simulations to compare the model to those used on modern phone systems, and determined that it can concurrently run multiple apps 20% faster than current commercial devices, doubling battery life. Zhang also observed that AI processes can manage data efficiently using smaller four-bit sequences with variable length, while current devices use a fixed 32-bit sequence to process data more accurately, at the cost of speed and memory storage efficiency. USask's Seok-Bum Ko said, "Shorter sequences can be used to save power and increase speed performance, but can still guarantee enough accuracy for the app to function."
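
To see why shorter bit sequences save memory and power, consider symmetric 4-bit quantization: each float32 value costs 32 bits, while a 4-bit integer plus one shared scale factor costs roughly an eighth of that. The Python sketch below is a generic illustration of the idea, not Zhang's variable-length scheme.

    import numpy as np

    def quantize_int4(x):
        # Map floats onto the 16 integer levels [-8, 7] with a shared scale.
        scale = np.max(np.abs(x)) / 7.0
        q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
        return q, scale

    def dequantize_int4(q, scale):
        return q.astype(np.float32) * scale

    x = np.random.randn(8).astype(np.float32)
    q, s = quantize_int4(x)
    print(np.max(np.abs(x - dequantize_int4(q, s))))  # small rounding error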

Full Article
Ralph Lauren's Polo Player Goes Scannable and AR for the Holidays
The Wall Street Journal
Ann-Marie Alcántara
November 18, 2020


Fashion retailer Ralph Lauren is using scanning and augmented reality (AR) to enhance the shopping experience this holiday season. Consumers can use Snapchat's Snap camera to scan Ralph Lauren's polo player and pony logo, generating an in-app AR experience that includes a lens depicting gift boxes with red ribbons. Users can tap these virtual boxes to summon an animated logo, then capture a photo featuring the ribbon to send to friends. Gartner's Nicole Greene said the Covid-19 pandemic has pushed retailers to create more digital experiences, and Ralph Lauren's scannable logo offers consumers a novel way to feel like insiders familiar with the brand. Ralph Lauren also is adding an online game and a virtual store that emulates its shops in New York and elsewhere, and said these "merchantainment" initiatives serve the firm's agenda to become a digital-first company that prioritizes the brand.

Full Article
*May Require Paid Registration
Researcher Sets Record for Quantum Chemistry Calculation
Australian National University
November 17, 2020


Australian National University's Giuseppe Barca has broken the world record for the largest Hartree-Fock calculation, using a supercomputer to predict the quantum mechanical properties of large molecular systems. Barca ran his algorithm on the Summit system at the U.S. Department of Energy's Oak Ridge National Laboratory. The calculation ran for slightly more than 30 minutes, using 26,268 Nvidia V100 graphics processing units, and modeled 20,063 water molecules at a previously impossible resolution. "The new algorithm brings quantum mechanical calculations to the next level in terms of molecular sizes, enabling us to reach scales so large they belong to the domains of biology," Barca said. "Such computational predictions open entirely new research horizons in areas where experiments are too expensive or simply impracticable. This result sets the benchmark for comparison for years to come."
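
For context, a Hartree-Fock calculation amounts to solving the Roothaan-Hall matrix equations self-consistently over a chosen basis set. In standard textbook notation (LaTeX):

    F(C)\,C = S\,C\,\varepsilon

Here F(C) is the Fock matrix, which depends on the orbital coefficient matrix C being solved for, S is the basis overlap matrix, and \varepsilon is the diagonal matrix of orbital energies; the equation is re-solved until C stops changing. Because the cost of building F grows rapidly with the number of basis functions (formally fourth-order for naive two-electron integral evaluation), pushing this iteration to 20,063 water molecules is what makes the result a record.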

Full Article

Autonomous Green Robot Cars to Deliver Medicine Around London
Interesting Engineering
Chris Young
November 16, 2020


A fleet of autonomous, electrically powered green robot vehicles has started delivering medicine to care homes in London's Hounslow borough as part of a public trial. The Kar-go, from U.K. startup Academy of Robotics, is the first custom-built autonomous delivery vehicle to conduct last-mile deliveries on public roads in Britain. The robot car can travel at up to 96 kilometers (60 miles) per hour, carry up to 48 parcels, and use artificial intelligence to sort parcels and calculate the fastest delivery route. The initial trials will have human operators sitting inside the Kar-gos before the vehicles eventually transition to fully autonomous driving. The vehicle drives itself between the sender's and the recipient's addresses, with a smartphone application alerting the recipient upon arrival. A robotic conveyor within the Kar-go enables contact-free parcel handover.
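
The route-calculation step the article mentions is classic shortest-path territory. A minimal Python sketch using Dijkstra's algorithm over estimated travel times follows; the road graph and all names are invented for illustration, as the Kar-go's actual routing logic has not been published.

    import heapq

    def fastest_route(graph, start, goal):
        # Dijkstra's shortest path over travel times; 'graph' maps each node
        # to a list of (neighbor, minutes) edges.
        queue, seen = [(0, start, [start])], set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, minutes in graph.get(node, []):
                if nxt not in seen:
                    heapq.heappush(queue, (cost + minutes, nxt, path + [nxt]))
        return None

    roads = {"depot": [("A", 5), ("B", 9)],
             "A": [("B", 2), ("care_home", 8)],
             "B": [("care_home", 3)]}
    print(fastest_route(roads, "depot", "care_home"))
    # (10, ['depot', 'A', 'B', 'care_home'])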

Full Article
Could Your Robotic Vacuum Be Listening to You?
University of Maryland Institute for Advanced Computer Studies
November 18, 2020


Researchers at the University of Maryland and the National University of Singapore showed that popular robotic household vacuums can be remotely hacked to serve as microphones. The team gathered data from the LiDAR navigation system in one such vacuum, and applied signal processing and deep learning methods to retrieve speech and identify TV programs playing in the same room as the device. The vacuum's LiDAR scans its surroundings via laser, and senses the light scattered back by objects of irregular shape and density. The researchers hacked the machine to control the position of the laser and transmit the sensed data to laptops through Wi-Fi; they then passed the signals through deep learning algorithms trained either to match human voices or to identify musical sequences from TV shows. Their LidarPhone system identified and matched spoken numbers with 90% accuracy, and identified TV shows from 60 seconds' worth of recording with over 90% accuracy.
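
The researchers' pipeline is specialized, but the general pattern (convert a noisy one-dimensional sensed signal into spectral features, then hand those to a trained classifier) can be sketched in Python; everything below is illustrative, not the LidarPhone code.

    import numpy as np

    def spectral_features(signal, frame=256, hop=128):
        # Windowed magnitude spectra of a 1-D signal (a crude spectrogram).
        frames = [signal[i:i + frame] * np.hanning(frame)
                  for i in range(0, len(signal) - frame, hop)]
        return np.abs(np.fft.rfft(np.stack(frames), axis=1))

    # Stand-in for light-intensity readings reflected off a vibrating object:
    features = spectral_features(np.random.randn(16000))
    # These features would feed a deep network trained to recognize spoken
    # digits or the audio signatures of specific TV shows.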

Full Article
U.S. Senate Passes Bill to Secure Internet-Connected Devices Against Cyber Vulnerabilities
The Hill
Maggie Miller
November 18, 2020


The U.S. Senate this week unanimously passed the bipartisan Internet of Things Cybersecurity Improvement Act to strengthen the cybersecurity of Internet-connected devices. The legislation mandates that all Internet-connected devices purchased by the federal government must comply with minimum security recommendations from the National Institute of Standards and Technology. Public-sector providers of such devices also must alert federal agencies of any device vulnerabilities that could expose the government to cyberattack. Bill co-sponsor Sen. Cory Gardner (R-CO) said, "Most experts expect tens of billions of devices operating on our networks within the next several years as the Internet of Things landscape continues to expand. We need to make sure these devices are secure from malicious cyberattacks as they continue to transform our society and add countless new entry points into our networks." The legislation was passed unanimously by the House in September and now heads to President Trump for a signature.

Full Article
AI Vision Could Be Improved with Sensors That Mimic Human Eyes
New Scientist
David Hambling
November 11, 2020


Oregon State University (OSU) researchers have developed a sensor that could improve artificial intelligence (AI) vision by using perovskite to mimic the human eye's light-response mechanism. Perovskite's capacitance changes when it is illuminated, and when the material is sandwiched between a pair of electrodes, it generates an electrical spike as it charges and discharges. Only a change in illumination induces a further response, and OSU's John Labram said the data compression realized by the sensor can substitute for the onerous digital processing often found in smartphones. He suggested such devices could eventually enable AI systems to watch moving scenes and learn in real time, while a more immediate application would be in smart vision systems like self-driving cars and robotics. Anil Anthony Bharath at the U.K.'s Imperial College London said, "The idea of using very low-complexity circuitry to implement computation—such as orientation sensitivity—would allow on-chip visual processing with relatively low power."
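
The behavior described, producing output only when illumination changes, parallels event-based vision sensors. A toy Python model of a change-sensitive pixel array follows (illustrative only, not the OSU device):

    import numpy as np

    def event_response(frames, threshold=0.1):
        # Emit a signed 'spike' only where brightness changes exceed the
        # threshold; static regions produce no output at all.
        events = []
        prev = frames[0]
        for frame in frames[1:]:
            diff = frame - prev
            events.append(np.where(np.abs(diff) > threshold, np.sign(diff), 0))
            prev = frame
        return events  # sparse by construction: this is the data compression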

Full Article

Association for Computing Machinery

1601 Broadway, 10th Floor
New York, NY 10019-7434
1-800-342-6626
(U.S./Canada)



ACM Media Sales

If you are interested in advertising in ACM TechNews or other ACM publications, please contact ACM Media Sales at (212) 626-0686, or visit ACM Media for more information.

To submit feedback about ACM TechNews, contact: [email protected]