As the world reeled from the COVID-19 pandemic, tech-powered innovation quickly became laser-focused on understanding and confronting the deadly disease.
“There was a remarkable moment, which seemed to happen almost overnight, when researchers who had been working on other applications changed quickly to address COVID directly,” said Dr. Russell Caflisch, director of New York University’s Courant Institute of Mathematical Sciences.
Scientists say there was a widespread desire among medical researchers to work collaboratively across disciplines and share reliable data, so as to quickly understand the components of the disease and create therapeutics to control it. That effort would take computing power – lots of it – as well as a critical ingredient that allows technology to be confidently used as a tool to transform society: trust.
Trust must be hard-wired into the answers to our most pressing questions. Trust turns raw data into insight and insight into action, whether that’s in accelerating vaccine development to fight COVID-19, supercomputer-powered research in climate change, harnessing the scale of the cloud to accelerate online learning, or using lightning-fast processing to uncover the origins of the universe. Trust transforms society.
That was the case at SciNet4Health, a network that is part of the SciNet supercomputing consortium based at the University of Toronto. SciNet4Health's role in addressing the pandemic was elevated by AMD’s donation of one petaflop of dedicated processing power, capable of a quadrillion calculations per second.
Advancements in vaccine development, drug discovery, and genomics research are often dependent on such high-performance computing to handle complex calculations that regular computers simply can’t manage. Blending molecular, cellular, and clinical parameters from patients and feeding them into machine learning algorithms could lead to big advances very quickly.
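To make the idea concrete, here is a minimal sketch of that kind of blending, using scikit-learn on entirely synthetic data; the feature groups and outcome label are hypothetical placeholders, not SciNet4Health’s actual pipeline.

```python
# Minimal sketch: concatenate molecular, cellular, and clinical features
# into one matrix and train an off-the-shelf classifier on it.
# All data and feature groups below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients = 500

molecular = rng.normal(size=(n_patients, 20))  # e.g., gene-expression levels
cellular = rng.normal(size=(n_patients, 10))   # e.g., immune-cell counts
clinical = rng.normal(size=(n_patients, 5))    # e.g., age, vitals, lab values

# "Blending" here is simply column-wise concatenation of the groups.
X = np.hstack([molecular, cellular, clinical])
y = rng.integers(0, 2, size=n_patients)        # e.g., severe vs. mild outcome

model = GradientBoostingClassifier()
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```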
“The need to do computation on health information is greater than ever,” said Daniel Gruner, chief technology officer at SciNet. “You need to process large amounts of data and do large simulations. If you’re using AI and machine learning to make sense of huge and diverse data, you need these big computers. It can’t be done on a small machine.”
The resources received from AMD were heavily weighted toward AMD Instinct™ accelerators, which can run deep-learning calculations much faster than a regular CPU. That allowed SciNet4Health researchers to quickly generate and analyze data and then build more sophisticated medical models, which can lead to scientific breakthroughs in diagnostics and therapeutics.
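As a rough illustration of what offloading to an accelerator looks like in practice, here is a minimal PyTorch sketch; on ROCm builds of PyTorch, AMD GPUs are addressed through the same `cuda` device interface used below. The model and batch sizes are illustrative, not SciNet4Health’s workload.

```python
# Minimal sketch of accelerator offload in PyTorch. The network and
# batch are toy examples; the point is where the math runs.
import torch
import torch.nn as nn

# Use the GPU if one is visible; fall back to the CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(
    nn.Linear(1024, 512), nn.ReLU(),
    nn.Linear(512, 2),
).to(device)                      # weights now live in accelerator memory

batch = torch.randn(4096, 1024, device=device)
with torch.no_grad():
    logits = model(batch)         # the matrix math runs on the accelerator
print(logits.shape, logits.device)
```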
High-performance models that process massive amounts of data and couple them to simulations to make predictions are a potential game-changer. “Jobs that used to take weeks to run on a supercomputer can now happen in a matter of hours,” said Alex Mihailidis, U of T’s associate vice-president, international partnerships.
SciNet is part of AMD’s $20-million COVID-19 HPC Fund, which provides research institutions with computing resources to accelerate medical research on COVID-19 and other diseases. HPC Fund researchers work on a wide variety of projects, from evolutionary modeling of the COVID-19 virus, to understanding how the spike protein behaves just before the first interaction between the coronavirus and the human cell, to large-scale fluid-dynamics simulations of COVID-19 droplets as they travel through the air.
The fund helps researchers deepen their assessment of COVID-19 and improve their understanding of genomics, vaccine development, transmission science, and modeling, strengthening our ability to respond to potential future threats to global health. Said Mihailidis: “We can’t wait a year or two for these new models to come out. We need them right now.” For a world in need of trustworthy life-saving therapeutics, technology provides that powerful accelerant.
A Giant Jigsaw Puzzle Built at Blinding Speed
Reliable high-performance computing also helps solve some of our most existential questions – like the origin of the universe.
At the CERN laboratory in Geneva, Switzerland, the Large Hadron Collider (LHC) pushes protons or ions to near the speed of light to better understand what happened just after the Big Bang, including the phenomena that gave particles their mass. The LHCb experiment and the data it gathers are a huge step toward understanding how the universe works.
The LHC is a 17-mile-long underground ring housed in a pipe-like structure that is cooled to ‑271.3°C – a temperature colder than outer space. Inside the accelerator, two high-energy particle beams travel at close to the speed of light before they are made to collide. They are guided around the accelerator ring by a strong magnetic field created by thousands of superconducting magnets. Just before they hit, another magnet is used to "squeeze" the tiny particles closer together to increase the chances of collisions. It’s akin to firing two needles at each other from more than six miles apart, with such precision that they meet halfway.
The LHCb detector requires fast processing of the data, high-bandwidth access to lots of memory, and very rapid connections to the I/O devices that link the servers. Raw data arrives at an astonishing 40 TB per second, and AMD EPYC™ 7742 processors collect and crunch it at a blinding speed.
“It’s like putting together a giant jigsaw puzzle, with data coming from different directions and which is being handled by different servers,” said Niko Neufeld, deputy project leader at CERN. “No single server can do the processing.”
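A back-of-the-envelope calculation shows why. Assuming, purely for illustration, a 200-gigabit network link per server (a hypothetical figure, not a published LHCb specification), just receiving 40 TB per second would occupy on the order of 1,600 servers:

```python
# Back-of-the-envelope arithmetic for the 40 TB/s figure above.
# The per-server link speed is an assumption for illustration only.
raw_rate_tb_s = 40                 # detector output, from the article
link_gb_s = 200 / 8                # assumed 200 Gbit/s NIC -> 25 GB/s
servers = raw_rate_tb_s * 1000 / link_gb_s
print(f"servers needed just to receive the data: {servers:.0f}")  # 1600
```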
This raw flow of data is first handled by custom FPGA cards that do the initial interpretation. “Every server is mapped to a geographic piece of the detector,” said Neufeld. “But then you need to get all the data pieces together in a single location because only then can you do a meaningful calculation on this stuff.”
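A toy sketch of that gathering step, often called event building: fragments from different detector regions carry an event ID and are regrouped so that one place holds every piece of a given collision. The region labels below are borrowed from LHCb subdetector names, but the data structures are purely illustrative.

```python
# Toy event builder: group detector fragments by event ID, then keep
# only events for which every expected region has reported in.
from collections import defaultdict

fragments = [
    {"event_id": 7, "region": "VELO", "payload": b"..."},
    {"event_id": 7, "region": "RICH", "payload": b"..."},
    {"event_id": 8, "region": "VELO", "payload": b"..."},  # RICH missing
]

events = defaultdict(dict)
for frag in fragments:
    events[frag["event_id"]][frag["region"]] = frag["payload"]

expected = {"VELO", "RICH"}
complete = {eid: parts for eid, parts in events.items()
            if expected <= parts.keys()}
print(f"events ready for processing: {sorted(complete)}")  # [7]
```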
Much of this data is not relevant, he notes, so the first job is sifting through the information as it arrives and pulling out the results that have the highest probability of providing critical insight. This is a hugely taxing high-performance computing task with incredible insights waiting to be unlocked. The LHC has already given us discoveries as important as the Higgs boson, a fundamental particle that provides insight into the workings of the universe and whose discovery garnered the 2013 Nobel Prize in Physics.
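The sifting Neufeld describes is a selection, or "trigger," step: scan the incoming events and keep only the rare ones that pass a set of cuts. A minimal sketch, with entirely hypothetical event fields and thresholds:

```python
# Toy trigger: keep only high-energy, track-rich events from a stream.
# The event fields and cut values are hypothetical stand-ins for the
# real reconstruction and selection logic.
from dataclasses import dataclass
import random

@dataclass
class Event:
    total_energy: float   # summed energy deposit (arbitrary units)
    n_tracks: int         # reconstructed particle tracks

def interesting(ev: Event) -> bool:
    return ev.total_energy > 100.0 and ev.n_tracks >= 4

stream = (Event(random.expovariate(1 / 30), random.randint(0, 8))
          for _ in range(1_000_000))
kept = sum(1 for ev in stream if interesting(ev))
print(f"kept {kept} of 1,000,000 events")   # only a small fraction survive
```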
It’s just a sampling of the advances possible when high-performance computing serves as a powerful tool for the inquisitiveness of the human mind. “It’s a really exciting time to be doing science,” said Chris Danforth, director of the Vermont Advanced Computing Core at the University of Vermont. “The computer is now taking us to places we never thought we’d go.” Through the combined power of human ingenuity and reliable high-performance computing, those dreams and destinations are unlimited.
This story was produced by WIRED Brand Lab for AMD.