Simulations, data analysis, and artificial intelligence (AI) can be computationally intensive, but you do not necessarily need a supercomputer to tackle your problems. Originally designed to make video games look better, graphics processing units (GPUs) are now used for powerful research computing. Our presentation provides background on the key differences between GPUs and traditional central processing units (CPUs), describes the types of problems on which each processor performs best (and worst), and gives examples from our own work with GPU processing. Our first example is source term reconstruction, in which we use sensor and weather data to calculate the most likely release location, quantity, and duration. We also discuss how we combine GPU processing with a Markov chain Monte Carlo (MCMC) method to tackle problems previously thought to be too computationally intensive and slow. We use the MCMC technique to develop complex epidemiological models that project future COVID-19 cases and deaths for decision makers. These projections cover approximately 400 jurisdictions with 4 million simulations per jurisdiction; in total, our GPU MCMC technique completes approximately 1.6 billion simulations in under 45 minutes on a developer's workstation. Finally, we coded a GPU implementation of a plume dispersion model. Running on a single-threaded CPU, 300,000 plume simulations completed in 319 seconds; executing 12 million plume simulations on 4 GPUs took 52 seconds, approximately 247 times faster per simulation than the single-threaded CPU solution. While GPUs are not a silver bullet that solves every problem faster, we discuss the types of problems suited to GPU processing in the hope of inspiring others to use this technology to solve challenging problems.
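The key pattern behind GPU MCMC is data parallelism: many independent chains advance in lockstep, so each Metropolis step becomes one array operation over all chains at once. The following minimal sketch illustrates that pattern only; it is not the authors' epidemiological model. NumPy stands in for a GPU array library (CuPy exposes a largely compatible interface), and the target density, chain counts, and step size are hypothetical choices for illustration.

```python
# Data-parallel Metropolis MCMC sketch: every chain proposes, evaluates,
# and accepts/rejects in a single vectorized operation per step.
import numpy as np

def log_target(x):
    # Hypothetical target: unnormalized log-density of a standard normal.
    return -0.5 * x**2

def run_chains(n_chains=10000, n_steps=500, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_chains)        # one current state per chain
    logp = log_target(x)
    accepted = 0
    for _ in range(n_steps):
        prop = x + step * rng.normal(size=n_chains)   # all proposals at once
        logp_prop = log_target(prop)
        # Vectorized Metropolis accept/reject across every chain.
        accept = np.log(rng.random(n_chains)) < (logp_prop - logp)
        x = np.where(accept, prop, x)
        logp = np.where(accept, logp_prop, logp)
        accepted += int(accept.sum())
    return x, accepted / (n_chains * n_steps)

samples, acceptance_rate = run_chains()
```

Because the chains never communicate, throughput scales with the number of parallel lanes the hardware provides, which is why this workload maps so well onto GPUs.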
Keywords
M&S
Additional Keywords
GPU, Parallel Processing, Markov Chain Monte Carlo