Advances in modeling and algorithms, combined with growth in computing resources, have enabled simulations of multiphysics–multiscale phenomena that can greatly enhance our scientific understanding. However, on currently available high-performance computing (HPC) resources, maximizing the scientific outcome of simulations requires many trade-offs. In this paper we describe our experiences in running simulations of the explosion phase of Type Ia supernovae on the largest available platforms. The simulations use FLASH, a modular, adaptive mesh, parallel simulation code with a wide user base. The simulations use multiple physics components: hydrodynamics, gravity, a sub-grid flame model, a three-stage burning model, and a degenerate equation of state. They also use Lagrangian tracer particles, which are then post-processed to determine the nucleosynthetic yields. We describe the simulation planning process, and the algorithmic optimizations and trade-offs that were found to be necessary. Several of the optimizations and trade-offs were made during the course of the simulations as our understanding of the challenges evolved, or when simulations went into previously unexplored physical regimes. We also briefly outline the anticipated challenges of, and our preparations for, the next-generation computing platforms.
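To make the role of the Lagrangian tracer particles concrete, the sketch below shows the general technique of advecting passive tracers through a grid-based velocity field, recording their trajectories for later post-processing. This is a minimal illustration only, assuming a uniform 2-D grid, bilinear interpolation, and forward-Euler stepping; it is not FLASH's actual tracer implementation, and all names here are hypothetical.

```python
import numpy as np

def bilinear(field, x, y, dx):
    """Interpolate a grid-sampled field at physical point (x, y)."""
    i = int(x / dx)
    j = int(y / dx)
    nx, ny = field.shape
    i = min(max(i, 0), nx - 2)   # clamp so the 2x2 stencil stays in bounds
    j = min(max(j, 0), ny - 2)
    fx = x / dx - i
    fy = y / dx - j
    return ((1 - fx) * (1 - fy) * field[i, j]
            + fx * (1 - fy) * field[i + 1, j]
            + (1 - fx) * fy * field[i, j + 1]
            + fx * fy * field[i + 1, j + 1])

def advect(tracers, u, v, dx, dt, nsteps):
    """Advance tracer positions through velocity field (u, v);
    return the recorded trajectory history for post-processing."""
    history = [tracers.copy()]
    for _ in range(nsteps):
        for p in tracers:
            p[0] += dt * bilinear(u, p[0], p[1], dx)
            p[1] += dt * bilinear(v, p[0], p[1], dx)
        history.append(tracers.copy())
    return history

# Uniform rightward flow: tracers simply drift in +x at unit speed.
n, dx, dt = 16, 1.0, 0.1
u = np.ones((n, n))
v = np.zeros((n, n))
tracers = np.array([[2.0, 2.0], [5.0, 5.0]])
hist = advect(tracers, u, v, dx, dt, 10)
```

In a production code the recorded history would carry thermodynamic quantities (density, temperature) along each trajectory, which is what enables the nucleosynthetic post-processing mentioned above.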
Anshu Dubey, Alan C. Calder, Christopher Daley, Robert T. Fisher, Carlo Graziani, George C. Jordan, Donald Q. Lamb, Lynn B. Reid, Dean M. Townsley, and Klaus Weide. "Pragmatic optimizations for better scientific utilization of large supercomputers." International Journal of High Performance Computing Applications, first published online November 21, 2012. doi: 10.1177/1094342012464404.