What do bacterial systems, weather processes, and galaxy formation have in common? All are studied using high-performance computing environments, where clusters of processors accelerate the analysis of large volumes of data. High-performance computing (HPC) systems support complex and data-intensive computational methods, such as the deep learning algorithms used to decode functional magnetic resonance imaging (fMRI) data on the National Institutes of Health's Biowulf cluster, detailed simulations of the Sun's atmosphere performed with the specialized Bifrost algorithm on NASA's Pleiades supercomputer, and the hosting of genomic data analysis applications critical to the COVID-19 and mpox responses on SciComp at the Centers for Disease Control and Prevention's Advanced Molecular Detection program.
In recent years, scientific disciplines have pushed computational boundaries. Drivers of this trend include researcher innovation, growing data volumes, and progress in artificial intelligence (AI), as well as advances in hardware (such as increasingly powerful graphics processing units [GPUs] and field-programmable gate arrays [FPGAs]) and in cloud service provider (CSP) offerings.
This growth and innovation are reflected in demand and in federal funding: more than half a billion dollars per year is allocated to scientific computing across agencies, with roughly $150 million directed to HPC resources. More than ever, ensuring that scientists are fully equipped to use HPC systems is critical, both to enable cutting-edge research and to maximize the effective use of those resources.