Scalable Foundation Interatomic Potentials via Message-Passing Pruning and Graph Partitioning
Lingyu Kong, Jaeheon Shim, Guoxiang Hu, Victor Fung
Published: 2025/9/25
Abstract
Atomistic foundation models (AFMs) hold great promise as accurate interatomic potentials and have enabled data-efficient molecular dynamics simulations with near quantum mechanical accuracy. However, AFMs remain markedly slower at inference and far more memory-intensive than conventional interatomic potentials, because capturing the wide range of chemical and structural motifs in pre-training datasets requires deep, parameter-rich model architectures. These deficiencies currently limit the practical use of AFMs in molecular dynamics (MD) simulations at extended temporal and spatial scales. To address this problem, we propose a general workflow for accelerating and scaling AFMs built on message-passing architectures. We find that removing low-contribution message-passing layers from AFM backbones serves as an effective pruning method, significantly reducing the parameter count while preserving the accuracy and data efficiency of AFMs. Once pruned, these models become more amenable to large-scale simulations via a graph-partitioned, GPU-distributed strategy, which we implement and demonstrate within the AFM fine-tuning platform MatterTune. We show that this approach supports million-atom simulations on both single and multiple GPUs, and enables task-specific large-scale simulations at nanosecond timescales with AFM-level accuracy.
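To make the layer-pruning idea concrete, the following minimal sketch (not the paper's implementation; the toy model, the contribution heuristic, and the threshold are illustrative assumptions) shows how low-contribution message-passing layers in a residual stack could be scored on a calibration batch and removed. In practice the pruned backbone would then be fine-tuned on task data to recover accuracy.

```python
# Minimal sketch, assuming a generic MPNN backbone with a residual stack of
# message-passing layers. A layer's "contribution" is approximated here by the
# relative change it induces in the node features on a calibration batch;
# layers below a threshold are dropped. This is an illustration, not the
# authors' pruning procedure.
import torch
import torch.nn as nn


class SimpleMPNN(nn.Module):
    """Toy stand-in for an AFM backbone: a stack of residual update layers."""

    def __init__(self, dim=64, num_layers=6):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_layers))

    def forward(self, h):
        for layer in self.layers:
            h = h + torch.tanh(layer(h))  # residual message-passing update
        return h


@torch.no_grad()
def layer_contributions(model, h):
    """Relative feature change contributed by each layer on a calibration batch."""
    scores = []
    for layer in model.layers:
        update = torch.tanh(layer(h))
        scores.append((update.norm() / (h.norm() + 1e-12)).item())
        h = h + update
    return scores


def prune_low_contribution_layers(model, h, threshold=0.05):
    """Drop layers whose relative contribution falls below `threshold`."""
    scores = layer_contributions(model, h)
    kept = [layer for layer, s in zip(model.layers, scores) if s >= threshold]
    model.layers = nn.ModuleList(kept)
    return scores


if __name__ == "__main__":
    torch.manual_seed(0)
    model = SimpleMPNN()
    calib = torch.randn(128, 64)  # stand-in calibration node features
    scores = prune_low_contribution_layers(model, calib, threshold=0.05)
    print("per-layer contributions:", [round(s, 3) for s in scores])
    print("layers kept:", len(model.layers))
```

The same pruned model could then be deployed at scale by partitioning the atomic graph across GPUs, as the abstract describes for the MatterTune-based workflow; the partitioning step itself is not sketched here.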