A Multi-conditional Diffusion Transformer for Versatile Seismic Wave Generation

Longfei Duan, Zicheng Zhang, Lianqing Zhou, Congying Han, Lei Bai, Tiande Guo, Cuiping Zhao

Published: 2025/9/21

Abstract

Seismic wave generation creates labeled waveform datasets for source parameter inversion, subsurface analysis, and, notably, training artificial intelligence seismology models. Traditional approaches to seismic wave generation are time-consuming, and artificial intelligence methods based on Generative Adversarial Networks often struggle with authenticity and stability. This study presents the Seismic Wave Generator, a multi-conditional diffusion model built on transformers. Diffusion models produce high-quality, diverse, and stable outputs with robust denoising capabilities; they rest on stronger theoretical foundations and offer finer control over the generation process than other generative models. Transformers suit seismic wave processing because they capture long-range dependencies and spatial-temporal patterns, improving feature extraction and prediction accuracy over traditional models. To evaluate the realism of the generated waveforms, we trained downstream models on generated data and compared their performance with models trained on real seismic waveforms. A seismic phase-picking model trained on generated data achieved 99% recall and precision on real seismic waveforms, and a magnitude estimation model trained the same way showed reduced prediction bias caused by uneven training data. These findings suggest that diffusion-based generative models can address the challenge of limited regional labeled data and hold promise for bridging gaps in observational data.
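The abstract names the overall approach (a transformer-based denoiser inside a diffusion model, conditioned on source parameters) without architectural detail. The following is a minimal sketch of that general idea, not the authors' released code: all module names, dimensions, and the choice of conditions (e.g. magnitude and distance) are illustrative assumptions, and the training step uses a standard DDPM-style noise-prediction objective.

```python
# Illustrative sketch (assumed, not the paper's implementation) of a
# multi-conditional diffusion transformer for 1-D seismic waveforms:
# a transformer denoiser predicts the noise added at diffusion step t,
# given conditioning features such as magnitude and epicentral distance.
import torch
import torch.nn as nn

class WaveformDenoiser(nn.Module):
    def __init__(self, seq_len=512, patch=16, dim=128, heads=4, layers=4, cond_dim=2):
        super().__init__()
        self.patch = patch
        n_tokens = seq_len // patch
        self.embed = nn.Linear(patch, dim)                  # waveform patches -> tokens
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))
        self.t_embed = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.c_embed = nn.Sequential(nn.Linear(cond_dim, dim), nn.SiLU(), nn.Linear(dim, dim))
        enc = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.backbone = nn.TransformerEncoder(enc, layers)
        self.head = nn.Linear(dim, patch)                   # tokens -> predicted noise patches

    def forward(self, x, t, cond):
        b, n = x.shape[0], x.shape[1] // self.patch
        tok = self.embed(x.view(b, n, self.patch)) + self.pos
        # broadcast diffusion-step and condition embeddings to every token
        tok = tok + self.t_embed(t.view(b, 1).float()).unsqueeze(1)
        tok = tok + self.c_embed(cond).unsqueeze(1)
        return self.head(self.backbone(tok)).view(b, -1)

# One DDPM-style training step on random tensors (shapes only; real use
# would plug in labeled waveforms and the full noise schedule).
model = WaveformDenoiser()
x0 = torch.randn(8, 512)                                    # clean waveforms
cond = torch.randn(8, 2)                                    # e.g. [magnitude, distance]
T = 1000
t = torch.randint(0, T, (8,))
beta = torch.linspace(1e-4, 0.02, T)
abar = torch.cumprod(1.0 - beta, dim=0)[t].unsqueeze(1)     # cumulative signal fraction
noise = torch.randn_like(x0)
xt = abar.sqrt() * x0 + (1.0 - abar).sqrt() * noise         # forward diffusion
loss = nn.functional.mse_loss(model(xt, t, cond), noise)    # noise-prediction objective
loss.backward()
```

Under this kind of setup, sampling would run the learned denoiser in reverse over the noise schedule while holding the condition vector fixed, which is what allows targeted generation of waveforms for specified source parameters.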