On-the-Fly Data Augmentation via Gradient-Guided and Sample-Aware Influence Estimation

Suorong Yang, Jie Zong, Lihang Wang, Ziheng Qin, Hai Gan, Pengfei Zhou, Kai Wang, Yang You, Furao Shen

Published: 2025/10/1

Abstract

Data augmentation has been widely employed to improve the generalization of deep neural networks. Most existing methods apply fixed or random transformations. However, we find that sample difficulty evolves along with the model's generalization capability in dynamic training environments. As a result, applying uniform or stochastic augmentations without accounting for such dynamics can lead to a mismatch between the augmented data and the model's evolving training needs, ultimately degrading training effectiveness. To address this, we introduce SADA, a Sample-Aware Dynamic Augmentation method that adjusts augmentation strengths on the fly based on each sample's evolving influence on model optimization. Specifically, we estimate each sample's influence by projecting its gradient onto the accumulated model update direction and computing the temporal variance of this projection within a local training window. Samples with low variance, indicating stable and consistent influence, are augmented more strongly to emphasize diversity, while unstable samples receive milder transformations to preserve semantic fidelity and stabilize learning. Our method is lightweight: it requires no auxiliary models or policy tuning, and it integrates seamlessly into existing training pipelines as a plug-and-play module. Experiments across various benchmark datasets and model architectures show that SADA delivers consistent improvements, including +7.3% on fine-grained tasks and +4.3% on long-tailed datasets, highlighting the method's effectiveness and practicality.
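As a rough illustration of the mechanism described in the abstract, the sketch below tracks each sample's gradient projection onto the accumulated model-update direction and converts the windowed variance of those projections into a per-sample augmentation strength. This is a minimal sketch based only on the abstract: the class name, window size, and the variance-to-strength mapping (1 / (1 + var)) are illustrative assumptions, not the paper's actual implementation.

```python
import torch


class InfluenceTracker:
    """Hypothetical sketch of SADA-style influence estimation.

    For each sample, stores the projection of its loss gradient onto the
    accumulated model-update direction over a local training window, then
    maps the temporal variance of that projection to an augmentation
    strength: low variance (stable influence) -> stronger augmentation,
    high variance (unstable influence) -> milder augmentation.
    """

    def __init__(self, num_samples: int, window: int = 5,
                 min_strength: float = 0.1, max_strength: float = 1.0):
        self.window = window
        self.min_strength = min_strength
        self.max_strength = max_strength
        # Per-sample ring buffer of gradient projections (the local window).
        self.history = torch.zeros(num_samples, window)
        self.step = 0

    def update(self, sample_ids: torch.Tensor,
               per_sample_grads: torch.Tensor,
               accumulated_update: torch.Tensor) -> None:
        """Record projections for one batch.

        per_sample_grads: (batch, D) flattened per-sample gradients.
        accumulated_update: (D,) accumulated model-update direction.
        """
        direction = accumulated_update / (accumulated_update.norm() + 1e-12)
        proj = per_sample_grads @ direction  # (batch,) scalar projections
        self.history[sample_ids, self.step % self.window] = proj
        self.step += 1

    def strengths(self, sample_ids: torch.Tensor) -> torch.Tensor:
        """Map windowed variance to augmentation strength per sample."""
        var = self.history[sample_ids].var(dim=1)
        # Assumed monotone mapping: stability in (0, 1], high = stable.
        stability = 1.0 / (1.0 + var)
        return self.min_strength + (self.max_strength - self.min_strength) * stability
```

In a training loop, one would call `update` after each step with the batch's per-sample gradients and the running optimizer update direction, then query `strengths` when sampling augmentations for the next epoch; how the paper obtains per-sample gradients efficiently is not specified in the abstract.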
