Towards Adversarial Training under Hyperspectral Images

Weihua Zhang, Chengze Jiang, Jie Gui, Lu Dong

Published: October 1, 2025

Abstract

Recent studies have revealed that hyperspectral classification models based on deep learning are highly vulnerable to adversarial attacks, which pose significant security risks. Although several approaches have attempted to enhance adversarial robustness by modifying network architectures, these methods often rely on customized designs that limit scalability and fail to defend effectively against strong attacks. To address these challenges, we introduce adversarial training, widely regarded as one of the most effective defenses against adversarial attacks, to the hyperspectral domain. Through extensive empirical analyses, we demonstrate that while adversarial training does enhance robustness across various models and datasets, hyperspectral data introduces unique challenges not seen in RGB images. Specifically, we find that adversarial noise and the non-smooth nature of adversarial examples can distort or eliminate important spectral semantic information. To mitigate this issue, we employ data augmentation techniques and propose a novel hyperspectral adversarial training method, termed AT-RA. By increasing the diversity of spectral information and ensuring spatial smoothness, AT-RA preserves and corrects spectral semantics in hyperspectral images. Experimental results show that AT-RA improves adversarial robustness by 21.34% against AutoAttack and by 18.78% against PGD-50 while boosting benign accuracy by 2.68%.
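For readers unfamiliar with the defense the abstract builds on, the PyTorch sketch below illustrates standard PGD-based adversarial training (Madry et al.), the inner-maximization/outer-minimization loop that this work brings to the hyperspectral setting. It is a minimal sketch, not the AT-RA method itself: the (B, bands, H, W) patch shape, the [0, 1] normalization, the eps/alpha/steps values, and the `random_spectral_jitter` augmentation are all assumptions for illustration; the abstract does not specify AT-RA's actual augmentation recipe.

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft L-inf PGD adversarial examples around clean inputs x.

    Assumes x is a batch of hyperspectral patches of shape
    (B, bands, H, W) with values normalized to [0, 1].
    """
    loss_fn = nn.CrossEntropyLoss()
    # Random start inside the eps-ball, then clip back to the valid range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss with a signed-gradient step, then project
        # back into the eps-ball around the clean input.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def random_spectral_jitter(x, max_scale=0.1):
    """Hypothetical augmentation: rescale each spectral band independently
    to diversify spectral signatures. Illustrative only; this is not the
    actual AT-RA augmentation, which the abstract does not detail."""
    b, c, _, _ = x.shape
    scale = 1.0 + (torch.rand(b, c, 1, 1, device=x.device) - 0.5) * 2 * max_scale
    return (x * scale).clamp(0, 1)

def adversarial_training_step(model, optimizer, x, y):
    """One minibatch of standard PGD adversarial training."""
    model.eval()  # freeze BatchNorm/Dropout statistics while crafting attacks
    x_adv = pgd_attack(model, random_spectral_jitter(x), y)
    model.train()
    optimizer.zero_grad()
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the model is trained only on adversarial examples, the common baseline formulation; variants that mix clean and adversarial batches, or that replace `random_spectral_jitter` with other spectral/spatial augmentations, fit the same loop.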