SAM2-ELNet: Label Enhancement and Automatic Annotation for Remote Sensing Segmentation
Jianhao Yang, Wenshuo Yu, Yuanchao Lv, Jiance Sun, Bokang Sun, Mingyang Liu
Published: 2025/3/16
Abstract
Remote sensing image segmentation is crucial for environmental monitoring, disaster assessment, and resource management, but its performance largely depends on dataset quality. Although several high-quality datasets are broadly accessible, data remain scarce for specialized tasks such as marine oil spill segmentation. These tasks still rely on manual annotation, which is time-consuming and subject to annotator bias. The Segment Anything Model 2 (SAM2) has strong potential as an automatic annotation framework but struggles on heterogeneous, low-contrast remote sensing imagery. To address these challenges, we introduce a novel label enhancement and automatic annotation framework, termed SAM2-ELNet (Enhancement and Labeling Network). Specifically, we employ the frozen Hiera backbone of the pretrained SAM2 as the encoder and fine-tune the adapter and decoder for different remote sensing tasks. The framework also includes a label quality evaluator that filters the generated labels to ensure their reliability. We design a series of experiments targeting resource-limited remote sensing tasks and evaluate our method on two datasets: the Deep-SAR Oil Spill (SOS) dataset of Synthetic Aperture Radar (SAR) imagery, and the CHN6-CUG Road dataset of Very High Resolution (VHR) optical imagery. The proposed framework enhances coarse annotations and generates reliable training data under resource-limited conditions. Fine-tuned on only 30% of the training data, it produces automatically labeled data; a model trained solely on these generated labels performs only slightly below one trained on the full original annotations, while greatly reducing labeling costs and offering a practical solution for large-scale remote sensing interpretation.
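The training recipe summarized above (frozen pretrained encoder, trainable adapter and decoder) follows a common parameter-efficient fine-tuning pattern. The sketch below illustrates that pattern only; the module names (Adapter, LabelingNet) and the toy decoder are hypothetical stand-ins, not the actual SAM2-ELNet architecture or API.

```python
# Minimal sketch of the frozen-encoder / trainable-adapter-and-decoder pattern
# described in the abstract. Adapter and LabelingNet are hypothetical
# placeholders, not the real SAM2-ELNet modules.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Lightweight bottleneck adapter applied to frozen encoder features."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, hidden)
        self.up = nn.Linear(hidden, dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual adaptation keeps the pretrained features intact.
        return x + self.up(self.act(self.down(x)))


class LabelingNet(nn.Module):
    def __init__(self, encoder: nn.Module, feat_dim: int, num_classes: int = 1):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():  # freeze the pretrained backbone
            p.requires_grad = False
        self.adapter = Adapter(feat_dim)
        self.decoder = nn.Sequential(        # toy decoder head for illustration
            nn.Linear(feat_dim, feat_dim), nn.GELU(),
            nn.Linear(feat_dim, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():                # no gradients through the backbone
            feats = self.encoder(x)
        return self.decoder(self.adapter(feats))


# Usage sketch: only adapter and decoder parameters reach the optimizer.
# model = LabelingNet(pretrained_hiera_encoder, feat_dim=256)
# optim = torch.optim.AdamW(
#     [p for p in model.parameters() if p.requires_grad], lr=1e-4)
```

Passing only the parameters with `requires_grad=True` to the optimizer is what keeps fine-tuning cheap enough to work with a small labeled subset, consistent with the 30% fine-tuning setup the abstract reports.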