DDP: Dual-Decoupled Prompting for Multi-Label Class-Incremental Learning

Kaile Du, Zihan Ye, Junzhou Xie, Fan Lyu, Yixi Shen, Yuyang Li, Miaoxuan Zhu, Fuyuan Hu, Ling Shao, Guangcan Liu

Published: 2025/9/27

Abstract

Prompt-based methods are highly effective in single-label class-incremental learning, but extending them directly to multi-label class-incremental learning (MLCIL) performs poorly due to two intrinsic challenges: semantic confusion among co-occurring categories, and confusion between true negatives and false positives caused by partial labeling. We propose Dual-Decoupled Prompting (DDP), a replay-free and parameter-efficient framework that explicitly addresses both issues. DDP assigns class-specific positive and negative prompts to disentangle semantics and introduces Progressive Confidence Decoupling (PCD), a curriculum-inspired strategy that suppresses false positives. Past prompts are frozen as knowledge anchors, and an interlayer prompting scheme improves efficiency. On MS-COCO and PASCAL VOC, DDP consistently outperforms prior methods and is the first replay-free MLCIL approach to exceed 80% mAP and 70% F1 on the standard MS-COCO B40-C10 benchmark.
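The abstract does not give implementation details, so the following is only a minimal sketch of the two mechanisms it names: class-specific positive-negative prompts and frozen past prompts acting as knowledge anchors. It assumes a ViT-style backbone whose patch-token sequence can be extended with learnable prompt tokens; the class name `DualPromptPool`, the initialization, and the prompt shapes are hypothetical, not the authors' released code.

```python
# Sketch of class-specific positive/negative prompts with frozen past prompts.
# Assumptions (not from the paper): ViT-style tokens of shape (B, N, D),
# one prompt vector per class per polarity, prepended to the token sequence.
import torch
import torch.nn as nn


class DualPromptPool(nn.Module):
    """Keeps one positive and one negative prompt vector per seen class."""

    def __init__(self, embed_dim: int):
        super().__init__()
        self.embed_dim = embed_dim
        self.pos_prompts = nn.ParameterList()  # one entry per class
        self.neg_prompts = nn.ParameterList()

    def add_classes(self, num_new: int):
        # Freeze previously learned prompts so they serve as knowledge
        # anchors, then append fresh trainable prompts for the new classes.
        for p in list(self.pos_prompts) + list(self.neg_prompts):
            p.requires_grad_(False)
        for _ in range(num_new):
            self.pos_prompts.append(nn.Parameter(torch.randn(1, self.embed_dim) * 0.02))
            self.neg_prompts.append(nn.Parameter(torch.randn(1, self.embed_dim) * 0.02))

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # Prepend all class prompts to the patch-token sequence (B, N, D).
        prompts = torch.cat(list(self.pos_prompts) + list(self.neg_prompts), dim=0)
        prompts = prompts.unsqueeze(0).expand(tokens.size(0), -1, -1)
        return torch.cat([prompts, tokens], dim=1)


pool = DualPromptPool(embed_dim=768)
pool.add_classes(40)  # base session with 40 classes (B40)
pool.add_classes(10)  # incremental session adds 10 classes (C10); old prompts frozen
x = torch.randn(2, 196, 768)  # dummy patch tokens from a ViT
print(pool(x).shape)  # torch.Size([2, 296, 768]): 2*50 prompts + 196 patches
```

Freezing old prompts while adding new ones mirrors the replay-free design: no past images are stored, and only the small per-class prompt parameters grow with each session.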
