DPDETR: Decoupled Position Detection Transformer for Infrared-Visible Object Detection

Junjie Guo, Chenqiang Gao, Fangcen Liu, Deyu Meng

Published: 2024/8/12

Abstract

Infrared-visible object detection aims to achieve robust object detection by leveraging the complementary information of infrared and visible image pairs. However, the commonly occurring modality misalignment problem presents two challenges: fusing misaligned complementary features is difficult, and current methods cannot reliably locate objects in both modalities under misalignment conditions. In this paper, we propose a Decoupled Position Detection Transformer (DPDETR) to address these issues. Specifically, we explicitly define the object category, the visible-modality position, and the infrared-modality position, enabling the network to learn their intrinsic relationships and output reliable positions for objects in both modalities. To fuse misaligned object features reliably, we propose a Decoupled Position Multispectral Cross-attention module that adaptively samples and aggregates multispectral complementary features under the constraint of infrared and visible reference positions. Additionally, we design a query-decoupled Multispectral Decoder structure to resolve the conflict in feature focus among the three kinds of object information in our task, and we propose a Decoupled Position Contrastive DeNoising Training strategy to enhance DPDETR's ability to learn decoupled positions. Experiments on the DroneVehicle and KAIST datasets demonstrate significant improvements over other state-of-the-art methods. The code will be released at https://github.com/gjj45/DPDETR.
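To make the core idea concrete, below is a minimal PyTorch sketch of what a decoupled-position multispectral cross-attention could look like, as we read it from the abstract: each query carries two reference points (one per modality), sampling offsets are predicted separately for the visible and infrared feature maps, and the sampled features are fused with learned attention weights. This is an illustrative sketch only; the module name, head layout, point count, and the 0.1 offset scale are our assumptions, not the authors' implementation (which is in the linked repository).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupledPositionCrossAttention(nn.Module):
    """Sketch: sample each modality's features around its own reference
    point, then aggregate all samples with softmax attention weights."""

    def __init__(self, d_model=256, n_points=4):
        super().__init__()
        self.n_points = n_points
        # Hypothetical per-modality offset heads and a shared weight head.
        self.offset_vis = nn.Linear(d_model, n_points * 2)
        self.offset_ir = nn.Linear(d_model, n_points * 2)
        self.attn_weights = nn.Linear(d_model, 2 * n_points)
        self.out_proj = nn.Linear(d_model, d_model)

    def sample(self, feat, ref, offsets):
        # feat: (B, C, H, W); ref: (B, Q, 2) in [0, 1]; offsets: (B, Q, P, 2)
        loc = ref[:, :, None, :] + offsets           # sampling locations
        grid = 2.0 * loc - 1.0                       # grid_sample expects [-1, 1]
        sampled = F.grid_sample(feat, grid, align_corners=False)  # (B, C, Q, P)
        return sampled.permute(0, 2, 3, 1)           # (B, Q, P, C)

    def forward(self, query, feat_vis, feat_ir, ref_vis, ref_ir):
        B, Q, _ = query.shape
        # Offsets predicted from the query, scaled down (assumed heuristic).
        off_v = self.offset_vis(query).view(B, Q, self.n_points, 2) * 0.1
        off_i = self.offset_ir(query).view(B, Q, self.n_points, 2) * 0.1
        s_v = self.sample(feat_vis, ref_vis, off_v)  # visible samples
        s_i = self.sample(feat_ir, ref_ir, off_i)    # infrared samples
        samples = torch.cat([s_v, s_i], dim=2)       # (B, Q, 2P, C)
        w = self.attn_weights(query).softmax(-1)     # (B, Q, 2P)
        fused = (w.unsqueeze(-1) * samples).sum(dim=2)
        return self.out_proj(fused)

# Usage with dummy shapes: 2 images, 100 queries, 32x32 feature maps.
attn = DecoupledPositionCrossAttention()
out = attn(torch.randn(2, 100, 256),
           torch.randn(2, 256, 32, 32), torch.randn(2, 256, 32, 32),
           torch.rand(2, 100, 2), torch.rand(2, 100, 2))
print(out.shape)  # torch.Size([2, 100, 256])
```

The key design point the abstract emphasizes is that the two modalities are never forced onto a single shared reference box: misaligned objects are sampled at their own positions in each modality before fusion.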
