The System Description of CPS Team for Track on Driving with Language of CVPR 2024 Autonomous Grand Challenge

Jinghan Peng, Jingwen Wang, Xing Yu, Dehui Du

Published: 2025/9/14

Abstract

This report outlines our approach using vision-language model systems for the Driving with Language track of the CVPR 2024 Autonomous Grand Challenge. We trained our models exclusively on the DriveLM-nuScenes dataset. Our systems are built on the LLaVA models, which we fine-tuned using the LoRA and DoRA methods. Additionally, we integrated depth information from open-source depth estimation models to enrich both training and inference. At inference time, particularly for multiple-choice and yes/no questions, we adopted Chain-of-Thought reasoning to improve answer accuracy. This methodology achieved a score of 0.7799 on the validation set, ranking 1st on the leaderboard.
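The core idea behind the LoRA fine-tuning mentioned above is to freeze the pretrained weight matrix and learn only a low-rank update, so that the adapted weight is W + (alpha/r) · B·A with B zero-initialized. The sketch below illustrates this mechanism in plain NumPy; the dimensions, rank, and scaling are illustrative assumptions, not the authors' actual training configuration (which the report does not specify here).

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 16, 16, 4, 8  # illustrative sizes, not the paper's config

# Frozen pretrained weight (stands in for a LLaVA linear layer)
W = rng.standard_normal((d_out, d_in))

# LoRA factors: A is small random, B starts at zero,
# so the adapter initially leaves the model output unchanged
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

def forward(x, B):
    # Effective weight: W + (alpha / r) * B @ A; only A and B are trainable
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.standard_normal((2, d_in))
base = x @ W.T
# Zero-initialized B means the adapted layer matches the frozen layer exactly
assert np.allclose(forward(x, B), base)

# After training, only the small factors B and A have changed; W stays frozen,
# so the full update costs r * (d_in + d_out) parameters instead of d_in * d_out
B_trained = rng.standard_normal((d_out, r)) * 0.1
assert not np.allclose(forward(x, B_trained), base)
```

DoRA follows the same low-rank idea but additionally decomposes the weight into a magnitude and a direction component; in practice both methods are typically applied via an adapter library rather than hand-rolled as above.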