LvBench: A Benchmark for Long-form Video Understanding with Versatile Multi-modal Question Answering
Hongjie Zhang, Lu Dong, Yi Liu, Yifei Huang, Yali Wang, Limin Wang, Yu Qiao
Published: 2023/12/8
Abstract
Despite remarkable recent progress, existing long-form VideoQA datasets fall short of the criteria for genuine long-form video understanding, primarily because their questions are curated from short videos and rely on limited-length sub-clips as answer clues. Moreover, previous datasets cover only a narrow range of question types and modalities. To remedy this, we introduce LvBench, a Long-form video understanding benchmark for versatile multi-modal question answering. LvBench stands out from existing long-form VideoQA datasets through three key characteristics: 1) Extended temporal durations: we consider videos ranging from 70 seconds to 4 hours, covering single-scene, multi-scene, and full-scene contexts. This design accounts for both video and clue lengths, capturing diverse contextual dynamics. 2) Diverse question types and modalities: LvBench introduces six distinct question types that evaluate various perceptual and cognitive capabilities, drawing on both video frames and subtitles. 3) High-quality annotations: we employ rigorous manual labeling by human annotators. The dataset comprises 20,061 question-answer pairs sourced from 100 carefully selected movies across diverse genres, annotated collaboratively by multiple individuals. Analysis of various baselines reveals a consistent trend: the performance of all existing methods deteriorates significantly as video and clue lengths increase. We expect LvBench to serve as a valuable resource for future work on long-form video understanding.
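For concreteness, the sketch below shows how a single LvBench question-answer item could be represented given the properties stated in the abstract (question type, scene scope, clue span, frame and subtitle modalities). The field names and the multiple-choice format are illustrative assumptions, not the dataset's actual schema.

from dataclasses import dataclass
from typing import List

@dataclass
class LvBenchQAItem:
    """Hypothetical record for one question-answer pair (schema assumed, not official)."""
    movie_id: str            # one of the 100 source movies
    question_type: str       # one of the six question types described in the paper
    scope: str               # assumed values: "single-scene", "multi-scene", "full-scene"
    clue_start_sec: float    # start of the sub-clip serving as the answer clue
    clue_end_sec: float      # end of the clue span; videos range from 70 s to 4 h
    question: str
    options: List[str]       # candidate answers (multiple-choice format is an assumption)
    answer_index: int        # index of the correct option
    uses_subtitles: bool     # whether answering requires the subtitle (text) modality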