Rethinking Human-Object Interaction Evaluation for both Vision-Language Models and HOI-Specific Methods
Qinqian Lei, Bo Wang, Robby T. Tan
Published: 2025/8/26
Abstract
Human-object interaction (HOI) detection has traditionally been approached with task-specific models, sometimes augmented by early vision-language models (VLMs) such as CLIP. With the rise of large, generative VLMs, however, a natural question emerges: can standalone VLMs effectively perform HOI detection, and how do they compare to specialized HOI methods? Addressing this requires a benchmarking dataset and protocol that support both paradigms. Existing benchmarks such as HICO-DET were developed before modern VLMs and rely on exact label matching. This clashes with generative outputs, which may yield multiple equally valid interpretations. For example, in a single image, a person mid-motion with a frisbee might plausibly be described as 'throwing' or 'catching', yet only one is annotated as correct. Such rigid evaluation penalizes valid predictions from both VLMs and HOI-specific methods, but disproportionately underestimates VLM performance because their outputs are less constrained. We introduce a new benchmarking dataset that reformulates HOI detection as a multiple-answer multiple-choice task. It emphasizes challenging scenarios by (i) including a higher proportion of multi-person scenes where individuals perform different interactions, (ii) removing overly simple cases, and (iii) curating hard negative choices. This makes the benchmark more challenging than prior HOI datasets, while still supporting systematic evaluation of both standalone VLMs and HOI-specific methods under a unified protocol. Our results show that large VLMs already surpass state-of-the-art HOI-specific methods across most metrics, while analysis further uncovers key limitations: VLMs often misattribute surrounding people's interactions to the target person and struggle in complex multi-person or occluded scenarios.
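The abstract does not spell out how the multiple-answer multiple-choice protocol is scored. Below is a minimal, hypothetical sketch of one plausible scoring scheme: each question carries a set of annotated valid interaction choices, a model's selected choices are compared set-wise, and per-question precision/recall/F1 are macro-averaged. The function names and data layout are illustrative assumptions, not the authors' implementation.

```python
from typing import Dict, List, Set


def score_question(predicted: Set[str], valid: Set[str]) -> Dict[str, float]:
    """Score one multiple-answer multiple-choice question.

    `predicted` is the set of choices the model selected (e.g. {"throw frisbee"});
    `valid` is the set of choices annotated as correct. Selecting any subset of
    the valid answers earns partial credit, rather than requiring an exact match
    to a single label as in conventional HOI evaluation.
    """
    tp = len(predicted & valid)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(valid) if valid else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}


def evaluate(questions: List[Dict[str, Set[str]]]) -> Dict[str, float]:
    """Macro-average per-question scores over the whole benchmark."""
    scores = [score_question(q["predicted"], q["valid"]) for q in questions]
    n = max(len(scores), 1)
    return {k: sum(s[k] for s in scores) / n for k in ("precision", "recall", "f1")}


if __name__ == "__main__":
    # Toy example: both 'throw frisbee' and 'catch frisbee' are annotated as
    # valid for an ambiguous mid-motion image, so a model choosing either one
    # is not penalized the way exact label matching would penalize it.
    questions = [
        {"predicted": {"throw frisbee"}, "valid": {"throw frisbee", "catch frisbee"}},
    ]
    print(evaluate(questions))
```

Under this kind of set-based scoring, the frisbee example from the abstract yields perfect precision and partial recall instead of an outright miss, which is the behavior a multiple-answer protocol is meant to capture.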