Can VLM Pseudo-Labels Train a Time-Series QA Model That Outperforms the VLM?

Takuya Fujimura, Kota Dohi, Natsuo Yamashita, Yohei Kawaguchi

Published: 2025/9/30

Abstract

Time-series question answering (TSQA) tasks face significant challenges due to the lack of labeled data. Meanwhile, with recent advances in large-scale models, vision-language models (VLMs) have demonstrated the potential to analyze time-series signals in a zero-shot manner. In this paper, we propose a training approach that uses pseudo labels generated by a VLM. Although VLMs can produce incorrect labels, TSQA models can still be trained effectively because deep neural networks are inherently robust to such noisy labels. Our experimental results demonstrate that TSQA models are not only trained successfully with pseudo labels, but also surpass the performance of the VLM itself by leveraging a large amount of unlabeled data.
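The recipe the abstract describes, querying a frozen VLM for answers on unlabeled time series and then training a task model on those noisy answers, can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in rather than the authors' pipeline: the `vlm_pseudo_label` stub replaces an actual VLM call (which would render the signal as a plot and prompt the model), and the tiny 1D-CNN classifier replaces a real TSQA architecture.

```python
# Minimal sketch, assuming a multiple-choice TSQA setup: a (frozen) VLM
# produces pseudo answers for unlabeled series, and a lightweight model is
# trained on them with standard cross-entropy.
import torch
import torch.nn as nn

NUM_ANSWERS = 4  # assumed number of candidate answers per question


def vlm_pseudo_label(series: torch.Tensor) -> int:
    """Placeholder for the VLM query. In practice this would plot `series`,
    send the image plus a question to a VLM, and parse its answer; here a
    trivial heuristic keeps the sketch runnable end to end."""
    return int(series.mean() > 0) % NUM_ANSWERS


class TSQAModel(nn.Module):
    """Tiny 1D-CNN answer classifier standing in for a real TSQA model."""

    def __init__(self, num_answers: int = NUM_ANSWERS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(16, num_answers),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, length)
        return self.net(x.unsqueeze(1))


# Unlabeled time series; pseudo labels come from the (stubbed) VLM.
unlabeled = [torch.randn(128) for _ in range(256)]
pseudo = torch.tensor([vlm_pseudo_label(s) for s in unlabeled])
data = torch.stack(unlabeled)

model = TSQAModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # plain CE; DNNs tolerate some label noise

for epoch in range(5):
    logits = model(data)
    loss = loss_fn(logits, pseudo)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss={loss.item():.3f}")
```

The design point mirrored here is that no ground-truth labels appear anywhere: the trained model can, given enough unlabeled data, average out the VLM's labeling errors and thus outperform its own teacher.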