Traj-MLLM: Can Multimodal Large Language Models Reform Trajectory Data Mining?

Shuo Liu, Di Yao, Yan Lin, Gao Cong, Jingping Bi

Published: 2025/8/25

Abstract

Building a general model capable of analyzing human trajectories across different geographic regions and tasks has become an emerging yet important problem for many applications. However, existing works suffer from a generalization problem, i.e., they are either restricted to training on specific regions or suited only to a few tasks. Given the recent advances of multimodal large language models (MLLMs), we raise the question: can MLLMs reform current trajectory data mining and solve this problem? Due to the modality gap of trajectory data, however, generating task-independent multimodal trajectory representations and adapting flexibly to different tasks remain foundational challenges. In this paper, we propose Traj-MLLM, the first general framework that applies MLLMs to trajectory data mining. By integrating multiview contexts, Traj-MLLM transforms raw trajectories into interleaved image-text sequences while preserving key spatial-temporal characteristics, and directly leverages the reasoning ability of MLLMs for trajectory analysis. Additionally, a prompt optimization method is proposed to finalize data-invariant prompts for task adaptation. Extensive experiments on four publicly available datasets show that Traj-MLLM outperforms state-of-the-art baselines by 48.05%, 15.52%, 51.52%, and 1.83% on travel time estimation, mobility prediction, anomaly detection, and transportation mode identification, respectively. Traj-MLLM achieves these results without requiring any training data or fine-tuning of the MLLM backbones.
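To make the core idea concrete, below is a minimal sketch of what "transforming a raw trajectory into an interleaved image-text sequence" could look like in practice. This is an illustrative assumption, not the paper's actual pipeline: the function names (render_trajectory_image, describe_trajectory, build_interleaved_prompt), the plain matplotlib rendering, and the prompt-part format are all hypothetical stand-ins for the multiview context integration and prompt optimization described in the abstract.

```python
# Hypothetical sketch: render a trajectory as an image plus a textual
# summary, then assemble an interleaved image-text prompt for an MLLM.
# All names and formats here are illustrative assumptions; the paper's
# multiview rendering and prompt-optimization method are not shown.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Trajectory:
    # Each point is (latitude, longitude, unix_timestamp_seconds).
    points: List[Tuple[float, float, float]]


def render_trajectory_image(traj: Trajectory, path: str) -> str:
    """Plot the raw GPS points as a simple polyline and save as PNG."""
    import matplotlib
    matplotlib.use("Agg")  # headless rendering, no display needed
    import matplotlib.pyplot as plt

    lats = [p[0] for p in traj.points]
    lons = [p[1] for p in traj.points]
    fig, ax = plt.subplots(figsize=(4, 4))
    ax.plot(lons, lats, marker="o", linewidth=1)
    ax.set_xlabel("longitude")
    ax.set_ylabel("latitude")
    fig.savefig(path, bbox_inches="tight")
    plt.close(fig)
    return path


def describe_trajectory(traj: Trajectory) -> str:
    """Produce a short textual summary preserving spatial-temporal cues."""
    start, end = traj.points[0], traj.points[-1]
    duration_min = (end[2] - start[2]) / 60.0
    return (f"The trajectory has {len(traj.points)} GPS points, "
            f"starts at ({start[0]:.4f}, {start[1]:.4f}), "
            f"ends at ({end[0]:.4f}, {end[1]:.4f}), "
            f"and lasts about {duration_min:.1f} minutes.")


def build_interleaved_prompt(traj: Trajectory, task_instruction: str) -> list:
    """Interleave the rendered image with text into one multimodal prompt."""
    image_path = render_trajectory_image(traj, "trajectory.png")
    return [
        {"type": "text", "text": describe_trajectory(traj)},
        {"type": "image", "path": image_path},
        {"type": "text", "text": task_instruction},
    ]


if __name__ == "__main__":
    traj = Trajectory(points=[
        (39.9042, 116.4074, 0.0),
        (39.9100, 116.4150, 300.0),
        (39.9155, 116.4230, 600.0),
    ])
    prompt = build_interleaved_prompt(
        traj, "Estimate the remaining travel time in minutes.")
    for part in prompt:
        print(part)
```

The resulting list of text and image parts would then be passed to an off-the-shelf MLLM in a zero-shot fashion, consistent with the abstract's claim that no training data or backbone fine-tuning is required.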
