Pre-Manipulation Alignment Prediction with Parallel Deep State-Space and Transformer Models

Motonari Kambara, Komei Sugiura

Published: 2025/9/17

Abstract

In this work, we address the problem of predicting the future success of open-vocabulary object manipulation tasks. Conventional approaches typically determine success or failure only after the action has been carried out, which makes it difficult to prevent potential hazards and forces replanning to be triggered by failures, thereby reducing the efficiency of object manipulation sequences. To overcome these challenges, we propose a model that predicts the alignment between a pre-manipulation egocentric image combined with the planned trajectory and a given natural language instruction. We introduce a Multi-Level Trajectory Fusion module, which employs a state-of-the-art deep state-space model and a transformer encoder in parallel to capture multi-level time-series self-correlation within the end-effector trajectory. Our experimental results indicate that the proposed method outperformed existing methods, including foundation models.
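The parallel structure described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, dimensions, and the use of a simple linear state-space scan (in place of a modern deep state-space model such as Mamba) and a single attention head (in place of a full transformer encoder) are all illustrative assumptions. It only shows the general idea of running a recurrent state-space branch and a self-attention branch over the same end-effector trajectory and fusing their features.

```python
import numpy as np

def ssm_branch(x, d_state=8, seed=0):
    """Toy linear state-space scan over a trajectory of shape (T, D).
    A stand-in for the deep state-space model branch (hypothetical)."""
    rng = np.random.default_rng(seed)
    T, D = x.shape
    A = 0.9 * np.eye(d_state)                    # stable state transition
    B = rng.normal(scale=0.1, size=(d_state, D)) # input projection
    C = rng.normal(scale=0.1, size=(D, d_state)) # readout
    h = np.zeros(d_state)
    out = np.empty_like(x)
    for t in range(T):
        h = A @ h + B @ x[t]                     # recurrent state update
        out[t] = C @ h                           # per-step feature
    return out

def attention_branch(x):
    """Single-head scaled dot-product self-attention over the full
    trajectory; a stand-in for the transformer-encoder branch."""
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

def fuse_trajectory(x):
    """Run both branches in parallel and concatenate their features,
    mimicking the multi-level fusion of local (recurrent) and global
    (attention) self-correlation."""
    return np.concatenate([ssm_branch(x), attention_branch(x)], axis=1)

traj = np.random.default_rng(1).normal(size=(50, 7))  # e.g. a 7-DoF trajectory
features = fuse_trajectory(traj)
print(features.shape)  # (50, 14)
```

The intuition behind the parallel design is that the recurrent scan captures local, causal step-to-step structure, while self-attention captures global correlations between distant time steps; concatenating the two gives the downstream alignment predictor both views of the trajectory.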
