Towards Human-Level 3D Relative Pose Estimation: Generalizable, Training-Free, with Single Reference

Yuan Gao, Yajing Luo, Junhong Wang, Kui Jia, Gui-Song Xia

Published: 2024/6/26

Abstract

Humans can easily deduce the relative pose of a previously unseen object, without labeling or training, given only a single query-reference image pair. This is arguably achieved by incorporating i) 3D/2.5D shape perception from a single image, ii) render-and-compare simulation, and iii) rich semantic cue awareness to furnish (coarse) reference-query correspondences. Motivated by this, we propose a novel generalizable 3D relative pose estimation method that realizes 3D/2.5D shape perception with a 2.5D shape lifted from an RGB-D reference, fulfills the render-and-compare paradigm with an off-the-shelf differentiable renderer, and leverages semantic cues from a pretrained model such as DINOv2. Specifically, our differentiable renderer takes as input the rotatable 2.5D mesh textured with the RGB image and the semantic maps (obtained by DINOv2 from the RGB input), and renders new RGB and semantic maps (with back-surface culling) under a novel rotated view. The refinement loss is computed by comparing the rendered RGB and semantic maps with those of the query, and its gradients are back-propagated through the differentiable renderer to refine the 3D relative pose. As a result, \emph{our method can be readily applied to unseen objects, given only a single RGB-D reference, without labeling or training}. Extensive experiments on LineMOD, LM-O, and YCB-V show that our training-free method significantly outperforms the state-of-the-art supervised methods, especially under the rigorous \texttt{Acc@5/10/15}$^\circ$ metrics and the challenging cross-dataset settings.
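To make the render-and-compare refinement concrete, below is a minimal PyTorch sketch of the optimization loop described above. The helpers `build_mesh_from_rgbd`, `dino_features`, and `render_rgb_semantic` are hypothetical placeholders (standing in for the 2.5D mesh lifting, the DINOv2 feature extraction, and the off-the-shelf differentiable renderer, respectively), and the plain MSE losses are an assumption; only the overall structure (optimizing a rotation by back-propagating an RGB/semantic comparison loss through a differentiable renderer) follows the abstract.

```python
import torch
import torch.nn.functional as F

def axis_angle_to_matrix(v, eps=1e-8):
    """Rodrigues' formula: axis-angle 3-vector -> 3x3 rotation matrix (differentiable)."""
    theta = v.norm() + eps
    k = v / theta
    zero = torch.zeros((), dtype=v.dtype, device=v.device)
    K = torch.stack([
        torch.stack([zero, -k[2],  k[1]]),
        torch.stack([k[2],  zero, -k[0]]),
        torch.stack([-k[1], k[0],  zero]),
    ])
    I = torch.eye(3, dtype=v.dtype, device=v.device)
    return I + torch.sin(theta) * K + (1.0 - torch.cos(theta)) * (K @ K)

def refine_relative_pose(ref_rgb, ref_depth, query_rgb, n_steps=200, lr=1e-2):
    """Estimate the relative rotation from the RGB-D reference to the RGB query
    by render-and-compare; translation handling is omitted for brevity."""
    # Hypothetical helpers (not real library calls):
    #   build_mesh_from_rgbd: lifts the RGB-D reference to a textured 2.5D mesh
    #   dino_features:        extracts DINOv2 semantic feature maps from an RGB image
    #   render_rgb_semantic:  differentiably renders RGB + semantic maps of the
    #                         rotated mesh with back-surface culling
    mesh = build_mesh_from_rgbd(ref_rgb, ref_depth)
    sem_query = dino_features(query_rgb)

    # Optimize an axis-angle rotation; small random init avoids the zero-norm singularity.
    rot_vec = (1e-3 * torch.randn(3)).requires_grad_(True)
    opt = torch.optim.Adam([rot_vec], lr=lr)

    for _ in range(n_steps):
        R = axis_angle_to_matrix(rot_vec)
        rgb_hat, sem_hat = render_rgb_semantic(mesh, R)
        # Compare rendered RGB/semantic maps with the query ones (assumed MSE losses).
        loss = F.mse_loss(rgb_hat, query_rgb) + F.mse_loss(sem_hat, sem_query)
        opt.zero_grad()
        loss.backward()
        opt.step()

    return axis_angle_to_matrix(rot_vec).detach()
```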
