SC-Diff: 3D Shape Completion with Latent Diffusion Models
Simon Schaefer, Juan D. Galvis, Xingxing Zuo, Stefan Leutenegger
Published: March 19, 2024
Abstract
We present a novel 3D shape completion framework that unifies multimodal conditioning, leveraging both 2D images and 3D partial scans through a latent diffusion model. Shapes are represented as Truncated Signed Distance Functions (TSDFs) and encoded into a discrete latent space jointly supervised by 2D and 3D cues, enabling efficient high-resolution processing while reducing GPU memory usage by 30% compared to state-of-the-art methods. Our approach guides the generation process with flexible multimodal conditioning, ensuring consistent integration of 2D and 3D information from encoding to reconstruction. Our training strategy simulates realistic partial observations, avoiding assumptions about input structure and improving robustness in real-world scenarios. Leveraging our efficient latent space and multimodal conditioning, our model generalizes across object categories, outperforming class-specific models by 12% and class-agnostic models by 47% in $l_1$ reconstruction error, while producing more diverse, realistic, and high-fidelity completions than prior approaches.
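To make the TSDF representation concrete, the following is a minimal illustrative sketch (not the paper's code): a Truncated Signed Distance Function stores, per voxel, the signed distance to the nearest surface, clipped to a truncation band $[-\tau, \tau]$ and typically normalized. The sphere geometry, grid resolution, and truncation value below are arbitrary choices for illustration.

```python
import numpy as np

def sphere_tsdf(resolution=32, radius=0.5, tau=0.1):
    """TSDF of an origin-centered sphere sampled on a [-1, 1]^3 voxel grid.

    Illustrative example only; resolution, radius, and tau are
    arbitrary parameters, not values from the paper.
    """
    coords = np.linspace(-1.0, 1.0, resolution)
    x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
    # Signed distance to the sphere surface:
    # negative inside the sphere, positive outside.
    sdf = np.sqrt(x**2 + y**2 + z**2) - radius
    # Truncate and normalize to [-1, 1]; saturated values mean
    # "far from the surface" and carry no fine geometric detail.
    return np.clip(sdf, -tau, tau) / tau

tsdf = sphere_tsdf()
print(tsdf.shape)  # (32, 32, 32)
```

The zero level set of this grid is the surface itself; a shape-completion model operating on TSDFs predicts such a grid for the unobserved regions, from which a mesh can be extracted (e.g. via marching cubes).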