IntrinsicReal: Adapting IntrinsicAnything from Synthetic to Real Objects
Xiaokang Wei, Zizheng Yan, Zhangyang Xiong, Yiming Hao, Yipeng Qin, Xiaoguang Han
Published: 2025/8/31
Abstract
Estimating albedo (a.k.a. intrinsic image decomposition) from single RGB images captured in real-world environments (e.g., the MVImgNet dataset) is challenging due to the absence of paired images and ground-truth albedos. Consequently, although recent methods (e.g., IntrinsicAnything) have achieved breakthroughs by harnessing powerful diffusion priors, they are predominantly trained on large-scale synthetic datasets (e.g., Objaverse) and then applied directly to real-world RGB images; this ignores the large domain gap between synthetic and real-world data and leads to suboptimal generalization. In this work, we address this gap by proposing IntrinsicReal, a novel domain adaptation framework for real-world intrinsic image decomposition. Specifically, IntrinsicReal adapts IntrinsicAnything to the real domain by fine-tuning it on its own high-quality output albedos, selected by a dual pseudo-labeling strategy: i) pseudo-labeling with an absolute confidence threshold on classifier predictions, and ii) pseudo-labeling using the relative preference ranking of classifier predictions for individual input objects. This strategy is inspired by human evaluation: identifying the highest-quality outputs is straightforward, but absolute scores become unreliable for sub-optimal cases, where relative comparisons between outputs are more accurate. To implement it, we propose a two-phase pipeline that sequentially applies these two pseudo-labeling techniques to adapt IntrinsicAnything to the real domain. Experimental results show that IntrinsicReal significantly outperforms existing methods, achieving state-of-the-art albedo estimation on both synthetic and real-world datasets.
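To make the selection logic concrete, below is a minimal, hypothetical Python sketch of the dual pseudo-labeling strategy as described in the abstract. The names (Candidate, select_pseudo_labels), the threshold value tau, and the assumption that the quality classifier outputs a scalar confidence per predicted albedo are ours for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the dual pseudo-labeling selection described above.
# Names, the threshold value, and the scalar confidence score are assumptions.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Candidate:
    object_id: str    # which real-world object this albedo was predicted for
    albedo_path: str  # albedo prediction produced by (fine-tuned) IntrinsicAnything
    score: float      # classifier confidence that the albedo is high quality


def phase1_absolute(cands: List[Candidate], tau: float = 0.9) -> List[Candidate]:
    """Phase 1: keep only predictions whose classifier confidence clears an
    absolute threshold; these serve as clearly high-quality pseudo-labels."""
    return [c for c in cands if c.score >= tau]


def phase2_relative(cands: List[Candidate]) -> List[Candidate]:
    """Phase 2: for each object, rank its candidate albedos by classifier
    score and keep the top-ranked one (relative preference ranking)."""
    best: Dict[str, Candidate] = {}
    for c in cands:
        if c.object_id not in best or c.score > best[c.object_id].score:
            best[c.object_id] = c
    return list(best.values())


def select_pseudo_labels(cands: List[Candidate], tau: float = 0.9) -> List[Candidate]:
    """Two-phase pipeline applied sequentially: confident predictions are
    accepted outright; objects with no confident prediction fall back to
    per-object relative ranking."""
    confident = phase1_absolute(cands, tau)
    covered = {c.object_id for c in confident}
    remaining = [c for c in cands if c.object_id not in covered]
    return confident + phase2_relative(remaining)


if __name__ == "__main__":
    demo = [
        Candidate("mug", "mug_a.png", 0.95),    # passes the absolute threshold
        Candidate("mug", "mug_b.png", 0.60),
        Candidate("shoe", "shoe_a.png", 0.70),  # below threshold: ranked instead
        Candidate("shoe", "shoe_b.png", 0.82),
    ]
    for c in select_pseudo_labels(demo):
        print(c.object_id, c.albedo_path, c.score)
```

Under these assumptions, the selected albedo pseudo-labels would then be used to fine-tune the diffusion model on the real domain; the split into an absolute-threshold phase followed by a relative-ranking fallback mirrors the human-evaluation intuition that absolute scores are reliable only for clearly high-quality outputs.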