DiffCAD: Weakly-Supervised Probabilistic CAD Model Retrieval and Alignment from an RGB Image

Daoyi Gao, David Rozenberszki, Stefan Leutenegger, Angela Dai

Publication: Contribution to journal › Article › Peer-reviewed

Abstract

Perceiving 3D structures from RGB images based on CAD model primitives can enable an effective, efficient 3D object-based representation of scenes. However, current approaches rely on supervision from expensive yet imperfect annotations of CAD models associated with real images, and encounter challenges due to the inherent ambiguities in the task - both in the depth-scale ambiguity of monocular perception and in inexact matches of CAD database models to real observations. We thus propose DiffCAD, the first weakly-supervised probabilistic approach to CAD retrieval and alignment from an RGB image. We learn a probabilistic model through diffusion, modeling likely distributions of shape, pose, and scale of CAD objects in an image. This enables multi-hypothesis generation of different plausible CAD reconstructions, requiring only a few hypotheses to characterize ambiguities in depth/scale and inexact shape matches. Our approach is trained only on synthetic data, leveraging monocular depth and mask estimates to enable robust zero-shot adaptation to various real target domains. Despite being trained solely on synthetic data, our multi-hypothesis approach can even surpass the supervised state-of-the-art on the Scan2CAD dataset by 5.9% with 8 hypotheses.
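The multi-hypothesis idea in the abstract can be illustrated with a minimal sketch: run a DDPM-style reverse process from several independent noise samples so that each sample yields one plausible parameter hypothesis (e.g. a pose/scale vector). This is purely illustrative and not the paper's implementation; `denoise_fn` and the toy denoiser below are placeholder stand-ins for the learned, image-conditioned diffusion networks.

```python
import numpy as np

def sample_hypotheses(denoise_fn, dim, n_hypotheses=8, n_steps=50, rng=None):
    """Draw several plausible parameter vectors by iterating a reverse
    process from independent Gaussian noise. Each hypothesis starts from a
    different noise sample, so the resulting set characterizes ambiguity
    rather than committing to a single answer."""
    rng = np.random.default_rng(rng)
    # K independent noise initializations, one per hypothesis.
    x = rng.standard_normal((n_hypotheses, dim))
    for t in reversed(range(n_steps)):
        x = denoise_fn(x, t)  # one reverse-diffusion step (placeholder)
    return x

# Hypothetical denoiser: nudges samples toward a fixed mode. A real model
# would instead be conditioned on the input image.
target = np.array([1.0, 0.0, 0.5])
toy_denoiser = lambda x, t: x + 0.1 * (target - x)

hypotheses = sample_hypotheses(toy_denoiser, dim=3, n_hypotheses=8, rng=0)
```

With 8 hypotheses (matching the count used in the abstract's Scan2CAD result), a downstream step could score each candidate against the observation and keep the best.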

Original language: English
Article number: 106
Journal: ACM Transactions on Graphics
Volume: 43
Issue number: 4
Publication status: Published - 19 July 2024
