DiffComplete: Diffusion-based Generative 3D Shape Completion

Ruihang Chu, Enze Xie, Shentong Mo, Zhenguo Li, Matthias Nießner, Chi-Wing Fu, Jiaya Jia

Publication: Journal contribution › Conference article › Peer-reviewed

1 citation (Scopus)

Abstract

We introduce a new diffusion-based approach for shape completion on 3D range scans. Compared with prior deterministic and probabilistic methods, we strike a balance between realism, multi-modality, and high fidelity. We propose DiffComplete, which casts shape completion as a generative task conditioned on the incomplete shape. Our key designs are two-fold. First, we devise a hierarchical feature aggregation mechanism to inject conditional features in a spatially consistent manner. This lets us capture both local details and broader contexts of the conditional inputs to control the shape completion. Second, we propose an occupancy-aware fusion strategy that enables completion from multiple partial shapes and allows greater flexibility in the input conditions. DiffComplete sets a new state-of-the-art (e.g., a 40% reduction in l1 error) on two large-scale 3D shape completion benchmarks. Our completed shapes not only appear more realistic than those of deterministic methods but also match the ground truths more closely than those of probabilistic alternatives. Further, DiffComplete generalizes strongly to objects of entirely unseen classes in both synthetic and real data, eliminating the need for model re-training in various applications.
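To make the two design ideas concrete, the sketch below illustrates (a) fusing several partial occupancy grids into a single condition volume and (b) one standard DDPM reverse step that such a conditioned denoiser would drive. This is a minimal, hypothetical simplification: the paper's occupancy-aware fusion is learned inside the network, whereas here it is approximated by a per-voxel mean over the scans that observe each voxel, and the noise prediction `eps_pred` stands in for the output of the conditional 3D U-Net.

```python
import numpy as np

def fuse_partial_shapes(grids):
    """Occupancy-aware fusion (illustrative stand-in): each voxel takes
    the mean of the partial grids that actually observe it (value > 0);
    voxels unobserved by every scan stay 0."""
    grids = np.stack(grids)                      # (K, D, H, W)
    mask = (grids > 0).astype(float)             # which scans observe each voxel
    denom = np.clip(mask.sum(axis=0), 1, None)   # avoid divide-by-zero
    return (grids * mask).sum(axis=0) / denom

def ddpm_reverse_step(x_t, t, eps_pred, betas, rng=np.random.default_rng(0)):
    """One standard DDPM reverse step; in DiffComplete, eps_pred would
    come from a 3D U-Net conditioned on the fused partial shape."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    coef = betas[t] / np.sqrt(1.0 - alpha_bar[t])
    mean = (x_t - coef * eps_pred) / np.sqrt(alphas[t])
    if t == 0:                                   # final step is deterministic
        return mean
    return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
```

For example, two partial scans that both see a voxel contribute the average of their occupancy values there, while a voxel seen by only one scan keeps that scan's value; the fused grid then serves as the condition for every denoising step.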

Original language: English
Journal: Advances in Neural Information Processing Systems
Volume: 36
Publication status: Published - 2023
Event: 37th Conference on Neural Information Processing Systems, NeurIPS 2023 - New Orleans, United States
Duration: 10 Dec 2023 - 16 Dec 2023

