DeepDeform: Learning Non-Rigid RGB-D Reconstruction with Semi-Supervised Data

Aljaž Božič, Michael Zollhöfer, Christian Theobalt, Matthias Nießner

Research output: Contribution to journal › Conference article › peer-review

64 Scopus citations

Abstract

Applying data-driven approaches to non-rigid 3D reconstruction has been difficult, which we believe can be attributed to the lack of a large-scale training corpus. One recent approach proposes self-supervision based on non-rigid reconstruction. Unfortunately, this method fails for important cases such as highly non-rigid deformations. We address this problem of lack of data by introducing a novel semi-supervised strategy to obtain dense inter-frame correspondences from a sparse set of annotations. This way, we obtain a large dataset of 400 scenes, over 390,000 RGB-D frames, and 5,533 densely aligned frame pairs; in addition, we provide a test set along with several metrics for evaluation. Based on this corpus, we introduce a data-driven non-rigid feature matching approach, which we integrate into an optimization-based reconstruction pipeline. Here, we propose a new neural network that operates on RGB-D frames, while maintaining robustness under large non-rigid deformations and producing accurate predictions. Our approach significantly outperforms existing non-rigid reconstruction methods that do not use learned data terms, as well as learning-based approaches that only use self-supervision.
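To make the idea of a learned data term concrete, the sketch below shows one plausible way to train descriptors for non-rigid RGB-D correspondence matching: a small Siamese-style encoder embeds RGB-D patches so that annotated correspondences land close together in feature space. The architecture, patch size, and contrastive loss here are illustrative assumptions for this kind of pipeline, not the network or training objective proposed in the paper.

```python
# Minimal sketch (assumed, not the paper's architecture): a Siamese RGB-D patch
# encoder trained with a contrastive loss on sparse correspondence annotations.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RGBDPatchEncoder(nn.Module):
    """Embeds 4-channel (RGB + depth) patches into an L2-normalized descriptor."""

    def __init__(self, descriptor_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, descriptor_dim, kernel_size=3, stride=2, padding=1),
        )

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (B, 4, H, W) -> (B, descriptor_dim)
        features = self.conv(patches).mean(dim=(2, 3))
        return F.normalize(features, dim=1)


def contrastive_matching_loss(src, pos, neg, margin: float = 0.5):
    """Pull annotated correspondences together, push non-matches apart."""
    pos_dist = (src - pos).pow(2).sum(dim=1)
    neg_dist = (src - neg).pow(2).sum(dim=1)
    return (pos_dist + F.relu(margin - neg_dist)).mean()


if __name__ == "__main__":
    encoder = RGBDPatchEncoder()
    # Dummy batch of source / matching / non-matching RGB-D patches.
    src = encoder(torch.randn(8, 4, 64, 64))
    pos = encoder(torch.randn(8, 4, 64, 64))
    neg = encoder(torch.randn(8, 4, 64, 64))
    loss = contrastive_matching_loss(src, pos, neg)
    loss.backward()
    print(f"matching loss: {loss.item():.4f}")
```

At test time, descriptors of this kind could supply candidate matches between frames, which an optimization-based non-rigid reconstruction backend then uses as data terms alongside geometric constraints.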

Original language: English
Article number: 9156355
Pages (from-to): 7000-7010
Number of pages: 11
Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
State: Published - 2020
Event: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020 - Virtual, Online, United States
Duration: 14 Jun 2020 to 19 Jun 2020
