State of the art on 3D reconstruction with RGB-D cameras

Michael Zollhöfer, Patrick Stotko, Andreas Görlitz, Christian Theobalt, Matthias Nießner, Reinhard Klein, Andreas Kolb

Research output: Contribution to journal › Article › peer-review

272 Scopus citations

Abstract

The advent of affordable consumer-grade RGB-D cameras has brought about a profound advancement of visual scene reconstruction methods. Both computer graphics and computer vision researchers have spent significant effort developing entirely new algorithms that capture comprehensive shape models of static and dynamic scenes with RGB-D cameras. This has led to significant advances in the state of the art along several dimensions. Some methods achieve very high reconstruction detail despite limited sensor resolution. Others achieve real-time performance, though possibly at lower quality. New concepts were developed to capture scenes at larger spatial and temporal extents. Other recent algorithms complement shape reconstruction with concurrent material and lighting estimation, even in general scenes and under unconstrained conditions. In this state-of-the-art report, we analyze these recent developments in RGB-D scene reconstruction in detail and review the essential related work. We explain, compare, and critically analyze the common underlying algorithmic concepts that enabled these recent advancements. Furthermore, we show how algorithms are designed to best exploit the benefits of RGB-D data while suppressing its often non-trivial distortions. In addition, this report identifies and discusses important open research questions and suggests relevant directions for future work.
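The basic operation underlying the reconstruction pipelines the report surveys is back-projecting each depth measurement into a 3D point via the pinhole camera model. The following is a minimal illustrative sketch of that step, not the formulation of any specific method covered in the report; the intrinsics (fx, fy, cx, cy) and the example values are hypothetical, chosen to resemble a typical consumer RGB-D sensor.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into a 3D point cloud.

    Standard pinhole model: a pixel (u, v) with depth d maps to
    X = (u - cx) * d / fx, Y = (v - cy) * d / fy, Z = d.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Discard invalid measurements (zero depth), a common RGB-D sensor artifact.
    return points[points[:, 2] > 0]

# Hypothetical 640x480 depth frame with intrinsics resembling a consumer sensor.
depth = np.random.uniform(0.5, 4.0, size=(480, 640))
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3) when all depths are valid
```

Full reconstruction systems then fuse such per-frame point clouds into a common model, e.g. via camera tracking and volumetric or point-based integration, which is where the methods surveyed in the report differ.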

Original language: English
Pages (from-to): 625-652
Number of pages: 28
Journal: Computer Graphics Forum
Volume: 37
Issue number: 2
DOIs
State: Published - 2018

Keywords

  • Appearance and texture representations
  • Computing methodologies
  • Motion capture
  • Reconstruction
