TY - JOUR
T1 - 3DLite: Towards commodity 3D scanning for content creation
T2 - ACM SIGGRAPH Asia Conference, SA 2017
AU - Huang, Jingwei
AU - Dai, Angela
AU - Guibas, Leonidas
AU - Niessner, Matthias
N1 - Publisher Copyright:
© 2017 Association for Computing Machinery.
PY - 2017/11/20
Y1 - 2017/11/20
N2 - We present 3DLite, a novel approach to reconstruct 3D environments using consumer RGB-D sensors, taking a step towards directly utilizing captured 3D content in graphics applications, such as video games, VR, or AR. Rather than reconstructing an accurate one-to-one representation of the real world, our method computes a lightweight, low-polygonal geometric abstraction of the scanned geometry. We argue that for many graphics applications it is much more important to obtain high-quality surface textures than highly detailed geometry. To this end, we compensate for motion blur, auto-exposure artifacts, and micro-misalignments in camera poses by warping and stitching image fragments from low-quality RGB input data to achieve high-resolution, sharp surface textures. In addition to the observed regions of a scene, we extrapolate the scene geometry, as well as the mapped surface textures, to obtain a complete 3D model of the environment. We show that a simple planar abstraction of the scene geometry is ideally suited for this completion task, enabling 3DLite to produce complete, lightweight, and visually compelling 3D scene models. We believe that these CAD-like reconstructions are an important step towards leveraging RGB-D scanning in actual content creation pipelines.
AB - We present 3DLite, a novel approach to reconstruct 3D environments using consumer RGB-D sensors, taking a step towards directly utilizing captured 3D content in graphics applications, such as video games, VR, or AR. Rather than reconstructing an accurate one-to-one representation of the real world, our method computes a lightweight, low-polygonal geometric abstraction of the scanned geometry. We argue that for many graphics applications it is much more important to obtain high-quality surface textures than highly detailed geometry. To this end, we compensate for motion blur, auto-exposure artifacts, and micro-misalignments in camera poses by warping and stitching image fragments from low-quality RGB input data to achieve high-resolution, sharp surface textures. In addition to the observed regions of a scene, we extrapolate the scene geometry, as well as the mapped surface textures, to obtain a complete 3D model of the environment. We show that a simple planar abstraction of the scene geometry is ideally suited for this completion task, enabling 3DLite to produce complete, lightweight, and visually compelling 3D scene models. We believe that these CAD-like reconstructions are an important step towards leveraging RGB-D scanning in actual content creation pipelines.
KW - RGB-D
KW - Scan
KW - Texture mapping
UR - http://www.scopus.com/inward/record.url?scp=85038958877&partnerID=8YFLogxK
U2 - 10.1145/3130800.3130824
DO - 10.1145/3130800.3130824
M3 - Conference article
AN - SCOPUS:85038958877
SN - 0730-0301
VL - 36
JO - ACM Transactions on Graphics
JF - ACM Transactions on Graphics
IS - 6
M1 - a203
Y2 - 27 November 2017 through 30 November 2017
ER -