TY - GEN
T1 - Parsing geometry using structure-aware shape templates
AU - Ganapathi-Subramanian, Vignesh
AU - Diamanti, Olga
AU - Pirk, Soeren
AU - Tang, Chengcheng
AU - Niessner, Matthias
AU - Guibas, Leonidas
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/10/12
Y1 - 2018/10/12
N2 - Real-life man-made objects often exhibit strong and easily identifiable structure, as a direct result of their design or their intended functionality. Structure typically appears in the form of individual parts and their arrangement. Knowing about object structure can be an important cue for object recognition and scene understanding - a key goal for various AR and robotics applications. However, commodity RGB-D sensors used in these scenarios only produce raw, unorganized point clouds, without structural information about the captured scene. Moreover, the generated data is commonly partial and susceptible to artifacts and noise, which makes inferring the structure of scanned objects challenging. In this paper, we organize large shape collections into parameterized shape templates to capture the underlying structure of the objects. The templates allow us to transfer the structural information onto new objects and incomplete scans. We employ a deep neural network that matches the partial scan with one of the shape templates, then matches and fits it to complete and detailed models from the collection. This allows us to faithfully label its parts and to guide the reconstruction of the scanned object. We showcase the effectiveness of our method by comparing it to other state-of-the-art approaches.
AB - Real-life man-made objects often exhibit strong and easily identifiable structure, as a direct result of their design or their intended functionality. Structure typically appears in the form of individual parts and their arrangement. Knowing about object structure can be an important cue for object recognition and scene understanding - a key goal for various AR and robotics applications. However, commodity RGB-D sensors used in these scenarios only produce raw, unorganized point clouds, without structural information about the captured scene. Moreover, the generated data is commonly partial and susceptible to artifacts and noise, which makes inferring the structure of scanned objects challenging. In this paper, we organize large shape collections into parameterized shape templates to capture the underlying structure of the objects. The templates allow us to transfer the structural information onto new objects and incomplete scans. We employ a deep neural network that matches the partial scan with one of the shape templates, then matches and fits it to complete and detailed models from the collection. This allows us to faithfully label its parts and to guide the reconstruction of the scanned object. We showcase the effectiveness of our method by comparing it to other state-of-the-art approaches.
KW - Partial Shape Recovery
KW - Shape Primitives
KW - Shape Reconstruction
KW - Shape Templates
KW - Template Fitting
UR - http://www.scopus.com/inward/record.url?scp=85056790125&partnerID=8YFLogxK
U2 - 10.1109/3DV.2018.00082
DO - 10.1109/3DV.2018.00082
M3 - Conference contribution
AN - SCOPUS:85056790125
T3 - Proceedings - 2018 International Conference on 3D Vision, 3DV 2018
SP - 672
EP - 681
BT - Proceedings - 2018 International Conference on 3D Vision, 3DV 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 6th International Conference on 3D Vision, 3DV 2018
Y2 - 5 September 2018 through 8 September 2018
ER -