TY - JOUR
T1 - Generation of ground truth datasets for the analysis of 3D point clouds in urban scenes acquired via different sensors
AU - Xu, Y.
AU - Sun, Z.
AU - Boerner, R.
AU - Koch, T.
AU - Hoegner, L.
AU - Stilla, U.
N1 - Publisher Copyright:
© Authors 2018. CC BY 4.0 License.
PY - 2018/4/30
Y1 - 2018/4/30
N2 - In this work, we report a novel way of generating ground truth datasets for analyzing point clouds from different sensors and for validating algorithms. Instead of directly labeling a large number of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid of the testing site is generated. Then, with the help of a set of basic labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, so that all points are organized in 3D grids of multiple resolutions. When automatically annotating new testing point clouds, a voting-based approach is applied to the labeled points within the multi-resolution voxels in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained via various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds of different sensors of the same scene directly by considering the corresponding labels of the 3D space in which the points are located, which is convenient for the validation and evaluation of algorithms related to point cloud interpretation and semantic segmentation.
AB - In this work, we report a novel way of generating ground truth datasets for analyzing point clouds from different sensors and for validating algorithms. Instead of directly labeling a large number of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid of the testing site is generated. Then, with the help of a set of basic labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, so that all points are organized in 3D grids of multiple resolutions. When automatically annotating new testing point clouds, a voting-based approach is applied to the labeled points within the multi-resolution voxels in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained via various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds of different sensors of the same scene directly by considering the corresponding labels of the 3D space in which the points are located, which is convenient for the validation and evaluation of algorithms related to point cloud interpretation and semantic segmentation.
KW - 3D space labeling
KW - Different sensors
KW - Multi-resolution voxel structure
KW - Point clouds
UR - http://www.scopus.com/inward/record.url?scp=85046977749&partnerID=8YFLogxK
U2 - 10.5194/isprs-archives-XLII-3-2009-2018
DO - 10.5194/isprs-archives-XLII-3-2009-2018
M3 - Conference article
AN - SCOPUS:85046977749
SN - 1682-1750
VL - 42
SP - 2009
EP - 2015
JO - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives
JF - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives
IS - 3
T2 - 2018 ISPRS TC III Mid-Term Symposium on Developments, Technologies and Applications in Remote Sensing
Y2 - 7 May 2018 through 10 May 2018
ER -