Abstract
In this paper we present a volumetric method for the 3-D reconstruction of real-world objects from multiple calibrated camera views. The representation of the objects is fully volume-based, and no explicit surface description is needed. The approach is based on multi-hypothesis tests of the voxel model back-projected into the image planes. All camera views are incorporated in the reconstruction process simultaneously, and no explicit data fusion is needed. In the first step, each voxel of the viewing volume is filled with several color hypotheses originating from different camera views. This leads to an overcomplete representation of the 3-D object, in which each voxel typically contains multiple hypotheses. In the second step, only those hypotheses that are consistent with all camera views in which the voxel is visible are retained. Voxels without a valid hypothesis are considered transparent. The methodology of our approach combines the advantages of silhouette-based and image feature-based methods. Experimental results on real and synthetic image data show the excellent visual quality of the voxel-based 3-D reconstruction.
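To make the two-step procedure concrete, the following is a minimal Python sketch of a multi-hypothesis voxel test under simplifying assumptions: pinhole cameras with known 3x4 projection matrices, a plain Euclidean color distance, and no occlusion/visibility reasoning (which the paper's method does take into account). The names `Camera`, `reconstruct_voxels`, and the `color_threshold` parameter are illustrative and not taken from the paper.

```python
import numpy as np

class Camera:
    """Hypothetical pinhole camera: a 3x4 projection matrix P and a color image."""
    def __init__(self, P, image):
        self.P = P          # 3x4 projection matrix (world -> image)
        self.image = image  # H x W x 3 color image

    def project(self, X):
        """Project a 3-D world point to integer pixel coordinates, or None if not visible."""
        x = self.P @ np.append(X, 1.0)
        if x[2] <= 0:
            return None
        u, v = int(round(x[0] / x[2])), int(round(x[1] / x[2]))
        h, w = self.image.shape[:2]
        if 0 <= v < h and 0 <= u < w:
            return u, v
        return None

def reconstruct_voxels(voxel_centers, cameras, color_threshold=30.0):
    """Simplified two-step multi-hypothesis test per voxel.

    Step 1: collect one color hypothesis per camera into which the voxel projects.
    Step 2: keep only hypotheses consistent with all observing views; a voxel with
            no surviving hypothesis is marked transparent (None).
    """
    result = []
    for X in voxel_centers:
        # Step 1: gather color hypotheses from every camera that sees the voxel.
        observations = []
        for cam in cameras:
            uv = cam.project(X)
            if uv is not None:
                u, v = uv
                observations.append(cam.image[v, u].astype(float))

        # Step 2: a hypothesis survives only if it agrees with every observation.
        surviving = [
            h for h in observations
            if all(np.linalg.norm(h - o) < color_threshold for o in observations)
        ]
        result.append(np.mean(surviving, axis=0) if surviving else None)
    return result
```

The sketch only illustrates the hypothesis-generation and consistency-test structure; the paper's reconstruction additionally restricts the consistency check to views where the voxel is actually visible, which requires occlusion handling omitted here.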
| Original language | English |
| --- | --- |
| Pages (from-to) | 3509-3515 |
| Number of pages | 7 |
| Journal | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings |
| Volume | 6 |
| DOIs | |
| State | Published - 1999 |
| Externally published | Yes |
| Event | Proceedings of the 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-99), Phoenix, AZ, USA, 15 Mar 1999 → 19 Mar 1999 |