The double sphere camera model

Vladyslav Usenko, Nikolaus Demmel, Daniel Cremers

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Vision-based motion estimation and 3D reconstruction, which have numerous applications (e.g., autonomous driving, navigation systems for airborne devices, and augmented reality), are receiving significant research attention. To increase accuracy and robustness, several researchers have recently demonstrated the benefit of using large field-of-view cameras for such applications. In this paper, we provide an extensive review of existing models for large field-of-view cameras. For each model, we provide the projection and unprojection functions and the subspace of points that result in a valid projection. We then propose the Double Sphere camera model, which fits large field-of-view lenses well, is computationally inexpensive, and has a closed-form inverse. We evaluate the model using a calibration dataset with several different lenses and compare the models using metrics relevant for visual odometry: reprojection error, as well as the computation time of the projection and unprojection functions and their Jacobians. We also provide qualitative results and discuss the performance of all models.
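Since the abstract highlights a closed-form projection/unprojection pair, a minimal sketch may make the idea concrete. The Python below follows the form in which the Double Sphere model is commonly stated: a 3D point is projected through two unit spheres offset by xi along the optical axis, with alpha blending the final pinhole projection; the intrinsics are fx, fy, cx, cy. The function names and the explicit validity guards are illustrative assumptions, not taken from the paper or any reference implementation, so the exact formulas should be verified against the paper before use.

    import math

    def ds_project(x, y, z, fx, fy, cx, cy, xi, alpha):
        # Distance to the point, and distance as seen from the second
        # sphere, which is shifted by xi along the optical axis.
        d1 = math.sqrt(x * x + y * y + z * z)
        d2 = math.sqrt(x * x + y * y + (xi * d1 + z) ** 2)

        # Points too far behind the camera have no valid image
        # (the "subspace of valid projections" the abstract mentions).
        w1 = alpha / (1.0 - alpha) if alpha <= 0.5 else (1.0 - alpha) / alpha
        w2 = (w1 + xi) / math.sqrt(2.0 * w1 * xi + xi * xi + 1.0)
        if z <= -w2 * d1:
            return None

        denom = alpha * d2 + (1.0 - alpha) * (xi * d1 + z)
        return fx * x / denom + cx, fy * y / denom + cy

    def ds_unproject(u, v, fx, fy, cx, cy, xi, alpha):
        # Closed-form inverse: pixel -> unit-length viewing ray.
        mx = (u - cx) / fx
        my = (v - cy) / fy
        r2 = mx * mx + my * my
        if alpha > 0.5 and r2 > 1.0 / (2.0 * alpha - 1.0):
            return None  # pixel outside the valid image region
        mz = (1.0 - alpha * alpha * r2) / (
            alpha * math.sqrt(1.0 - (2.0 * alpha - 1.0) * r2) + 1.0 - alpha)
        s = (mz * xi + math.sqrt(mz * mz + (1.0 - xi * xi) * r2)) / (mz * mz + r2)
        x, y, z = s * mx, s * my, s * mz - xi
        n = math.sqrt(x * x + y * y + z * z)
        return x / n, y / n, z / n

A quick round-trip check (project a point, unproject the resulting pixel, and compare the normalized directions) is a convenient sanity test; the same pattern extends to timing both functions and their Jacobians, as in the paper's evaluation.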

Original language: English
Title of host publication: Proceedings - 2018 International Conference on 3D Vision, 3DV 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 552-560
Number of pages: 9
ISBN (Electronic): 9781538684252
DOIs
State: Published - 12 Oct 2018
Event: 6th International Conference on 3D Vision, 3DV 2018 - Verona, Italy
Duration: 5 Sep 2018 - 8 Sep 2018

Publication series

Name: Proceedings - 2018 International Conference on 3D Vision, 3DV 2018

Conference

Conference: 6th International Conference on 3D Vision, 3DV 2018
Country/Territory: Italy
City: Verona
Period: 5/09/18 - 8/09/18

Keywords

  • Camera
  • Model
  • Projection
