An Online Self-Correcting Calibration Architecture for Multi-Camera Traffic Localization Infrastructure

Leah Strand, Marcel Bruckner, Venkatnarayanan Lakshminarasimhan, Alois Knoll

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Most vision-based sensing and localization infrastructure today employs conventional area-scan cameras because of their high information density and cost efficiency. While the information-rich two-dimensional images provided by such sensors make it easy to detect and classify traffic objects with deep neural networks, accurately localizing those objects in the three-dimensional real world also calls for a reliable calibration methodology that maintains accuracy not only during installation, but also under continuous operation over time. In this paper, we propose a camera calibration architecture that extracts and uses corresponding targets from high-definition maps, and augments it with an efficient stabilization mechanism that compensates for errors arising from fast transient vibrations and slow orientational drifts. Finally, we evaluate its performance on a real-world test site.
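The abstract only outlines the approach at a high level. As a rough illustration of the general pattern it describes (estimating camera extrinsics from correspondences between map-derived 3D targets and their 2D image detections, then stabilizing the estimate over time), the following minimal Python sketch combines OpenCV's solvePnP with simple exponential smoothing. The function names, the smoothing factor alpha, and the use of solvePnP are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's implementation): estimate camera
# extrinsics from HD-map targets via PnP, then exponentially smooth
# successive estimates to damp fast vibrations while still tracking
# slow orientational drift. Names and parameters are illustrative.
import numpy as np
import cv2

def estimate_pose(map_points_3d, image_points_2d, camera_matrix, dist_coeffs=None):
    """Solve for rotation/translation from 3D map targets and 2D detections."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(map_points_3d, dtype=np.float64),
        np.asarray(image_points_2d, dtype=np.float64),
        camera_matrix, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed; check 2D-3D correspondences")
    return rvec, tvec

class PoseStabilizer:
    """Exponential smoothing of pose estimates. Averaging rotation
    vectors directly is only valid for the small angular deviations
    expected from vibration and drift."""
    def __init__(self, alpha=0.1):
        self.alpha = alpha          # smaller alpha -> heavier smoothing
        self.rvec = None
        self.tvec = None

    def update(self, rvec, tvec):
        if self.rvec is None:       # first observation initializes state
            self.rvec, self.tvec = rvec.copy(), tvec.copy()
        else:
            self.rvec = (1.0 - self.alpha) * self.rvec + self.alpha * rvec
            self.tvec = (1.0 - self.alpha) * self.tvec + self.alpha * tvec
        return self.rvec, self.tvec
```

In such a scheme, each frame's fresh PnP solve is passed through PoseStabilizer.update, so the smoothed pose follows slow orientational drift while rejecting transient jitter.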

Original language: English
Title of host publication: 35th IEEE Intelligent Vehicles Symposium, IV 2024
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1666-1671
Number of pages: 6
ISBN (Electronic): 9798350348811
DOIs
State: Published - 2024
Event: 35th IEEE Intelligent Vehicles Symposium, IV 2024 - Jeju Island, Korea, Republic of
Duration: 2 Jun 2024 – 5 Jun 2024

Publication series

Name: IEEE Intelligent Vehicles Symposium, Proceedings
ISSN (Print): 1931-0587
ISSN (Electronic): 2642-7214

Conference

Conference: 35th IEEE Intelligent Vehicles Symposium, IV 2024
Country/Territory: Korea, Republic of
City: Jeju Island
Period: 2/06/24 – 5/06/24
