Improving lidar data evaluation for object detection and tracking using a priori knowledge and Sensorfusion

David Wittmann, Frederic Chucholowski, Markus Lienkamp

Publication: Contribution to book/report/conference proceedings › Conference contribution › Peer-reviewed

12 citations (Scopus)

Abstract

This paper presents a new approach to improving lidar data evaluation based on a priori knowledge. In addition to the common I- and L-shapes, the directional IS-shape, the C-shape for pedestrians and the E-shape for bicycles are introduced. Considering the expected object shape and the predicted position enables effective interpretation even of poor measurement values. A classification routine is therefore used to distinguish between three classes (cars, bicycles, pedestrians). Tracking with Kalman filters is based on class-specific dynamic models. Fusing radar objects with the a priori knowledge further improves the quality of the lidar evaluation. Experiments with real measurement data showed good results even with a single-layer lidar scanner.
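
The abstract states that tracking relies on Kalman filters with class-specific dynamic models, but the paper itself is not reproduced here. The following Python sketch is therefore only an illustration of that idea under stated assumptions: a constant-velocity Kalman filter whose process noise is tuned per object class (car, bicycle, pedestrian). The class names mirror the abstract; the state layout, noise values and the ClassSpecificTrack helper are assumptions, not the authors' implementation.

import numpy as np

# Measurement model: the lidar evaluation yields a 2-D object position (x, y).
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])

# Assumed per-class acceleration noise [m/s^2]; values are illustrative only.
CLASS_ACCEL_STD = {"car": 3.0, "bicycle": 1.5, "pedestrian": 0.8}


class ClassSpecificTrack:
    """Constant-velocity Kalman filter with class-dependent process noise."""

    def __init__(self, obj_class, x0, y0, meas_std=0.15):
        self.accel_std = CLASS_ACCEL_STD[obj_class]
        self.x = np.array([x0, y0, 0.0, 0.0])      # state: [x, y, vx, vy]
        self.P = np.diag([1.0, 1.0, 4.0, 4.0])     # initial state uncertainty
        self.R = np.eye(2) * meas_std**2           # lidar measurement noise

    def predict(self, dt):
        """Propagate the state; the result serves as the predicted (a priori) position."""
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)
        G = np.array([[0.5 * dt**2, 0],
                      [0, 0.5 * dt**2],
                      [dt, 0],
                      [0, dt]])
        Q = (G @ G.T) * self.accel_std**2          # random-acceleration process noise
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q
        return self.x[:2]

    def update(self, z):
        """Correct the state with a measured object position z = [x, y]."""
        y = z - H @ self.x                         # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)        # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P


# Usage: predict a pedestrian track, then fuse a new measured position.
track = ClassSpecificTrack("pedestrian", 5.0, 2.0)
predicted_xy = track.predict(dt=0.1)
track.update(np.array([5.05, 2.01]))

In this sketch the only class-specific element is the process noise; the paper's approach additionally couples the prediction with class-specific shape models (I-, L-, IS-, C- and E-shapes) and radar fusion, which are not modeled here.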

Original language: English
Title: ICINCO 2014 - Proceedings of the 11th International Conference on Informatics in Control, Automation and Robotics
Editors: Joaquim Filipe, Oleg Gusikhin, Kurosh Madani, Jurek Sasiadek
Publisher: SciTePress
Pages: 794-801
Number of pages: 8
ISBN (electronic): 9789897580390
DOIs
Publication status: Published - 2014
Event: 11th International Conference on Informatics in Control, Automation and Robotics, ICINCO 2014 - Vienna, Austria
Duration: 1 Sept 2014 - 3 Sept 2014

Publication series

Name: ICINCO 2014 - Proceedings of the 11th International Conference on Informatics in Control, Automation and Robotics
Volume: 1

Conference

Conference: 11th International Conference on Informatics in Control, Automation and Robotics, ICINCO 2014
Country/Territory: Austria
City: Vienna
Period: 1/09/14 - 3/09/14
