People detection by fusion of multimodal features from multimodal sensors

Martin Hofmann, Yan Li, Gerhard Rigoll

Research output: Contribution to journal › Conference article › peer-review

Abstract

In this work, we investigate the mutual impact of two kinds of fusion, namely multi-modal sensor fusion and multi-modal low-level feature fusion, on the problem of pedestrian detection in heterogeneous sensor networks. More specifically, we use an AdaBoost-based person detection system and extend it to simultaneously use data from visual color images and thermal infrared images. Additionally, heterogeneous types of features (i.e., the well-established Haar features and the relatively new Edgelet features) are used simultaneously in a joint framework. With this setup, and using publicly available data, we evaluate the mutual impact of multiple modalities and multiple feature types. We show which combinations of features and sensor modalities outperform other configurations. This demonstrates the gain from fusion and gives insight into how to set up a fusion scheme for the purpose of pedestrian detection.
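The page carries no code, but the fusion scheme the abstract describes can be sketched. Below is a minimal, hypothetical Python illustration of early feature-level fusion: Haar-like and edgelet-style features are pooled from aligned color and thermal detection windows into one vector, and AdaBoost (here scikit-learn's AdaBoostClassifier over decision stumps) selects discriminative features across both modalities and both feature types. The extractors are simplified stand-ins; the paper's actual Haar/Edgelet implementations, sensor alignment, and boosting setup are not reproduced here.

```python
# Sketch of joint multimodal feature fusion with AdaBoost.
# All function names and parameters are illustrative assumptions,
# not the authors' implementation.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from skimage.feature import haar_like_feature
from skimage.transform import integral_image

def haar_features(window):
    """Haar-like features over one grayscale detection window."""
    ii = integral_image(window)
    return haar_like_feature(ii, 0, 0, window.shape[1], window.shape[0],
                             feature_type='type-2-x')

def edgelet_features(window, n_bins=8):
    """Crude edgelet stand-in: gradient-orientation histogram."""
    gy, gx = np.gradient(window.astype(float))
    hist, _ = np.histogram(np.arctan2(gy, gx), bins=n_bins,
                           range=(-np.pi, np.pi),
                           weights=np.hypot(gx, gy))
    return hist

def fused_features(color_win, thermal_win):
    # Early fusion: pool both feature types from both modalities into
    # one vector; boosting then picks the most discriminative entries,
    # implicitly weighting modalities and feature types against each other.
    return np.concatenate([haar_features(color_win),
                           edgelet_features(color_win),
                           haar_features(thermal_win),
                           edgelet_features(thermal_win)])

# Usage (hypothetical data): X stacks fused vectors of aligned
# color/thermal windows, y holds person/non-person labels.
# clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
#                          n_estimators=200)
# clf.fit(X, y)
```

Because boosting performs feature selection, the relative frequency with which Haar versus edgelet features, and color versus thermal features, are chosen by the ensemble is one simple way to inspect which modality/feature combinations carry the most discriminative power.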

Original language: English
Journal: International Workshop on Image Analysis for Multimedia Interactive Services
State: Published - 2011
Event: 12th International Workshop on Image Analysis for Multimedia Interactive Services, WIAMIS 2011 - Delft, Netherlands
Duration: 13 Apr 2011 – 15 Apr 2011
