Abstract
In this work, we investigate the mutual impact of two kinds of fusion, namely multi-modal sensor fusion and multi-modal low-level feature fusion, on the problem of pedestrian detection in heterogeneous sensor networks. More specifically, we use an AdaBoost-based person recognition system and extend it to simultaneously use data from visual color images and thermal infrared images. Additionally, heterogeneous types of features (i.e., the well-established Haar features and the relatively new Edgelet features) are used simultaneously in a joint framework. With this setup and using publicly available data, we evaluate the mutual impact of multiple modalities and multiple feature types. We show which combinations of features and sensor modalities outperform other configurations. This demonstrates the gain of fusion and gives insight into how to set up a fusion scheme for the purpose of pedestrian detection.
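The low-level feature fusion described above can be sketched as follows: per-sample feature vectors from the color and thermal channels are concatenated into one joint vector, and a boosted ensemble is trained on the fused representation. This is a minimal toy illustration only; the feature values are synthetic stand-ins, not the paper's actual Haar or Edgelet responses, and the hand-rolled AdaBoost with decision stumps is a generic textbook variant, not the authors' detector.

```python
import numpy as np

def train_stump(X, y, w):
    """Find the weighted-error-minimizing threshold stump over all features."""
    best = None  # (error, feature index, threshold, polarity)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, j, t, pol)
    return best

def adaboost(X, y, rounds=10):
    """Classic discrete AdaBoost: reweight samples, accumulate weighted stumps."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(rounds):
        err, j, t, pol = train_stump(X, y, w)
        err = max(err, 1e-10)                      # guard against log(0)
        alpha = 0.5 * np.log((1.0 - err) / err)    # stump weight
        pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)             # upweight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, j, t, pol))
    return ensemble

def predict(ensemble, X):
    score = np.zeros(len(X))
    for alpha, j, t, pol in ensemble:
        score += alpha * np.where(pol * (X[:, j] - t) >= 0, 1, -1)
    return np.sign(score)

# Synthetic stand-ins for low-level features from the two modalities.
rng = np.random.default_rng(0)
n = 60
pos_color, pos_therm = rng.normal(2.0, 1.0, (n, 2)), rng.normal(2.0, 1.0, (n, 2))
neg_color, neg_therm = rng.normal(-2.0, 1.0, (n, 2)), rng.normal(-2.0, 1.0, (n, 2))

# Low-level feature fusion: concatenate color and thermal features per sample.
X = np.vstack([np.hstack([pos_color, pos_therm]),
               np.hstack([neg_color, neg_therm])])
y = np.concatenate([np.ones(n), -np.ones(n)])  # +1 pedestrian, -1 background

ensemble = adaboost(X, y, rounds=10)
acc = np.mean(predict(ensemble, X) == y)
print(f"training accuracy on fused features: {acc:.2f}")
```

Because boosting selects one feature per stump, the learned ensemble implicitly reveals which modality's features carry the most discriminative weight, which mirrors the kind of comparison the evaluation performs across feature/modality combinations.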
| Original language | English |
|---|---|
| Journal | International Workshop on Image Analysis for Multimedia Interactive Services |
| State | Published - 2011 |
| Event | 12th International Workshop on Image Analysis for Multimedia Interactive Services, WIAMIS 2011, Delft, Netherlands, 13 Apr 2011 → 15 Apr 2011 |