Fusion-Based Feature Attention Gate Component for Vehicle Detection Based on Event Camera

Hu Cao, Guang Chen, Jiahao Xia, Genghang Zhuang, Alois Knoll

Research output: Contribution to journal › Article › peer-review

48 Scopus citations

Abstract

In the field of autonomous vehicles, heterogeneous sensors such as LiDAR, radar, and cameras are combined to improve the accuracy and robustness of vehicle perception. Multi-modal perception and learning have proven effective in helping a vehicle understand complex environments. The event camera is a bio-inspired vision sensor that captures dynamic changes in the scene and filters out redundant information, offering high temporal resolution and high dynamic range. These characteristics give the event camera considerable application potential in the field of autonomous vehicles. In this paper, we introduce a fully convolutional neural network with a feature attention gate component (FAGC) for vehicle detection that combines frame-based and event-based vision. Both grayscale features and event features are fed into the FAGC to generate pixel-level attention coefficients that improve the feature discrimination ability of the network. Moreover, we explore the influence of different fusion strategies on the detection capability of the network. Experimental results demonstrate that our fusion method achieves the best detection accuracy and exceeds the accuracy of methods that take only a single-modality signal as input.
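The abstract describes the FAGC only at a high level: grayscale and event feature maps are fused and used to produce pixel-level attention coefficients. The following PyTorch snippet is a minimal sketch of such a gating module under that description; the specific layer choices (1x1 convolutions, additive fusion, sigmoid gating) and the names `FeatureAttentionGate`, `proj_gray`, `proj_event`, and `attend` are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch of a pixel-level feature attention gate for fusing
# frame-based (grayscale) and event-based features. Assumed design, not
# the paper's exact FAGC.
import torch
import torch.nn as nn


class FeatureAttentionGate(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Project each modality into a shared embedding space (assumption).
        self.proj_gray = nn.Conv2d(channels, channels, kernel_size=1)
        self.proj_event = nn.Conv2d(channels, channels, kernel_size=1)
        # Produce one attention coefficient per pixel, in [0, 1].
        self.attend = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, gray_feat: torch.Tensor, event_feat: torch.Tensor) -> torch.Tensor:
        # Pixel-level attention map of shape (N, 1, H, W).
        attn = self.attend(self.proj_gray(gray_feat) + self.proj_event(event_feat))
        # Reweight the fused features with the attention coefficients.
        return attn * (gray_feat + event_feat)


if __name__ == "__main__":
    fagc = FeatureAttentionGate(channels=64)
    gray = torch.randn(2, 64, 60, 80)   # frame-based (grayscale) features
    event = torch.randn(2, 64, 60, 80)  # event-based features
    print(fagc(gray, event).shape)      # torch.Size([2, 64, 60, 80])
```

In this sketch the gate suppresses or emphasizes spatial locations based on agreement between the two modalities; other fusion strategies (e.g., concatenation or channel-wise gating) could be compared in the same way the paper compares fusion variants.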

Original language: English
Pages (from-to): 24540-24548
Number of pages: 9
Journal: IEEE Sensors Journal
Volume: 21
Issue number: 21
DOIs
State: Published - 1 Nov 2021

Keywords

  • Vehicle detection
  • event camera
  • feature attention gate component (FAGC)
  • multi-modal fusion
