TY - JOUR
T1 - NeuroIV
T2 - Neuromorphic Vision Meets Intelligent Vehicle Towards Safe Driving with a New Database and Baseline Evaluations
AU - Chen, Guang
AU - Wang, Fa
AU - Li, Weijun
AU - Hong, Lin
AU - Conradt, Jörg
AU - Chen, Jieneng
AU - Zhang, Zhenyan
AU - Lu, Yiwen
AU - Knoll, Alois
N1 - Publisher Copyright:
© 2000-2011 IEEE.
PY - 2022/2/1
Y1 - 2022/2/1
N2 - Neuromorphic vision sensors such as the Dynamic and Active-pixel Vision Sensor (DAVIS), built on a silicon retina, are inspired by biological vision: they generate streams of asynchronous events that indicate local log-intensity brightness changes. Their high temporal resolution, low bandwidth, lightweight computation, and low latency make them a good fit for many motion-perception applications in intelligent vehicles. However, as a younger and smaller research field than classical computer vision, neuromorphic vision is rarely connected with the intelligent vehicle. To bridge this gap, we present three novel datasets recorded with DAVIS sensors and a depth sensor for distracted-driving research, focusing on driver drowsiness detection, driver gaze-zone recognition, and driver hand-gesture recognition. To facilitate comparison with classical computer vision, we simultaneously record RGB, depth, and infrared data with a depth sensor. In total, the dataset contains 27,360 samples. To unlock the potential of neuromorphic vision for intelligent vehicles, we employ three popular event-encoding methods to convert asynchronous event slices into event frames and adapt state-of-the-art convolutional architectures to extensively evaluate their performance on this dataset. Together with qualitative and quantitative results, this work provides a new database and baseline evaluations, named NeuroIV, in the cross-cutting areas of neuromorphic vision and intelligent vehicles.
AB - Neuromorphic vision sensors such as the Dynamic and Active-pixel Vision Sensor (DAVIS), built on a silicon retina, are inspired by biological vision: they generate streams of asynchronous events that indicate local log-intensity brightness changes. Their high temporal resolution, low bandwidth, lightweight computation, and low latency make them a good fit for many motion-perception applications in intelligent vehicles. However, as a younger and smaller research field than classical computer vision, neuromorphic vision is rarely connected with the intelligent vehicle. To bridge this gap, we present three novel datasets recorded with DAVIS sensors and a depth sensor for distracted-driving research, focusing on driver drowsiness detection, driver gaze-zone recognition, and driver hand-gesture recognition. To facilitate comparison with classical computer vision, we simultaneously record RGB, depth, and infrared data with a depth sensor. In total, the dataset contains 27,360 samples. To unlock the potential of neuromorphic vision for intelligent vehicles, we employ three popular event-encoding methods to convert asynchronous event slices into event frames and adapt state-of-the-art convolutional architectures to extensively evaluate their performance on this dataset. Together with qualitative and quantitative results, this work provides a new database and baseline evaluations, named NeuroIV, in the cross-cutting areas of neuromorphic vision and intelligent vehicles.
KW - Neuromorphic vision
KW - advanced driver assistance system
KW - database and baseline evaluations
KW - deep learning
KW - distracted driving
KW - event encoding
UR - http://www.scopus.com/inward/record.url?scp=85104099888&partnerID=8YFLogxK
U2 - 10.1109/TITS.2020.3022921
DO - 10.1109/TITS.2020.3022921
M3 - Article
AN - SCOPUS:85104099888
SN - 1524-9050
VL - 23
SP - 1171
EP - 1183
JO - IEEE Transactions on Intelligent Transportation Systems
JF - IEEE Transactions on Intelligent Transportation Systems
IS - 2
ER -