TY - GEN
T1 - Introducing A Framework for Single-Human Tracking Using Event-Based Cameras
AU - Eisl, Dominik
AU - Herzog, Fabian
AU - Dugelay, Jean-Luc
AU - Apvrille, Ludovic
AU - Rigoll, Gerhard
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Event cameras generate data based on the amount of motion present in the captured scene, making them attractive sensors for object tracking tasks. In this paper, we present a framework for tracking humans with a single event camera, consisting of three components. First, we train a Graph Neural Network (GNN) to recognize a person within the stream of events. Batches of events are represented as spatio-temporal graphs in order to preserve the sparse nature of events and retain their high temporal resolution. Subsequently, the person is localized in a weakly supervised manner by adapting the well-established method of Class Activation Maps (CAM) to our graph-based classification model. Our approach does not require ground-truth positions of humans during training. Finally, a Kalman filter is deployed for tracking, using the predicted bounding box surrounding the human as its measurement. We demonstrate that our approach achieves robust tracking results on test sequences from the Gait3 database, paving the way for further privacy-preserving methods in event-based human tracking. Code, pre-trained models, and datasets of our research are publicly available.
AB - Event cameras generate data based on the amount of motion present in the captured scene, making them attractive sensors for object tracking tasks. In this paper, we present a framework for tracking humans with a single event camera, consisting of three components. First, we train a Graph Neural Network (GNN) to recognize a person within the stream of events. Batches of events are represented as spatio-temporal graphs in order to preserve the sparse nature of events and retain their high temporal resolution. Subsequently, the person is localized in a weakly supervised manner by adapting the well-established method of Class Activation Maps (CAM) to our graph-based classification model. Our approach does not require ground-truth positions of humans during training. Finally, a Kalman filter is deployed for tracking, using the predicted bounding box surrounding the human as its measurement. We demonstrate that our approach achieves robust tracking results on test sequences from the Gait3 database, paving the way for further privacy-preserving methods in event-based human tracking. Code, pre-trained models, and datasets of our research are publicly available.
KW - Event-based Cameras
KW - Human Tracking
KW - Kalman Filtering
UR - http://www.scopus.com/inward/record.url?scp=85180799809&partnerID=8YFLogxK
U2 - 10.1109/ICIP49359.2023.10222777
DO - 10.1109/ICIP49359.2023.10222777
M3 - Conference contribution
AN - SCOPUS:85180799809
T3 - Proceedings - International Conference on Image Processing, ICIP
SP - 3269
EP - 3273
BT - 2023 IEEE International Conference on Image Processing, ICIP 2023 - Proceedings
PB - IEEE Computer Society
T2 - 30th IEEE International Conference on Image Processing, ICIP 2023
Y2 - 8 October 2023 through 11 October 2023
ER -