The TUM Gait from Audio, Image and Depth (GAID) database: Multimodal recognition of subjects and traits

Martin Hofmann, Jürgen Geiger, Sebastian Bachmann, Björn Schuller, Gerhard Rigoll

Publication: Contribution to journal › Article › Peer-reviewed

165 citations (Scopus)

Abstract

Recognizing people by the way they walk, also known as gait recognition, has been studied extensively in the recent past. Recent gait recognition methods focus solely on data extracted from an RGB video stream. With this work, we provide a means for multimodal gait recognition by introducing the freely available TUM Gait from Audio, Image and Depth (GAID) database. This database contains simultaneously recorded RGB video, depth, and audio. With 305 people in three variations, it is one of the largest gait databases to date. To further investigate the challenges of time variation, a subset of 32 people was recorded a second time. We define standardized experimental setups both for person identification and for the assessment of the soft biometrics age, gender, height, and shoe type. For all defined experiments, we present several baseline results on all available modalities. These results demonstrate that multimodal fusion is beneficial for gait recognition.
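To illustrate the kind of multimodal fusion the abstract refers to, below is a minimal sketch of weighted score-level fusion over per-modality similarity scores. This is not the paper's baseline method; all function names, weights, and matrix shapes are hypothetical, and the per-modality scores are assumed to be computed and normalized beforehand.

    import numpy as np

    def normalize(scores):
        """Min-max normalize a similarity matrix to [0, 1]."""
        lo, hi = scores.min(), scores.max()
        return (scores - lo) / (hi - lo + 1e-12)

    def fuse_scores(score_mats, weights):
        """Weighted-sum score-level fusion.

        score_mats: list of (n_probes, n_gallery) similarity matrices,
                    one per modality (e.g. RGB, depth, audio).
        weights:    per-modality weights, typically summing to 1.
        """
        fused = np.zeros_like(score_mats[0], dtype=float)
        for scores, w in zip(score_mats, weights):
            fused += w * scores
        return fused

    # Hypothetical per-modality similarity matrices (3 probes x 5 gallery).
    rng = np.random.default_rng(0)
    rgb = normalize(rng.random((3, 5)))
    depth = normalize(rng.random((3, 5)))
    audio = normalize(rng.random((3, 5)))

    fused = fuse_scores([rgb, depth, audio], weights=[0.5, 0.3, 0.2])
    predicted_ids = fused.argmax(axis=1)  # rank-1 identification per probe

Fusing at the score level keeps each modality's recognition pipeline independent; only the final similarity matrices need to be combined, which is one common way such multimodal baselines are compared.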

Original language: English
Pages (from-to): 195-206
Number of pages: 12
Journal: Journal of Visual Communication and Image Representation
Volume: 25
Issue number: 1
DOIs
Publication status: Published, January 2014
