TY - GEN
T1 - Comparative Analysis of CNN-Based Spatiotemporal Reasoning in Videos
AU - Köpüklü, Okan
AU - Herzog, Fabian
AU - Rigoll, Gerhard
N1 - Publisher Copyright:
© 2021, Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
AB - Understanding actions and gestures in video streams requires temporal reasoning over the spatial content from different time instants, i.e., spatiotemporal (ST) modeling. In this survey paper, we make a comparative analysis of different ST modeling techniques for action and gesture recognition tasks. Since Convolutional Neural Networks (CNNs) have proved to be effective feature extractors for static images, we apply ST modeling techniques to the features of static images from different time instants extracted by CNNs. All techniques are trained end-to-end together with a CNN feature extraction part and evaluated on two publicly available benchmarks: the Jester and the Something-Something datasets. The Jester dataset contains various dynamic and static hand gestures, whereas the Something-Something dataset contains actions of human-object interactions. The common characteristic of these two benchmarks is that the designed architectures need to capture the full temporal content of videos in order to correctly classify actions/gestures. Contrary to expectations, experimental results show that Recurrent Neural Network (RNN) based ST modeling techniques yield inferior results compared to other techniques such as fully convolutional architectures. Code and pretrained models for this work are publicly available (https://github.com/fubel/stmodeling).
KW - Action/gesture recognition
KW - Activity understanding
KW - CNNs
KW - RNNs
KW - Spatiotemporal modeling
UR - http://www.scopus.com/inward/record.url?scp=85103457862&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-68799-1_14
DO - 10.1007/978-3-030-68799-1_14
M3 - Conference contribution
AN - SCOPUS:85103457862
SN - 9783030687984
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 186
EP - 202
BT - Pattern Recognition. ICPR International Workshops and Challenges, 2021, Proceedings
A2 - Del Bimbo, Alberto
A2 - Cucchiara, Rita
A2 - Sclaroff, Stan
A2 - Farinella, Giovanni Maria
A2 - Mei, Tao
A2 - Bertini, Marco
A2 - Escalante, Hugo Jair
A2 - Vezzani, Roberto
PB - Springer Science and Business Media Deutschland GmbH
T2 - 25th International Conference on Pattern Recognition Workshops, ICPR 2020
Y2 - 10 January 2021 through 15 January 2021
ER -