TY - GEN
T1 - Visual focus of attention recognition from fixed chair sitting postures using RGB-D data
AU - Wolfram, Michael
AU - Ali, Haider
AU - Albu-Schäffer, Alin
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2017/1/18
Y1 - 2017/1/18
N2 - Person Activity Recognition is an important and active area of research in many robotic applications, such as Human-Robot Collaboration and assisted living systems. In these fields, the focus is often on estimating a person's visual focus of attention. Considering fixed chair sitting scenarios in which only the upper body is visible, in this paper we focus on the person's head as an important cue for visual focus of attention estimation. A non-intrusive sensor setup consisting of a single RGB-D camera in front of the person is chosen to monitor the visual focus of attention in an indoor office environment. We propose an extension of the existing head pose estimation method from [1]. The method has been evaluated on existing benchmark databases (Biwi [2] and VAP [3]). Additionally, we propose a new database (DLR FC-PEAR) acquired with the Microsoft Kinect v2. To evaluate the generalizability of our proposed extension, we have also performed the final evaluation across domains. Finally, we present the experimental results and an analysis of the limitations of our proposed framework.
AB - Person Activity Recognition is an important and active area of research in many robotic applications, such as Human-Robot Collaboration and assisted living systems. In these fields, the focus is often on estimating a person's visual focus of attention. Considering fixed chair sitting scenarios in which only the upper body is visible, in this paper we focus on the person's head as an important cue for visual focus of attention estimation. A non-intrusive sensor setup consisting of a single RGB-D camera in front of the person is chosen to monitor the visual focus of attention in an indoor office environment. We propose an extension of the existing head pose estimation method from [1]. The method has been evaluated on existing benchmark databases (Biwi [2] and VAP [3]). Additionally, we propose a new database (DLR FC-PEAR) acquired with the Microsoft Kinect v2. To evaluate the generalizability of our proposed extension, we have also performed the final evaluation across domains. Finally, we present the experimental results and an analysis of the limitations of our proposed framework.
UR - http://www.scopus.com/inward/record.url?scp=85015196549&partnerID=8YFLogxK
U2 - 10.1109/ISM.2016.31
DO - 10.1109/ISM.2016.31
M3 - Conference contribution
AN - SCOPUS:85015196549
T3 - Proceedings - 2016 IEEE International Symposium on Multimedia, ISM 2016
SP - 325
EP - 328
BT - Proceedings - 2016 IEEE International Symposium on Multimedia, ISM 2016
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 18th IEEE International Symposium on Multimedia, ISM 2016
Y2 - 11 December 2016 through 13 December 2016
ER -