TY - JOUR
T1 - THÖR-MAGNI
T2 - A large-scale indoor motion capture recording of human movement and robot interaction
AU - Schreiter, Tim
AU - Rodrigues de Almeida, Tiago
AU - Zhu, Yufei
AU - Gutierrez Maestro, Eduardo
AU - Morillo-Mendez, Lucas
AU - Rudenko, Andrey
AU - Palmieri, Luigi
AU - Kucner, Tomasz P.
AU - Magnusson, Martin
AU - Lilienthal, Achim J.
N1 - Publisher Copyright:
© The Author(s) 2024.
PY - 2024
Y1 - 2024
N2 - We present a new large dataset of indoor human and robot navigation and interaction, called THÖR-MAGNI, that is designed to facilitate research on social human navigation: for example, modeling and predicting human motion, analyzing goal-oriented interactions between humans and robots, and investigating visual attention in a social interaction context. THÖR-MAGNI was created to fill a gap in available datasets for human motion analysis and HRI. This gap is characterized by a lack of comprehensive inclusion of exogenous factors and essential target agent cues, which hinders the development of robust models capable of capturing the relationship between contextual cues and human behavior in different scenarios. Unlike existing datasets, THÖR-MAGNI includes a broader set of contextual features and offers multiple scenario variations to facilitate factor isolation. The dataset includes many social human–human and human–robot interaction scenarios, rich context annotations, and multi-modal data, such as walking trajectories, gaze-tracking data, and lidar and camera streams recorded from a mobile robot. We also provide a set of tools for visualization and processing of the recorded data. THÖR-MAGNI is, to the best of our knowledge, unique in the amount and diversity of sensor data collected in a contextualized and socially dynamic environment, capturing natural human–robot interactions.
KW - dataset for human motion
KW - human trajectory prediction
KW - human-aware motion planning
KW - human–robot collaboration
KW - social HRI
UR - http://www.scopus.com/inward/record.url?scp=85206993138&partnerID=8YFLogxK
DO - 10.1177/02783649241274794
M3 - Article
AN - SCOPUS:85206993138
SN - 0278-3649
JO - International Journal of Robotics Research
JF - International Journal of Robotics Research
ER -