TY - GEN
T1 - Robust semantic representations for inferring human co-manipulation activities even with different demonstration styles
AU - Ramirez-Amaro, Karinne
AU - Dean-Leon, Emmanuel
AU - Cheng, Gordon
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2015/12/22
Y1 - 2015/12/22
N2 - In this work we present a novel method that generates compact semantic models for inferring human coordinated activities, including tasks that require the understanding of dual-arm sequencing. These models are robust and invariant to observations of different execution styles of the same activity. Additionally, the obtained semantic representations are able to re-use the acquired knowledge to infer different types of activities. Furthermore, our method is capable of inferring dual-arm co-manipulation activities, and it considers the correct synchronization between the inferred activities to achieve the desired common goal. We propose a system that, rather than focusing on the different execution styles, extracts the meaning of the observed task by means of semantic representations. The proposed method is a hierarchical approach that first extracts the relevant information from the observations. Then, it infers the observed human activities based on the obtained semantic representations. After that, these inferred activities can be used to trigger motion primitives in a robot to execute the demonstrated task. In order to validate the portability of our system, we have evaluated our semantic-based method on two different humanoid platforms, the iCub robot and the REEM-C robot, demonstrating that our system is capable of correctly segmenting and inferring the observed activities on-line with an average accuracy of 84.8%.
AB - In this work we present a novel method that generates compact semantic models for inferring human coordinated activities, including tasks that require the understanding of dual-arm sequencing. These models are robust and invariant to observations of different execution styles of the same activity. Additionally, the obtained semantic representations are able to re-use the acquired knowledge to infer different types of activities. Furthermore, our method is capable of inferring dual-arm co-manipulation activities, and it considers the correct synchronization between the inferred activities to achieve the desired common goal. We propose a system that, rather than focusing on the different execution styles, extracts the meaning of the observed task by means of semantic representations. The proposed method is a hierarchical approach that first extracts the relevant information from the observations. Then, it infers the observed human activities based on the obtained semantic representations. After that, these inferred activities can be used to trigger motion primitives in a robot to execute the demonstrated task. In order to validate the portability of our system, we have evaluated our semantic-based method on two different humanoid platforms, the iCub robot and the REEM-C robot, demonstrating that our system is capable of correctly segmenting and inferring the observed activities on-line with an average accuracy of 84.8%.
KW - Data mining
KW - Feature extraction
KW - Hidden Markov models
KW - Motion segmentation
KW - Robustness
KW - Semantics
UR - http://www.scopus.com/inward/record.url?scp=84962326694&partnerID=8YFLogxK
U2 - 10.1109/HUMANOIDS.2015.7363496
DO - 10.1109/HUMANOIDS.2015.7363496
M3 - Conference contribution
AN - SCOPUS:84962326694
T3 - IEEE-RAS International Conference on Humanoid Robots
SP - 1141
EP - 1146
BT - Humanoids 2015
PB - IEEE Computer Society
T2 - 15th IEEE RAS International Conference on Humanoid Robots, Humanoids 2015
Y2 - 3 November 2015 through 5 November 2015
ER -