TY - JOUR
T1 - Transferring skills to humanoid robots by extracting semantic representations from observations of human activities
AU - Ramirez-Amaro, Karinne
AU - Beetz, Michael
AU - Cheng, Gordon
N1 - Publisher Copyright:
© 2015 Elsevier B.V.
PY - 2017/6/1
Y1 - 2017/6/1
AB - In this study, we present a framework that infers human activities from observations using semantic representations. The proposed framework can be utilized to address the challenging problem of transferring tasks and skills to humanoid robots. We propose a method that allows robots to obtain a higher-level understanding of a demonstrator's behavior via semantic representations. This abstraction from observations captures the “essence” of the activity, thereby indicating which aspects of the demonstrator's actions should be executed in order to accomplish the required activity. Thus, a meaningful semantic description is obtained in terms of human motions and object properties. In addition, we validated the obtained semantic rules under different conditions, i.e., three different and complex kitchen activities: 1) making a pancake; 2) making a sandwich; and 3) setting the table. We present quantitative and qualitative results which demonstrate that, without any further training, our system can deal with time restrictions, different execution styles of the same task by several participants, and different labeling strategies. This means that the rules obtained from one scenario remain valid even in new situations, which demonstrates that the inferred representations do not depend on the task performed. The results show that our system correctly recognized human behaviors in real time in around 87.44% of cases, which was even better than a random participant recognizing the behaviors of another human (about 76.68%). In particular, the acquired semantic rules can be used to effectively improve the dynamic growth of the ontology-based knowledge representation. Hence, this method can be used flexibly across different demonstrations and constraints to infer and achieve a goal similar to the one observed. Furthermore, the inference capability introduced in this study was integrated into a joint-space control loop of a humanoid robot, an iCub, to achieve goals similar to those of the human demonstrator online.
KW - Activity recognition
KW - Human understanding
KW - Knowledge-based
KW - Semantic representation
KW - Skill transfer
UR - http://www.scopus.com/inward/record.url?scp=84940863786&partnerID=8YFLogxK
U2 - 10.1016/j.artint.2015.08.009
DO - 10.1016/j.artint.2015.08.009
M3 - Article
AN - SCOPUS:84940863786
SN - 0004-3702
VL - 247
SP - 95
EP - 118
JO - Artificial Intelligence
JF - Artificial Intelligence
ER -