Transferring skills to humanoid robots by extracting semantic representations from observations of human activities

Karinne Ramirez-Amaro, Michael Beetz, Gordon Cheng

Research output: Contribution to journal › Article › peer-review

90 Scopus citations

Abstract

In this study, we present a framework that infers human activities from observations using semantic representations. The proposed framework can be utilized to address the challenging problem of transferring tasks and skills to humanoid robots. We propose a method that allows robots to obtain a higher-level understanding of a demonstrator's behavior via semantic representations. This abstraction from observations captures the “essence” of the activity, thereby indicating which aspects of the demonstrator's actions should be executed in order to accomplish the required activity. Thus, a meaningful semantic description is obtained in terms of human motions and object properties. In addition, we validated the obtained semantic rules under different conditions, i.e., three different and complex kitchen activities: 1) making a pancake; 2) making a sandwich; and 3) setting the table. We present quantitative and qualitative results which demonstrate that, without any further training, our system can deal with time restrictions, different execution styles of the same task by several participants, and different labeling strategies. This means that the rules obtained from one scenario remain valid even in new situations, which demonstrates that the inferred representations do not depend on the task performed. The results show that our system correctly recognized human behaviors in real time in around 87.44% of cases, which was even better than a random participant recognizing the behaviors of another human (about 76.68%). In particular, the acquired semantic rules can be used to effectively improve the dynamic growth of the ontology-based knowledge representation. Hence, this method can be applied flexibly across different demonstrations and constraints to infer and achieve a goal similar to the one observed. Furthermore, the inference capability introduced in this study was integrated into a joint-space control loop for a humanoid robot, an iCub, allowing it to achieve goals similar to those of the human demonstrator online.
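To make the idea of semantic rules concrete, the following is a minimal, illustrative sketch of how an activity label might be inferred from a demonstrator's hand motion and object properties. The attribute names (hand_moving, object_in_hand, object_acted_on) and the rules themselves are assumptions for illustration only; the paper learns its rules from observed demonstrations rather than hard-coding them.

```python
# Minimal sketch of rule-based activity inference from motion and object
# properties, in the spirit of the semantic representations described in
# the abstract. Attribute names and rules are illustrative assumptions,
# not the paper's actual learned decision rules.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    hand_moving: bool               # is the demonstrator's hand in motion?
    object_in_hand: Optional[str]   # object currently held, if any
    object_acted_on: Optional[str]  # object the hand is acting on, if any

def infer_activity(obs: Observation) -> str:
    """Map low-level motion/object properties to a high-level activity."""
    if not obs.hand_moving and obs.object_in_hand is None:
        return "idle"
    if obs.hand_moving and obs.object_in_hand is None:
        # Hand moving without holding anything: reaching if a target exists.
        return "reach" if obs.object_acted_on else "move"
    # Holding an object; the acted-on object refines the activity.
    if obs.object_acted_on is None:
        return "take"
    return f"use {obs.object_in_hand} on {obs.object_acted_on}"

# Example: a hand holding a spatula while acting on a pancake.
print(infer_activity(Observation(True, "spatula", "pancake")))
```

Because such rules reference only generic motion and object properties rather than task-specific details, the same rule set can in principle be reused across scenarios, which is the task-independence property the abstract reports.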

Original language: English
Pages (from-to): 95-118
Number of pages: 24
Journal: Artificial Intelligence
Volume: 247
DOIs
State: Published - 1 Jun 2017

Keywords

  • Activity recognition
  • Human understanding
  • Knowledge-based
  • Semantic representation
  • Skill transfer

