Learning Hand Movement Interaction Control Using RNNs: From HHI to HRI

Ozgur S. Oguz, Ben M. Pfirrmann, Mingpan Guo, Dirk Wollherr

Research output: Contribution to journal › Article › peer-review

Abstract

A key problem in robotics is enabling an autonomous agent to perform human-like arm movements in close proximity to another human. However, modeling the human decision and control process that governs movement during dyadic interaction remains a challenge. Whereas most prior approaches rely on multicomponent robot motion planning architectures, we use data from two humans performing interfering arm reaching movements to extract the interaction control behavior and transfer this skill to a robotic agent. A recurrent neural network-based framework is constructed to learn a policy that computes control signals for a robot end effector so that it can take the place of one of the humans. The learned policy is benchmarked against unseen interaction data and a state-of-the-art learning-from-demonstration framework in simulated scenarios. We compare several architectures and investigate a new activation function consisting of three stacked tanh() units. The results show that the proposed framework successfully learns a policy that imitates human movement control behavior during dyadic interaction. The policy is transferred to a real robot, and its feasibility for close-proximity human-robot interaction is demonstrated.
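
The "three stacked tanh()" activation and the recurrent control policy described in the abstract can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the observation and control dimensions, the LSTM cell, the hidden size, and the placement of the stacked activation on the output layer are all hypothetical choices made for the sketch.

    import torch
    import torch.nn as nn

    def stacked_tanh(x, depth=3):
        # One plausible reading of "three stacked tanh()":
        # tanh applied repeatedly, i.e. tanh(tanh(tanh(x))) for depth=3.
        for _ in range(depth):
            x = torch.tanh(x)
        return x

    class InteractionPolicy(nn.Module):
        # Minimal recurrent policy: maps a sequence of interaction observations
        # (e.g., both agents' hand states) to end-effector control signals.
        # All dimensions below are illustrative assumptions.
        def __init__(self, obs_dim=12, hidden_dim=64, ctrl_dim=3):
            super().__init__()
            self.rnn = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, ctrl_dim)

        def forward(self, obs_seq, hidden=None):
            # obs_seq: (batch, time, obs_dim); hidden carries state across steps.
            out, hidden = self.rnn(obs_seq, hidden)
            ctrl = stacked_tanh(self.head(out))  # bounded control output
            return ctrl, hidden

Trained on recorded human-human trajectories, such a policy would be rolled out step by step at interaction time, feeding back the observed state of the human partner at each step; the actual network architectures compared in the paper may differ.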

Original language: English
Article number: 8424874
Pages (from-to): 4100-4107
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 3
Issue number: 4
DOIs
State: Published - Oct 2018

Keywords

  • human-in-the-loop
  • human-robot interaction
  • learning from demonstration
  • recurrent neural networks
