Abstract
A key problem in robotics is enabling an autonomous agent to perform human-like arm movements in close proximity to another human. However, modeling the human decision and control process governing movement during dyadic interaction is challenging. Whereas most prior approaches rely on multicomponent robot motion planning architectures, we use data of two humans performing interfering arm-reaching movements to extract the interaction behavior and transfer the control skill to a robotic agent. A recurrent neural network-based framework is constructed to learn a policy that computes control signals for a robot end effector, so that the robot can replace one of the humans. The learned policy is benchmarked against unseen interaction data and against a state-of-the-art learning-from-demonstration framework in simulated scenarios. We compare several architectures and investigate a new activation function consisting of three stacked tanh() units. The results show that the proposed framework successfully learns a policy that imitates human movement control behavior during dyadic interaction. The policy is transferred to a real robot, and its feasibility for close-proximity human-robot interaction is demonstrated.
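The abstract does not spell out how the three tanh() units are stacked; one plausible reading is repeated composition, tanh(tanh(tanh(x))). A minimal sketch under that assumption (the function name `stacked_tanh` and the `depth` parameter are illustrative, not from the paper):

```python
import numpy as np

def stacked_tanh(x, depth=3):
    """Hypothetical 'stacked tanh' activation: apply tanh `depth` times.

    Composing tanh with itself keeps the function odd and monotonic but
    compresses the output range: for depth=3 the limit as x -> +inf is
    tanh(tanh(1)) ~= 0.642 rather than 1.
    """
    for _ in range(depth):
        x = np.tanh(x)
    return x

# The composed activation stays strictly inside the range of a single tanh.
print(stacked_tanh(5.0), np.tanh(5.0))
```

Under this reading the stacked variant is a smoother, more tightly bounded squashing nonlinearity, which may be why the authors explore it for generating bounded control signals.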
| Original language | English |
|---|---|
| Article number | 8424874 |
| Pages (from-to) | 4100-4107 |
| Number of pages | 8 |
| Journal | IEEE Robotics and Automation Letters |
| Volume | 3 |
| Issue number | 4 |
| DOIs | |
| State | Published - Oct 2018 |
Keywords
- human-in-the-loop
- human-robot interaction
- learning from demonstration
- recurrent neural networks