Abstract
In this paper we present a robot control architecture for learning by imitation that takes inspiration from recent discoveries in action observation/execution experiments with humans and other primates. The architecture implements two basic processing principles: (1) imitation is primarily directed toward reproducing the outcome of an observed action sequence rather than reproducing the exact action means, and (2) the capacity to understand the motor intention of another agent is based on motor simulation. The control architecture is validated in a robot system that imitates, in a goal-directed manner, a grasping-and-placing sequence displayed by a human model. During imitation, skill transfer occurs by learning and representing appropriate goal-directed sequences of motor primitives. The robustness of the controller's goal-directed organization is tested in the presence of incomplete visual information and changes in environmental constraints.
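To make the first processing principle concrete, the sketch below illustrates goal-directed imitation in the abstract's sense: the robot plans its own sequence of motor primitives to reach the goal inferred from the demonstration, rather than copying the demonstrator's exact movements. This is only an illustrative sketch, not the paper's dynamic-field implementation; all names (`MotorPrimitive`, `plan_to_goal`, the toy state predicates) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

@dataclass
class MotorPrimitive:
    """A reusable action unit, e.g. a particular reach, grasp, or placing movement."""
    name: str
    precondition: Callable[[Dict], bool]   # can this primitive run in the current state?
    effect: Callable[[Dict], Dict]         # predicted state after executing it

def plan_to_goal(state: Dict, goal: Callable[[Dict], bool],
                 primitives: List[MotorPrimitive],
                 max_depth: int = 4) -> Optional[List[MotorPrimitive]]:
    """Depth-limited search for a primitive sequence that achieves the observed goal.

    Only the goal predicate extracted from the demonstration constrains the search,
    so the robot may reach the same outcome by different means than the demonstrator.
    """
    if goal(state):
        return []
    if max_depth == 0:
        return None
    for p in primitives:
        if p.precondition(state):
            rest = plan_to_goal(p.effect(state), goal, primitives, max_depth - 1)
            if rest is not None:
                return [p] + rest
    return None

# Toy example: after watching a grasp-and-place demonstration, imitate the outcome
# "object ends up at the target location", regardless of the demonstrated grip.
primitives = [
    MotorPrimitive("reach",
                   lambda s: not s.get("at_object", False) and not s["holding"],
                   lambda s: {**s, "at_object": True}),
    MotorPrimitive("grasp",
                   lambda s: s.get("at_object", False) and not s["holding"],
                   lambda s: {**s, "holding": True}),
    MotorPrimitive("place_at_target",
                   lambda s: s["holding"],
                   lambda s: {**s, "holding": False, "object_at_target": True}),
]
plan = plan_to_goal({"holding": False, "object_at_target": False},
                    goal=lambda s: s.get("object_at_target", False),
                    primitives=primitives)
print([p.name for p in plan])  # ['reach', 'grasp', 'place_at_target']
```

In the paper itself the sequencing is realized with dynamic fields and mirror-neuron-inspired motor simulation rather than explicit search; the sketch only conveys why organizing imitation around goals, not observed means, tolerates incomplete visual information and changed environmental constraints.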
Field | Value
---|---
Original language | English
Pages (from-to) | 353-360
Number of pages | 8
Journal | Robotics and Autonomous Systems
Volume | 54
Issue number | 5
DOIs | 
State | Published - 31 May 2006
Keywords
- Action sequence
- Dynamic field
- Imitation learning
- Mirror neurons