Learning to Act from Observation and Practice

Darrin C. Bentivegna, Christopher G. Atkeson, Aleš Ude, Gordon Cheng

Research output: Contribution to journal › Article › peer-review

39 Scopus citations

Abstract

We present a method for humanoid robots to quickly learn new dynamic tasks from observing others and from practice. We describe ways in which the robot can adapt to initial conditions and to changing conditions. Agents are given domain knowledge in the form of task primitives. A key element of our approach is to break learning problems up into as many simple learning problems as possible. We present a case study of a humanoid robot learning to play air hockey.
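Since the keywords list locally weighted learning as one of the techniques underlying the approach, the sketch below shows a generic locally weighted regression predictor of the kind that term usually refers to. The function name, Gaussian kernel, and bandwidth parameter are illustrative assumptions for this sketch and do not correspond to the authors' implementation.

```python
"""Minimal sketch of locally weighted regression (LWR).

Illustrative only: the names, kernel, and bandwidth here are assumptions,
not the implementation used in the paper.
"""

import numpy as np


def locally_weighted_predict(X, y, x_query, bandwidth=1.0):
    """Predict y at x_query by fitting a linear model weighted toward nearby data.

    X: (n, d) array of training inputs; y: (n,) array of targets.
    Training points close to x_query receive higher weight via a Gaussian kernel.
    """
    # Gaussian kernel weights based on distance to the query point.
    dists = np.linalg.norm(X - x_query, axis=1)
    w = np.exp(-0.5 * (dists / bandwidth) ** 2)

    # Augment inputs with a bias column and solve the weighted least-squares
    # problem  min_beta  sum_i w_i * (y_i - beta^T x_i)^2
    # by scaling rows with sqrt(w_i).
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(Xb * sw[:, None], y * sw, rcond=None)

    # Evaluate the local linear model at the query point.
    xq = np.append(x_query, 1.0)
    return xq @ beta


if __name__ == "__main__":
    # Toy usage: noisy sine data, predict at a single query point (expect ~1.0).
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 2 * np.pi, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
    print(locally_weighted_predict(X, y, np.array([np.pi / 2]), bandwidth=0.3))
```

Because each prediction fits its own local model, this style of learner can be updated incrementally from practice data, which fits the paper's theme of splitting a task into many simple learning problems.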

Original language: English
Pages (from-to): 585-611
Number of pages: 27
Journal: International Journal of Humanoid Robotics
Volume: 1
Issue number: 4
DOIs
State: Published - 1 Dec 2004
Externally published: Yes

Keywords

  • Air hockey
  • Humanoid robot
  • Imitation
  • Learning from observation
  • Locally weighted learning
  • Movement primitives
