Interactive incremental learning of generalizable skills with local trajectory modulation

Markus Knauer, Alin Albu-Schäffer, Freek Stulp, João Silvério

Research output: Contribution to journal › Article › peer-review

Abstract

The problem of generalization in learning from demonstration (LfD) has received considerable attention over the years, particularly within the context of movement primitives, where a number of approaches have emerged. Recently, two important approaches have gained recognition. One leverages via-points to adapt skills locally by modulating demonstrated trajectories, while the other relies on so-called task-parameterized (TP) models that encode movements with respect to different coordinate systems, using a product of probabilities for generalization. While the former is well-suited to precise, local modulations, the latter aims at generalizing over large regions of the workspace and often involves multiple objects. Addressing the quality of generalization by leveraging both approaches simultaneously has received little attention. In this work, we propose an interactive imitation learning framework that simultaneously leverages local and global modulations of trajectory distributions. Building on the kernelized movement primitives (KMP) framework, we introduce novel mechanisms for skill modulation from direct human corrective feedback. Our approach particularly exploits the concept of via-points to incrementally and interactively 1) improve the model accuracy locally, 2) add new objects to the task during execution, and 3) extend the skill into regions where demonstrations were not provided.
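To illustrate the via-point idea the abstract builds on, the sketch below conditions a Gaussian trajectory distribution on a desired waypoint, so the mean trajectory passes through the via-point while uncertainty shrinks locally. This is a generic Gaussian-conditioning example, not the paper's KMP formulation; the function and variable names (`condition_on_via_point`, `obs_noise`) are illustrative assumptions.

```python
import numpy as np

def condition_on_via_point(mu, Sigma, t_idx, via_point, obs_noise=1e-6):
    """Condition a Gaussian trajectory distribution N(mu, Sigma) on a
    via-point at waypoint index t_idx.

    mu: flattened mean of T waypoints, each of dimension D (length T*D)
    Sigma: (T*D, T*D) covariance over the flattened trajectory
    via_point: length-D target for waypoint t_idx
    obs_noise: small variance controlling how strictly the via-point is met
    """
    D = len(via_point)
    # Observation matrix selecting the block belonging to waypoint t_idx
    H = np.zeros((D, len(mu)))
    H[:, t_idx * D:(t_idx + 1) * D] = np.eye(D)
    # Standard Gaussian conditioning (Kalman-style update)
    S = H @ Sigma @ H.T + obs_noise * np.eye(D)
    K = Sigma @ H.T @ np.linalg.inv(S)
    mu_new = mu + K @ (via_point - H @ mu)
    Sigma_new = Sigma - K @ H @ Sigma
    return mu_new, Sigma_new
```

Because the update is local in the covariance structure, waypoints correlated with `t_idx` are pulled toward the via-point, while distant, uncorrelated parts of the trajectory remain unchanged, which is the kind of local modulation the abstract contrasts with global TP-style generalization.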

Original language: English
Journal: IEEE Robotics and Automation Letters
DOIs
State: Accepted/In press - 2025

Keywords

  • Imitation Learning
  • Incremental Learning

