Inverse Reinforcement Learning: A Control Lyapunov Approach

Samuel Tesfazgi, Armin Lederer, Sandra Hirche

Publication: Contribution to book/report/conference proceedings › Conference contribution › Peer-reviewed

6 citations (Scopus)

Abstract

Inferring the intent of an intelligent agent from demonstrations, and subsequently predicting its behavior, is a critical task in many collaborative settings. A common approach to this problem is the framework of inverse reinforcement learning (IRL), where the observed agent, e.g., a human demonstrator, is assumed to behave according to an intrinsic cost function that reflects its intent and informs its control actions. In this work, we reformulate the IRL inference problem as learning control Lyapunov functions (CLFs) from demonstrations by exploiting the inverse optimality property, which states that every CLF is also a meaningful value function. Moreover, the derived CLF formulation directly guarantees stability of the system under the inferred control policies. We show the flexibility of our proposed method by learning from goal-directed movement demonstrations in a continuous environment.
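The abstract's central reduction, from inferring a cost function to learning a CLF that decreases along the observed demonstrations, can be illustrated in code. The following is a minimal, hypothetical sketch and not the authors' implementation: it assumes discrete-time state-transition pairs extracted from demonstrations, a quadratic CLF candidate V(x) = xᵀPx, and encodes the decrease condition as a small semidefinite program via cvxpy; the synthetic data and all names are illustrative assumptions.

```python
# Hypothetical sketch: fit a quadratic Lyapunov candidate V(x) = x' P x to
# demonstration data so that V decreases along observed transitions.
# Illustrates the general CLF-learning idea, NOT the method from the paper.
import numpy as np
import cvxpy as cp

# Synthetic "demonstrations": trajectories of a stable linear system
# converging to the goal at the origin (stands in for recorded motions).
A = np.array([[0.9, 0.2],
              [-0.1, 0.8]])
rng = np.random.default_rng(0)
pairs = []  # list of (x_k, x_{k+1}) transition pairs
for _ in range(10):
    x = rng.uniform(-1.0, 1.0, size=2)
    for _ in range(30):
        x_next = A @ x
        pairs.append((x, x_next))
        x = x_next

# Decision variable: symmetric matrix P of the candidate V(x) = x' P x.
P = cp.Variable((2, 2), symmetric=True)
margin = 1e-3

# Hinge penalty for each violated decrease condition
# V(x_{k+1}) <= V(x_k) - margin; each term is affine in P,
# so the whole problem is a small semidefinite program.
violations = [
    cp.pos(cp.quad_form(xn, P) - cp.quad_form(x, P) + margin)
    for x, xn in pairs
]
constraints = [P >> 1e-2 * np.eye(2)]  # positive definiteness: P - 0.01*I PSD
problem = cp.Problem(cp.Minimize(sum(violations)), constraints)
problem.solve()

print("Learned P:\n", P.value)
print("Total decrease violation:", problem.value)
```

A positive definite P with zero total violation certifies that V decreases along every demonstrated transition, which is the stability-relevant property the paper's CLF formulation enforces; by the inverse optimality argument cited in the abstract, such a CLF can then also be read as a meaningful value function for the demonstrator.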

Original language: English
Title: 60th IEEE Conference on Decision and Control, CDC 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 3627-3632
Number of pages: 6
ISBN (electronic): 9781665436595
Publication status: Published - 2021
Event: 60th IEEE Conference on Decision and Control, CDC 2021 - Austin, United States
Duration: 13 Dec 2021 – 17 Dec 2021

Publication series

Name: Proceedings of the IEEE Conference on Decision and Control
Volume: 2021-December
ISSN (print): 0743-1546
ISSN (electronic): 2576-2370

