Abstract
Learning from expert demonstrations to flexibly program an autonomous system with complex behaviors, or to predict an agent's behavior, is a powerful tool, especially in collaborative control settings. A common method for this problem is inverse reinforcement learning (IRL), where the observed agent, e.g., a human demonstrator, is assumed to behave according to the optimization of an intrinsic cost function that reflects its intent and informs its control actions. While the framework is expressive, the inferred control policies generally lack convergence guarantees, which are critical for safe deployment in real-world settings. We therefore propose a novel, stability-certified IRL approach by reformulating the cost function inference problem as the problem of learning control Lyapunov functions (CLFs) from demonstration data. By additionally exploiting closed-form expressions for the associated control policies, we can efficiently search the space of CLFs by observing the attractor landscape of the induced dynamics. For the construction of the inverse-optimal CLFs, we use a Sum-of-Squares (SOS) approach and formulate a convex optimization problem. We present a theoretical analysis of the optimality properties provided by the CLF and evaluate our approach on both simulated and real-world, human-generated data.
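For orientation, the closed-form policies the abstract alludes to are typically of the kind given by Sontag's universal formula; the sketch below is the textbook construction under the standard assumption of control-affine dynamics (the symbols $f$, $g$, $V$, $a$, $b$ are our notation for this background material, not necessarily the exact variant used in the paper). Given $\dot{x} = f(x) + g(x)\,u$ and a candidate CLF $V$, write

$$
a(x) := L_f V(x) = \nabla V(x)^{\top} f(x), \qquad b(x) := L_g V(x) = \nabla V(x)^{\top} g(x).
$$

$V$ is a CLF if $b(x) = 0$ implies $a(x) < 0$ for all $x \neq 0$, and a stabilizing controller is then available in closed form:

$$
k(x) =
\begin{cases}
-\dfrac{a(x) + \sqrt{a(x)^{2} + \lVert b(x) \rVert^{4}}}{\lVert b(x) \rVert^{2}}\, b(x)^{\top}, & b(x) \neq 0,\\[1.5ex]
0, & b(x) = 0.
\end{cases}
$$

Controllers of this type are known to be inverse optimal, i.e., they minimize a meaningful cost functional, which is the link the abstract draws between learning a CLF and learning a cost function. Likewise, for a polynomial candidate $V$, conditions such as $V(x) - \epsilon \lVert x \rVert_2^{2} \in \Sigma[x]$, with $\Sigma[x]$ the cone of SOS polynomials, are linear in the coefficients of $V$ and therefore yield a semidefinite, hence convex, program.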
| Original language | English |
| --- | --- |
| Pages (from-to) | 1-17 |
| Number of pages | 17 |
| Journal | IEEE Open Journal of Control Systems |
| DOIs | |
| State | Accepted/In press - 2024 |
Keywords
- Control Lyapunov function
- Convergence
- Convex optimization
- Cost function
- Costs
- Imitation learning
- Inverse optimality
- Inverse reinforcement learning
- Learning from demonstrations
- Optimal control
- Reinforcement learning
- Sum of Squares
- Task analysis