Off-Policy Risk-Sensitive Reinforcement Learning-Based Constrained Robust Optimal Control

Cong Li, Qingchen Liu, Zhehua Zhou, Martin Buss, Fangzhou Liu

Publication: Contribution to journal › Article › peer-reviewed

2 citations (Scopus)

Abstract

This article proposes an off-policy risk-sensitive reinforcement learning (RL)-based control framework to jointly optimize task performance and constraint satisfaction in a disturbed environment. A risk-aware value function, constructed using the pseudo control and risk-sensitive input and state penalty terms, is introduced to convert the original constrained robust stabilization problem into an equivalent unconstrained optimal control problem. Then, an off-policy RL algorithm is developed to learn the approximate solution to the risk-aware value function. During the learning process, the associated approximate optimal control policy satisfies both input and state constraints under disturbances. By replaying experience data to the off-policy weight update law of the critic neural network, weight convergence is guaranteed. Moreover, online and offline algorithms are developed as principled ways to record informative experience data and achieve the sufficient excitation required for weight convergence. Proofs of system stability and weight convergence are provided. Simulation results demonstrate the validity of the proposed control framework.
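The critic weight update with experience replay described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual algorithm (which uses a risk-aware value function and neural-network critic); it is a simplified, hypothetical analogue: a linear critic trained by temporal-difference updates over a fixed buffer of recorded transitions, plus a rank check on the stored features standing in for the "sufficient excitation" condition. The feature map `phi`, the dynamics, and the cost are illustrative choices, not from the paper.

```python
import numpy as np

def phi(x):
    """Illustrative polynomial features for a scalar state."""
    return np.array([x, x**2, x**3])

def sufficiently_excited(buffer):
    """Recorded data is informative if the stacked feature
    vectors span the full parameter space (a simple stand-in
    for the excitation condition needed for weight convergence)."""
    features = np.stack([phi(x) for x, _, _ in buffer])
    return np.linalg.matrix_rank(features) == features.shape[1]

def replay_update(w, buffer, lr=0.05, gamma=0.9):
    """One sweep of semi-gradient TD updates over the replayed
    experience buffer of (state, next_state, cost) transitions."""
    for x, x_next, cost in buffer:
        # Temporal-difference (Bellman) residual for this transition.
        delta = cost + gamma * w @ phi(x_next) - w @ phi(x)
        # Semi-gradient step driving the residual toward zero.
        w = w + lr * delta * phi(x)
    return w
```

Because the buffer is replayed in full on every sweep, convergence of `w` depends on the recorded data being informative (full feature rank) rather than on the current trajectory remaining persistently exciting, which mirrors the motivation given in the abstract.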

Original language: English
Pages (from - to): 2478-2491
Number of pages: 14
Journal: IEEE Transactions on Systems, Man, and Cybernetics: Systems
Volume: 53
Issue number: 4
DOIs
Publication status: Published - 1 Apr 2023

