Interpretable PID parameter tuning for control engineering using general dynamic neural networks: An extensive comparison

Johannes Günther, Elias Reichensdörfer, Patrick M. Pilarski, Klaus Diepold

Research output: Contribution to journal › Article › peer-review

6 Scopus citations

Abstract

Modern automation systems largely rely on closed-loop control, wherein a controller interacts with a controlled process via actions based on observations. These systems are increasingly complex, yet most deployed controllers are linear Proportional-Integral-Derivative (PID) controllers. PID controllers perform well on linear and near-linear systems, but their simplicity is at odds with the robustness required to reliably control complex processes. Modern machine learning techniques offer a way to extend PID controllers beyond their linear control capabilities by using neural networks. However, such an extension comes at the cost of losing stability guarantees and controller interpretability. In this paper, we examine the utility of extending PID controllers with recurrent neural networks, namely General Dynamic Neural Networks (GDNN); we show that GDNN (neural) PID controllers perform well on a range of complex control systems and highlight how they can be a scalable and interpretable option for modern control systems. To do so, we provide an extensive study using four benchmark systems that represent the most common control engineering problems. All control environments are evaluated with and without noise as well as with and without disturbances. The neural PID controller performs better than standard PID control in 15 of 16 tasks and better than model-based control in 13 of 16 tasks. As a second contribution, we address the lack of interpretability that prevents neural networks from being used in real-world control processes. We use bounded-input bounded-output (BIBO) stability analysis to evaluate the parameters suggested by the neural network, making them understandable for engineers. This combination of rigorous evaluation and improved interpretability is an important step towards the acceptance of neural-network-based control approaches for real-world systems, and towards interpretable and safely applied artificial intelligence.
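To make the BIBO stability check concrete, the minimal Python sketch below tests whether a given set of PID gains stabilizes a unity-feedback loop around a known linear plant by inspecting the closed-loop poles. This is an illustration of the general technique, not the paper's implementation: the plant, the gain values, and the function name pid_closed_loop_stable are assumptions for the example, and in the paper's setting the gains kp, ki, kd would be those proposed by the GDNN tuner.

    import numpy as np

    def pid_closed_loop_stable(plant_num, plant_den, kp, ki, kd):
        # PID controller C(s) = kp + ki/s + kd*s = (kd*s^2 + kp*s + ki) / s.
        c_num = [kd, kp, ki]
        c_den = [1.0, 0.0]  # denominator of C(s) is s
        # Unity-feedback characteristic polynomial:
        # den_G(s) * den_C(s) + num_G(s) * num_C(s)
        char_poly = np.polyadd(np.polymul(plant_den, c_den),
                               np.polymul(plant_num, c_num))
        poles = np.roots(char_poly)
        # BIBO stable iff every closed-loop pole has a strictly negative real part.
        return bool(np.all(poles.real < 0)), poles

    # Hypothetical example: plant G(s) = 1 / (s^2 + 2s + 1) with gains
    # such as a neural tuner might propose.
    stable, poles = pid_closed_loop_stable([1.0], [1.0, 2.0, 1.0],
                                           kp=10.0, ki=5.0, kd=1.0)
    print("BIBO stable:", stable)
    print("closed-loop poles:", poles)

A check of this form lets an engineer verify any proposed gain triple before deployment, which is the interpretability benefit the abstract describes: the network's output is an ordinary PID parameter set that can be analyzed with standard tools.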

Original language: English
Article number: e0243320
Journal: PLoS ONE
Volume: 15
Issue number: 12 December
DOI: 10.1371/journal.pone.0243320
State: Published - Dec 2020
