TY - GEN
T1 - Towards Trustworthy AI
T2 - 62nd IEEE Conference on Decision and Control, CDC 2023
AU - Zhong, Bingzhuo
AU - Liu, Siyuan
AU - Caccamo, Marco
AU - Zamani, Majid
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - In the past decade, artificial-intelligence-based (AI-based) techniques have been widely applied to design controllers for cyber-physical systems (CPSs) performing complex control missions (e.g., motion planning in robotics). Nevertheless, AI-based controllers, particularly those built on deep neural networks, are typically very complex and difficult to verify formally. To cope with this issue, we propose a secure-by-construction architecture, namely the Safe-Sec-visor architecture, to sandbox unverified AI-based controllers. By applying this architecture, the overall safety and security of CPSs can be ensured simultaneously, without requiring formal verification of the AI-based controllers. Here, we consider invariance and opacity as the desired safety and security properties, respectively. Accordingly, by leveraging a notion of (augmented) control barrier functions, we design a supervisor that checks the control inputs provided by the AI-based controller and decides whether to accept them. At the same time, a safety-security advisor runs in parallel and provides fallback control inputs whenever the AI-based controller's inputs are rejected for safety or security reasons. To show the effectiveness of our approach, we apply it to a case study of a quadrotor controlled by an AI-based controller, in which the initial state of the quadrotor contains secret information that should not be revealed while the safety of the quadrotor must be ensured.
AB - In the past decade, artificial-intelligence-based (AI-based) techniques have been widely applied to design controllers for cyber-physical systems (CPSs) performing complex control missions (e.g., motion planning in robotics). Nevertheless, AI-based controllers, particularly those built on deep neural networks, are typically very complex and difficult to verify formally. To cope with this issue, we propose a secure-by-construction architecture, namely the Safe-Sec-visor architecture, to sandbox unverified AI-based controllers. By applying this architecture, the overall safety and security of CPSs can be ensured simultaneously, without requiring formal verification of the AI-based controllers. Here, we consider invariance and opacity as the desired safety and security properties, respectively. Accordingly, by leveraging a notion of (augmented) control barrier functions, we design a supervisor that checks the control inputs provided by the AI-based controller and decides whether to accept them. At the same time, a safety-security advisor runs in parallel and provides fallback control inputs whenever the AI-based controller's inputs are rejected for safety or security reasons. To show the effectiveness of our approach, we apply it to a case study of a quadrotor controlled by an AI-based controller, in which the initial state of the quadrotor contains secret information that should not be revealed while the safety of the quadrotor must be ensured.
UR - http://www.scopus.com/inward/record.url?scp=85184830227&partnerID=8YFLogxK
U2 - 10.1109/CDC49753.2023.10384154
DO - 10.1109/CDC49753.2023.10384154
M3 - Conference contribution
AN - SCOPUS:85184830227
T3 - Proceedings of the IEEE Conference on Decision and Control
SP - 1833
EP - 1840
BT - 2023 62nd IEEE Conference on Decision and Control, CDC 2023
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 13 December 2023 through 15 December 2023
ER -