TY - GEN
T1 - Towards trustworthy AI
T2 - 2021 Workshop on Computation-Aware Algorithmic Design for Cyber-Physical Systems, CAADCPS 2021
AU - Lavaei, Abolfazl
AU - Zhong, Bingzhuo
AU - Caccamo, Marco
AU - Zamani, Majid
N1 - Publisher Copyright:
© 2021 ACM.
PY - 2021/5/19
Y1 - 2021/5/19
N2 - Artificial intelligence-based (a.k.a. AI-based) controllers have received significant attention in the past few years due to their broad applications in cyber-physical systems (CPSs) for accomplishing complex control missions. However, guaranteeing the safety and reliability of CPSs equipped with such (uncertified) controllers is currently very challenging, which is of vital importance in many real-life safety-critical applications. To cope with this difficulty, we propose a Safe-visor architecture for sandboxing AI-based controllers in stochastic CPSs. The proposed framework contains (i) a history-based supervisor, which checks inputs from the AI-based controller and makes a compromise between the functionality and safety of the system, and (ii) a safety advisor, which provides a fallback when the AI-based controller endangers the safety of the system. By employing this architecture, we provide formal probabilistic guarantees on the satisfaction of those classes of safety specifications that can be represented by the accepting languages of deterministic finite automata (DFAs), while AI-based controllers can still be employed in the control loop even though they are not reliable.
AB - Artificial intelligence-based (a.k.a. AI-based) controllers have received significant attention in the past few years due to their broad applications in cyber-physical systems (CPSs) for accomplishing complex control missions. However, guaranteeing the safety and reliability of CPSs equipped with such (uncertified) controllers is currently very challenging, which is of vital importance in many real-life safety-critical applications. To cope with this difficulty, we propose a Safe-visor architecture for sandboxing AI-based controllers in stochastic CPSs. The proposed framework contains (i) a history-based supervisor, which checks inputs from the AI-based controller and makes a compromise between the functionality and safety of the system, and (ii) a safety advisor, which provides a fallback when the AI-based controller endangers the safety of the system. By employing this architecture, we provide formal probabilistic guarantees on the satisfaction of those classes of safety specifications that can be represented by the accepting languages of deterministic finite automata (DFAs), while AI-based controllers can still be employed in the control loop even though they are not reliable.
KW - AI-based controllers
KW - Artificial intelligence
KW - Safe-visor architecture
KW - Stochastic cyber-physical systems
KW - Trustworthy AI
UR - http://www.scopus.com/inward/record.url?scp=85110753198&partnerID=8YFLogxK
U2 - 10.1145/3457335.3461705
DO - 10.1145/3457335.3461705
M3 - Conference contribution
AN - SCOPUS:85110753198
T3 - Proceedings of 2021 Workshop on Computation-Aware Algorithmic Design for Cyber-Physical Systems, CAADCPS 2021
SP - 7
EP - 8
BT - Proceedings of 2021 Workshop on Computation-Aware Algorithmic Design for Cyber-Physical Systems, CAADCPS 2021
PB - Association for Computing Machinery, Inc
Y2 - 18 May 2021 through 21 May 2021
ER -