TY - GEN
T1 - Towards Safe AI: Sandboxing DNNs-Based Controllers in Stochastic Games
T2 - 37th AAAI Conference on Artificial Intelligence, AAAI 2023
AU - Zhong, Bingzhuo
AU - Cao, Hongpeng
AU - Zamani, Majid
AU - Caccamo, Marco
N1 - Publisher Copyright:
Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2023/6/27
Y1 - 2023/6/27
AB - AI-based techniques such as deep neural networks (DNNs) are now widely deployed in autonomous systems to meet complex mission requirements (e.g., motion planning in robotics). However, DNNs-based controllers are typically very complex, and formally verifying their correctness is difficult, which poses severe risks for safety-critical autonomous systems. In this paper, we propose a construction scheme for a so-called Safe-visor architecture to sandbox DNNs-based controllers. In particular, we perform the construction within a stochastic game framework to provide a system-level safety guarantee that is robust to noise and disturbances. A supervisor checks the control inputs provided by the DNNs-based controller and decides whether to accept them, while a safety advisor runs in parallel to provide fallback control inputs whenever those from the DNNs-based controller are rejected. We demonstrate the proposed approach on a quadrotor employing an unverified DNNs-based controller.
UR - http://www.scopus.com/inward/record.url?scp=85168016142&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85168016142
T3 - Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023
SP - 15340
EP - 15349
BT - AAAI-23 Special Tracks
A2 - Williams, Brian
A2 - Chen, Yiling
A2 - Neville, Jennifer
PB - AAAI Press
Y2 - 7 February 2023 through 14 February 2023
ER -