TY - JOUR
T1 - Interpretable Classifiers Based on Time-Series Motifs for Lane Change Prediction
AU - Klein, Kathrin
AU - De Candido, Oliver
AU - Utschick, Wolfgang
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2023/7/1
Y1 - 2023/7/1
N2 - In this article, we address the problem of using non-interpretable Machine Learning (ML) algorithms in safety-critical applications, especially automated driving functions. We focus on the lane change prediction of vehicles on a highway. In order to understand wrong decisions, which may lead to accidents, we want to interpret the reasons for an ML algorithm's decision making. To this end, we use motif discovery - a data mining method - to obtain sub-sequences representing typical driving behavior. With the help of these meaningful sub-sequences (motifs), we can study typical driving maneuvers on a highway. On top of this, we propose to replace non-interpretable ML algorithms with an interpretable alternative: a Mixture of Experts (MoE) classifier. We present an MoE classifier consisting of different k-Nearest Neighbors (k-NN) classifiers trained only on motifs, which represent a few samples from the dataset. These k-NN-based experts are fully interpretable, making the lane change prediction fully interpretable, too. Using our proposed MoE classifier, we are able to solve the lane change prediction problem in an interpretable manner. These MoE classifiers show a classification performance comparable to common non-interpretable ML methods from the literature.
KW - Lane change predictor
KW - automated driving function
KW - interpretable machine learning
KW - mixture of experts
UR - http://www.scopus.com/inward/record.url?scp=85160274958&partnerID=8YFLogxK
U2 - 10.1109/TIV.2023.3276650
DO - 10.1109/TIV.2023.3276650
M3 - Article
AN - SCOPUS:85160274958
SN - 2379-8858
VL - 8
SP - 3954
EP - 3961
JO - IEEE Transactions on Intelligent Vehicles
JF - IEEE Transactions on Intelligent Vehicles
IS - 7
ER -