Interpretable Early Prediction of Lane Changes Using a Constrained Neural Network Architecture

Oliver Gallitz, Oliver De Candido, Michael Botsch, Wolfgang Utschick

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

This paper proposes an interpretable machine learning structure for the early prediction of lane changes. The interpretability relies on interpretable templates, as well as on constrained weights during the training process of a neural network. It is shown that each template is separable and interpretable by means of automatically generated rule sets. The proposed method is validated on a publicly available dataset, and the architecture is compared to reference publications that apply recurrent neural networks to the task of lane change prediction. The proposed method significantly improves the maximum prediction time of the lane changes while keeping false alarm rates low.
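The abstract mentions constraining weights during training as one ingredient of interpretability. The paper's actual constraints and architecture are not described here, but the general idea can be illustrated by projected gradient descent: after each gradient step, the weights are projected back onto a constraint set. The sketch below is a minimal, hypothetical example using a toy linear model and a non-negativity constraint, not the method from the paper.

```python
import numpy as np

# Hypothetical illustration of constrained-weight training: after each
# gradient step, project the weights back onto a constraint set.
# Here the constraint is non-negativity (w >= 0), chosen for illustration only.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # toy input features
true_w = np.array([0.5, 1.0, 2.0])     # ground-truth (non-negative) weights
y = X @ true_w                         # toy regression targets

w = rng.normal(size=3)                 # random initialization
lr = 0.05
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(X)  # gradient of mean squared error
    w -= lr * grad                         # unconstrained gradient step
    w = np.clip(w, 0.0, None)              # projection: enforce w >= 0
```

Because the constraint holds at every step, downstream analysis (such as rule extraction) can rely on the sign structure of the trained weights.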

Original language: English
Title of host publication: 2021 IEEE International Intelligent Transportation Systems Conference, ITSC 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 493-499
Number of pages: 7
ISBN (Electronic): 9781728191423
DOIs
State: Published - 19 Sep 2021
Event: 2021 IEEE International Intelligent Transportation Systems Conference, ITSC 2021 - Indianapolis, United States
Duration: 19 Sep 2021 - 22 Sep 2021

Publication series

Name: IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC
Volume: 2021-September

Conference

Conference: 2021 IEEE International Intelligent Transportation Systems Conference, ITSC 2021
Country/Territory: United States
City: Indianapolis
Period: 19/09/21 - 22/09/21

