Maximum Acceptable Risk as Criterion for Decision-Making in Autonomous Vehicle Trajectory Planning

Maximilian Geisslinger, Rainer Trauth, Gemb Kaljavesi, Markus Lienkamp

Research output: Contribution to journal › Article › peer-review

10 Scopus citations

Abstract

Autonomous vehicles are being developed to make road traffic safer in the future. The time when autonomous vehicles are actually safe enough to be used in real traffic is a current subject of discussion between industry, science, and society. In our work, we propose a new approach to the risk assessment of autonomous vehicles based on risk-benefit analysis, as it is already established in other areas, such as the registration of pharmaceuticals. In this context, we address the question of socially acceptable risk for mobility and investigate this concept as a decision-making criterion in trajectory planning. We make the first attempt to quantify an accepted risk by comparing autonomous vehicles with other types of mobility while taking into account the ethical and psychological effects important to the acceptance of autonomous vehicles. We show how an accepted risk contributes to the transparent decision-making of autonomous vehicles at the maneuver level. Finally, we present a method for considering accepted risk in trajectory planning. The evaluation of this algorithm in a simulation of 2,000 scenarios reveals that lower risk thresholds can actually reduce risks in trajectory planning. The code used in this research is publicly available as open-source software: https://github.com/TUMFTM/EthicalTrajectoryPlanning.
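To illustrate the core idea of an accepted-risk threshold as a decision criterion at the maneuver level, here is a minimal Python sketch. It is our own simplified reading, not the paper's implementation (the actual open-source code is in the linked repository): candidate trajectories carry a pre-computed risk estimate and a planning cost, the planner discards candidates above the maximum acceptable risk, and as an assumed fallback it chooses the minimum-risk candidate when none is admissible.

```python
from dataclasses import dataclass


@dataclass
class Trajectory:
    """Candidate trajectory with a pre-computed risk and planning cost.

    Both fields are illustrative placeholders: `risk` stands for an
    estimated collision risk (probability times severity), `cost` for a
    conventional planning objective (path deviation, jerk, etc.).
    """
    risk: float
    cost: float


def select_trajectory(candidates, max_acceptable_risk):
    """Return the lowest-cost trajectory whose risk is acceptable.

    Candidates exceeding `max_acceptable_risk` are filtered out before
    the usual cost minimization. If no candidate is admissible, fall
    back to the minimum-risk candidate (an assumption of this sketch,
    not necessarily the paper's fallback strategy).
    """
    admissible = [t for t in candidates if t.risk <= max_acceptable_risk]
    if admissible:
        return min(admissible, key=lambda t: t.cost)
    return min(candidates, key=lambda t: t.risk)
```

Under this scheme, lowering the threshold shrinks the admissible set, so the planner may accept a higher-cost (e.g. less comfortable) trajectory in exchange for lower risk, which is the trade-off the paper's 2,000-scenario evaluation examines.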

Original language: English
Pages (from-to): 570-579
Number of pages: 10
Journal: IEEE Open Journal of Intelligent Transportation Systems
Volume: 4
DOIs
State: Published - 2023

Keywords

  • autonomous vehicles
  • decision-making
  • risk
  • trajectory planning

