TY - JOUR
T1 - Learn to See Fast
T2 - Lessons Learned From Autonomous Racing on How to Develop Perception Systems
AU - Sauerbeck, Florian
AU - Huch, Sebastian
AU - Fent, Felix
AU - Karle, Phillip
AU - Kulmer, Dominik
AU - Betz, Johannes
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
AB - The objective of this work is to provide a comprehensive understanding of the development of autonomous vehicle perception systems. So far, most autonomy perception research has concentrated on improving the algorithmic quality of perception systems or on combining different sensor setups. In our work, we draw conclusions from participating in the Indy Autonomous Challenge 2021 and its follow-up event in Las Vegas in 2022. These were the first head-to-head autonomous racing competitions that required an entire perception pipeline to perceive the environment and the opposing surrounding vehicles. Our research includes quantitative results from collected vehicle data and qualitative results from simulation, video, and multiple race analyses. The Indy Autonomous Challenge was one of the few research projects that considered the entire autonomous vehicle. Our findings therefore provide insights at the system level, including the hardware setup and the full-stack software. We demonstrate that the different sensor modalities in the vehicle have distinct strengths and weaknesses when deployed. Our results further show the difficulties and challenges that emerge when multi-modal perception systems must run in real time on real-world autonomous vehicles. The central finding of our investigation is a summary of critical learnings for developing and deploying perception systems for autonomous systems. Given the background of the study, our conclusions were inevitably influenced by driving on a racetrack with only one available hardware setup. In the discussion, we therefore draw parallels to driving on public roads in dense traffic. More studies are needed to investigate the development and deployment of multi-modal perception systems for autonomous road vehicles with different hardware setups and various object detection, localization, and prediction algorithms. The novel contributions of this work are 12 lessons learned, summarized in 5 categories, derived and validated through a realized real-world application project. Videos of the final events in Indianapolis and Las Vegas are available at IAC: https://www.youtube.com/watch?v=ERTffn3IpIs&ab_channel=CNETHighlights and AC@CES: https://www.youtube.com/watch?v=df9f4Qfa0uU&ab_channel=CNETHighlights. Multiple modules of the software stack are open source: https://github.com/TUMFTM.
KW - Autonomous racing
KW - autonomous vehicles
KW - perception systems
KW - software development
UR - http://www.scopus.com/inward/record.url?scp=85159679699&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2023.3272750
DO - 10.1109/ACCESS.2023.3272750
M3 - Article
AN - SCOPUS:85159679699
SN - 2169-3536
VL - 11
SP - 44034
EP - 44050
JO - IEEE Access
JF - IEEE Access
ER -