Abstract
Many approaches for testing automated and autonomous driving systems in dynamic traffic scenarios rely on the reuse of test cases, e.g., recording test scenarios during real test drives or creating 'test catalogs.' Both are widely used in industry and in the literature. By counterexample, we show that the quality of test cases is system-dependent and that faulty system behavior may remain unrevealed during testing if test cases are naïvely reused. We argue that, in general, system-specific 'good' test cases need to be generated. Consequently, recorded scenarios cannot simply be reused for testing, and regression testing strategies need to be rethought for automated and autonomous driving systems. The counterexample involves a system built according to state-of-the-art literature, which is tested in a traffic scenario using a high-fidelity physical simulation tool. Test scenarios are generated using standard techniques and state-of-the-art methodologies from the literature. By comparing the quality of the resulting test cases, we argue against a naïve reuse of test cases.
| Original language | English |
|---|---|
| Pages | 1305-1310 |
| Number of pages | 6 |
| State | Published - 2020 |
| Event | 31st IEEE Intelligent Vehicles Symposium, IV 2020 - Virtual, Las Vegas, United States. Duration: 19 Oct 2020 → 13 Nov 2020 |
Conference

| Conference | 31st IEEE Intelligent Vehicles Symposium, IV 2020 |
|---|---|
| Country/Territory | United States |
| City | Virtual, Las Vegas |
| Period | 19/10/20 → 13/11/20 |