Abstract
The availability of real-world data is a key element for novel developments in the fields of automotive and traffic research. Aerial imagery has the major advantage of recording multiple objects simultaneously and overcomes limitations such as occlusions. However, only a few such data sets are available. This work describes a process to estimate a precise vehicle position from aerial imagery. Robust object detection is crucial for reliable results, hence the state-of-the-art deep neural network Mask R-CNN is applied for this purpose. Two training data sets are employed: the first is optimized for detecting the test vehicle, while the second consists of randomly selected images recorded on public roads. To reduce errors, several aspects are accounted for, such as the drone movement and the perspective projection inherent in a photograph. The estimated position is compared with a reference system installed in the test vehicle. It is shown that a mean accuracy of 20 cm can be achieved at flight altitudes of up to 100 m with Full-HD resolution and frame-by-frame detection. A reliable position estimate is the basis for further data processing, such as obtaining additional vehicle state variables. The source code, training weights, labeled data and example videos are made publicly available. This supports researchers in creating new traffic data sets with specific local conditions.
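To illustrate the kind of pixel-to-ground mapping the abstract refers to, the following is a minimal sketch, not the authors' released code: it assumes a nadir-looking pinhole camera at a known flight altitude, and the function name, its parameters, and the field-of-view value are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): project a detected vehicle
# center from image pixels onto the ground plane, assuming a nadir-looking
# pinhole camera at a known flight altitude. All names and parameter values
# below are illustrative assumptions.
import numpy as np

def pixel_to_ground(pixel_xy, altitude_m, image_size_px, fov_deg,
                    drone_offset_m=(0.0, 0.0)):
    """Map a pixel coordinate to metric ground coordinates relative to the
    point directly below the drone, then compensate for drone drift.

    pixel_xy       : (u, v) pixel position of the detected vehicle center
    altitude_m     : flight altitude above ground in meters
    image_size_px  : (width, height) of the frame, e.g. (1920, 1080) for Full HD
    fov_deg        : horizontal field of view of the camera in degrees
    drone_offset_m : drone displacement since a reference frame (e.g. from GNSS
                     or image registration), used to compensate drone movement
    """
    width, height = image_size_px
    u, v = pixel_xy

    # Ground footprint of the image under a simple pinhole / perspective model.
    ground_width = 2.0 * altitude_m * np.tan(np.radians(fov_deg) / 2.0)
    meters_per_px = ground_width / width  # square pixels assumed

    # Shift the origin to the image center (assumed to coincide with nadir).
    x = (u - width / 2.0) * meters_per_px
    y = (v - height / 2.0) * meters_per_px

    # Compensate drone movement so all positions share a common reference frame.
    return x + drone_offset_m[0], y + drone_offset_m[1]

# Example: Full-HD frame, 100 m altitude, detection at pixel (1200, 480).
print(pixel_to_ground((1200, 480), altitude_m=100.0,
                      image_size_px=(1920, 1080), fov_deg=78.8))
```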
| Original language | English |
| --- | --- |
| Pages | 2089-2096 |
| Number of pages | 8 |
| DOIs | |
| State | Published - 2020 |
| Externally published | Yes |
| Event | 31st IEEE Intelligent Vehicles Symposium, IV 2020 - Virtual, Las Vegas, United States. Duration: 19 Oct 2020 → 13 Nov 2020 |
Conference
| Conference | 31st IEEE Intelligent Vehicles Symposium, IV 2020 |
| --- | --- |
| Country/Territory | United States |
| City | Virtual, Las Vegas |
| Period | 19/10/20 → 13/11/20 |