Abstract
The performance of machine learning and deep learning algorithms for image analysis depends significantly on the quantity and quality of the training data. Generating annotated training data is often costly, time-consuming and laborious, and data augmentation is a powerful option to overcome these drawbacks. We therefore augment training data by rendering images with arbitrary poses from 3D models to increase the number of training images. These rendered images usually exhibit artifacts and are of limited use for advanced image analysis. We thus propose to use image-to-image translation to transform images from the rendered domain to the captured domain. We show that translated images in the captured domain are of higher quality than the rendered images. Moreover, we demonstrate that image-to-image translation based on rendered 3D models enhances the performance of common computer vision tasks, namely feature matching, image retrieval and visual localization. The experimental results clearly show the improvement of translated images over rendered images for all investigated tasks. In addition, we present the advantages of utilizing translated images over exclusively captured images for visual localization.
Original language | English |
---|---|
Pages (from-to) | 111-119 |
Number of pages | 9 |
Journal | ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences |
Volume | 4 |
Issue number | 2/W7 |
DOIs | |
State | Published - 16 Sep 2019 |
Externally published | Yes |
Event | 1st Photogrammetric Image Analysis and Munich Remote Sensing Symposium (PIA 2019 + MRSS 2019), Munich, Germany, 18 Sep 2019 – 20 Sep 2019 |
Keywords
- 3D Models
- Convolutional Neural Networks
- Data Augmentation
- Feature Matching
- Generative Adversarial Networks
- Image Retrieval
- Image-to-Image Translation
- Visual Localization