Abstract
In this paper, we propose a generative framework to produce similar yet novel samples for a specified image. We then propose using these images as hard-negative samples, within the framework of hard-negative mining, to improve the performance of classification networks in applications that suffer from sparse labelled training data. Our approach makes use of a variational autoencoder (VAE) which is trained in an adversarial manner in order to learn a latent distribution of the training data, as well as to generate realistic, high-quality image patches. We evaluate our proposed generative approach to hard-negative mining on a synthetic aperture radar (SAR) and optical image matching task. Using an existing SAR-optical matching network as the basis for our investigation, we compare the performance of the matching network trained with our approach to the baseline method, as well as to two other hard-negative mining methods. Our proposed generative architecture is able to generate realistic, very high resolution (VHR) SAR image patches which are almost indistinguishable from real imagery. Furthermore, using these patches as hard-negative samples, we are able to improve the overall accuracy and significantly decrease the false positive rate of the SAR-optical matching task, thus validating our generative hard-negative mining approach's applicability to improving training in data-sparse applications.
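The abstract gives no implementation details, so the following is only a minimal sketch of the kind of adversarially trained VAE it describes. The `Encoder`, `Decoder`, and `Discriminator` modules, the 64×64 single-channel patch size, and the loss weights are all assumptions for illustration, not the authors' architecture.

```python
# Minimal sketch (not the authors' code): a VAE whose decoder is trained
# against a discriminator (VAE-GAN style). Its reconstructions of a SAR
# patch could then be reused as hard-negative samples for a matching
# network. All shapes, module definitions, and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)

    def forward(self, x):
        h = self.conv(x)
        return self.fc_mu(h), self.fc_logvar(h)

class Decoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.deconv(h)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 1),  # real/fake logit
        )

    def forward(self, x):
        return self.net(x)

def reparameterize(mu, logvar):
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

def generator_step(x, encoder, decoder, discriminator):
    """One encoder/decoder update: reconstruction + KL + adversarial term."""
    mu, logvar = encoder(x)
    z = reparameterize(mu, logvar)
    x_hat = decoder(z)                       # "similar yet novel" patch
    recon = F.l1_loss(x_hat, x)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Adversarial term: push the discriminator to label generated patches as real.
    adv = F.binary_cross_entropy_with_logits(
        discriminator(x_hat),
        torch.ones(x.size(0), 1, device=x.device))
    return x_hat, recon + kld + 0.1 * adv    # weighting is illustrative
```

In a hard-negative mining loop, the generated patch `x_hat` could be paired with the optical patch corresponding to `x` as an additional non-matching example when training the SAR-optical matching network.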
| Original language | English |
| --- | --- |
| Article number | 1552 |
| Journal | Remote Sensing |
| Volume | 10 |
| Issue number | 10 |
| DOIs | |
| State | Published - 1 Oct 2018 |
Keywords
- Data fusion
- Dataset augmentation
- Generative adversarial networks
- Synthetic aperture radar