Multi-Frame GAN: Image Enhancement for Stereo Visual Odometry in Low Light

Eunah Jung, Nan Yang, Daniel Cremers

Publication: Journal contribution › Conference article › Peer-reviewed

23 citations (Scopus)

Abstract

We propose the concept of a multi-frame GAN (MFGAN) and demonstrate its potential as an image sequence enhancement for stereo visual odometry in low light conditions. We base our method on an invertible adversarial network to transfer the beneficial features of brightly illuminated scenes to the sequence in poor illumination without costly paired datasets. In order to preserve the coherent geometric cues for the translated sequence, we present a novel network architecture as well as a novel loss term combining temporal and stereo consistencies based on optical flow estimation. We demonstrate that the enhanced sequences improve the performance of state-of-the-art feature-based and direct stereo visual odometry methods on both synthetic and real datasets in challenging illumination. We also show that MFGAN outperforms other state-of-the-art image enhancement and style transfer methods by a large margin in terms of visual odometry.
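The abstract describes a loss term that enforces both temporal consistency (between consecutive translated frames, via optical flow) and stereo consistency (between the translated left and right images). The paper's actual formulation is not reproduced here, but the core idea of a warping-based photometric consistency term can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: function and variable names are illustrative, nearest-neighbour sampling stands in for the differentiable bilinear warping a trainable network would require, and the flow/disparity inputs are assumed to be given.

```python
import numpy as np

def warp_with_flow(img, flow):
    """Backward-warp a grayscale image (H, W) with a dense flow field (H, W, 2).

    flow[y, x] = (dx, dy) points from each target pixel to its source
    location. Nearest-neighbour sampling keeps this sketch short; a real
    implementation would use differentiable bilinear sampling.
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img[src_y, src_x]

def consistency_loss(trans_t, trans_prev, flow_prev_to_t,
                     trans_left, trans_right, disparity):
    """Sum of an L1 temporal term (translated frame vs. its flow-warped
    predecessor) and an L1 stereo term (translated left image vs. the
    disparity-warped translated right image)."""
    # Temporal term: warp the previous translated frame into frame t.
    warped_prev = warp_with_flow(trans_prev, flow_prev_to_t)
    temporal = np.abs(trans_t - warped_prev).mean()

    # Stereo term: disparity acts as a purely horizontal flow field.
    stereo_flow = np.stack([-disparity, np.zeros_like(disparity)], axis=-1)
    warped_right = warp_with_flow(trans_right, stereo_flow)
    stereo = np.abs(trans_left - warped_right).mean()

    return temporal + stereo
```

With perfect flow and disparity, a geometrically coherent translated sequence drives both terms to zero, which is why such a loss discourages the flickering and left-right inconsistency that single-frame style transfer tends to introduce.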

Original language: English
Pages (from - to): 651-660
Number of pages: 10
Journal: Proceedings of Machine Learning Research
Volume: 100
Publication status: Published - 2019
Event: 3rd Conference on Robot Learning, CoRL 2019 - Osaka, Japan
Duration: 30 Oct 2019 - 1 Nov 2019
