REVERSE ERROR MODELING FOR IMPROVED SEMANTIC SEGMENTATION

Christopher B. Kuhn, Markus Hofbauer, Goran Petrovic, Eckehard Steinbach

Publication: Contribution to book/report/conference proceedings › Conference contribution › Peer-reviewed

2 citations (Scopus)

Abstract

We propose the concept of error-reversing autoencoders (ERA) for correcting pixel-wise errors made by an arbitrary semantic segmentation model. For this, we reframe the segmentation model as an error function applied to the ground truth labels. Then, we train an autoencoder to reverse this error function. During testing, the autoencoder reverses the approximated error function to correct the classification errors. We consider two sources of errors. First, we target the errors made by a model despite having been trained with clean, accurately labeled images. In this case, our proposed approach achieves an improvement of around 1% on the Cityscapes data set with the state-of-the-art DeepLabV3+ model. Second, we target errors introduced by compromised images. With JPEG-compressed images as input, our approach improves the segmentation performance by over 70% for high levels of compression. The proposed architecture is simple to implement, fast to train, and can be applied to any semantic segmentation model as a post-processing step.
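The pipeline described above lends itself to a compact implementation. The following PyTorch sketch illustrates the idea of training an autoencoder on the frozen segmentation model's predictions and applying it as a post-processing step at test time. The autoencoder architecture, function names, and hyperparameters are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn as nn

class ErrorReversingAutoencoder(nn.Module):
    """Maps a segmentation model's (possibly erroneous) per-pixel class
    logits back toward the ground-truth labels, i.e. approximately
    reverses the segmentation model's error function."""
    def __init__(self, num_classes, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(num_classes, hidden, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden, hidden, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(hidden, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, logits):
        return self.decoder(self.encoder(logits))

def train_step(seg_model, era, images, labels, optimizer, criterion):
    # The segmentation model stays frozen: its prediction is treated as
    # the "error function" applied to the ground truth.
    with torch.no_grad():
        noisy_logits = seg_model(images)
    corrected = era(noisy_logits)        # autoencoder reverses the errors
    loss = criterion(corrected, labels)  # supervised by the clean labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def predict(seg_model, era, images):
    # Test time: the ERA corrects the segmentation output as post-processing.
    return era(seg_model(images)).argmax(dim=1)

A standard choice for criterion here would be nn.CrossEntropyLoss over per-pixel class logits. Because the autoencoder only consumes the segmentation output, it can be attached to any segmentation model without modifying or retraining that model.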

Original language: English
Title: 2022 IEEE International Conference on Image Processing, ICIP 2022 - Proceedings
Publisher: IEEE Computer Society
Pages: 106-110
Number of pages: 5
ISBN (electronic): 9781665496209
DOIs
Publication status: Published - 2022
Event: 29th IEEE International Conference on Image Processing, ICIP 2022 - Bordeaux, France
Duration: 16 Oct 2022 – 19 Oct 2022

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
ISSN (print): 1522-4880

Conference

Conference: 29th IEEE International Conference on Image Processing, ICIP 2022
Country/Territory: France
City: Bordeaux
Period: 16/10/22 – 19/10/22
