3D Adversarial Augmentations for Robust Out-of-Domain Predictions

Alexander Lehner, Stefano Gasperini, Alvaro Marcos-Ramiro, Michael Schmidt, Nassir Navab, Benjamin Busam, Federico Tombari

Publication: Contribution to journal › Article › Peer-review

Abstract

Since real-world training datasets cannot properly sample the long tail of the underlying data distribution, corner cases and rare out-of-domain samples can severely hinder the performance of state-of-the-art models. This problem becomes even more severe for dense tasks, such as 3D semantic segmentation, where points of non-standard objects can be confidently associated with the wrong class. In this work, we focus on improving the generalization to out-of-domain data. We achieve this by augmenting the training set with adversarial examples. First, we learn a set of vectors that deform the objects in an adversarial fashion. To prevent the adversarial examples from drifting too far from the existing data distribution, we preserve their plausibility through a series of constraints, ensuring sensor-awareness and shape smoothness. Then, we perform adversarial augmentation by applying the learned sample-independent vectors to the available objects when training a model. We conduct extensive experiments across a variety of scenarios on data from KITTI, Waymo, and CrashD for 3D object detection, and on data from SemanticKITTI, Waymo, and nuScenes for 3D semantic segmentation. Despite training on a single standard dataset, our approach substantially improves the robustness and generalization of both 3D object detection and 3D semantic segmentation methods to out-of-domain data.
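The abstract only outlines the pipeline, so the following is a minimal NumPy sketch of the augmentation step: applying a small set of learned, sample-independent deformation vectors to an object's point cloud at training time. The anchor-based vector assignment, the `max_norm` displacement cap (a crude stand-in for the paper's sensor-awareness and smoothness constraints), and all function and variable names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def adversarial_augment(points, deform_vectors, anchors, max_norm=0.1):
    """Sketch: deform one object's point cloud with sample-independent vectors.

    points:         (N, 3) points belonging to the object
    anchors:        (K, 3) anchor positions in the object's frame (illustrative;
                    how vectors are attached to the geometry is an assumption)
    deform_vectors: (K, 3) adversarially learned offsets, one per anchor
    max_norm:       cap on per-point displacement, a stand-in for the paper's
                    plausibility constraints
    """
    # Nearest-anchor assignment: each point inherits the offset of its
    # closest anchor.
    dists = np.linalg.norm(points[:, None, :] - anchors[None, :, :], axis=-1)  # (N, K)
    offsets = deform_vectors[dists.argmin(axis=1)]                             # (N, 3)

    # Clamp the displacement magnitude so the deformed shape stays close to
    # the original object.
    norms = np.linalg.norm(offsets, axis=1, keepdims=True)
    offsets = offsets * np.clip(norms, None, max_norm) / np.maximum(norms, 1e-8)

    return points + offsets

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    obj = rng.normal(size=(500, 3))            # stand-in for a LiDAR object's points
    anchors = obj[rng.choice(500, 16, replace=False)]
    vectors = 0.2 * rng.normal(size=(16, 3))   # would be learned adversarially beforehand
    deformed = adversarial_augment(obj, vectors, anchors)
```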

Original language: English
Pages (from-to): 931-963
Number of pages: 33
Journal: International Journal of Computer Vision
Volume: 132
Issue number: 3
DOIs
Publication status: Published - March 2024
