TY - GEN
T1 - To Adapt or Not to Adapt? Real-Time Adaptation for Semantic Segmentation
AU - Botet Colomer, Marc
AU - Dovesi, Pier Luigi
AU - Panagiotakopoulos, Theodoros
AU - Carvalho, João Frederico
AU - Härenstam-Nielsen, Linus
AU - Azizpour, Hossein
AU - Kjellström, Hedvig
AU - Cremers, Daniel
AU - Poggi, Matteo
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - The goal of Online Domain Adaptation for semantic segmentation is to handle unforeseeable domain changes that occur during deployment, such as sudden weather events. However, the high computational costs associated with brute-force adaptation make this paradigm unfeasible for real-world applications. In this paper we propose HAMLET, a Hardware-Aware Modular Least Expensive Training framework for real-time domain adaptation. Our approach includes a hardware-aware back-propagation orchestration agent (HAMT) and a dedicated domain-shift detector that enables active control over when and how the model is adapted (LT). Thanks to these advancements, our approach is capable of performing semantic segmentation while simultaneously adapting at more than 29 FPS on a single consumer-grade GPU. Experimental results on the OnDA and SHIFT benchmarks demonstrate our framework's encouraging trade-off between accuracy and speed.
AB - The goal of Online Domain Adaptation for semantic segmentation is to handle unforeseeable domain changes that occur during deployment, such as sudden weather events. However, the high computational costs associated with brute-force adaptation make this paradigm unfeasible for real-world applications. In this paper we propose HAMLET, a Hardware-Aware Modular Least Expensive Training framework for real-time domain adaptation. Our approach includes a hardware-aware back-propagation orchestration agent (HAMT) and a dedicated domain-shift detector that enables active control over when and how the model is adapted (LT). Thanks to these advancements, our approach is capable of performing semantic segmentation while simultaneously adapting at more than 29 FPS on a single consumer-grade GPU. Experimental results on the OnDA and SHIFT benchmarks demonstrate our framework's encouraging trade-off between accuracy and speed.
UR - http://www.scopus.com/inward/record.url?scp=85188276872&partnerID=8YFLogxK
U2 - 10.1109/ICCV51070.2023.01517
DO - 10.1109/ICCV51070.2023.01517
M3 - Conference contribution
AN - SCOPUS:85188276872
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 16502
EP - 16513
BT - Proceedings - 2023 IEEE/CVF International Conference on Computer Vision, ICCV 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 IEEE/CVF International Conference on Computer Vision, ICCV 2023
Y2 - 2 October 2023 through 6 October 2023
ER -