TY - GEN
T1 - Deeper depth prediction with fully convolutional residual networks
AU - Laina, Iro
AU - Rupprecht, Christian
AU - Belagiannis, Vasileios
AU - Tombari, Federico
AU - Navab, Nassir
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/12/15
Y1 - 2016/12/15
N2 - This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss, which is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires less training data than the current state of the art, while outperforming all approaches on depth estimation. Code and models are publicly available.
KW - CNN
KW - Depth prediction
UR - http://www.scopus.com/inward/record.url?scp=85011317214&partnerID=8YFLogxK
U2 - 10.1109/3DV.2016.32
DO - 10.1109/3DV.2016.32
M3 - Conference contribution
AN - SCOPUS:85011317214
T3 - Proceedings - 2016 4th International Conference on 3D Vision, 3DV 2016
SP - 239
EP - 248
BT - Proceedings - 2016 4th International Conference on 3D Vision, 3DV 2016
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 4th International Conference on 3D Vision, 3DV 2016
Y2 - 25 October 2016 through 28 October 2016
ER -