TY - JOUR
T1 - Object-Aware Monocular Depth Prediction with Instance Convolutions
AU - Simsar, Enis
AU - Örnek, Evin Pınar
AU - Manhardt, Fabian
AU - Dhamo, Helisa
AU - Navab, Nassir
AU - Tombari, Federico
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022/4/1
Y1 - 2022/4/1
N2 - With the advent of deep learning, estimating depth from a single RGB image has recently received a lot of attention, being capable of empowering many different applications ranging from path planning for robotics to computational cinematography. Nevertheless, while the depth maps are in their entirety fairly reliable, the estimates around object discontinuities are still far from satisfactory. This can be attributed to the fact that the convolutional operator naturally aggregates features across object discontinuities, resulting in smooth transitions rather than clear boundaries. Therefore, in order to circumvent this issue, we propose a novel convolutional operator which is explicitly tailored to avoid feature aggregation across different object parts. In particular, our method is based on estimating per-part depth values by means of super-pixels. The proposed convolutional operator, which we dub 'Instance Convolution,' then considers each object part individually on the basis of the estimated super-pixels. Our evaluation on the NYUv2, iBims, and KITTI datasets demonstrates the advantages of Instance Convolutions over classical convolutions at estimating depth around occlusion boundaries, while producing comparable results elsewhere. Our code is available at github.com/enisimsar/instance-conv.
AB - With the advent of deep learning, estimating depth from a single RGB image has recently received a lot of attention, being capable of empowering many different applications ranging from path planning for robotics to computational cinematography. Nevertheless, while the depth maps are in their entirety fairly reliable, the estimates around object discontinuities are still far from satisfactory. This can be attributed to the fact that the convolutional operator naturally aggregates features across object discontinuities, resulting in smooth transitions rather than clear boundaries. Therefore, in order to circumvent this issue, we propose a novel convolutional operator which is explicitly tailored to avoid feature aggregation across different object parts. In particular, our method is based on estimating per-part depth values by means of super-pixels. The proposed convolutional operator, which we dub 'Instance Convolution,' then considers each object part individually on the basis of the estimated super-pixels. Our evaluation on the NYUv2, iBims, and KITTI datasets demonstrates the advantages of Instance Convolutions over classical convolutions at estimating depth around occlusion boundaries, while producing comparable results elsewhere. Our code is available at github.com/enisimsar/instance-conv.
KW - Deep learning for visual perception
KW - RGB-D perception
UR - http://www.scopus.com/inward/record.url?scp=85125703060&partnerID=8YFLogxK
U2 - 10.1109/LRA.2022.3155823
DO - 10.1109/LRA.2022.3155823
M3 - Article
AN - SCOPUS:85125703060
SN - 2377-3766
VL - 7
SP - 5389
EP - 5396
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 2
ER -
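
For readers who want a concrete picture of the mechanism the abstract describes, the following is a minimal, hypothetical PyTorch sketch of a super-pixel-masked convolution in the spirit of Instance Convolutions. It is not the authors' reference implementation (see github.com/enisimsar/instance-conv for that); the function name, the partial-convolution-style renormalization, and the border handling are illustrative assumptions.

import torch
import torch.nn.functional as F

def instance_conv2d(x, labels, weight, bias=None):
    # Hypothetical sketch of an instance-masked convolution, NOT the
    # authors' reference code.
    # x:      (B, C_in, H, W)      input feature map
    # labels: (B, 1, H, W)         integer super-pixel ids
    # weight: (C_out, C_in, k, k)  convolution kernel, k odd
    B, C_in, H, W = x.shape
    C_out, _, k, _ = weight.shape
    pad = k // 2

    # Gather the k*k neighbourhood of every pixel.
    patches = F.unfold(x, k, padding=pad)            # (B, C_in*k*k, H*W)
    patches = patches.view(B, C_in, k * k, H * W)

    # Pad labels with -1 so image borders never match a real super-pixel id.
    lab = F.pad(labels.float(), (pad, pad, pad, pad), value=-1.0)
    lab_patches = F.unfold(lab, k)                   # (B, k*k, H*W)

    # Keep only neighbours sharing the centre pixel's super-pixel id,
    # so features are never aggregated across object discontinuities.
    center = labels.float().view(B, 1, H * W)
    mask = (lab_patches == center).float()           # (B, k*k, H*W)
    patches = patches * mask.unsqueeze(1)

    # Ordinary linear filtering on the masked patches.
    w = weight.view(C_out, C_in * k * k)
    out = torch.einsum('oi,bil->bol',
                       w, patches.reshape(B, C_in * k * k, H * W))

    # Renormalise by the number of valid neighbours (assumed here,
    # partial-convolution style) so activations stay comparable
    # across differently sized masks.
    valid = mask.sum(dim=1).clamp(min=1.0)           # (B, H*W)
    out = out * (k * k / valid).unsqueeze(1)
    if bias is not None:
        out = out + bias.view(1, C_out, 1)
    return out.view(B, C_out, H, W)

Dropping such an operator into a depth decoder in place of a standard conv layer, with labels produced by any off-the-shelf super-pixel method (e.g. SLIC), illustrates the behaviour the abstract claims: sharper depth transitions at occlusion boundaries, since features no longer leak between neighbouring object parts.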