TY - GEN
T1 - Model-based tracking at 300Hz using raw time-of-flight observations
AU - Stühmer, Jan
AU - Nowozin, Sebastian
AU - Fitzgibbon, Andrew
AU - Szeliski, Richard
AU - Perry, Travis
AU - Acharya, Sunil
AU - Cremers, Daniel
AU - Shotton, Jamie
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2015/2/17
Y1 - 2015/2/17
N2 - Consumer depth cameras have dramatically improved our ability to track rigid, articulated, and deformable 3D objects in real time. However, depth cameras have a limited temporal resolution (frame-rate) that restricts the accuracy and robustness of tracking, especially for fast or unpredictable motion. In this paper, we show how to perform model-based object tracking that allows us to reconstruct the object's depth at an order of magnitude higher frame-rate through simple modifications to an off-the-shelf depth camera. We focus on phase-based time-of-flight (ToF) sensing, which reconstructs each low frame-rate depth image from a set of short-exposure 'raw' infrared captures. These raw captures are taken in quick succession near the beginning of each depth frame, and differ in the modulation of their active illumination. We make two contributions. First, we detail how to perform model-based tracking against these raw captures. Second, we show that by reprogramming the camera to space the raw captures uniformly in time, we obtain a 10x higher frame-rate, and thereby improve the ability to track fast-moving objects.
AB - Consumer depth cameras have dramatically improved our ability to track rigid, articulated, and deformable 3D objects in real time. However, depth cameras have a limited temporal resolution (frame-rate) that restricts the accuracy and robustness of tracking, especially for fast or unpredictable motion. In this paper, we show how to perform model-based object tracking that allows us to reconstruct the object's depth at an order of magnitude higher frame-rate through simple modifications to an off-the-shelf depth camera. We focus on phase-based time-of-flight (ToF) sensing, which reconstructs each low frame-rate depth image from a set of short-exposure 'raw' infrared captures. These raw captures are taken in quick succession near the beginning of each depth frame, and differ in the modulation of their active illumination. We make two contributions. First, we detail how to perform model-based tracking against these raw captures. Second, we show that by reprogramming the camera to space the raw captures uniformly in time, we obtain a 10x higher frame-rate, and thereby improve the ability to track fast-moving objects.
UR - https://www.scopus.com/pages/publications/84973868376
U2 - 10.1109/ICCV.2015.408
DO - 10.1109/ICCV.2015.408
M3 - Conference contribution
AN - SCOPUS:84973868376
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 3577
EP - 3585
BT - 2015 International Conference on Computer Vision, ICCV 2015
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 15th IEEE International Conference on Computer Vision, ICCV 2015
Y2 - 11 December 2015 through 18 December 2015
ER -