TY - GEN
T1 - Marker-less tracking for AR
T2 - 2002 International Symposium on Mixed and Augmented Reality, ISMAR 2002
AU - Genc, Y.
AU - Riedel, S.
AU - Souvannavong, F.
AU - Akinlar, C.
AU - Navab, N.
N1 - Publisher Copyright:
© 2002 IEEE.
PY - 2002
Y1 - 2002
N2 - Estimating the pose of a camera (virtual or real) in which some augmentation takes place is one of the most important parts of an augmented reality (AR) system. The availability of powerful processors and fast frame grabbers has made the use of vision-based trackers commonplace due to their accuracy as well as flexibility and ease of use. Current vision-based trackers are based on tracking of markers. The use of markers increases robustness and reduces computational requirements. However, their use can be very complicated, as they require maintenance. Direct use of scene features for tracking, therefore, is desirable. To this end, we describe a general system that tracks the position and orientation of a camera observing a scene without visual markers. Our method is based on a two-stage process. In the first stage, a set of features is learned with the help of an external tracking system during use. The second stage uses these learned features for camera tracking when the system in the first stage decides that it is possible to do so. The system is very general so that it can employ any available feature tracking and pose estimation system for learning and tracking. We experimentally demonstrate the viability of the method in real-life examples.
UR - http://www.scopus.com/inward/record.url?scp=84911456432&partnerID=8YFLogxK
U2 - 10.1109/ISMAR.2002.1115122
DO - 10.1109/ISMAR.2002.1115122
M3 - Conference contribution
AN - SCOPUS:84911456432
T3 - Proceedings - International Symposium on Mixed and Augmented Reality, ISMAR 2002
SP - 295
EP - 304
BT - Proceedings - International Symposium on Mixed and Augmented Reality, ISMAR 2002
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 30 September 2002 through 1 October 2002
ER -