TY - GEN
T1 - Optical see-through calibration with vision-based trackers
T2 - IEEE and ACM International Symposium on Augmented Reality, ISAR 2001
AU - Genc, Y.
AU - Tuceryan, M.
AU - Khamene, A.
AU - Navab, N.
N1 - Publisher Copyright:
© 2001 IEEE.
PY - 2001
Y1 - 2001
AB - Recently, M. Tuceryan and N. Navab (2000) introduced a method for calibrating an optical see-through system based on the alignment of a set of 2D markers on the display with a single point in the scene, while not restricting the user's head movements (the single point active alignment method, or SPAAM). This method is applicable with any tracking system, provided that it gives the pose of the sensor attached to the see-through display. When cameras are used for tracking, one can avoid the computationally intensive and potentially unstable pose estimation process. A vision-based tracker usually consists of a camera attached to the optical see-through display, which observes a set of known features in the scene. From the observed locations of these features, the pose of the camera can be computed. Most pose computation methods are very involved and can be unstable at times. The authors propose to keep the projection matrix for the tracker camera without decomposing it into intrinsic and extrinsic parameters and to use it within the SPAAM method directly. Propagating the projection matrices from the tracker camera to the virtual camera, which represents the eye and optical see-through display combination as a pinhole camera model, allows them to skip the most time-consuming and potentially unstable step of registration, namely, estimating the pose of the tracker camera.
UR - http://www.scopus.com/inward/record.url?scp=65349136252&partnerID=8YFLogxK
U2 - 10.1109/ISAR.2001.970524
DO - 10.1109/ISAR.2001.970524
M3 - Conference contribution
AN - SCOPUS:65349136252
T3 - Proceedings - IEEE and ACM International Symposium on Augmented Reality, ISAR 2001
SP - 147
EP - 156
BT - Proceedings - IEEE and ACM International Symposium on Augmented Reality, ISAR 2001
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 29 October 2001 through 30 October 2001
ER -