Abstract
In this paper we present a novel approach to the calibration of video cameras in an augmented reality image-guided surgery system. Whereas most calibration algorithms rely on the extraction of features such as points or lines, our proposed calibration algorithm determines the intrinsic and extrinsic camera calibration parameters by maximising the similarity between the real view of a calibration object and a synthetic view of the same object. Our new method offers a number of advantages over existing calibration techniques. First, our calibration algorithm does not require the identification of fiducials such as points or lines in the video images. As a result, the algorithm does not require any feature extraction such as corner or edge detection; instead, it uses the image intensities directly. Second, the calibration algorithm is model-based, which means that different camera or lens distortion models can be easily integrated into it. We have applied the algorithm to the calibration of a head-mounted augmented reality system for image-guided neurosurgery. Our results show that the proposed calibration algorithm can lead to improved accuracy compared to conventional feature-based calibration techniques.
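The abstract does not give implementation details, so the sketch below is only an illustration of the general idea, not the authors' implementation: it renders a synthetic view of a planar checkerboard under a pinhole camera model and refines the intrinsic and extrinsic parameters by maximising the mutual information between the rendered and the captured image. The pattern geometry, the parameterisation, the helper names (`render_pattern`, `mutual_information`, `calibrate`), and the derivative-free Powell search are all assumptions made for the sake of the example.

```python
# Minimal sketch of intensity-based camera calibration: optimise camera
# parameters so that a rendered view of a known calibration pattern best
# matches the captured video frame, measured by mutual information.
import numpy as np
from scipy.optimize import minimize


def mutual_information(a, b, bins=32):
    """Mutual information between two equally sized grey-level images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal of image a
    py = p.sum(axis=0, keepdims=True)   # marginal of image b
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))


def render_pattern(params, image_shape, squares=8, square_mm=20.0):
    """Render a planar checkerboard (world plane z = 0) seen by a pinhole camera.

    params = [fx, fy, cx, cy, rx, ry, rz, tx, ty, tz]
    (focal lengths and principal point in pixels, Rodrigues rotation, translation in mm).
    """
    fx, fy, cx, cy, rx, ry, rz, tx, ty, tz = params
    h, w = image_shape
    # Rodrigues vector -> rotation matrix
    rvec = np.array([rx, ry, rz])
    theta = np.linalg.norm(rvec) + 1e-12
    k = rvec / theta
    Kx = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    R = np.eye(3) + np.sin(theta) * Kx + (1 - np.cos(theta)) * (Kx @ Kx)
    K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]])
    # For the plane z = 0, the homography pattern (mm) -> pixels is K [r1 r2 t];
    # invert it to look up the pattern intensity behind every pixel.
    H = K @ np.column_stack([R[:, 0], R[:, 1], [tx, ty, tz]])
    Hinv = np.linalg.inv(H)
    u, v = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
    pts = np.stack([u.ravel(), v.ravel(), np.ones(u.size)])
    xy = Hinv @ pts
    x, y = xy[0] / xy[2], xy[1] / xy[2]          # pattern coordinates in mm
    cell_x = np.floor(x / square_mm).astype(int)
    cell_y = np.floor(y / square_mm).astype(int)
    inside = (x >= 0) & (y >= 0) & (x < squares * square_mm) & (y < squares * square_mm)
    img = np.where(inside, ((cell_x + cell_y) % 2) * 255.0, 0.0)
    return img.reshape(h, w)


def calibrate(video_frame, initial_params):
    """Refine intrinsic/extrinsic parameters by maximising mutual information."""
    cost = lambda p: -mutual_information(video_frame, render_pattern(p, video_frame.shape))
    result = minimize(cost, initial_params, method="Powell")  # derivative-free search
    return result.x


if __name__ == "__main__":
    # Simulate a captured frame from known parameters, perturb the initial guess,
    # then refine it with the intensity-based similarity measure.
    true_p = np.array([800, 800, 320, 240, 0.1, -0.2, 0.05, -60, -70, 500.0])
    frame = render_pattern(true_p, (480, 640)) + np.random.normal(0, 10, (480, 640))
    guess = true_p + np.array([30, 30, 5, 5, 0.02, 0.02, 0.02, 5, 5, 20.0])
    estimate = calibrate(frame, guess)
    print("estimated parameters:", np.round(estimate, 2))
```

Because the similarity measure is evaluated directly on pixel intensities, no corners or edges are ever detected; a derivative-free optimiser is used here since the rendered checkerboard is not smooth in the parameters.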
| Original language | English |
| --- | --- |
| Pages (from-to) | 463-471 |
| Number of pages | 9 |
| Journal | Proceedings of SPIE - The International Society for Optical Engineering |
| Volume | 4681 |
| Issue number | 1 |
| DOIs | |
| State | Published - 16 May 2002 |
| Externally published | Yes |
Keywords
- 2D/3D registration
- Augmented reality
- Camera calibration
- Image-guided surgery
- Mutual information