TY - GEN
T1 - Self-supervised Probe Pose Regression via Optimized Ultrasound Representations for US-CT Fusion
AU - Azampour, Mohammad Farid
AU - Velikova, Yordanka
AU - Fatemizadeh, Emad
AU - Dakua, Sarada Prasad
AU - Navab, Nassir
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
PY - 2024
Y1 - 2024
N2 - Aligning 2D ultrasound images with 3D CT scans of the liver holds significant clinical value in enhancing diagnostic precision, surgical planning, and treatment delivery. Conventional approaches primarily rely on optimization techniques, which often have a limited capture range and are susceptible to initialization errors. To address these limitations, we define the problem as “probe pose regression” and leverage deep learning for a more robust and efficient solution for liver US-CT registration without access to paired data. The proposed method is a three-part framework that combines ultrasound rendering, a generative model, and pose regression. In the first stage, we exploit a differentiable ultrasound rendering model designed to synthesize ultrasound images given segmentation labels. We let the downstream task optimize the rendering parameters, enhancing the performance of the overall method. In the second stage, a generative model bridges the gap between real and rendered ultrasound images, enabling application on real B-mode images. Finally, we use a patient-specific pose regression network, trained in a self-supervised manner using only synthetic images and their known poses. We use ultrasound and CT scans from a dual-modality human abdomen phantom to validate the proposed method. Our experimental results indicate that the proposed method can estimate probe poses within an acceptable error margin, which can later be fine-tuned using conventional methods. This capability confirms that the proposed framework can serve as a reliable initialization step for US-CT fusion and, when coupled with conventional methods, achieve fully automated US-CT fusion. The code and the dataset are available at https://github.com/mfazampour/SS_Probe_Pose_Regression.
AB - Aligning 2D ultrasound images with 3D CT scans of the liver holds significant clinical value in enhancing diagnostic precision, surgical planning, and treatment delivery. Conventional approaches primarily rely on optimization techniques, which often have a limited capture range and are susceptible to initialization errors. To address these limitations, we define the problem as “probe pose regression” and leverage deep learning for a more robust and efficient solution for liver US-CT registration without access to paired data. The proposed method is a three-part framework that combines ultrasound rendering, a generative model, and pose regression. In the first stage, we exploit a differentiable ultrasound rendering model designed to synthesize ultrasound images given segmentation labels. We let the downstream task optimize the rendering parameters, enhancing the performance of the overall method. In the second stage, a generative model bridges the gap between real and rendered ultrasound images, enabling application on real B-mode images. Finally, we use a patient-specific pose regression network, trained in a self-supervised manner using only synthetic images and their known poses. We use ultrasound and CT scans from a dual-modality human abdomen phantom to validate the proposed method. Our experimental results indicate that the proposed method can estimate probe poses within an acceptable error margin, which can later be fine-tuned using conventional methods. This capability confirms that the proposed framework can serve as a reliable initialization step for US-CT fusion and, when coupled with conventional methods, achieve fully automated US-CT fusion. The code and the dataset are available at https://github.com/mfazampour/SS_Probe_Pose_Regression.
KW - DL-based pose regression
KW - Deep generative models
KW - Image registration
KW - US-CT fusion
UR - http://www.scopus.com/inward/record.url?scp=85188664844&partnerID=8YFLogxK
U2 - 10.1007/978-981-97-1335-6_11
DO - 10.1007/978-981-97-1335-6_11
M3 - Conference contribution
AN - SCOPUS:85188664844
SN - 9789819713349
T3 - Lecture Notes in Electrical Engineering
SP - 111
EP - 121
BT - Proceedings of 2023 International Conference on Medical Imaging and Computer-Aided Diagnosis (MICAD 2023) - Medical Imaging and Computer-Aided Diagnosis
A2 - Su, Ruidan
A2 - Zhang, Yu-Dong
A2 - Frangi, Alejandro F.
PB - Springer Science and Business Media Deutschland GmbH
T2 - International Conference on Medical Imaging and Computer-Aided Diagnosis, MICAD 2023
Y2 - 9 December 2023 through 10 December 2023
ER -