Self-supervised Probe Pose Regression via Optimized Ultrasound Representations for US-CT Fusion

Mohammad Farid Azampour, Yordanka Velikova, Emad Fatemizadeh, Sarada Prasad Dakua, Nassir Navab

Publication: Contribution to book/report/conference proceedings › Conference paper › Peer-reviewed

Abstract

Aligning 2D ultrasound images with 3D CT scans of the liver holds significant clinical value in enhancing diagnostic precision, surgical planning, and treatment delivery. Conventional approaches primarily rely on optimization techniques, which often have a limited capture range and are susceptible to initialization errors. To address these limitations, we define the problem as “probe pose regression” and leverage deep learning for a more robust and efficient solution for liver US-CT registration without access to paired data. The proposed method is a three-part framework that combines ultrasound rendering, a generative model, and pose regression. In the first stage, we exploit a differentiable ultrasound rendering model designed to synthesize ultrasound images given segmentation labels. We let the downstream task optimize the rendering parameters, enhancing the performance of the overall method. In the second stage, a generative model bridges the gap between real and rendered ultrasound images, enabling application to real B-mode images. Finally, we use a patient-specific pose regression network, trained in a self-supervised manner using only synthetic images and their known poses. We use ultrasound and CT scans from a dual-modality human abdomen phantom to validate the proposed method. Our experimental results indicate that the proposed method can estimate probe poses within an acceptable error margin, which can subsequently be refined using conventional methods. This confirms that the proposed framework can serve as a reliable initialization step for US-CT fusion and, when coupled with conventional methods, enable fully automated US-CT fusion. The code and the dataset are available at https://github.com/mfazampour/SS_Probe_Pose_Regression.
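
The self-supervised training setup described in the abstract — a regression network fitted on rendered ultrasound images paired with their known probe poses — can be sketched as follows. This is a minimal illustration assuming a PyTorch-style pipeline; the network architecture, the translation/axis-angle pose parameterization, the loss weighting, and all tensor shapes are assumptions for exposition and are not taken from the paper or the linked repository.

```python
# Minimal sketch (not the authors' code): a patient-specific pose regression
# network trained on rendered ultrasound images with known probe poses.
# The 6-DoF pose is parameterized here as translation + axis-angle rotation;
# network size, loss weighting, and data shapes are illustrative assumptions.
import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # Small CNN encoder for single-channel B-mode-like images (e.g. 1x256x256).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Separate heads for translation (mm) and rotation (axis-angle, rad).
        self.trans_head = nn.Linear(128, 3)
        self.rot_head = nn.Linear(128, 3)

    def forward(self, x):
        feat = self.encoder(x)
        return self.trans_head(feat), self.rot_head(feat)

def pose_loss(pred_t, pred_r, gt_t, gt_r, beta=10.0):
    # Weighted sum of translation and rotation errors; beta balances the
    # scale difference between millimetres and radians (assumed value).
    return nn.functional.mse_loss(pred_t, gt_t) + beta * nn.functional.mse_loss(pred_r, gt_r)

if __name__ == "__main__":
    model = PoseRegressor()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    # Stand-in for a loader of (rendered image, known probe pose) pairs
    # produced by a differentiable ultrasound renderer.
    images = torch.rand(8, 1, 256, 256)
    gt_translation = torch.rand(8, 3) * 50.0   # mm, hypothetical range
    gt_rotation = torch.rand(8, 3) * 0.5       # axis-angle radians, hypothetical range

    pred_t, pred_r = model(images)
    loss = pose_loss(pred_t, pred_r, gt_translation, gt_rotation)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"training loss: {loss.item():.4f}")
```

Because the poses of the rendered images are known exactly from the renderer, no manual annotation of real B-mode images is needed; the generative model described above is what allows a network trained this way to be applied to real images at test time.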

Original language: English
Title: Proceedings of 2023 International Conference on Medical Imaging and Computer-Aided Diagnosis (MICAD 2023) - Medical Imaging and Computer-Aided Diagnosis
Editors: Ruidan Su, Yu-Dong Zhang, Alejandro F. Frangi
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 111-121
Number of pages: 11
ISBN (Print): 9789819713349
DOIs
Publication status: Published - 2024
Event: International Conference on Medical Imaging and Computer-Aided Diagnosis, MICAD 2023 - Cambridge, United Kingdom
Duration: 9 Dec 2023 – 10 Dec 2023

Publication series

Name: Lecture Notes in Electrical Engineering
Volume: 1166 LNEE
ISSN (Print): 1876-1100
ISSN (electronic): 1876-1119

Conference

Conference: International Conference on Medical Imaging and Computer-Aided Diagnosis, MICAD 2023
Country/Territory: United Kingdom
City: Cambridge
Period: 9/12/23 – 10/12/23
