TY - GEN
T1 - Micro-CT synthesis and inner ear super resolution via generative adversarial networks and Bayesian inference
AU - Li, Hongwei
AU - Prasad, Rameshwara G.N.
AU - Sekuboyina, Anjany
AU - Niu, Chen
AU - Bai, Siwei
AU - Hemmert, Werner
AU - Menze, Bjoern
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/4/13
Y1 - 2021/4/13
N2 - Existing medical image super-resolution methods rely on pairs of low- and high-resolution images to learn a mapping in a fully supervised manner. However, such image pairs are often not available in clinical practice. In this paper, we address the super-resolution problem in a real-world scenario using unpaired data and synthesize Micro-CT images of the temporal bone structure embedded in the inner ear at eight times higher linear resolution. We explore cycle-consistency generative adversarial networks for super-resolution and equip the model with Bayesian inference. We further introduce the Hu moments distance as an evaluation metric to quantify the shape of the temporal bone. We evaluate our method on a public inner ear CT dataset and observe both visual and quantitative improvements over state-of-the-art supervised deep-learning-based methods. Further, in a multi-rater visual evaluation experiment, three inner-ear researchers consistently give our method the highest quality scores among the three methods. Furthermore, we are able to quantify uncertainty in the unpaired translation task, and the uncertainty map provides structural information about the temporal bone.
AB - Existing medical image super-resolution methods rely on pairs of low- and high-resolution images to learn a mapping in a fully supervised manner. However, such image pairs are often not available in clinical practice. In this paper, we address the super-resolution problem in a real-world scenario using unpaired data and synthesize Micro-CT images of the temporal bone structure embedded in the inner ear at eight times higher linear resolution. We explore cycle-consistency generative adversarial networks for super-resolution and equip the model with Bayesian inference. We further introduce the Hu moments distance as an evaluation metric to quantify the shape of the temporal bone. We evaluate our method on a public inner ear CT dataset and observe both visual and quantitative improvements over state-of-the-art supervised deep-learning-based methods. Further, in a multi-rater visual evaluation experiment, three inner-ear researchers consistently give our method the highest quality scores among the three methods. Furthermore, we are able to quantify uncertainty in the unpaired translation task, and the uncertainty map provides structural information about the temporal bone.
UR - http://www.scopus.com/inward/record.url?scp=85107199725&partnerID=8YFLogxK
U2 - 10.1109/ISBI48211.2021.9434061
DO - 10.1109/ISBI48211.2021.9434061
M3 - Conference contribution
AN - SCOPUS:85107199725
T3 - Proceedings - International Symposium on Biomedical Imaging
SP - 1500
EP - 1504
BT - 2021 IEEE 18th International Symposium on Biomedical Imaging, ISBI 2021
PB - IEEE Computer Society
T2 - 18th IEEE International Symposium on Biomedical Imaging, ISBI 2021
Y2 - 13 April 2021 through 16 April 2021
ER -