TY - GEN
T1 - Generating X-ray Images from Point Clouds Using Conditional Generative Adversarial Networks
AU - Haiderbhai, Mustafa
AU - Ledesma, Sergio
AU - Navab, Nassir
AU - Fallavollita, Pascal
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/7
Y1 - 2020/7
N2 - Simulating medical images such as X-rays is of key interest to reduce radiation in non-diagnostic visualization scenarios. Past state-of-the-art methods utilize ray tracing, which is reliant on 3D models. To our knowledge, no approach exists for cases where point clouds from depth cameras and other sensors are the only input modality. We propose a method for estimating an X-ray image from a generic point cloud using a conditional generative adversarial network (CGAN). We train a CGAN (pix2pix) to translate point cloud images into X-ray images using a dataset created inside our custom synthetic data generator. Additionally, point clouds of multiple densities are examined to determine the effect of density on the image translation problem. The results from the CGAN show that this type of network can predict X-ray images from point clouds. Higher point cloud densities outperformed the two lowest point cloud densities. However, the networks trained with high-density point clouds did not exhibit a significant difference when compared with the networks trained with medium densities. We prove that CGANs can be applied to image translation problems in the medical domain and show the feasibility of using this approach when 3D models are not available. Further work includes overcoming the occlusion and quality limitations of the generic approach and applying CGANs to other medical image translation problems.
AB - Simulating medical images such as X-rays is of key interest to reduce radiation in non-diagnostic visualization scenarios. Past state-of-the-art methods utilize ray tracing, which is reliant on 3D models. To our knowledge, no approach exists for cases where point clouds from depth cameras and other sensors are the only input modality. We propose a method for estimating an X-ray image from a generic point cloud using a conditional generative adversarial network (CGAN). We train a CGAN (pix2pix) to translate point cloud images into X-ray images using a dataset created inside our custom synthetic data generator. Additionally, point clouds of multiple densities are examined to determine the effect of density on the image translation problem. The results from the CGAN show that this type of network can predict X-ray images from point clouds. Higher point cloud densities outperformed the two lowest point cloud densities. However, the networks trained with high-density point clouds did not exhibit a significant difference when compared with the networks trained with medium densities. We prove that CGANs can be applied to image translation problems in the medical domain and show the feasibility of using this approach when 3D models are not available. Further work includes overcoming the occlusion and quality limitations of the generic approach and applying CGANs to other medical image translation problems.
UR - http://www.scopus.com/inward/record.url?scp=85090999370&partnerID=8YFLogxK
U2 - 10.1109/EMBC44109.2020.9175420
DO - 10.1109/EMBC44109.2020.9175420
M3 - Conference contribution
AN - SCOPUS:85090999370
T3 - Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS
SP - 1588
EP - 1591
BT - 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2020
Y2 - 20 July 2020 through 24 July 2020
ER -