TY - GEN
T1 - Learning a Conditional Generative Model for Anatomical Shape Analysis
AU - Gutiérrez-Becker, Benjamín
AU - Wachinger, Christian
N1 - Publisher Copyright:
© 2019, Springer Nature Switzerland AG.
PY - 2019
Y1 - 2019
N2 - We introduce a novel conditional generative model for unsupervised learning of anatomical shapes based on a conditional variational autoencoder (CVAE). Our model is specifically designed to learn latent, low-dimensional shape embeddings from point clouds of large datasets. By using a conditional framework, we are able to introduce side information to the model, leading to accurate reconstructions and providing a mechanism to control the generative process. Our network design provides invariance to similarity transformations and avoids the need to identify point correspondences between shapes. In contrast to previous discriminative approaches based on deep learning, our generative method allows us not only to produce shape descriptors from a point cloud, but also to reconstruct shapes from the embedding. We demonstrate the advantages of this approach by: (i) learning low-dimensional representations of the hippocampus and showing low reconstruction errors when projecting them back to the shape space, and (ii) demonstrating that synthetic point clouds generated by our model capture morphological differences associated with Alzheimer’s disease, to the point that they can be used to train a discriminative model for disease classification.
AB - We introduce a novel conditional generative model for unsupervised learning of anatomical shapes based on a conditional variational autoencoder (CVAE). Our model is specifically designed to learn latent, low-dimensional shape embeddings from point clouds of large datasets. By using a conditional framework, we are able to introduce side information to the model, leading to accurate reconstructions and providing a mechanism to control the generative process. Our network design provides invariance to similarity transformations and avoids the need to identify point correspondences between shapes. In contrast to previous discriminative approaches based on deep learning, our generative method allows us not only to produce shape descriptors from a point cloud, but also to reconstruct shapes from the embedding. We demonstrate the advantages of this approach by: (i) learning low-dimensional representations of the hippocampus and showing low reconstruction errors when projecting them back to the shape space, and (ii) demonstrating that synthetic point clouds generated by our model capture morphological differences associated with Alzheimer’s disease, to the point that they can be used to train a discriminative model for disease classification.
UR - http://www.scopus.com/inward/record.url?scp=85066135651&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-20351-1_39
DO - 10.1007/978-3-030-20351-1_39
M3 - Conference contribution
AN - SCOPUS:85066135651
SN - 9783030203504
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 505
EP - 516
BT - Information Processing in Medical Imaging - 26th International Conference, IPMI 2019, Proceedings
A2 - Chung, Albert C.S.
A2 - Bao, Siqi
A2 - Gee, James C.
A2 - Yushkevich, Paul A.
PB - Springer Verlag
T2 - 26th International Conference on Information Processing in Medical Imaging, IPMI 2019
Y2 - 2 June 2019 through 7 June 2019
ER -