Deep Reinforcement Learning for Organ Localization in CT

Fernando Navarro, Anjany Sekuboyina, Diana Waldmannstetter, Jan C. Peeken, Stephanie E. Combs, Bjoern H. Menze

Research output: Contribution to journal › Conference article › peer-review

30 Scopus citations

Abstract

Robust localization of organs in computed tomography (CT) scans is a constant pre-processing requirement for organ-specific image retrieval, radiotherapy planning, and interventional image analysis. In contrast to current solutions based on exhaustive search or region proposals, which require large amounts of annotated data, we propose a deep reinforcement learning approach for organ localization in CT. In this work, an artificial agent actively teaches itself to localize organs in CT by learning from its successes and mistakes. Within the context of reinforcement learning, we propose a novel set of actions tailored for organ localization in CT. Our method can be used as a plug-and-play module for localizing any organ of interest. We evaluate the proposed solution on the public VISCERAL dataset, which contains CT scans with varying fields of view and multiple organs. We achieve an overall intersection over union of 0.63, an absolute median wall distance of 2.25 mm, and a median distance between centroids of 3.65 mm.
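The abstract does not spell out the agent's action set or reward; as a rough illustration only, the sketch below shows the kind of bounding-box environment such an RL agent typically interacts with: an axis-aligned 3D box is translated or scaled by discrete actions, and the reward reflects the change in intersection over union with the ground-truth box. All names (OrganBoxEnv, the 8-action layout, step sizes, stopping threshold) are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def iou_3d(a, b):
    """IoU of two axis-aligned 3D boxes (x_min, y_min, z_min, x_max, y_max, z_max)."""
    lo = np.maximum(a[:3], b[:3])
    hi = np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0, None))
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    return inter / (vol_a + vol_b - inter + 1e-8)

class OrganBoxEnv:
    """Hypothetical bounding-box environment for RL-based organ localization."""

    # 8 discrete actions (an assumption): +/- translation along x, y, z, grow, shrink.
    N_ACTIONS = 8

    def __init__(self, gt_box, volume_shape, step_size=5.0, scale=0.1):
        self.gt = np.asarray(gt_box, dtype=float)
        self.shape = np.asarray(volume_shape, dtype=float)
        self.step_size = step_size
        self.scale = scale
        self.reset()

    def reset(self):
        # Start from a box covering the central half of the volume.
        self.box = np.concatenate([0.25 * self.shape, 0.75 * self.shape])
        return self.box.copy()

    def step(self, action):
        prev_iou = iou_3d(self.box, self.gt)
        delta = np.zeros(6)
        if action < 6:                      # translate the box along one axis
            axis, sign = action % 3, 1 if action < 3 else -1
            delta[axis] = delta[axis + 3] = sign * self.step_size
        else:                               # grow (6) or shrink (7) the box
            half = 0.5 * (self.box[3:] - self.box[:3]) * self.scale
            sign = 1 if action == 6 else -1
            delta[:3], delta[3:] = -sign * half, sign * half
        self.box = np.clip(self.box + delta, 0, np.tile(self.shape, 2))
        new_iou = iou_3d(self.box, self.gt)
        reward = np.sign(new_iou - prev_iou)   # +1 if the move improved IoU, -1 otherwise
        done = new_iou > 0.85                  # arbitrary stopping threshold
        return self.box.copy(), reward, done

# Example rollout with a random policy (illustrative values only):
env = OrganBoxEnv(gt_box=(40, 60, 30, 120, 160, 90), volume_shape=(256, 256, 180))
state = env.reset()
for _ in range(50):
    state, reward, done = env.step(np.random.randint(OrganBoxEnv.N_ACTIONS))
    if done:
        break
```

In practice the agent's policy would be a deep Q-network trained on image patches cropped around the current box; the environment above only captures the action/reward loop that such training would drive.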

Original language: English
Pages (from-to): 544-554
Number of pages: 11
Journal: Proceedings of Machine Learning Research
Volume: 121
State: Published - 2020
Event: 3rd Conference on Medical Imaging with Deep Learning, MIDL 2020 - Virtual, Online, Canada
Duration: 6 Jul 2020 - 8 Jul 2020

Keywords

  • Organ localization
  • computed tomography
  • deep reinforcement learning
