DeepCut: Object Segmentation from Bounding Box Annotations Using Convolutional Neural Networks

Martin Rajchl, Matthew C.H. Lee, Ozan Oktay, Konstantinos Kamnitsas, Jonathan Passerat-Palmbach, Wenjia Bai, Mellisa Damodaram, Mary A. Rutherford, Joseph V. Hajnal, Bernhard Kainz, Daniel Rueckert

Research output: Contribution to journal › Article › peer-review

318 Scopus citations

Abstract

In this paper, we propose DeepCut, a method to obtain pixelwise object segmentations given an image dataset labelled with weak annotations, in our case bounding boxes. It extends the approach of the well-known GrabCut [1] method to include machine learning by training a neural network classifier from bounding box annotations. We formulate the problem as an energy minimisation problem over a densely-connected conditional random field and iteratively update the training targets to obtain pixelwise object segmentations. Additionally, we propose variants of the DeepCut method and compare them to a naïve approach to CNN training under weak supervision. We test its applicability to brain and lung segmentation problems on a challenging fetal magnetic resonance dataset and obtain encouraging results in terms of accuracy.
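The iterative scheme described in the abstract (train a classifier from box-derived targets, predict pixelwise labels, regularise, then retrain on the updated targets) can be sketched as follows. This is an illustrative toy, not the paper's method: a two-class Gaussian intensity model stands in for the CNN, and the hard bounding-box constraint stands in for the dense CRF regularisation; all function and variable names are our own.

```python
import numpy as np

def deepcut_style_loop(image, bbox, n_iters=5):
    """Toy iterative target-update loop in the spirit of DeepCut.

    A Gaussian intensity model per class replaces the CNN classifier,
    and the bounding-box constraint replaces the dense CRF step.
    """
    h, w = image.shape
    y0, y1, x0, x1 = bbox
    inside = np.zeros((h, w), dtype=bool)
    inside[y0:y1, x0:x1] = True

    # Initial training targets: every pixel inside the box is foreground.
    targets = inside.copy()
    for _ in range(n_iters):
        # "Train": estimate per-class intensity statistics from targets.
        fg, bg = image[targets], image[~targets]
        mu_f, sd_f = fg.mean(), fg.std() + 1e-6
        mu_b, sd_b = bg.mean(), bg.std() + 1e-6

        # "Predict": pixelwise Gaussian log-likelihood comparison.
        ll_f = -0.5 * ((image - mu_f) / sd_f) ** 2 - np.log(sd_f)
        ll_b = -0.5 * ((image - mu_b) / sd_b) ** 2 - np.log(sd_b)
        pred = ll_f > ll_b

        # "Regularise": pixels outside the box are always background.
        new_targets = pred & inside
        if new_targets.sum() == 0:  # guard against degenerate updates
            break
        targets = new_targets      # updated targets for the next round
    return targets

# Synthetic example: a bright square inside a deliberately loose box.
rng = np.random.RandomState(0)
img = np.zeros((32, 32))
img[10:20, 10:20] = 1.0
img += 0.05 * rng.randn(32, 32)
mask = deepcut_style_loop(img, bbox=(6, 24, 6, 24))
```

Starting from the loose box, the loop shrinks the foreground targets toward the bright object over a few iterations; in the paper the classifier is a CNN and the regularisation is a densely-connected CRF, so each round's predictions are both appearance-driven and spatially smoothed before becoming the next round's training targets.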

Original language: English
Article number: 7739993
Pages (from-to): 674-683
Number of pages: 10
Journal: IEEE Transactions on Medical Imaging
Volume: 36
Issue number: 2
DOIs
State: Published - Feb 2017
Externally published: Yes

Keywords

  • Bounding box
  • DeepCut
  • convolutional neural networks
  • image segmentation
  • machine learning
  • weak annotations
