DeepCut: Object Segmentation from Bounding Box Annotations Using Convolutional Neural Networks

Martin Rajchl, Matthew C.H. Lee, Ozan Oktay, Konstantinos Kamnitsas, Jonathan Passerat-Palmbach, Wenjia Bai, Mellisa Damodaram, Mary A. Rutherford, Joseph V. Hajnal, Bernhard Kainz, Daniel Rueckert

Publication: Contribution to journal › Article › Peer-reviewed

320 citations (Scopus)

Abstract

In this paper, we propose DeepCut, a method to obtain pixelwise object segmentations given an image dataset labelled with weak annotations, in our case bounding boxes. It extends the approach of the well-known GrabCut [1] method to include machine learning by training a neural network classifier from bounding box annotations. We formulate the problem as an energy minimisation problem over a densely-connected conditional random field and iteratively update the training targets to obtain pixelwise object segmentations. Additionally, we propose variants of the DeepCut method and compare those to a naïve approach to CNN training under weak supervision. We test its applicability to solve brain and lung segmentation problems on a challenging fetal magnetic resonance dataset and obtain encouraging results in terms of accuracy.
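The abstract describes an alternating scheme: train a classifier from the current (weak) labels, re-predict the labels inside the bounding box, and repeat. The sketch below illustrates only that iteration structure with a toy intensity-based classifier in NumPy; the actual method uses a CNN and a densely-connected CRF, and the function and variable names here are illustrative, not from the paper's code.

```python
import numpy as np

def deepcut_sketch(image, bbox_mask, n_iter=5):
    """Toy illustration of the DeepCut-style alternating scheme:
    (1) fit a model to the current pixel labels,
    (2) re-predict labels inside the bounding box,
    and repeat. A real implementation trains a CNN and regularises
    the targets with a densely-connected CRF instead of this
    nearest-mean intensity model."""
    # Initialise: every pixel inside the box is tentatively 'object'
    labels = bbox_mask.copy()
    for _ in range(n_iter):
        # "Train": estimate mean intensities of current fg/bg labels
        fg_mean = image[labels == 1].mean()
        bg_mean = image[labels == 0].mean()
        # "Predict": relabel pixels by the nearer class mean;
        # pixels outside the box stay background (hard constraint)
        closer_fg = np.abs(image - fg_mean) < np.abs(image - bg_mean)
        labels = np.where(bbox_mask == 1, closer_fg.astype(int), 0)
    return labels
```

On a synthetic image with a bright object inside the box, the loop shrinks the initial box label down to the object region within a few iterations, which is the intuition behind iteratively updating the training targets.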

Original language: English
Article number: 7739993
Pages (from-to): 674-683
Number of pages: 10
Journal: IEEE Transactions on Medical Imaging
Volume: 36
Issue number: 2
DOIs
Publication status: Published - Feb. 2017
Published externally: Yes
