Automated segmentation of the human supraclavicular fat depot via deep neural network in water-fat separated magnetic resonance images

Yu Zhao, Chunmeng Tang, Bihao Cui, Arun Somasundaram, Johannes Raspe, Xiaobin Hu, Christina Holzapfel, Daniela Junker, Hans Hauner, Bjoern Menze, Mingming Wu, Dimitrios Karampinos

Research output: Contribution to journal › Article › peer-review

Abstract

Background: Human brown adipose tissue (BAT), mostly located in the cervical/supraclavicular region, is a promising target in obesity treatment. Magnetic resonance imaging (MRI) allows quantitative mapping of fat content. However, due to the complex, heterogeneous distribution of BAT, it has been difficult to establish a standardized segmentation routine based on magnetic resonance (MR) images. Here, we propose a multi-modal deep neural network to detect the supraclavicular fat pocket.

Methods: A total of 50 healthy subjects [median age/body mass index (BMI) = 36 years/24.3 kg/m²] underwent MRI scans of the neck region on a 3 T Ingenia scanner (Philips Healthcare, Best, The Netherlands). Manual segmentations following fixed rules for anatomical borders served as ground truth labels. A deep learning-based method (termed BAT-Net) was proposed for the segmentation of BAT on MRI scans. It jointly leverages two-dimensional (2D) and three-dimensional (3D) convolutional neural network (CNN) architectures to efficiently encode the multi-modal and 3D context information from multi-modal MRI scans of the supraclavicular region. We compared the performance of BAT-Net to that of 2D U-Net and 3D U-Net. For 2D U-Net, we analyzed the performance difference of implementing 2D U-Net in three different planes, denoted as 2D U-Net (axial), 2D U-Net (coronal), and 2D U-Net (sagittal).

Results: The proposed model achieved an average Dice similarity coefficient (DSC) of 0.878 with a standard deviation of 0.020. The volume segmented by the network was smaller than the ground truth labels by 9.20 mL on average, with a mean absolute increase in proton density fat fraction (PDFF) inside the segmented regions of 1.19 percentage points. BAT-Net outperformed all implemented 2D U-Nets and the 3D U-Net, with average DSC improvements ranging from 0.016 to 0.023.
Conclusions: The current work applies a deep neural network to the automated segmentation of the supraclavicular fat depot for quantitative evaluation of BAT. Experiments show that the presented multi-modal method benefits from leveraging both 2D and 3D CNN architectures and outperforms the independent use of 2D or 3D networks. Deep learning-based segmentation methods show potential towards a fully automated segmentation of the supraclavicular fat depot.
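The Dice similarity coefficient (DSC) reported in the Results is defined as DSC = 2|A ∩ B| / (|A| + |B|) for a predicted mask A and a ground truth mask B. A minimal NumPy sketch of that computation on binary voxel masks (the function name and the toy masks are illustrative, not taken from the paper):

```python
import numpy as np

def dice_similarity_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 3D masks (2x2x2 voxels, not real MRI data):
# masks of size 3 each, overlapping in 2 voxels -> DSC = 2*2 / (3+3) ≈ 0.667
a = np.zeros((2, 2, 2), dtype=bool)
a[0, 0, 0] = a[0, 0, 1] = a[0, 1, 0] = True
b = np.zeros((2, 2, 2), dtype=bool)
b[0, 0, 0] = b[0, 0, 1] = b[1, 1, 1] = True
print(round(dice_similarity_coefficient(a, b), 3))  # → 0.667
```

In practice the masks would be the network output and the manual ground truth labels, evaluated per subject and then averaged, as in the reported mean DSC of 0.878.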

Original language: English
Pages (from-to): 4699-4715
Number of pages: 17
Journal: Quantitative Imaging in Medicine and Surgery
Volume: 13
Issue number: 7
DOIs
State: Published - Jul 2023

Keywords

  • Human brown adipose tissue (human BAT)
  • automated medical image segmentation
  • convolutional neural network (CNN)
  • deep neural network
