
Multiclassifier fusion in human brain MR segmentation: Modelling convergence

  • Rolf A. Heckemann
  • Joseph V. Hajnal
  • Paul Aljabar
  • Daniel Rueckert
  • Alexander Hammers

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Segmentations of MR images of the human brain can be generated by propagating an existing atlas label volume to the target image. Fusing multiple propagated label volumes improves the segmentation. We developed a model that predicts the improvement in labelling accuracy and precision as a function of the number of segmentations used as input, and verified it in a cross-validation study on brain image data as well as in numerical simulations. The fit parameters of this model are potential indicators of the quality of a given label propagation method or of the consistency of the input segmentations.
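The fusion step the abstract refers to can be sketched as a per-voxel vote over the candidate label volumes. This is an illustrative sketch only, not the paper's implementation; the function name `fuse_labels` and the toy arrays are assumptions made for the example, and the decision rule shown (plain majority vote per voxel) is one common choice for multi-atlas label fusion.

```python
import numpy as np

def fuse_labels(label_volumes):
    """Fuse propagated label volumes by per-voxel majority vote.

    label_volumes: list of integer arrays of identical shape, each a
    candidate segmentation propagated from a different atlas.
    Returns the label receiving the most votes at each voxel.
    """
    stacked = np.stack(label_volumes, axis=0)  # shape: (n_atlases, *volume_shape)
    n_labels = int(stacked.max()) + 1
    # Count votes for each label at every voxel, then take the winning label.
    votes = np.stack([(stacked == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0).astype(stacked.dtype)

# Toy example: three 2x2 "segmentations" with labels {0, 1}.
a = np.array([[0, 1], [1, 1]])
b = np.array([[0, 1], [0, 1]])
c = np.array([[1, 1], [0, 1]])
fused = fuse_labels([a, b, c])
# fused is [[0, 1], [0, 1]]: each voxel takes the label chosen by at least 2 of 3 inputs.
```

As more independent input segmentations are added, the fused result converges; the paper's model describes how accuracy and precision improve with that number.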

Original language: English
Title of host publication: Medical Image Computing and Computer-Assisted Intervention, MICCAI 2006 - 9th International Conference, Proceedings
Publisher: Springer Verlag
Pages: 815-822
Number of pages: 8
ISBN (Print): 354044727X, 9783540447276
DOIs
State: Published - 2006
Externally published: Yes
Event: 9th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2006 - Copenhagen, Denmark
Duration: 1 Oct 2006 - 6 Oct 2006

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 4191 LNCS - II
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 9th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2006
Country/Territory: Denmark
City: Copenhagen
Period: 1/10/06 - 6/10/06
