Dynamical statistical shape priors for level set based sequence segmentation

Daniel Cremers, Gareth Funka-Lea

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

In recent years, researchers have proposed to introduce statistical shape knowledge into the level set method in order to cope with insufficient low-level information. While these priors were shown to drastically improve the segmentation of images or image sequences, so far the focus has been on statistical shape priors that are time-invariant. Yet, in the context of tracking deformable objects, it is clear that certain silhouettes may become more or less likely over time. In this paper, we tackle the challenge of learning dynamical statistical models for implicitly represented shapes. We show how these can be integrated into a segmentation process in a Bayesian framework for image sequence segmentation. Experiments demonstrate that such shape priors with memory can drastically improve the segmentation of image sequences.
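
The idea of a "shape prior with memory" can be made concrete: a dynamical model predicts the current silhouette from the preceding ones, and the segmentation of each frame balances that prediction against the image data. The sketch below is a minimal illustration under assumptions, not the authors' implementation: it assumes shapes are encoded as low-dimensional coefficient vectors (for example, PCA coefficients of signed distance functions), fits a second-order autoregressive model to a training sequence, and fuses the model's prediction with a per-frame data estimate in a simple Gaussian MAP step via the hypothetical map_coefficients helper.

```python
# Illustrative sketch only. The model order (2), the coefficient encoding,
# and the Gaussian data term are assumptions chosen for clarity.
import numpy as np

def fit_ar2(alphas):
    """Fit a second-order autoregressive model
        alpha_t = mu + A1 (alpha_{t-1} - mu) + A2 (alpha_{t-2} - mu) + noise
    to a training sequence of shape coefficients (T, d) by least squares."""
    mu = alphas.mean(axis=0)
    X = alphas - mu
    Z = np.hstack([X[1:-1], X[:-2]])           # regressors: [alpha_{t-1}, alpha_{t-2}]
    Y = X[2:]                                  # targets: alpha_t
    W, *_ = np.linalg.lstsq(Z, Y, rcond=None)  # (2d, d) stacked transition matrices
    d = alphas.shape[1]
    A1, A2 = W[:d].T, W[d:].T
    resid = Y - Z @ W
    Sigma = np.cov(resid.T) + 1e-6 * np.eye(d)  # regularized noise covariance
    return mu, A1, A2, Sigma

def map_coefficients(alpha_obs, prev1, prev2, mu, A1, A2, Sigma, lam=1.0):
    """Fuse a per-frame data estimate alpha_obs with the dynamical prior's
    prediction in a Gaussian MAP step (a stand-in for the Bayesian level set
    evolution described in the paper). lam weights the data term."""
    pred = mu + A1 @ (prev1 - mu) + A2 @ (prev2 - mu)
    P = np.linalg.inv(Sigma)
    return np.linalg.solve(lam * np.eye(len(mu)) + P, lam * alpha_obs + P @ pred)

# Toy usage with a synthetic oscillating coefficient sequence:
t = np.linspace(0, 20, 200)
train = np.stack([np.sin(t), np.cos(t)], axis=1)
mu, A1, A2, Sigma = fit_ar2(train)
alpha = map_coefficients(np.array([0.1, 0.9]), train[-1], train[-2],
                         mu, A1, A2, Sigma, lam=0.5)
```

In the paper itself the temporal prior enters a Bayesian level set evolution rather than a closed-form Gaussian fusion; the sketch only conveys the role the learned dynamics play at each frame.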

Original language: English
Title of host publication: Variational, Geometric, and Level Set Methods in Computer Vision - Third International Workshop, VLSM 2005, Proceedings
Pages: 210-221
Number of pages: 12
DOIs
State: Published - 2005
Externally published: Yes
Event: 3rd International Workshop on Variational, Geometric, and Level Set Methods in Computer Vision, VLSM 2005 - Beijing, China
Duration: 16 Oct 2005 - 16 Oct 2005

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 3752 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 3rd International Workshop on Variational, Geometric, and Level Set Methods in Computer Vision, VLSM 2005
Country/Territory: China
City: Beijing
Period: 16/10/05 - 16/10/05
