Depth-adaptive supervoxels for RGB-D video segmentation

David Weikersdorfer, Alexander Schick, Daniel Cremers

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

16 Scopus citations

Abstract

In this paper we present a method for automatic video segmentation of RGB-D video streams provided by combined colour and depth sensors such as the Microsoft Kinect. To this end, we fuse position and normal information from the depth sensor with colour information to compute temporally stable, depth-adaptive superpixels, and link them into a graph of strand-like, spatiotemporal, depth-adaptive supervoxels. We use spectral graph clustering on the supervoxel graph to partition it into spatiotemporal segments. Experimental evaluation on several challenging scenarios demonstrates that our two-layer RGB-D video segmentation technique produces excellent video segmentation results.
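The final step of the abstract, spectral graph clustering on a weighted supervoxel graph, can be illustrated with a minimal sketch of standard normalized spectral clustering. This is not the authors' implementation: the affinity matrix `W` below is a toy stand-in for the supervoxel adjacency weights, and the embedding is clustered with a simple deterministic k-means.

```python
import numpy as np

def kmeans(X, k, iters=50):
    # Deterministic k-means with farthest-point initialization
    # (keeps this sketch reproducible without a random seed).
    centers = [X[0]]
    for _ in range(k - 1):
        d2 = np.min([np.square(X - c).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d2)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(
            ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1), axis=1
        )
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def spectral_partition(W, k):
    """Partition a graph given by symmetric affinity matrix W into k clusters
    using the symmetric normalized Laplacian and a spectral embedding."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    # L = I - D^{-1/2} W D^{-1/2}
    L = np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    # The k eigenvectors with smallest eigenvalues form the embedding.
    _, vecs = np.linalg.eigh(L)
    U = vecs[:, :k]
    # Row-normalize the embedding, then cluster the rows.
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    return kmeans(U, k)
```

On a toy graph of six nodes forming two densely connected groups joined by one weak edge, `spectral_partition(W, 2)` assigns the two groups to different clusters, which is the behaviour the paper relies on when cutting the supervoxel graph into spatiotemporal segments.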

Original language: English
Title of host publication: 2013 IEEE International Conference on Image Processing, ICIP 2013 - Proceedings
Publisher: IEEE Computer Society
Pages: 2708-2712
Number of pages: 5
ISBN (Print): 9781479923410
DOIs
State: Published - 2013
Event: 2013 20th IEEE International Conference on Image Processing, ICIP 2013 - Melbourne, VIC, Australia
Duration: 15 Sep 2013 → 18 Sep 2013

Publication series

Name: 2013 IEEE International Conference on Image Processing, ICIP 2013 - Proceedings

Conference

Conference: 2013 20th IEEE International Conference on Image Processing, ICIP 2013
Country/Territory: Australia
City: Melbourne, VIC
Period: 15/09/13 → 18/09/13

Keywords

  • RGB-D
  • Superpixels
  • Supervoxels
  • Video Analysis
  • Video Segmentation
