A deep convolutional neural network for video sequence background subtraction

Mohammadreza Babaee, Duc Tung Dinh, Gerhard Rigoll

Publication: Contribution to journal › Article › Peer-reviewed

329 citations (Scopus)

Abstract

In this work, we present a novel background subtraction algorithm for video sequences that uses a deep Convolutional Neural Network (CNN) to perform the segmentation. With this approach, feature engineering and parameter tuning become unnecessary, since the network parameters can be learned from data by training a single CNN that can handle various video scenes. Additionally, we propose a new approach to estimate the background model from video sequences. To train the CNN, we randomly selected 5% of the video frames, together with their ground-truth segmentations, from the Change Detection challenge 2014 (CDnet 2014). We also apply spatial-median filtering as post-processing of the network outputs. Our method, called DeepBS, is evaluated on different data sets and outperforms the existing algorithms with respect to the average ranking over the evaluation metrics announced in CDnet 2014. Furthermore, due to the network architecture, our CNN is capable of real-time processing.
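As a rough illustration of the post-processing step mentioned in the abstract, the sketch below applies a spatial-median filter to a CNN foreground-probability map and thresholds it into a binary segmentation mask. The kernel size, threshold value, and use of SciPy are illustrative assumptions; the paper's exact settings are not given here.

```python
import numpy as np
from scipy.ndimage import median_filter


def postprocess_foreground(prob_map, kernel_size=9, threshold=0.5):
    """Spatial-median filter a CNN foreground-probability map and
    threshold it into a binary mask.

    `kernel_size` and `threshold` are illustrative values only; the
    abstract does not specify the parameters used in the paper.
    """
    smoothed = median_filter(prob_map, size=kernel_size)
    return (smoothed > threshold).astype(np.uint8)


if __name__ == "__main__":
    # Toy usage: a random "network output" in [0, 1] for a 240x320 frame.
    fake_prob = np.random.rand(240, 320)
    mask = postprocess_foreground(fake_prob)
    print(mask.shape, mask.dtype, int(mask.sum()))
```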

Original language: English
Pages (from-to): 635-649
Number of pages: 15
Journal: Pattern Recognition
Volume: 76
DOIs
Publication status: Published - Apr. 2018
