Abstract
We introduce a new encoder-decoder GAN model, FutureGAN, that predicts future frames of a video sequence conditioned on a sequence of past frames. During training, the networks receive only the raw pixel values as input, without relying on additional constraints or dataset-specific conditions. To capture both the spatial and temporal components of a video sequence, spatio-temporal 3D convolutions are used in all encoder and decoder modules. Further, we utilize concepts of the existing progressively growing GAN (PGGAN), which achieves high-quality results in generating high-resolution single images. The FutureGAN model extends this concept to the complex task of video prediction. We conducted experiments on three different datasets: MovingMNIST, KTH Action, and Cityscapes. Our results show that, for all three datasets, the model effectively learned representations that transform the information of an input sequence into a plausible future sequence. The main advantage of the FutureGAN framework is that it is applicable to various datasets without additional changes, while achieving stable results that are competitive with the state of the art in video prediction. The code to reproduce the results of this paper is publicly available at https://github.com/TUM-LMF/FutureGAN.
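The core architectural idea described above, an encoder-decoder pair built from spatio-temporal 3D convolutions that maps a stack of past frames to a stack of predicted future frames, can be sketched as follows. This is only an illustrative sketch, not the authors' implementation (which is available at the GitHub link above); the channel widths, kernel sizes, and activations are assumptions, and the PGGAN-style progressive growing and the discriminator are omitted.

```python
# Illustrative sketch of a 3D-convolutional encoder-decoder for video prediction.
# NOT the FutureGAN code; layer configuration is assumed for brevity.
import torch
import torch.nn as nn

class Encoder3D(nn.Module):
    """Encodes a past-frame tensor (N, C, T, H, W) into a latent spatio-temporal volume."""
    def __init__(self, in_channels=1, base_channels=32):
        super().__init__()
        self.net = nn.Sequential(
            # Downsample spatially (stride 2 in H, W) while keeping the temporal length.
            nn.Conv3d(in_channels, base_channels, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv3d(base_channels, base_channels * 2, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.LeakyReLU(0.2),
        )

    def forward(self, x):
        return self.net(x)

class Decoder3D(nn.Module):
    """Decodes the latent volume into a future-frame tensor of the same shape as the input."""
    def __init__(self, out_channels=1, base_channels=32):
        super().__init__()
        self.net = nn.Sequential(
            # Upsample spatially back to the input resolution.
            nn.ConvTranspose3d(base_channels * 2, base_channels, kernel_size=(3, 4, 4),
                               stride=(1, 2, 2), padding=1),
            nn.LeakyReLU(0.2),
            nn.ConvTranspose3d(base_channels, out_channels, kernel_size=(3, 4, 4),
                               stride=(1, 2, 2), padding=1),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

# Usage: predict 6 future frames from 6 past 64x64 grayscale frames.
past = torch.randn(2, 1, 6, 64, 64)              # (batch, channels, time, height, width)
future = Decoder3D()(Encoder3D()(past))
print(future.shape)                              # torch.Size([2, 1, 6, 64, 64])
```

In the actual framework, such an encoder-decoder generator is trained adversarially against a discriminator and grown progressively from low to high resolutions, as stated in the abstract.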
Original language | English
---|---
Pages (from-to) | 3-11
Number of pages | 9
Journal | International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives
Volume | 42
Issue number | 2/W16
DOIs |
State | Published - 17 Sep 2019
Event | 2019 Joint ISPRS Conference on Photogrammetric Image Analysis and Munich Remote Sensing Symposium, PIA 2019 + MRSS 2019 - Munich, Germany. Duration: 18 Sep 2019 → 20 Sep 2019
Keywords
- Deep Learning
- Generative Adversarial Networks
- Generative Modeling
- Video Prediction