Video Summarization Through Reinforcement Learning with a 3D Spatiotemporal U-Net

  • Tianrui Liu
  • Qingjie Meng
  • Jun Jie Huang
  • Athanasios Vlontzos
  • Daniel Rueckert
  • Bernhard Kainz
Research output: Contribution to journal › Article › peer-review

99 Scopus citations

Abstract

Intelligent video summarization algorithms allow the most relevant information in a video to be conveyed quickly by identifying the most essential and explanatory content while removing redundant frames. In this paper, we introduce the 3DST-UNet-RL framework for video summarization. A 3D spatiotemporal U-Net efficiently encodes spatiotemporal information of the input videos for downstream reinforcement learning (RL). An RL agent learns from spatiotemporal latent scores and predicts actions for keeping or rejecting a video frame in a video summary. We investigate whether real/inflated 3D spatiotemporal CNN features are better suited to learning representations from videos than commonly used 2D image features. Our framework can operate in both a fully unsupervised mode and a supervised training mode. We analyse the impact of prescribed summary lengths and show experimental evidence for the effectiveness of 3DST-UNet-RL on two commonly used general video summarization benchmarks. We also apply our method to a medical video summarization task. The proposed video summarization method has the potential to save storage costs of ultrasound screening videos as well as to increase efficiency when browsing patient video data during retrospective analysis or audit, without losing essential information.
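The abstract describes an RL agent that turns per-frame latent scores into binary keep/reject actions. A minimal sketch of that selection step is shown below, assuming the scores are already probabilities in [0, 1]; the function name `select_summary` and the example scores are hypothetical, and the full method's 3D U-Net encoder and reward computation are not modelled here.

```python
import numpy as np

def select_summary(frame_scores, rng=None):
    """Sample per-frame keep/reject actions from Bernoulli(score).

    `frame_scores` stands in for the spatiotemporal latent scores
    that the encoder would produce; here they are plain probabilities.
    Returns the indices of the frames kept in the summary.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    scores = np.asarray(frame_scores, dtype=float)
    # One Bernoulli draw per frame: True means "keep this frame".
    actions = rng.random(scores.shape) < scores
    return np.flatnonzero(actions)

# Hypothetical scores for a 6-frame clip: high-scoring frames are
# likely (but not guaranteed) to be kept, which is what lets an RL
# policy explore different candidate summaries during training.
kept = select_summary([0.9, 0.1, 0.8, 0.05, 0.95, 0.2])
```

In the stochastic-policy view, sampling (rather than thresholding) the actions is what makes policy-gradient training possible: the agent can be rewarded or penalised for the summaries it happens to sample.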

Original language: English
Pages (from-to): 1573-1586
Number of pages: 14
Journal: IEEE Transactions on Image Processing
Volume: 31
DOIs
State: Published - 2022

Keywords

  • 3D U-Net
  • 3D convolutions
  • Video summarization
  • medical video processing
  • reinforcement learning
  • ultrasound
