Spatiotemporal Variance-Guided Filtering for Motion Blur

Max Oberberger, Matthäus G. Chajdas, Rüdiger Westermann

Research output: Contribution to journal › Article › peer-review


Abstract

Adding motion blur to a scene can help to convey the feeling of speed even at low frame rates. Monte Carlo ray tracing can compute accurate motion blur, but requires a large number of samples per pixel to converge. In comparison, rasterization combined with a post-processing filter can generate fast but inaccurate motion blur from a single sample per pixel. We build upon a recent path tracing denoiser and propose a variant of it that simulates ray-traced motion blur, enabling fast and high-quality motion blur from a single sample per pixel. Our approach creates temporally coherent renderings by estimating the motion direction and variance locally, and using these estimates to guide wavelet filters at different scales. We compare image quality against brute-force Monte Carlo methods and current post-processing motion blur. Our approach achieves real-time frame rates, requiring less than 4 ms for full-screen motion blur at a resolution of 1920 × 1080 on recent graphics cards.
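The abstract's core idea, variance estimates guiding edge-stopping wavelet filters at several scales, follows the general shape of an SVGF-style à-trous filter. The sketch below is a minimal single-channel CPU illustration of that general technique, not the paper's actual method: the 5-tap B3-spline kernel, the luminance weight, and the parameter names (`sigma_l`, `n_scales`) are assumptions chosen for illustration, and real implementations run this per pixel on the GPU with additional guides such as depth, normals, and, in this paper's setting, motion direction.

```python
import numpy as np

def atrous_variance_guided(color, variance, n_scales=3, sigma_l=4.0, eps=1e-6):
    """Sketch of a variance-guided a-trous wavelet filter (SVGF-style).

    color:    2D array of noisy luminance values (one sample per pixel)
    variance: 2D array of per-pixel variance estimates
    Returns the filtered image and the propagated variance.
    """
    # Classic 5-tap B3-spline coefficients used by a-trous wavelet transforms
    h = np.array([1 / 16, 1 / 4, 3 / 8, 1 / 4, 1 / 16])
    out = color.astype(np.float64).copy()
    var = np.maximum(variance.astype(np.float64), 0.0)
    for scale in range(n_scales):
        step = 1 << scale                     # tap spacing doubles each scale
        new_c = np.zeros_like(out)
        new_v = np.zeros_like(var)
        wsum = np.zeros_like(out)
        for i, hy in enumerate(h):
            for j, hx in enumerate(h):
                dy, dx = (i - 2) * step, (j - 2) * step
                # Wrap-around borders via roll: a simplification for the sketch
                q = np.roll(np.roll(out, dy, axis=0), dx, axis=1)
                qv = np.roll(np.roll(var, dy, axis=0), dx, axis=1)
                # Edge-stopping weight: color differences that are large
                # relative to the local noise estimate suppress the tap;
                # this is the "variance-guided" part of the filter.
                wl = np.exp(-np.abs(out - q) / (sigma_l * np.sqrt(var) + eps))
                w = hy * hx * wl
                new_c += w * q
                new_v += (w * w) * qv
                wsum += w
        out = new_c / wsum
        var = new_v / (wsum * wsum)           # variance of the weighted mean
    return out, var
```

On a noisy but smooth region the weights stay close to the plain B3 kernel and the filter averages aggressively; across a strong edge the exponential weight collapses, so detail is preserved while noise is removed.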

Original language: English
Article number: 22
Journal: Proceedings of the ACM on Computer Graphics and Interactive Techniques
Volume: 5
Issue number: 3
DOIs
State: Published - Jul 2022

Keywords

  • motion blur
  • ray tracing
  • real-time rendering
  • reconstruction
