Spatiotemporal Variance-Guided Filtering for Motion Blur

Max Oberberger, Matthäus G. Chajdas, Rüdiger Westermann

Publication: Contribution to journal › Article › Peer-reviewed

1 citation (Scopus)

Abstract

Adding motion blur to a scene can help to convey the feeling of speed even at low frame rates. Monte Carlo ray tracing can compute accurate motion blur, but requires a large number of samples per pixel to converge. In comparison, rasterization combined with a post-processing filter can generate motion blur quickly, but not accurately, from a single sample per pixel. We build upon a recent path tracing denoiser and propose a variant that simulates ray-traced motion blur, enabling fast and high-quality motion blur from a single sample per pixel. Our approach creates temporally coherent renderings by estimating the motion direction and variance locally, and using these estimates to guide wavelet filters at different scales. We compare image quality against brute-force Monte Carlo methods and current post-processing motion blur. Our approach achieves real-time frame rates, requiring less than 4 ms for full-screen motion blur at a resolution of 1920 × 1080 on recent graphics cards.
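The abstract describes locally estimated variance and motion direction guiding wavelet filters at different scales. The sketch below is a minimal NumPy illustration of that idea in the spirit of SVGF-style à-trous filtering, not the authors' implementation: the edge-stopping parameters `sigma_c` and `sigma_m`, the single-channel luminance input, and the anisotropic motion-direction weight are all illustrative assumptions.

```python
import numpy as np

def atrous_variance_filter(color, variance, motion,
                           iterations=3, sigma_c=4.0, sigma_m=2.0):
    """Illustrative variance- and motion-guided a-trous wavelet filter.

    color:    (H, W) luminance of a 1-spp render
    variance: (H, W) per-pixel variance estimate
    motion:   (H, W, 2) screen-space motion vectors as (dy, dx)
    """
    # 5-tap B3-spline kernel commonly used in a-trous wavelet transforms
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    h, w = color.shape
    out = color.astype(np.float64).copy()
    var = variance.astype(np.float64).copy()
    # unit motion direction per pixel (guarded against zero motion)
    mhat = motion / (np.linalg.norm(motion, axis=-1, keepdims=True) + 1e-6)

    for it in range(iterations):
        step = 1 << it  # tap spacing doubles each iteration (the "holes")
        acc = np.zeros((h, w))
        acc_v = np.zeros((h, w))
        wsum = np.zeros((h, w))
        for ky in range(-2, 3):
            for kx in range(-2, 3):
                wk = kernel[ky + 2] * kernel[kx + 2]
                yy = np.clip(np.arange(h)[:, None] + ky * step, 0, h - 1)
                xx = np.clip(np.arange(w)[None, :] + kx * step, 0, w - 1)
                c = out[yy, xx]
                v = var[yy, xx]
                # variance-guided edge-stopping weight: a large local variance
                # tolerates larger color differences before down-weighting
                wl = np.exp(-(out - c) ** 2 / (sigma_c * np.sqrt(var) + 1e-6))
                # down-weight taps perpendicular to the local motion direction,
                # stretching the filter footprint along the blur path
                perp = ky * step * mhat[..., 1] - kx * step * mhat[..., 0]
                wm = np.exp(-perp ** 2 / (2.0 * sigma_m ** 2))
                wgt = wk * wl * wm
                acc += wgt * c
                acc_v += wgt ** 2 * v  # variance propagates with squared weights
                wsum += wgt
        out = acc / wsum
        var = acc_v / wsum ** 2
    return out
```

Because the edge-stopping and motion weights are normalized away for a constant input, the filter leaves flat regions untouched while smoothing noisy ones preferentially along the motion direction.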

Original language: English
Article number: 22
Journal: Proceedings of the ACM on Computer Graphics and Interactive Techniques
Volume: 5
Issue number: 3
DOIs
Publication status: Published - July 2022
