Wavelet-Space Representations for Neural Super-Resolution in Rendering Pipelines

Prateek Poudel, Prashant Aryal, Kirtan Kunwar, Navin Nepal, Dinesh Baniya Kshatri

Published: 2025/8/22

Abstract

We investigate the use of wavelet-space feature decomposition in neural super-resolution for rendering pipelines. Building on recent neural upscaling frameworks, we introduce a formulation that predicts stationary wavelet coefficients rather than directly regressing RGB values. This frequency-aware decomposition separates low- and high-frequency components, enabling sharper texture recovery and reducing blur in challenging regions. Unlike conventional wavelet transforms, our use of the stationary wavelet transform (SWT) preserves spatial alignment across subbands, allowing the network to integrate G-buffer attributes and temporally warped history frames in a shift-invariant manner. The predicted coefficients are recombined through inverse wavelet synthesis, producing resolution-consistent reconstructions across arbitrary scale factors. We conduct extensive evaluations and ablations, showing that incorporating SWT improves both fidelity and perceptual quality with only modest overhead, while remaining compatible with standard rendering architectures. Taken together, our results suggest that wavelet-domain neural super-resolution provides a principled and efficient path toward higher-quality real-time rendering, with broader implications for neural rendering and graphics applications.
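As a rough illustration of the coefficient-space formulation described above, the sketch below uses PyWavelets' `swt2`/`iswt2` on a single channel to show why the stationary wavelet transform keeps every subband at full resolution and how inverse synthesis recombines them. The network, G-buffer and history inputs, and training losses are not shown; the `predicted` variable is a hypothetical stand-in for the coefficients the paper's model would regress.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
hr_frame = rng.random((256, 256)).astype(np.float32)   # one channel of a high-res target

# Single-level SWT: unlike the decimated DWT, every subband keeps the full
# 256x256 resolution, so the subbands remain spatially aligned with G-buffer
# attributes and temporally warped history frames.
(cA, (cH, cV, cD)), = pywt.swt2(hr_frame, wavelet="haar", level=1)
assert cA.shape == cH.shape == cV.shape == cD.shape == hr_frame.shape

# In the paper's setting, a network would regress these four coefficient maps
# (per color channel) instead of RGB values. Here the analysis coefficients
# simply stand in for the predicted ones.
predicted = [(cA, (cH, cV, cD))]

# Inverse SWT synthesis recombines the low- and high-frequency subbands into
# the final reconstruction.
reconstruction = pywt.iswt2(predicted, wavelet="haar")
print(np.max(np.abs(reconstruction - hr_frame)))        # ~0 for a perfect round trip
```

In this round-trip form the reconstruction is exact up to floating-point error; the interesting behavior in the paper arises when the coefficients are predicted by the upscaling network rather than computed from the high-resolution frame.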
