Frequency-Domain Refinement with Multiscale Diffusion for Super Resolution

Xingjian Wang, Li Chai, Jiming Chen

Published: 2024/5/16

Abstract

The performance of single image super-resolution depends heavily on how high-frequency details are generated and added to low-resolution images. Recently, denoising diffusion probabilistic models (DDPMs) have exhibited great potential in generating high-quality details for super-resolution tasks. However, they tend to directly predict high-frequency information across a wide bandwidth by using only the high-resolution ground truth as the target at every sampling timestep, and as a result they suffer from a hallucination problem, producing mismatched artifacts. To tackle this problem and achieve higher-quality super-resolution, we propose a novel Frequency Domain-guided multiscale Diffusion model (FDDiff), which decomposes the high-frequency information complementing process into finer-grained steps. In particular, a wavelet packet-based frequency degradation pyramid is developed to provide multiscale intermediate targets with increasing bandwidth. Based on these targets, FDDiff guides the reverse diffusion process to progressively complement the missing high-frequency details over timesteps. Moreover, a multiscale frequency refinement network is designed to predict the required high-frequency components at multiple scales within one unified network. Comprehensive evaluations on popular benchmarks demonstrate that FDDiff outperforms prior generative methods with higher-fidelity super-resolution results.
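To make the idea of a frequency degradation pyramid concrete, below is a minimal sketch (not the paper's implementation) using PyWavelets. It substitutes a plain 2-D discrete wavelet transform for the wavelet packet decomposition described in the abstract, and zeros out the detail sub-bands above each cutoff level to obtain intermediate targets of increasing bandwidth; the function name, the `haar` wavelet, and the grayscale-input assumption are all illustrative choices.

```python
import numpy as np
import pywt


def frequency_degradation_pyramid(hr_image: np.ndarray, levels: int = 3,
                                  wavelet: str = "haar"):
    """Illustrative sketch: band-limited targets with increasing bandwidth.

    Decomposes a 2-D grayscale HR image with a standard DWT (a simplification
    of the paper's wavelet packet transform), then reconstructs intermediate
    targets by zeroing all detail sub-bands finer than a given cutoff level.
    """
    coeffs = pywt.wavedec2(hr_image, wavelet, level=levels)
    targets = []
    for keep in range(levels + 1):           # keep = number of detail levels retained
        truncated = [coeffs[0]]              # always keep the coarse approximation
        for i, (ch, cv, cd) in enumerate(coeffs[1:], start=1):
            if i <= keep:                    # coarser detail levels come first in pywt
                truncated.append((ch, cv, cd))
            else:                            # zero out finer (higher-frequency) bands
                truncated.append((np.zeros_like(ch),
                                  np.zeros_like(cv),
                                  np.zeros_like(cd)))
        targets.append(pywt.waverec2(truncated, wavelet))
    return targets  # targets[0] is lowest-bandwidth, targets[-1] ≈ hr_image
```

Under this reading, earlier (noisier) diffusion timesteps would be supervised with the low-bandwidth targets and later timesteps with the progressively wider-bandwidth ones, so that high-frequency detail is complemented step by step rather than predicted all at once.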
