HDW-SR: High-Frequency Guided Diffusion Model based on Wavelet Decomposition for Image Super-Resolution
- URL: http://arxiv.org/abs/2511.13175v1
- Date: Mon, 17 Nov 2025 09:25:26 GMT
- Title: HDW-SR: High-Frequency Guided Diffusion Model based on Wavelet Decomposition for Image Super-Resolution
- Authors: Chao Yang, Boqian Zhang, Jinghao Xu, Guang Jiang
- Abstract summary: We propose a High-Frequency Guided Diffusion Network based on Wavelet Decomposition (HDW-SR). We perform diffusion only on the residual map, allowing the network to focus more effectively on high-frequency information restoration. Experiments on both synthetic and real-world datasets demonstrate that HDW-SR achieves competitive super-resolution performance.
- Score: 4.388490927225987
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion-based methods have shown great promise in single image super-resolution (SISR); however, existing approaches often produce blurred fine details due to insufficient guidance in the high-frequency domain. To address this issue, we propose a High-Frequency Guided Diffusion Network based on Wavelet Decomposition (HDW-SR), which replaces the conventional U-Net backbone in diffusion frameworks. Specifically, we perform diffusion only on the residual map, allowing the network to focus more effectively on high-frequency information restoration. We then introduce wavelet-based downsampling in place of standard CNN downsampling to achieve multi-scale frequency decomposition, enabling sparse cross-attention between the high-frequency subbands of the pre-super-resolved image and the low-frequency subbands of the diffused image for explicit high-frequency guidance. Moreover, a Dynamic Thresholding Block (DTB) is designed to refine high-frequency selection during the sparse attention process. During upsampling, the invertibility of the wavelet transform ensures low-loss feature reconstruction. Experiments on both synthetic and real-world datasets demonstrate that HDW-SR achieves competitive super-resolution performance, excelling particularly in recovering fine-grained image details. The code will be available after acceptance.
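The abstract's wavelet-based downsampling and low-loss upsampling rest on the invertibility of the discrete wavelet transform. As a minimal sketch (not the authors' released code, which is not yet available), a single-level 2-D Haar decomposition splits an image into one low-frequency and three high-frequency subbands, and the inverse transform reconstructs the input exactly:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar decomposition of an even-sized array."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0  # low-low: coarse approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0  # low-high: horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0  # high-low: vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0  # high-high: diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse transform: reconstruction is exact (lossless)."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    x = np.empty((2 * h, 2 * w))
    x[0::2, :] = a + d; x[1::2, :] = a - d
    return x
```

The `lh`, `hl`, and `hh` subbands carry the high-frequency detail that the paper's sparse cross-attention guides on, while `ll` holds the coarse content; the exact reconstruction is what makes wavelet up/downsampling lower-loss than strided convolutions.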
Related papers
- Diffusion Transformer meets Multi-level Wavelet Spectrum for Single Image Super-Resolution [15.056888813012451]
We propose a Diffusion Transformer model based on image wavelet spectra for SR (DTWSR). DTWSR combines the strengths of diffusion models and transformers to capture the interrelations among multiscale frequency sub-bands. A dual decoder is carefully designed to handle the distinct variances in the low-frequency and high-frequency sub-bands without losing their alignment in image generation.
arXiv Detail & Related papers (2025-11-03T02:56:58Z) - Freqformer: Image-Demoiréing Transformer via Efficient Frequency Decomposition [83.40450475728792]
We present Freqformer, a Transformer-based framework specifically designed for image demoiréing through targeted frequency separation. Our method performs an effective frequency decomposition that explicitly splits moiré patterns into high-frequency, spatially localized textures and low-frequency, scale-robust color distortions. Experiments on various demoiréing benchmarks demonstrate that Freqformer achieves state-of-the-art performance with a compact model size.
arXiv Detail & Related papers (2025-05-25T12:23:10Z) - SING: Semantic Image Communications using Null-Space and INN-Guided Diffusion Models [52.40011613324083]
Deep joint source-channel coding (DeepJSCC) systems have recently demonstrated remarkable performance in wireless image transmission. Existing methods focus on minimizing distortion between the transmitted image and the reconstructed version at the receiver, often overlooking perceptual quality. We propose SING, a novel framework that formulates the recovery of high-quality images from corrupted reconstructions as an inverse problem.
arXiv Detail & Related papers (2025-03-16T12:32:11Z) - Local Implicit Wavelet Transformer for Arbitrary-Scale Super-Resolution [15.610136214020947]
Implicit neural representations have recently demonstrated promising potential in arbitrary-scale Super-Resolution (SR) of images.
Most existing methods predict the pixel in the SR image based on the queried coordinate and ensemble nearby features.
We propose the Local Implicit Wavelet Transformer (LIWT) to enhance the restoration of high-frequency texture details.
arXiv Detail & Related papers (2024-11-10T12:21:14Z) - FreqINR: Frequency Consistency for Implicit Neural Representation with Adaptive DCT Frequency Loss [5.349799154834945]
This paper introduces Frequency Consistency for Implicit Neural Representation (FreqINR), an innovative Arbitrary-scale Super-resolution method.
During training, we employ Adaptive Discrete Cosine Transform Frequency Loss (ADFL) to minimize the frequency gap between HR and ground-truth images.
During inference, we extend the receptive field to preserve spectral coherence between low-resolution (LR) and ground-truth images.
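FreqINR's Adaptive DCT Frequency Loss compares images in the DCT domain rather than pixel space. A minimal, non-adaptive sketch of that idea (the function names and the plain L1 weighting are illustrative assumptions, not FreqINR's exact formulation) is:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)  # DC row gets the 1/sqrt(n) normalization
    return c

def dct_frequency_loss(sr, hr):
    """L1 distance between the 2-D DCT spectra of two images."""
    c = dct_matrix(sr.shape[0])          # row transform
    r = dct_matrix(sr.shape[1])          # column transform
    return np.mean(np.abs(c @ sr @ r.T - c @ hr @ r.T))
```

Because the DCT is linear and orthonormal, this loss is zero exactly when the two images are identical, and it penalizes errors band by band, so mismatches in high-frequency coefficients are exposed directly rather than averaged into a pixel-space metric.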
arXiv Detail & Related papers (2024-08-25T03:53:17Z) - Frequency-Domain Refinement with Multiscale Diffusion for Super Resolution [19.327571569959062]
We propose a novel Frequency Domain-guided multiscale Diffusion model (FDDiff). FDDiff decomposes the high-frequency information complementing process into finer-grained steps. It outperforms prior generative methods with higher-fidelity super-resolution results.
arXiv Detail & Related papers (2024-05-16T11:58:52Z) - WaveNeRF: Wavelet-based Generalizable Neural Radiance Fields [149.2296890464997]
We design WaveNeRF, which integrates wavelet frequency decomposition into MVS and NeRF.
WaveNeRF achieves superior generalizable radiance field modeling when given only three input images.
arXiv Detail & Related papers (2023-08-09T09:24:56Z) - Low-Light Image Enhancement with Wavelet-based Diffusion Models [50.632343822790006]
Diffusion models have achieved promising results in image restoration tasks, yet they suffer from long inference times, excessive computational resource consumption, and unstable restoration.
We propose a robust and efficient Diffusion-based Low-Light image enhancement approach, dubbed DiffLL.
arXiv Detail & Related papers (2023-06-01T03:08:28Z) - FreqNet: A Frequency-domain Image Super-Resolution Network with Discrete Cosine Transform [16.439669339293747]
Single image super-resolution (SISR) is an ill-posed problem that aims to obtain high-resolution (HR) output from low-resolution (LR) input.
Despite high peak signal-to-noise ratio (PSNR) results, it is difficult to determine whether the model correctly adds the desired high-frequency details.
We propose FreqNet, an intuitive pipeline from the frequency domain perspective, to solve this problem.
arXiv Detail & Related papers (2021-11-21T11:49:12Z) - Wavelet-Based Network For High Dynamic Range Imaging [64.66969585951207]
Existing methods, such as optical flow-based and end-to-end deep learning-based solutions, are error-prone in either detail restoration or ghosting-artifact removal.
In this work, we propose a novel frequency-guided end-to-end deep neural network (FNet) to conduct HDR fusion in the frequency domain, where the Discrete Wavelet Transform (DWT) is used to decompose inputs into different frequency bands.
The low-frequency signals are used to avoid specific ghosting artifacts, while the high-frequency signals are used for preserving details.
arXiv Detail & Related papers (2021-08-03T12:26:33Z) - Frequency Consistent Adaptation for Real World Super Resolution [64.91914552787668]
We propose a novel Frequency Consistent Adaptation (FCA) method that ensures frequency-domain consistency when applying Super-Resolution (SR) methods to real scenes.
We estimate degradation kernels from real-world images in an unsupervised manner and generate the corresponding Low-Resolution (LR) images.
Based on the domain-consistent LR-HR pairs, we train easily implemented Convolutional Neural Network (CNN) SR models.
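The degradation pipeline above — blur an HR image with an estimated kernel, then subsample — can be sketched as follows; the Gaussian kernel here stands in for FCA's kernels, which in the paper are estimated from real images rather than fixed by hand:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian blur kernel (a stand-in for an estimated kernel)."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def degrade(hr, kernel, scale=2):
    """Blur the HR image with the kernel, then subsample to produce the LR image."""
    ks = kernel.shape[0]
    pad = ks // 2
    p = np.pad(hr, pad, mode="edge")          # edge padding keeps borders sane
    out = np.zeros_like(hr, dtype=float)
    for i in range(ks):                        # explicit 2-D convolution
        for j in range(ks):
            out += kernel[i, j] * p[i:i + hr.shape[0], j:j + hr.shape[1]]
    return out[::scale, ::scale]               # decimate by the scale factor
```

Training an SR model on LR images produced this way, instead of on bicubic downsampling, is what keeps the LR-HR pairs consistent with the real scene's frequency characteristics.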
arXiv Detail & Related papers (2020-12-18T08:25:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.