QDM: Quadtree-Based Region-Adaptive Sparse Diffusion Models for Efficient Image Super-Resolution
- URL: http://arxiv.org/abs/2503.12015v1
- Date: Sat, 15 Mar 2025 06:50:30 GMT
- Title: QDM: Quadtree-Based Region-Adaptive Sparse Diffusion Models for Efficient Image Super-Resolution
- Authors: Donglin Yang, Paul Vicol, Xiaojuan Qi, Renjie Liao, Xiaofan Zhang
- Abstract summary: We propose the Quadtree Diffusion Model (QDM), a region-adaptive diffusion framework. By guiding the diffusion with a quadtree derived from the low-quality input, QDM identifies key regions (represented by leaf nodes) where fine detail is essential. Experiments demonstrate QDM's effectiveness in high-resolution SR tasks across diverse image types, particularly in medical imaging.
- Score: 54.67891514843853
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based super-resolution (SR) methods often perform pixel-wise computations uniformly across entire images, even in homogeneous regions where high-resolution refinement is redundant. We propose the Quadtree Diffusion Model (QDM), a region-adaptive diffusion framework that leverages a quadtree structure to selectively enhance detail-rich regions while reducing computations in homogeneous areas. By guiding the diffusion with a quadtree derived from the low-quality input, QDM identifies key regions (represented by leaf nodes) where fine detail is essential and applies minimal refinement elsewhere. This mask-guided, two-stream architecture adaptively balances quality and efficiency, producing high-fidelity outputs with low computational redundancy. Experiments demonstrate QDM's effectiveness in high-resolution SR tasks across diverse image types, particularly in medical imaging (e.g., CT scans), where large homogeneous regions are prevalent. Furthermore, QDM outperforms or is comparable to state-of-the-art SR methods on standard benchmarks while significantly reducing computational costs, highlighting its efficiency and suitability for resource-limited environments. Our code is available at https://github.com/linYDTHU/QDM.
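The quadtree-guided masking described in the abstract can be illustrated with a minimal sketch. The snippet below is a hypothetical illustration, not the authors' implementation: it recursively subdivides a low-quality (grayscale, square, power-of-two) image and marks non-homogeneous leaf blocks, using intensity variance as an assumed homogeneity criterion; QDM's actual criterion, block sizes, and mask semantics may differ (see the official repository for the real code).

# Hypothetical sketch (not QDM's implementation): derive a binary detail mask
# from a low-quality image via quadtree decomposition. Intensity variance is
# an assumed homogeneity criterion; the paper's actual criterion may differ.
import numpy as np

def quadtree_detail_mask(img, min_block=8, var_thresh=5e-3):
    """Return a mask where 1 marks detail-rich leaf blocks and 0 homogeneous ones."""
    h, w = img.shape
    assert h == w and (h & (h - 1)) == 0, "sketch assumes a square, power-of-two image"
    mask = np.zeros((h, w), dtype=np.float32)

    def split(y, x, size):
        block = img[y:y + size, x:x + size]
        homogeneous = block.var() <= var_thresh
        if size <= min_block or homogeneous:
            # Leaf node: only non-homogeneous leaves are flagged for fine refinement.
            mask[y:y + size, x:x + size] = 0.0 if homogeneous else 1.0
            return
        half = size // 2
        for dy in (0, half):          # recurse into the four quadrants
            for dx in (0, half):
                split(y + dy, x + dx, half)

    split(0, 0, h)
    return mask

# Usage (hypothetical): mask = quadtree_detail_mask(low_res_gray)
# The mask would then gate which regions receive full diffusion refinement.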
Related papers
- Efficient Model Agnostic Approach for Implicit Neural Representation Based Arbitrary-Scale Image Super-Resolution [5.704360536038803]
Single image super-resolution (SISR) has experienced significant advancements, primarily driven by deep convolutional networks.
Traditional networks can only upscale images to a fixed scale, which has motivated the use of implicit neural functions to generate images at arbitrary scales.
We introduce a novel and efficient framework, the Mixture of Experts Implicit Super-Resolution (MoEISR), which enables super-resolution at arbitrary scales.
arXiv Detail & Related papers (2023-11-20T05:34:36Z)
- Multi-Depth Branch Network for Efficient Image Super-Resolution [12.042706918188566]
A longstanding challenge in Super-Resolution (SR) is how to efficiently enhance high-frequency details in Low-Resolution (LR) images.
We propose an asymmetric SR architecture featuring a Multi-Depth Branch Module (MDBM).
MDBMs contain branches of different depths, designed to capture high- and low-frequency information simultaneously and efficiently.
arXiv Detail & Related papers (2023-09-29T15:46:25Z)
- Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced for image deblurring and have shown promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff) for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z)
- CADyQ: Content-Aware Dynamic Quantization for Image Super-Resolution [55.50793823060282]
We propose a novel Content-Aware Dynamic Quantization (CADyQ) method for image super-resolution (SR) networks.
CADyQ allocates optimal bits to local regions and layers adaptively based on the local contents of an input image.
The pipeline has been tested on various SR networks and evaluated on several standard benchmarks.
arXiv Detail & Related papers (2022-07-21T07:50:50Z)
- Deep Posterior Distribution-based Embedding for Hyperspectral Image Super-resolution [75.24345439401166]
This paper focuses on how to embed the high-dimensional spatial-spectral information of hyperspectral (HS) images efficiently and effectively.
We formulate HS embedding as an approximation of the posterior distribution of a set of carefully-defined HS embedding events.
Then, we incorporate the proposed feature embedding scheme into a source-consistent super-resolution framework that is physically-interpretable.
Experiments over three common benchmark datasets demonstrate that PDE-Net achieves superior performance over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-30T06:59:01Z)
- Model Inspired Autoencoder for Unsupervised Hyperspectral Image Super-Resolution [25.878793557013207]
This paper focuses on hyperspectral image (HSI) super-resolution that aims to fuse a low-spatial-resolution HSI and a high-spatial-resolution multispectral image.
Existing deep learning-based approaches are mostly supervised and rely on a large number of labeled training samples.
We make the first attempt to design a model-inspired deep network for HSI super-resolution in an unsupervised manner.
arXiv Detail & Related papers (2021-10-22T05:15:16Z)
- Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling [139.25215100378284]
We propose a hierarchical conditional flow (HCFlow) as a unified framework for image SR and image rescaling.
HCFlow learns a mapping between HR and LR image pairs by simultaneously modelling the distribution of the LR image and the remaining high-frequency component.
To further enhance the performance, other losses such as perceptual loss and GAN loss are combined with the commonly used negative log-likelihood loss in training.
arXiv Detail & Related papers (2021-08-11T16:11:01Z)
- Fully Quantized Image Super-Resolution Networks [81.75002888152159]
We propose a Fully Quantized image Super-Resolution framework (FQSR) to jointly optimize efficiency and accuracy.
We apply our quantization scheme on multiple mainstream super-resolution architectures, including SRResNet, SRGAN and EDSR.
Using low-bit quantization, our FQSR achieves performance on par with its full-precision counterparts on five benchmark datasets.
arXiv Detail & Related papers (2020-11-29T03:53:49Z)
- Hyperspectral Image Super-resolution via Deep Spatio-spectral Convolutional Neural Networks [32.10057746890683]
We propose a simple and efficient architecture for deep convolutional neural networks to fuse a low-resolution hyperspectral image and a high-resolution multispectral image.
The proposed architecture achieves the best performance compared with recent state-of-the-art hyperspectral image super-resolution approaches.
arXiv Detail & Related papers (2020-05-29T05:56:50Z)