ClusIR: Towards Cluster-Guided All-in-One Image Restoration
- URL: http://arxiv.org/abs/2512.10948v1
- Date: Thu, 11 Dec 2025 18:59:47 GMT
- Title: ClusIR: Towards Cluster-Guided All-in-One Image Restoration
- Authors: Shengkai Hu, Jiaqi Ma, Jun Wan, Wenwen Min, Yongcheng Jing, Lefei Zhang, Dacheng Tao
- Abstract summary: ClusIR aims to recover high-quality images from diverse degradations within a unified framework. ClusIR comprises two key components: a Probabilistic Cluster-Guided Routing Mechanism (PCGRM) and a Degradation-Aware Frequency Modulation Module (DAFMM).
- Score: 72.16989784735796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: All-in-One Image Restoration (AiOIR) aims to recover high-quality images from diverse degradations within a unified framework. However, existing methods often fail to explicitly model degradation types and struggle to adapt their restoration behavior to complex or mixed degradations. To address these issues, we propose ClusIR, a Cluster-Guided Image Restoration framework that explicitly models degradation semantics through learnable clustering and propagates cluster-aware cues across spatial and frequency domains for adaptive restoration. Specifically, ClusIR comprises two key components: a Probabilistic Cluster-Guided Routing Mechanism (PCGRM) and a Degradation-Aware Frequency Modulation Module (DAFMM). The proposed PCGRM disentangles degradation recognition from expert activation, enabling discriminative degradation perception and stable expert routing. Meanwhile, DAFMM leverages the cluster-guided priors to perform adaptive frequency decomposition and targeted modulation, collaboratively refining structural and textural representations for higher restoration fidelity. The cluster-guided synergy seamlessly bridges semantic cues with frequency-domain modulation, empowering ClusIR to attain remarkable restoration results across a wide range of degradations. Extensive experiments on diverse benchmarks validate that ClusIR reaches competitive performance under several scenarios.
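The routing idea described in the abstract can be pictured, at a very high level, as soft cluster assignment gating a set of experts: degradation recognition (distance to learnable cluster centroids) is computed separately from the experts themselves, and the resulting probabilities mix the expert outputs. The sketch below is an illustrative toy in plain Python, not the authors' implementation; the centroids, experts, and temperature are hypothetical stand-ins for learned components.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cluster_guided_route(feature, centroids, experts, temperature=1.0):
    """Toy probabilistic cluster-guided routing (illustrative only).

    Soft-assigns a feature vector to cluster centroids via negative
    squared distance, then mixes expert outputs by those probabilities,
    so recognition (clustering) is decoupled from expert activation.
    """
    # Soft assignment: closer centroid -> higher routing probability.
    logits = [-sum((f - c) ** 2 for f, c in zip(feature, centroid)) / temperature
              for centroid in centroids]
    probs = softmax(logits)
    # Mixture of expert outputs, weighted by cluster probabilities.
    outputs = [expert(feature) for expert in experts]
    mixed = [sum(p * out[i] for p, out in zip(probs, outputs))
             for i in range(len(feature))]
    return mixed, probs
```

With a low temperature the assignment sharpens toward the nearest centroid, so a feature sitting on a centroid is routed almost entirely to that cluster's expert; higher temperatures blend experts more evenly, which is one way to handle mixed degradations.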
Related papers
- Towards Any-Quality Image Segmentation via Generative and Adaptive Latent Space Enhancement [27.566673104431725]
Segment Anything Models (SAMs) are known for their exceptional zero-shot segmentation performance. However, their performance drops significantly on severely degraded, low-quality images, limiting their effectiveness in real-world scenarios. We propose GleSAM++, which utilizes Generative Latent space Enhancement to boost robustness on low-quality images.
arXiv Detail & Related papers (2026-01-05T11:28:58Z)
- Mixture of Ranks with Degradation-Aware Routing for One-Step Real-World Image Super-Resolution [76.66229730098759]
In real-world image super-resolution (Real-ISR), existing approaches mainly rely on fine-tuning pre-trained diffusion models. We propose a Mixture-of-Ranks (MoR) architecture for single-step image super-resolution. We introduce a fine-grained expert partitioning strategy that treats each rank in LoRA as an independent expert.
arXiv Detail & Related papers (2025-11-20T04:11:44Z)
- Learning to Restore Multi-Degraded Images via Ingredient Decoupling and Task-Aware Path Adaptation [51.10017611491389]
Real-world images often suffer from multiple coexisting degradations, such as rain, noise, and haze in a single image. We propose an adaptive multi-degradation image restoration network that reconstructs images by leveraging decoupled representations of degradation ingredients. The resulting tightly integrated architecture, termed IMDNet, is extensively validated through experiments.
arXiv Detail & Related papers (2025-11-07T01:50:36Z)
- GENRE-CMR: Generalizable Deep Learning for Diverse Multi-Domain Cardiac MRI Reconstruction [0.8749675983608171]
We propose GENRE-CMR, a generative adversarial network (GAN)-based architecture to enhance reconstruction fidelity and generalization. Experiments confirm that GENRE-CMR surpasses state-of-the-art methods on training and unseen data, achieving 0.9552 SSIM and 38.90 dB PSNR on unseen distributions. Our framework presents a unified and robust solution for high-quality CMR reconstruction, paving the way for clinically adaptable deployment across heterogeneous acquisition protocols.
arXiv Detail & Related papers (2025-08-28T09:43:59Z)
- UniRes: Universal Image Restoration for Complex Degradations [53.74404005987783]
Real-world image restoration is hampered by diverse degradations stemming from varying capture conditions, capture devices, and post-processing pipelines. A simple yet flexible diffusion-based framework, named UniRes, is proposed to address such degradations in an end-to-end manner. Our proposed method is evaluated on both complex-degradation and single-degradation image restoration datasets.
arXiv Detail & Related papers (2025-06-05T21:25:39Z)
- Beyond Degradation Redundancy: Contrastive Prompt Learning for All-in-One Image Restoration [109.38288333994407]
Contrastive Prompt Learning (CPL) is a novel framework that fundamentally enhances prompt-task alignment. Our framework establishes new state-of-the-art performance while maintaining parameter efficiency, offering a principled solution for unified image restoration.
arXiv Detail & Related papers (2025-04-14T08:24:57Z)
- Mixed Degradation Image Restoration via Local Dynamic Optimization and Conditional Embedding [67.57487747508179]
Multiple-in-one image restoration (IR) has made significant progress, aiming to handle all types of single-degradation image restoration with a single model.
In this paper, we propose a novel multiple-in-one IR model that can effectively restore images with both single and mixed degradations.
arXiv Detail & Related papers (2024-11-25T09:26:34Z)
- GRIDS: Grouped Multiple-Degradation Restoration with Image Degradation Similarity [35.11349385659554]
Grouped Restoration with Image Degradation Similarity (GRIDS) is a novel approach that harmonizes the competing objectives inherent in multiple-degradation restoration.
Based on degradation similarity, GRIDS assigns each restoration task to one of the optimal groups, where tasks within the same group are highly correlated.
Trained models within each group show significant improvements, with an average gain of 0.09 dB over single-task upper-bound models.
arXiv Detail & Related papers (2024-07-17T02:43:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.