Parameter Efficient Adaptation for Image Restoration with Heterogeneous Mixture-of-Experts
- URL: http://arxiv.org/abs/2312.08881v2
- Date: Sat, 19 Oct 2024 03:58:58 GMT
- Title: Parameter Efficient Adaptation for Image Restoration with Heterogeneous Mixture-of-Experts
- Authors: Hang Guo, Tao Dai, Yuanchao Bai, Bin Chen, Xudong Ren, Zexuan Zhu, Shu-Tao Xia
- Abstract summary: We introduce an alternative solution to improve the generalization of image restoration models.
We propose AdaptIR, a Mixture-of-Experts (MoE) with multi-branch design to capture local, global, and channel representation bases.
Our AdaptIR achieves stable performance on single-degradation tasks and excels in hybrid-degradation tasks, fine-tuning only 0.6% of the parameters for 8 hours.
- Abstract: Designing single-task image restoration models for specific degradation has seen great success in recent years. To achieve generalized image restoration, all-in-one methods have recently been proposed and shown potential for multiple restoration tasks using one single model. Despite the promising results, the existing all-in-one paradigm still suffers from high computational costs as well as limited generalization on unseen degradations. In this work, we introduce an alternative solution to improve the generalization of image restoration models. Drawing inspiration from recent advancements in Parameter Efficient Transfer Learning (PETL), we aim to tune only a small number of parameters to adapt pre-trained restoration models to various tasks. However, current PETL methods fail to generalize across varied restoration tasks due to their homogeneous representation nature. To this end, we propose AdaptIR, a Mixture-of-Experts (MoE) with orthogonal multi-branch design to capture local spatial, global spatial, and channel representation bases, followed by adaptive base combination to obtain heterogeneous representation for different degradations. Extensive experiments demonstrate that our AdaptIR achieves stable performance on single-degradation tasks and excels in hybrid-degradation tasks, fine-tuning only 0.6% of the parameters for 8 hours.
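The abstract's multi-branch idea can be illustrated with a minimal sketch: three heterogeneous "bases" (a local spatial expert, a global spatial expert, and a channel-mixing expert) whose outputs are adaptively combined by a softmax gate. All function names, shapes, and the averaging stand-ins for the experts are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def local_branch(x, k=3):
    # Local spatial expert: k x k neighborhood averaging as a stand-in
    # for a small convolutional branch (assumption, not the paper's layer).
    H, W, C = x.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = xp[i:i + k, j:j + k].mean(axis=(0, 1))
    return out

def global_branch(x):
    # Global spatial expert: per-channel global statistics broadcast back
    # to every pixel, capturing image-wide context.
    return np.broadcast_to(x.mean(axis=(0, 1)), x.shape)

def channel_branch(x, w):
    # Channel expert: per-channel reweighting with learned weights w.
    return x * w

def adaptir_sketch(x, gate_logits, channel_w):
    # A softmax gate adaptively combines the three heterogeneous bases,
    # giving different mixtures for different degradations.
    g = np.exp(gate_logits - gate_logits.max())
    g = g / g.sum()
    return (g[0] * local_branch(x)
            + g[1] * global_branch(x)
            + g[2] * channel_branch(x, channel_w))
```

With the gate pushed entirely toward the channel expert and unit channel weights, the sketch reduces to the identity, which makes the gating behavior easy to check.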
Related papers
- Mixed Degradation Image Restoration via Local Dynamic Optimization and Conditional Embedding [67.57487747508179]
Multiple-in-one image restoration (IR) has made significant progress, aiming to handle all single-degradation restoration tasks with a single model.
In this paper, we propose a novel multiple-in-one IR model that can effectively restore images with both single and mixed degradations.
arXiv Detail & Related papers (2024-11-25T09:26:34Z)
- Chain-of-Restoration: Multi-Task Image Restoration Models are Zero-Shot Step-by-Step Universal Image Restorers [53.298698981438]
We propose Universal Image Restoration (UIR), a new task setting that requires models to be trained on a set of degradation bases and then remove any degradation that these bases can potentially compose in a zero-shot manner.
Inspired by the Chain-of-Thought which prompts LLMs to address problems step-by-step, we propose the Chain-of-Restoration (CoR)
CoR instructs models to step-by-step remove unknown composite degradations.
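The step-by-step removal loop described above can be sketched abstractly: a detector picks the degradation it believes is still present, the matching base restorer is applied, and the loop repeats until the image is judged clean. The `detect` and `restorers` interfaces are hypothetical stand-ins; the paper's actual mechanism is learned prompting, not this explicit loop.

```python
def chain_of_restoration(image, detect, restorers, max_steps=5):
    """Hypothetical CoR-style loop.

    detect(image) -> name of a remaining base degradation, or None if clean.
    restorers     -> dict mapping degradation name to a single-task restorer.
    """
    for _ in range(max_steps):
        d = detect(image)
        if d is None:                  # no remaining degradation detected
            return image
        image = restorers[d](image)    # remove one base degradation per step
    return image
```

A toy run with degradations modeled as a set of tags shows composite degradations being peeled off one base at a time.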
arXiv Detail & Related papers (2024-10-11T10:21:42Z)
- UIR-LoRA: Achieving Universal Image Restoration through Multiple Low-Rank Adaptation [50.27688690379488]
Existing unified methods treat multi-degradation image restoration as a multi-task learning problem.
We propose a universal image restoration framework based on multiple low-rank adapters (LoRA) from multi-domain transfer learning.
Our framework leverages the pre-trained generative model as the shared component for multi-degradation restoration and transfers it to specific degradation image restoration tasks.
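The multiple-LoRA idea behind this framework can be sketched generically: a frozen shared weight plus gated low-rank updates, one adapter per degradation domain. This is a standard multi-LoRA forward pass under assumed shapes, not the authors' exact UIR-LoRA formulation.

```python
import numpy as np

def multi_lora_forward(x, W, adapters, gates):
    # Frozen shared weight W plus a gated sum of low-rank updates.
    # Each adapter is an (A, B) pair encoding delta_W = B @ A, so only
    # the small A and B matrices are trained per degradation domain.
    out = x @ W.T
    for (A, B), g in zip(adapters, gates):
        out += g * (x @ A.T @ B.T)
    return out
```

Setting a gate to zero recovers the frozen base model exactly, which is the property that lets one shared backbone serve many degradation-specific adapters.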
arXiv Detail & Related papers (2024-09-30T11:16:56Z)
- Taming Generative Diffusion Prior for Universal Blind Image Restoration [4.106012295148947]
BIR-D is able to perform multi-guidance blind image restoration.
It can also restore images that undergo multiple and complicated degradations, demonstrating the practical applications.
arXiv Detail & Related papers (2024-08-21T02:19:54Z)
- HAIR: Hypernetworks-based All-in-One Image Restoration [46.681872835394095]
HAIR is a Hypernetworks-based All-in-One Image Restoration plug-and-play method.
It generates parameters based on the input image, enabling the model to adapt dynamically to the specific degradation.
It can significantly improve the performance of existing image restoration models in a plug-and-play manner, both in single-task and All-in-One settings.
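The hypernetwork mechanism described above can be sketched with a linear toy: a small frozen hypernetwork maps an input-derived degradation embedding to the weights of a restoration layer, so the layer's parameters change per input. Function names, the linear form, and all shapes are illustrative assumptions, not HAIR's architecture.

```python
import numpy as np

def hyper_generate(z, H, out_dim, in_dim):
    # Linear hypernetwork: maps degradation embedding z to the flattened
    # weights of a downstream layer, then reshapes them into a matrix.
    theta = H @ z                      # shape: (out_dim * in_dim,)
    return theta.reshape(out_dim, in_dim)

def dynamic_restore(x, z, H, out_dim=4):
    # The generated weights depend on z, so different inputs (different
    # degradations) get different effective restoration layers.
    W = hyper_generate(z, H, out_dim, x.shape[-1])
    return x @ W.T
```

Feeding two different embeddings through the same frozen hypernetwork yields different layer weights, which is the dynamic-adaptation property the abstract refers to.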
arXiv Detail & Related papers (2024-08-15T11:34:33Z)
- Any Image Restoration with Efficient Automatic Degradation Adaptation [132.81912195537433]
We propose a unified manner to achieve joint embedding by leveraging the inherent similarities across various degradations for efficient and comprehensive restoration.
Our network sets new SOTA records while reducing model complexity by approximately 82% in trainable parameters and 85% in FLOPs.
arXiv Detail & Related papers (2024-07-18T10:26:53Z)
- PGDiff: Guiding Diffusion Models for Versatile Face Restoration via Partial Guidance [65.5618804029422]
Previous works have achieved noteworthy success by limiting the solution space using explicit degradation models.
We propose PGDiff by introducing partial guidance, a fresh perspective that is more adaptable to real-world degradations.
Our method not only outperforms existing diffusion-prior-based approaches but also competes favorably with task-specific models.
arXiv Detail & Related papers (2023-09-19T17:51:33Z)
- One Size Fits All: Hypernetwork for Tunable Image Restoration [5.33024001730262]
We introduce a novel approach for tunable image restoration that achieves the accuracy of multiple models, each optimized for a different level of degradation.
Our model can be optimized to restore as many degradation levels as required with a constant number of parameters and for various image restoration tasks.
arXiv Detail & Related papers (2022-06-13T08:33:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.