LoRA-IR: Taming Low-Rank Experts for Efficient All-in-One Image Restoration
- URL: http://arxiv.org/abs/2410.15385v1
- Date: Sun, 20 Oct 2024 13:00:24 GMT
- Title: LoRA-IR: Taming Low-Rank Experts for Efficient All-in-One Image Restoration
- Authors: Yuang Ai, Huaibo Huang, Ran He
- Abstract summary: We propose LoRA-IR, a flexible framework that dynamically leverages compact low-rank experts to facilitate efficient all-in-one image restoration.
LoRA-IR consists of two training stages: degradation-guided pre-training and parameter-efficient fine-tuning.
LoRA-IR achieves state-of-the-art performance across 14 image restoration tasks and 29 benchmarks.
- Score: 62.3751291442432
- Abstract: Prompt-based all-in-one image restoration (IR) frameworks have achieved remarkable performance by incorporating degradation-specific information into prompt modules. Nevertheless, handling the complex and diverse degradations encountered in real-world scenarios remains a significant challenge. To address this challenge, we propose LoRA-IR, a flexible framework that dynamically leverages compact low-rank experts to facilitate efficient all-in-one image restoration. Specifically, LoRA-IR consists of two training stages: degradation-guided pre-training and parameter-efficient fine-tuning. In the pre-training stage, we enhance the pre-trained CLIP model by introducing a simple mechanism that scales it to higher resolutions, allowing us to extract robust degradation representations that adaptively guide the IR network. In the fine-tuning stage, we refine the pre-trained IR network using low-rank adaptation (LoRA). Built upon a Mixture-of-Experts (MoE) architecture, LoRA-IR dynamically integrates multiple low-rank restoration experts through a degradation-guided router. This dynamic integration mechanism significantly enhances our model's adaptability to diverse and unknown degradations in complex real-world scenarios. Extensive experiments demonstrate that LoRA-IR achieves state-of-the-art performance across 14 image restoration tasks and 29 benchmarks. Code and pre-trained models will be available at: https://github.com/shallowdream204/LoRA-IR.
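The core mechanism described in the abstract, dynamically combining multiple low-rank (LoRA) experts through a degradation-guided router, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the router is a single hypothetical linear map, and the degradation representation (which the paper derives from a scaled-up CLIP encoder) is a random stand-in vector.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, rank, n_experts = 8, 2, 3

# Frozen base weight of one linear layer in the pre-trained IR network.
W = rng.standard_normal((d_model, d_model))

# One low-rank (LoRA) expert per degradation type: delta_e = B_e @ A_e.
A = rng.standard_normal((n_experts, rank, d_model)) * 0.1
B = rng.standard_normal((n_experts, d_model, rank)) * 0.1

# Hypothetical degradation-guided router: maps a degradation
# representation to a soft weighting over the experts.
W_router = rng.standard_normal((n_experts, d_model))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def lora_moe_forward(x, degradation_repr):
    """Blend the low-rank experts with router gates, then apply the layer."""
    gates = softmax(W_router @ degradation_repr)              # (n_experts,)
    delta = sum(g * (B[e] @ A[e]) for e, g in enumerate(gates))
    return (W + delta) @ x

x = rng.standard_normal(d_model)       # stand-in for a feature vector
d_repr = rng.standard_normal(d_model)  # stand-in degradation embedding
y = lora_moe_forward(x, d_repr)
print(y.shape)  # (8,)
```

Because each expert update is rank-limited, the per-expert parameter cost is small, which is what makes scaling the number of experts cheap in this scheme.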
Related papers
- UIR-LoRA: Achieving Universal Image Restoration through Multiple Low-Rank Adaptation [50.27688690379488]
Existing unified methods treat multi-degradation image restoration as a multi-task learning problem.
We propose a universal image restoration framework based on multiple low-rank adapters (LoRA) from multi-domain transfer learning.
Our framework leverages the pre-trained generative model as the shared component for multi-degradation restoration and transfers it to specific degradation image restoration tasks.
arXiv Detail & Related papers (2024-09-30T11:16:56Z)
- AdaIR: Exploiting Underlying Similarities of Image Restoration Tasks with Adapters [57.62742271140852]
AdaIR is a novel framework that enables low storage cost and efficient training without sacrificing performance.
AdaIR requires solely the training of lightweight, task-specific modules, ensuring a more efficient storage and training regimen.
arXiv Detail & Related papers (2024-04-17T15:31:06Z)
- Low-Res Leads the Way: Improving Generalization for Super-Resolution by Self-Supervised Learning [45.13580581290495]
This work introduces a novel "Low-Res Leads the Way" (LWay) training framework to enhance the adaptability of SR models to real-world images.
Our approach utilizes a low-resolution (LR) reconstruction network to extract degradation embeddings from LR images, merging them with super-resolved outputs for LR reconstruction.
Our training regime is universally compatible, requiring no network architecture modifications, making it a practical solution for real-world SR applications.
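The LWay idea above can be sketched roughly: a degradation embedding is extracted from the LR input and merged with the super-resolved output to reconstruct the LR image, giving a self-supervised loss that needs no HR ground truth. The encoder and decoder below are hypothetical toy stand-ins (1-D signals, per-chunk statistics), not the paper's learned networks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D stand-ins for images, just to show the data flow.
lr = rng.standard_normal(16)   # real-world low-resolution input
sr = np.repeat(lr, 2)          # stand-in for a 2x super-resolved output

def extract_degradation_embedding(lr_img, dim=4):
    # Hypothetical LR reconstruction encoder: coarse per-chunk
    # statistics stand in for a learned degradation representation.
    return lr_img.reshape(dim, -1).mean(axis=1)

def reconstruct_lr(sr_img, embedding):
    # Hypothetical decoder: downsample the SR output back to LR size,
    # modulated by the degradation embedding.
    down = sr_img.reshape(-1, 2).mean(axis=1)
    scale = np.repeat(embedding, down.size // embedding.size)
    return down * (1.0 + 0.1 * scale)

emb = extract_degradation_embedding(lr)
lr_recon = reconstruct_lr(sr, emb)

# Self-supervised objective: compare against the LR input itself,
# so no high-resolution ground truth is required.
loss = float(np.mean((lr_recon - lr) ** 2))
print(lr_recon.shape)  # (16,)
```

The key point is the supervision signal: the loss is computed entirely against the LR input, which is why the scheme transfers to real-world images without paired HR data.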
arXiv Detail & Related papers (2024-03-05T02:29:18Z)
- LIR: A Lightweight Baseline for Image Restoration [4.187190284830909]
The inherent characteristics of the Image Restoration task are often overlooked in many works.
We propose a Lightweight Baseline network for Image Restoration called LIR to efficiently restore the image and remove degradations.
Our LIR achieves state-of-the-art Structural Similarity Index Measure (SSIM) and performance comparable to state-of-the-art models on Peak Signal-to-Noise Ratio (PSNR).
arXiv Detail & Related papers (2024-02-02T12:39:47Z)
- Multimodal Prompt Perceiver: Empower Adaptiveness, Generalizability and Fidelity for All-in-One Image Restoration [58.11518043688793]
MPerceiver is a novel approach to enhance adaptiveness, generalizability and fidelity for all-in-one image restoration.
MPerceiver is trained on 9 tasks for all-in-one IR and outperforms state-of-the-art task-specific methods across most tasks.
arXiv Detail & Related papers (2023-12-05T17:47:11Z)
- ICF-SRSR: Invertible scale-Conditional Function for Self-Supervised Real-world Single Image Super-Resolution [60.90817228730133]
Single image super-resolution (SISR) is a challenging problem that aims to up-sample a given low-resolution (LR) image to a high-resolution (HR) counterpart.
Recent approaches are trained on simulated LR images degraded by simplified down-sampling operators.
We propose a novel Invertible scale-Conditional Function (ICF) which can scale an input image and then restore the original input with different scale conditions.
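The invertibility property that ICF relies on, applying a scale-conditioned transform and then exactly recovering the input under the same condition, can be illustrated with a toy bijection. The affine map below is an assumption chosen for clarity; it is not the actual ICF, which operates on image resolutions.

```python
import numpy as np

def icf(x, scale):
    # Hypothetical invertible scale-conditional function: a bijective
    # affine map whose parameters depend on the scale condition.
    a, b = 1.0 + scale, 0.5 * scale
    return a * x + b

def icf_inverse(y, scale):
    # Exact inverse under the same scale condition.
    a, b = 1.0 + scale, 0.5 * scale
    return (y - b) / a

x = np.linspace(-1.0, 1.0, 8)
roundtrips = [np.allclose(icf_inverse(icf(x, s), s), x)
              for s in (0.5, 2.0, 3.0)]
print(all(roundtrips))  # True
```

Because the round trip is exact for every scale condition, pairs generated this way can supervise training without real HR/LR pairs, which is the self-supervised angle of ICF-SRSR.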
arXiv Detail & Related papers (2023-07-24T12:42:45Z)
- DRM-IR: Task-Adaptive Deep Unfolding Network for All-In-One Image Restoration [5.573836220587265]
This work proposes an efficient Dynamic Reference Modeling paradigm (DRM-IR).
DRM-IR consists of task-adaptive degradation modeling and model-based image restoring.
Experiments on multiple benchmark datasets show that DRM-IR achieves state-of-the-art performance in All-In-One IR.
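A deep unfolding network of this kind alternates optimization-style steps: a data-fidelity update against a degradation model and a prior (denoising) step, with the learned modules replaced here by simple stand-ins. The degradation operator `H`, the soft-threshold prior, and all constants below are assumptions for illustration; DRM-IR learns task-adaptive versions of these components.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 32
# Assumed linear degradation operator (well-conditioned for the demo).
H = np.eye(n) * 0.8 + 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)
x_true = rng.standard_normal(n)
y = H @ x_true + 0.01 * rng.standard_normal(n)  # degraded observation

def soft_threshold(v, t):
    # Simple proximal step standing in for a learned denoiser module.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unfolded_restore(y, H, steps=20, eta=0.5, lam=0.01):
    x = H.T @ y  # crude initial estimate
    for _ in range(steps):
        x = x - eta * H.T @ (H @ x - y)  # data-fidelity gradient step
        x = soft_threshold(x, lam)       # prior / denoising step
    return x

x_hat = unfolded_restore(y, H)
err0 = np.linalg.norm(H.T @ y - x_true)  # error of the initial estimate
err = np.linalg.norm(x_hat - x_true)     # error after unfolded iterations
print(err < err0)
```

In an actual unfolding network each iteration becomes a learned layer, so the step size, threshold, and degradation model can adapt per task, which is the "task-adaptive degradation modeling" component of DRM-IR.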
arXiv Detail & Related papers (2023-07-15T02:42:19Z)
- RBSR: Efficient and Flexible Recurrent Network for Burst Super-Resolution [57.98314517861539]
Burst super-resolution (BurstSR) aims at reconstructing a high-resolution (HR) image from a sequence of low-resolution (LR) and noisy images.
In this paper, we suggest fusing cues frame-by-frame with an efficient and flexible recurrent network.
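The frame-by-frame recurrent fusion idea can be sketched with a running state that absorbs one burst frame at a time. The incremental-mean update below is a toy stand-in for RBSR's learned recurrent network, and the synthetic burst (same scene plus noise) is an assumption for the demo.

```python
import numpy as np

rng = np.random.default_rng(3)

# A burst of noisy low-resolution observations of the same scene.
clean = rng.standard_normal(16)
burst = [clean + 0.3 * rng.standard_normal(16) for _ in range(8)]

def recurrent_fuse(frames):
    # Hypothetical recurrent fusion: a running state absorbs each frame
    # in turn (here an incremental mean; RBSR uses a learned network).
    state, n = np.zeros_like(frames[0]), 0
    for f in frames:
        n += 1
        state = state + (f - state) / n  # incremental mean update
    return state

fused = recurrent_fuse(burst)
err_single = np.linalg.norm(burst[0] - clean)  # one frame alone
err_fused = np.linalg.norm(fused - clean)      # after fusing the burst
print(err_fused < err_single)
```

A recurrent design like this processes bursts of any length with constant memory, which is the flexibility the summary refers to.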
arXiv Detail & Related papers (2023-06-30T12:14:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (or any information it contains) and is not responsible for any consequences of its use.