AdaptIR: Parameter Efficient Multi-task Adaptation for Pre-trained Image
Restoration Models
- URL: http://arxiv.org/abs/2312.08881v1
- Date: Tue, 12 Dec 2023 14:27:59 GMT
- Title: AdaptIR: Parameter Efficient Multi-task Adaptation for Pre-trained Image
Restoration Models
- Authors: Hang Guo, Tao Dai, Yuanchao Bai, Bin Chen, Shu-Tao Xia, Zexuan Zhu
- Abstract summary: We propose AdaptIR, a novel parameter efficient transfer learning method for adapting pre-trained restoration models.
Experiments demonstrate that the proposed method can achieve comparable or even better performance than full fine-tuning, while using only 0.6% of the parameters.
- Score: 58.10797482129863
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Pre-training has shown promising results on various image restoration tasks,
which is usually followed by full fine-tuning for each specific downstream task
(e.g., image denoising). However, such full fine-tuning usually suffers from
heavy computational cost in practice due to the massive number of parameters in
pre-trained restoration models, which limits its real-world applications.
Recently, Parameter-Efficient Transfer Learning (PETL) has emerged as an
efficient alternative to full fine-tuning, yet it still faces great challenges
for pre-trained image restoration models due to the diversity of degradations.
To address these issues, we propose AdaptIR, a novel parameter-efficient
transfer learning method for adapting pre-trained
restoration models. Specifically, the proposed method consists of a
multi-branch inception structure to orthogonally capture local spatial, global
spatial, and channel interactions. In this way, it yields powerful
representations under a very low parameter budget. Extensive experiments
demonstrate that the proposed method can achieve comparable or even better
performance than full fine-tuning while using only 0.6% of the parameters. Code is
available at https://github.com/csguoh/AdaptIR.
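The abstract only names the three interaction types that the multi-branch module captures. The following is a hypothetical sketch of such an adapter in PyTorch, not the authors' implementation (see the linked repository for that); the specific branch designs (depthwise convolution for local spatial, global pooling for global spatial, pointwise convolution for channel interactions), the bottleneck width, and all names are assumptions.

```python
# Hypothetical multi-branch, low-parameter adapter sketch (PyTorch).
# NOT the official AdaptIR module; branch designs are assumptions.
import torch
import torch.nn as nn

class MultiBranchAdapter(nn.Module):
    def __init__(self, dim: int, bottleneck: int = 8):
        super().__init__()
        # Shared down-projection keeps the added parameter count small.
        self.down = nn.Conv2d(dim, bottleneck, kernel_size=1)
        # Branch 1: local spatial interactions (depthwise conv, assumed).
        self.local = nn.Conv2d(bottleneck, bottleneck, kernel_size=3,
                               padding=1, groups=bottleneck)
        # Branch 2: global spatial interactions (global pooling, assumed).
        self.global_pool = nn.AdaptiveAvgPool2d(1)
        self.global_fc = nn.Conv2d(bottleneck, bottleneck, kernel_size=1)
        # Branch 3: channel interactions (pointwise conv, assumed).
        self.channel = nn.Conv2d(bottleneck, bottleneck, kernel_size=1)
        self.up = nn.Conv2d(bottleneck, dim, kernel_size=1)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) features from a frozen pre-trained block.
        z = self.act(self.down(x))
        local = self.local(z)
        global_spatial = self.global_fc(self.global_pool(z))  # broadcast over H, W
        channel = self.channel(z)
        fused = local + global_spatial + channel
        return x + self.up(self.act(fused))  # residual adapter output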
Related papers
- Gradient Projection For Continual Parameter-Efficient Tuning [42.800411328615894]
We reformulate Adapter, LoRA, Prefix-tuning, and Prompt-tuning from the perspective of gradient projection.
We show that the condition for the gradient can effectively resist forgetting even for large-scale models.
We extensively evaluate our method with different backbones, including ViT and CLIP, on diverse datasets.
arXiv Detail & Related papers (2024-05-22T06:33:48Z)
- Time-, Memory- and Parameter-Efficient Visual Adaptation [75.28557015773217]
We propose an adaptation method which does not backpropagate gradients through the backbone.
We achieve this by designing a lightweight network in parallel that operates on features from the frozen, pretrained backbone.
arXiv Detail & Related papers (2024-02-05T10:55:47Z)
- Efficient Adaptation of Large Vision Transformer via Adapter Re-Composing [8.88477151877883]
High-capacity pre-trained models have revolutionized problem-solving in computer vision.
We propose a novel Adapter Re-Composing (ARC) strategy that addresses efficient pre-trained model adaptation.
Our approach considers the reusability of adaptation parameters and introduces a parameter-sharing scheme.
arXiv Detail & Related papers (2023-10-10T01:04:15Z)
- Evaluating Parameter-Efficient Transfer Learning Approaches on SURE Benchmark for Speech Understanding [40.27182770995891]
Fine-tuning is widely used as the default algorithm for transfer learning from pre-trained models.
We introduce the Speech UndeRstanding Evaluation (SURE) benchmark for parameter-efficient learning for various speech-processing tasks.
arXiv Detail & Related papers (2023-03-02T08:57:33Z)
- Parameter-Efficient Image-to-Video Transfer Learning [66.82811235484607]
Large pre-trained models for various downstream tasks of interest have recently emerged with promising performance.
Due to the ever-growing model size, the standard full fine-tuning based task adaptation strategy becomes costly in terms of model training and storage.
We propose a new Spatio-Temporal Adapter (ST-Adapter) for parameter-efficient fine-tuning per video task.
arXiv Detail & Related papers (2022-06-27T18:02:29Z)
- Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning [81.3514358542452]
Few-shot in-context learning (ICL) incurs substantial computational, memory, and storage costs because it involves processing all of the training examples every time a prediction is made.
Parameter-efficient fine-tuning offers an alternative paradigm where a small set of parameters is trained to enable a model to perform the new task (a minimal code sketch of this shared paradigm follows this list).
In this paper, we rigorously compare few-shot ICL and parameter-efficient fine-tuning and demonstrate that the latter offers better accuracy as well as dramatically lower computational costs.
arXiv Detail & Related papers (2022-05-11T17:10:41Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
- Parameter-Efficient Transfer from Sequential Behaviors for User Modeling and Recommendation [111.44445634272235]
In this paper, we develop a parameter-efficient transfer learning architecture, termed PeterRec.
PeterRec allows the pre-trained parameters to remain unaltered during fine-tuning by injecting a series of re-learned neural networks.
We perform extensive experimental ablation to show the effectiveness of the learned user representation in five downstream tasks.
arXiv Detail & Related papers (2020-01-13T14:09:54Z)
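The related works above instantiate the same PETL principle: keep the pre-trained backbone frozen and train only a small number of new parameters. The sketch below is a minimal, generic low-rank (LoRA-style) illustration of that principle under assumed names and a chosen rank; it does not reproduce any specific paper's method.

```python
# Generic PETL illustration: a low-rank update added to a frozen linear layer.
# Illustrative only; names, the rank, and the design are assumptions.
import torch
import torch.nn as nn

class LowRankAdaptedLinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # pre-trained weights stay frozen
            p.requires_grad = False
        self.down = nn.Linear(base.in_features, rank, bias=False)  # trainable
        self.up = nn.Linear(rank, base.out_features, bias=False)   # trainable
        nn.init.zeros_(self.up.weight)  # start as an identity update

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.up(self.down(x))
```

During adaptation, only the small down/up projections receive gradients, which is how such methods keep the trainable parameter budget to a small fraction of the full model.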