Parameter-Efficient Image-to-Video Transfer Learning
- URL: http://arxiv.org/abs/2206.13559v1
- Date: Mon, 27 Jun 2022 18:02:29 GMT
- Title: Parameter-Efficient Image-to-Video Transfer Learning
- Authors: Junting Pan, Ziyi Lin, Xiatian Zhu, Jing Shao, Hongsheng Li
- Abstract summary: Large pre-trained models for various downstream tasks of interest have recently emerged with promising performance.
Due to the ever-growing model size, the standard full fine-tuning based task adaptation strategy becomes costly in terms of model training and storage.
We propose a new Spatio-Temporal Adapter (ST-Adapter) for parameter-efficient fine-tuning per video task.
- Score: 66.82811235484607
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Capitalizing on large pre-trained models has recently emerged as a
promising strategy for various downstream tasks of interest. Due to the
ever-growing model size, the standard full fine-tuning based task adaptation
strategy becomes prohibitively costly in terms of model training and storage.
This has led to a new research direction in parameter-efficient transfer
learning. However, existing attempts typically focus on downstream tasks from
the same modality (e.g., image understanding) as the pre-trained model. This is
limiting because for some modalities (e.g., video understanding), such a
strong pre-trained model with sufficient knowledge is scarce or unavailable.
In this work, we investigate such a novel cross-modality
transfer learning setting, namely parameter-efficient image-to-video transfer
learning. To solve this problem, we propose a new Spatio-Temporal Adapter
(ST-Adapter) for parameter-efficient fine-tuning per video task. With a
built-in spatio-temporal reasoning capability in a compact design, ST-Adapter
enables a pre-trained image model without temporal knowledge to reason about
dynamic video content at a small (~8%) per-task parameter cost, requiring
approximately 20 times fewer updated parameters compared to previous work.
Extensive experiments on video action recognition tasks show that our
ST-Adapter can match or even outperform the strong full fine-tuning strategy
and state-of-the-art video models, whilst enjoying the advantage of parameter
efficiency.
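The abstract describes ST-Adapter as a compact bottleneck module with built-in spatio-temporal reasoning inserted into a frozen image model. A minimal NumPy sketch of that idea is below, assuming a down-projection, a depthwise convolution, and an up-projection with a residual connection; for brevity the depthwise convolution here runs only along the temporal axis (the paper's module is spatio-temporal), and all weight names and shapes are illustrative, not the authors' implementation.

```python
import numpy as np

def gelu(x):
    # tanh approximation of the GELU activation
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def st_adapter(x, w_down, w_up, dw_kernel):
    """Sketch of a bottleneck adapter with a depthwise temporal convolution.

    x:         (T, N, C)   T frames, N spatial tokens, C channels
    w_down:    (C, C_b)    projection into a small bottleneck C_b << C
    w_up:      (C_b, C)    projection back to the model width
    dw_kernel: (K, C_b)    one K-tap temporal filter per bottleneck channel
    """
    h = x @ w_down                       # down-project to the bottleneck
    T, K = h.shape[0], dw_kernel.shape[0]
    pad = K // 2
    hp = np.pad(h, ((pad, pad), (0, 0), (0, 0)))   # zero-pad the time axis
    conv = np.zeros_like(h)
    for k in range(K):                   # depthwise conv: each channel uses
        conv += hp[k:k + T] * dw_kernel[k]         # its own 1-D temporal filter
    h = gelu(conv)
    return x + h @ w_up                  # up-project and add the residual
```

Because only the small matrices (`w_down`, `w_up`, `dw_kernel`) are trained per task while the backbone stays frozen, the per-task parameter count stays a small fraction of the full model, which is the source of the efficiency claimed above.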
Related papers
- Parameter-Efficient and Memory-Efficient Tuning for Vision Transformer: A Disentangled Approach [87.8330887605381]
We show how to adapt a pre-trained Vision Transformer to downstream recognition tasks with only a few learnable parameters.
We synthesize a task-specific query with a learnable and lightweight module, which is independent of the pre-trained model.
Our method achieves state-of-the-art performance under memory constraints, showcasing its applicability in real-world situations.
arXiv Detail & Related papers (2024-07-09T15:45:04Z)
- Time-, Memory- and Parameter-Efficient Visual Adaptation [75.28557015773217]
We propose an adaptation method which does not backpropagate gradients through the backbone.
We achieve this by designing a lightweight network in parallel that operates on features from the frozen, pretrained backbone.
arXiv Detail & Related papers (2024-02-05T10:55:47Z)
- AdaptIR: Parameter Efficient Multi-task Adaptation for Pre-trained Image Restoration Models [58.10797482129863]
We propose AdaptIR, a novel parameter efficient transfer learning method for adapting pre-trained restoration models.
Experiments demonstrate that the proposed method can achieve comparable or even better performance than full fine-tuning, while using only 0.6% of the parameters.
arXiv Detail & Related papers (2023-12-12T14:27:59Z)
- ZeroI2V: Zero-Cost Adaptation of Pre-trained Transformers from Image to Video [15.952896909797728]
Adapting image models to the video domain has emerged as an efficient paradigm for solving video recognition tasks.
Recent research is shifting its focus toward parameter-efficient image-to-video adaptation.
We present a new adaptation paradigm (ZeroI2V) to transfer the image transformers to video recognition tasks.
arXiv Detail & Related papers (2023-10-02T16:41:20Z)
- AIM: Adapting Image Models for Efficient Video Action Recognition [22.805026175928997]
We propose a method to Adapt pre-trained Image Models (AIM) for efficient video understanding.
By freezing the pre-trained image model and adding a few lightweight Adapters, we introduce spatial adaptation, temporal adaptation and joint adaptation.
We show that our proposed AIM can achieve competitive or even better performance than prior arts with substantially fewer tunable parameters.
arXiv Detail & Related papers (2023-02-06T18:59:17Z)
- Towards a Unified View on Visual Parameter-Efficient Transfer Learning [96.99924127527002]
We propose a framework with a unified view called visual-PETL (V-PETL) to investigate the different aspects affecting the trade-off.
An effective scheme Swin-BAPAT derived from the proposed V-PETL framework achieves significantly better performance than the state-of-the-art AdaptFormer-Swin.
arXiv Detail & Related papers (2022-10-03T09:54:39Z)
- Pro-tuning: Unified Prompt Tuning for Vision Tasks [133.12978197265596]
Fine-tuning is the de facto approach to leveraging pre-trained vision models for downstream tasks.
In this work, we propose parameter-efficient Prompt tuning (Pro-tuning) to adapt frozen vision models to various downstream vision tasks.
arXiv Detail & Related papers (2022-07-28T21:09:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.