FiT: Parameter Efficient Few-shot Transfer Learning for Personalized and
Federated Image Classification
- URL: http://arxiv.org/abs/2206.08671v1
- Date: Fri, 17 Jun 2022 10:17:20 GMT
- Title: FiT: Parameter Efficient Few-shot Transfer Learning for Personalized and
Federated Image Classification
- Authors: Aliaksandra Shysheya, John Bronskill, Massimiliano Patacchiola,
Sebastian Nowozin, Richard E Turner
- Abstract summary: We develop FiLM Transfer (FiT), which fulfills the requirements of few-shot and communication-efficient learning in the image classification setting.
FiT uses an automatically configured Naive Bayes classifier on top of a fixed backbone that has been pretrained on large image datasets.
We show that FiT achieves better classification accuracy than the state-of-the-art Big Transfer (BiT) algorithm at low-shot and on the challenging VTAB-1k benchmark.
- Score: 47.24770508263431
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern deep learning systems are increasingly deployed in situations such as
personalization and federated learning where it is necessary to support i)
learning on small amounts of data, and ii) communication efficient distributed
training protocols. In this work we develop FiLM Transfer (FiT) which fulfills
these requirements in the image classification setting. FiT uses an
automatically configured Naive Bayes classifier on top of a fixed backbone that
has been pretrained on large image datasets. Parameter efficient FiLM layers
are used to modulate the backbone, shaping the representation for the
downstream task. The network is trained via an episodic fine-tuning protocol.
The approach is parameter efficient which is key for enabling few-shot
learning, inexpensive model updates for personalization, and communication
efficient federated learning. We experiment with FiT on a wide range of
downstream datasets and show that it achieves better classification accuracy
than the state-of-the-art Big Transfer (BiT) algorithm at low-shot and on the
challenging VTAB-1k benchmark, with fewer than 1% of the updateable parameters.
Finally, we demonstrate the parameter efficiency of FiT in distributed low-shot
applications including model personalization and federated learning where model
update size is an important performance metric.
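The abstract names two mechanisms: parameter-efficient FiLM layers that scale and shift the channels of a frozen pretrained backbone, and an automatically configured Naive Bayes classifier fitted on the resulting embeddings. The sketch below illustrates both ideas in PyTorch; it is a minimal illustration, not the authors' implementation, and the class names, tensor shapes, and the shared diagonal variance in the classifier head are assumptions made for the example.

```python
# Minimal sketch (not the authors' code) of FiLM modulation plus a
# Gaussian Naive-Bayes-style head fitted from few-shot support embeddings.
import torch
import torch.nn as nn


class FiLM(nn.Module):
    """Feature-wise linear modulation: scale and shift each channel of a feature map."""

    def __init__(self, num_channels: int):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(num_channels))   # per-channel scale
        self.beta = nn.Parameter(torch.zeros(num_channels))   # per-channel shift

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) features from the frozen backbone
        return x * self.gamma.view(1, -1, 1, 1) + self.beta.view(1, -1, 1, 1)


class GaussianNBHead:
    """Naive-Bayes-style classifier: per-class Gaussian means with a shared diagonal variance."""

    def fit(self, embeddings: torch.Tensor, labels: torch.Tensor) -> None:
        # embeddings: (num_support, dim), labels: (num_support,)
        classes = labels.unique()
        self.means = torch.stack([embeddings[labels == c].mean(0) for c in classes])
        # Shared diagonal variance pooled over all support embeddings (an assumption).
        self.var = embeddings.var(0, unbiased=False) + 1e-6

    def log_prob(self, embeddings: torch.Tensor) -> torch.Tensor:
        # Class log-likelihood (up to a constant) for each query embedding.
        diff = embeddings.unsqueeze(1) - self.means.unsqueeze(0)
        return -0.5 * ((diff ** 2) / self.var).sum(-1)
```

In this sketch only the FiLM parameters and the classifier statistics change per task, which mirrors the abstract's argument for why per-task model updates stay small in personalization and federated settings.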
Related papers
- Fisher Information-based Efficient Curriculum Federated Learning with Large Language Models [43.26028399395612]
We propose a Fisher Information-based Efficient Curriculum Federated Learning framework (FibecFed) with two novel methods.
First, we propose a Fisher information-based method to adaptively sample data within each device, improving the effectiveness of the FL fine-tuning process.
Second, we dynamically select the proper layers for global aggregation and sparse parameters for local update with LoRA.
arXiv Detail & Related papers (2024-09-30T18:12:18Z)
- Federated Learning of Large Language Models with Parameter-Efficient Prompt Tuning and Adaptive Optimization [71.87335804334616]
Federated learning (FL) is a promising paradigm to enable collaborative model training with decentralized data.
Training Large Language Models (LLMs) generally requires updating a significant number of parameters.
This paper proposes an efficient partial prompt tuning approach to improve performance and efficiency simultaneously.
arXiv Detail & Related papers (2023-10-23T16:37:59Z)
- Prototype-based HyperAdapter for Sample-Efficient Multi-task Tuning [30.251155072822055]
Prototype-based HyperAdapter (PHA) is a novel framework built on adapter tuning and hypernetworks.
It introduces an instance-dense retriever and prototypical hypernetwork to generate conditional modules in a sample-efficient manner.
We show that PHA strikes a better trade-off between trainable parameters, accuracy on stream tasks, and sample efficiency.
arXiv Detail & Related papers (2023-10-18T02:42:17Z)
- SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models [28.764782216513037]
Federated Learning (FL) can benefit from the distributed, private data of FL edge clients for fine-tuning.
We propose a method called SLoRA, which overcomes the key limitations of LoRA in highly heterogeneous data scenarios (a minimal sketch of the underlying LoRA idea appears after this list).
Our experimental results demonstrate that SLoRA achieves performance comparable to full fine-tuning.
arXiv Detail & Related papers (2023-08-12T10:33:57Z)
- Exploring Efficient Few-shot Adaptation for Vision Transformers [70.91692521825405]
We propose a novel efficient Transformer Tuning (eTT) method that facilitates fine-tuning ViTs on few-shot learning tasks.
Its key novelties are the newly presented Attentive Prefix Tuning (APT) and Domain Residual Adapter (DRA).
We conduct extensive experiments to show the efficacy of our model.
arXiv Detail & Related papers (2023-01-06T08:42:05Z)
- Prompt Tuning for Parameter-efficient Medical Image Segmentation [79.09285179181225]
We propose and investigate several contributions to achieve a parameter-efficient but effective adaptation for semantic segmentation on two medical imaging datasets.
We pre-train this architecture with a dedicated dense self-supervision scheme based on assignments to online generated prototypes.
We demonstrate that the resulting neural network model is able to attenuate the gap between fully fine-tuned and parameter-efficiently adapted models.
arXiv Detail & Related papers (2022-11-16T21:55:05Z)
- Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on each of these tasks, on four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z)
- Parameter-Efficient Transfer from Sequential Behaviors for User Modeling and Recommendation [111.44445634272235]
In this paper, we develop a parameter-efficient transfer learning architecture termed PeterRec.
PeterRec keeps the pre-trained parameters unaltered during fine-tuning by injecting a series of re-learned neural networks.
We perform extensive experimental ablation to show the effectiveness of the learned user representation in five downstream tasks.
arXiv Detail & Related papers (2020-01-13T14:09:54Z)
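Several of the entries above (FibecFed and SLoRA) build on LoRA-style low-rank updates to keep fine-tuning parameter-efficient. The following is a minimal, generic sketch of that idea in PyTorch, not code from either paper; the rank and scaling values are illustrative assumptions.

```python
# Minimal sketch of low-rank adaptation (LoRA): freeze a linear layer and train
# only a low-rank additive update, W_eff = W + (alpha / r) * B @ A.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A frozen linear layer augmented with a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # only the low-rank factors are trained
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen projection plus the scaled low-rank correction.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling
```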
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.