Federated Few-Shot Learning with Adversarial Learning
- URL: http://arxiv.org/abs/2104.00365v1
- Date: Thu, 1 Apr 2021 09:44:57 GMT
- Title: Federated Few-Shot Learning with Adversarial Learning
- Authors: Chenyou Fan and Jianwei Huang
- Abstract summary: We propose a federated few-shot learning (FedFSL) framework to learn a few-shot classification model that can classify unseen data classes with only a few labeled samples.
We show our approaches outperform baselines by more than 10% on vision tasks and 5% on language tasks.
- Score: 30.905239262227
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We are interested in developing a unified machine learning model over many
mobile devices for practical learning tasks, where each device has only a very
small amount of training data. This is a commonly encountered situation in mobile
computing, where data is scarce and distributed while the tasks are distinct.
In this paper, we propose a federated few-shot learning (FedFSL) framework to
learn a few-shot classification model that can classify unseen data classes
with only a few labeled samples. With the federated learning strategy, FedFSL
can utilize many data sources while keeping data privacy and communication
efficiency. There are two technical challenges: 1) directly using the existing
federated learning approach may lead to misaligned decision boundaries produced
by client models, and 2) constraining the decision boundaries to be similar
over clients would overfit to training tasks but not adapt well to unseen
tasks. To address these issues, we propose to regularize local updates by
minimizing the divergence of client models. We also formulate the training in
an adversarial fashion and optimize the client models to produce a
discriminative feature space that can better represent unseen data samples. We
illustrate the intuitions behind our design and show experimentally that our
approaches outperform baselines by more than 10% on vision tasks and 5% on
language tasks.
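The two ideas above translate naturally into an ordinary federated local-update loop. The following is a minimal PyTorch sketch, not the paper's code: the encoder/head split, the KL form of the divergence penalty, the binary feature critic, and the weights lam_div and lam_adv are all illustrative assumptions.

```python
# Minimal sketch of a divergence-regularized, adversarial local update.
# Assumptions (not from the paper): client/global models expose
# .encoder and .head, and `critic` maps a feature vector to one logit.
import torch
import torch.nn.functional as F

def local_update(client, global_model, critic, loader,
                 lam_div=0.1, lam_adv=0.1, lr=1e-3):
    opt = torch.optim.SGD(client.parameters(), lr=lr)
    opt_c = torch.optim.SGD(critic.parameters(), lr=lr)
    global_model.eval()
    for x, y in loader:
        feats = client.encoder(x)
        logits = client.head(feats)
        # 1) Few-shot classification loss on the local episode.
        loss_cls = F.cross_entropy(logits, y)

        # 2) Divergence penalty: keep the client's decision boundary
        #    close to the global model's (KL between softmax outputs).
        with torch.no_grad():
            g_feats = global_model.encoder(x)
            g_logits = global_model.head(g_feats)
        loss_div = F.kl_div(F.log_softmax(logits, dim=1),
                            F.softmax(g_logits, dim=1),
                            reduction="batchmean")

        # 3) Adversarial shaping: the critic learns to tell client
        #    features from global features ...
        ones, zeros = torch.ones(len(x), 1), torch.zeros(len(x), 1)
        loss_critic = (
            F.binary_cross_entropy_with_logits(critic(feats.detach()), ones)
            + F.binary_cross_entropy_with_logits(critic(g_feats), zeros))
        opt_c.zero_grad(); loss_critic.backward(); opt_c.step()

        # ... while the client is trained to fool it, pushing local
        # features toward a shared, discriminative space.
        loss_adv = F.binary_cross_entropy_with_logits(critic(feats), zeros)

        loss = loss_cls + lam_div * loss_div + lam_adv * loss_adv
        opt.zero_grad(); loss.backward(); opt.step()
```

After a few such local epochs, a FedAvg-style server would aggregate the client weights; only parameters cross the network, never raw samples, which is what preserves privacy and communication efficiency.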
Related papers
- Update Selective Parameters: Federated Machine Unlearning Based on Model Explanation [46.86767774669831]
We propose a more effective and efficient federated unlearning scheme based on the concept of model explanation.
We select the most influential channels within an already-trained model for the data that need to be unlearned.
arXiv Detail & Related papers (2024-06-18T11:43:20Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients; a sketch of such a per-client rule appears below.
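For intuition, the per-client adaptation this summary describes can be pictured as each client maintaining its own AMSGrad state between communication rounds. This is a rough sketch under my own naming; the actual FedLALR schedule and its server-side handling are in the paper.

```python
# Rough sketch of a client-local AMSGrad step (per-client adaptive
# learning rates); names and the exact scheduling are assumptions.
import torch

class ClientAMSGrad:
    def __init__(self, params, lr=1e-2, b1=0.9, b2=0.99, eps=1e-8):
        self.params = list(params)
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m = [torch.zeros_like(p) for p in self.params]
        self.v = [torch.zeros_like(p) for p in self.params]
        self.v_hat = [torch.zeros_like(p) for p in self.params]  # running max of v

    @torch.no_grad()
    def step(self):
        for p, m, v, vh in zip(self.params, self.m, self.v, self.v_hat):
            if p.grad is None:
                continue
            g = p.grad
            m.mul_(self.b1).add_(g, alpha=1 - self.b1)
            v.mul_(self.b2).addcmul_(g, g, value=1 - self.b2)
            torch.maximum(vh, v, out=vh)  # AMSGrad max-clipping
            # The effective per-coordinate rate lr / sqrt(v_hat) is kept
            # on-device, so it adapts to each client's own (possibly
            # non-IID) gradient statistics.
            p.addcdiv_(m, vh.sqrt().add(self.eps), value=-self.lr)
```

Usage mirrors any optimizer: construct it over `model.parameters()` and call `step()` after `loss.backward()`; because m, v, and v_hat never leave the device, the schedule stays client-specific between rounds.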
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- FedYolo: Augmenting Federated Learning with Pretrained Transformers [61.56476056444933]
In this work, we investigate pretrained transformers (PTF) to achieve on-device learning goals.
We show that larger scale shrinks the accuracy gaps between alternative approaches and improves robustness.
Finally, it enables clients to solve multiple unrelated tasks simultaneously using a single PTF.
arXiv Detail & Related papers (2023-07-10T21:08:52Z)
- Federated Few-shot Learning [40.08636228692432]
Federated Learning (FL) enables multiple clients to collaboratively learn a machine learning model without exchanging their own local data.
In practice, certain clients may only contain a limited number of samples (i.e., few-shot samples).
We propose a novel federated few-shot learning framework with two separately updated models and dedicated training strategies.
arXiv Detail & Related papers (2023-06-17T02:25:56Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss; a rough sketch of such a loss follows below.
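As a loose illustration of the distillation signal (the paper's exact loss and communication protocol may differ), an InfoNCE-style contrastive loss over exchanged embeddings could look like this; `local_z` and `shared_z` are assumed to be embeddings of the same public anchor batch produced by this client and by its peers, respectively.

```python
# Hedged sketch of contrastive online distillation on shared
# representations; shapes and the consensus source are assumptions.
import torch
import torch.nn.functional as F

def contrastive_distill_loss(local_z, shared_z, temperature=0.1):
    """InfoNCE-style loss: row i of local_z should match row i of
    shared_z (same anchor, peer-produced embedding) and mismatch
    every other row."""
    local_z = F.normalize(local_z, dim=1)
    shared_z = F.normalize(shared_z, dim=1)
    logits = local_z @ shared_z.t() / temperature  # (B, B) similarities
    targets = torch.arange(len(local_z))           # positives on the diagonal
    return F.cross_entropy(logits, targets)
```

Only the (batch, dim) embedding matrices would cross the network in this scheme, not raw data and not full model weights.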
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- Federated Pruning: Improving Neural Network Efficiency with Federated Learning [24.36174705715827]
We propose Federated Pruning to train a reduced model under the federated setting.
We explore different pruning schemes and provide empirical evidence of the effectiveness of our methods.
arXiv Detail & Related papers (2022-09-14T00:48:37Z)
- Federated Learning of Neural ODE Models with Different Iteration Counts [0.9444784653236158]
Federated learning is a distributed machine learning approach in which clients train models locally on their own data and upload only the trained models to a server, so the results are shared without any raw data leaving the clients.
In this paper, we utilize Neural ODE based models for federated learning.
We show that our approach can reduce communication size by up to 92.4% compared with a baseline ResNet model on the CIFAR-10 dataset; the sketch below illustrates why reusing one block's weights across ODE iterations shrinks what must be uploaded.
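A minimal sketch of the communication argument, under my own assumptions about the architecture: a Neural-ODE-style block reuses one residual function for all solver steps, so a client only ever uploads that single block's parameters, whereas a ResNet would upload one distinct block per step.

```python
# Illustrative ODE-style block (not the paper's exact model): one set
# of weights is applied n_steps times via fixed-step Euler integration.
import torch.nn as nn

class ODEBlock(nn.Module):
    def __init__(self, dim, n_steps=8):
        super().__init__()
        self.f = nn.Sequential(                  # single shared residual fn
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1))
        self.n_steps = n_steps                   # clients may differ here

    def forward(self, x):
        # Euler integration of dx/dt = f(x) over t in [0, 1]: the same
        # parameters stand in for n_steps distinct ResNet blocks.
        for _ in range(self.n_steps):
            x = x + self.f(x) / self.n_steps
        return x
```

Clients that pick different n_steps still share identical parameter tensor shapes, so their uploads remain directly aggregable on the server.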
arXiv Detail & Related papers (2022-08-19T17:57:32Z)
- Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions; a sketch of the representation/head split follows below.
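A condensed sketch of that split (shared encoder plus personal heads, FedRep-style); `update_head` and `update_encoder` are hypothetical stand-ins for local SGD on each part, and uniform FedAvg weighting over a float-parameter encoder is assumed.

```python
# Sketch of one round with a shared representation and personal heads.
import torch

def fedavg(states):
    # Uniform parameter average of a list of state_dicts (assumes
    # all entries are float tensors).
    return {k: torch.stack([s[k].float() for s in states]).mean(0)
            for k in states[0]}

def training_round(server_encoder, clients, head_steps=10):
    states = []
    for c in clients:
        c.encoder.load_state_dict(server_encoder.state_dict())
        for _ in range(head_steps):
            c.update_head()       # many cheap updates of the small local head
        c.update_encoder()        # one update of the shared representation
        states.append(c.encoder.state_dict())
    # Only the representation is aggregated; heads never leave clients,
    # which is what personalizes the model.
    server_encoder.load_state_dict(fedavg(states))
```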
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
- Federated Generative Adversarial Learning [13.543039993168735]
Generative adversarial networks (GANs) have achieved advances in various real-world applications.
However, GANs suffer from data limitations in real-world cases.
We propose a novel generative learning scheme utilizing a federated learning framework; a rough sketch of one such round follows this entry.
arXiv Detail & Related papers (2020-05-07T23:06:49Z)
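To make the scheme concrete, one plausible reading is FedAvg applied to both GAN networks; this sketch is mine, with `train_D_step`/`train_G_step` as hypothetical client-side methods, and is not necessarily the paper's synchronization strategy.

```python
# Hedged sketch: FedAvg synchronization of generator and discriminator.
import torch

def average(states):
    # Uniform parameter average of a list of state_dicts.
    return {k: torch.stack([s[k].float() for s in states]).mean(0)
            for k in states[0]}

def fed_gan_round(server_G, server_D, clients, local_iters=100):
    g_states, d_states = [], []
    for c in clients:
        c.G.load_state_dict(server_G.state_dict())
        c.D.load_state_dict(server_D.state_dict())
        for _ in range(local_iters):
            c.train_D_step()   # discriminator update on local real data
            c.train_G_step()   # generator update to fool the local D
        g_states.append(c.G.state_dict())
        d_states.append(c.D.state_dict())
    server_G.load_state_dict(average(g_states))
    server_D.load_state_dict(average(d_states))
```

Real data never leaves a client in this scheme; each device's discriminator sees only local samples, and only weights are averaged.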