Federated Few-shot Learning
- URL: http://arxiv.org/abs/2306.10234v3
- Date: Sun, 2 Jul 2023 14:11:00 GMT
- Title: Federated Few-shot Learning
- Authors: Song Wang, Xingbo Fu, Kaize Ding, Chen Chen, Huiyuan Chen, Jundong Li
- Abstract summary: Federated Learning (FL) enables multiple clients to collaboratively learn a machine learning model without exchanging their own local data.
In practice, certain clients may only contain a limited number of samples (i.e., few-shot samples).
We propose a novel federated few-shot learning framework with two separately updated models and dedicated training strategies.
- Score: 40.08636228692432
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) enables multiple clients to collaboratively learn a
machine learning model without exchanging their own local data. In this way,
the server can exploit the computational power of all clients and train the
model on a larger set of data samples across all clients. Although such a
mechanism has proven effective in various fields, existing works generally
assume that each client preserves sufficient data for training. In practice,
however, certain clients may only contain a limited number of samples (i.e.,
few-shot samples). For example, the available photo data taken by a specific
user with a new mobile device is relatively rare. In this scenario, existing FL
efforts typically encounter a significant performance drop on these clients.
Therefore, it is urgent to develop a few-shot model that can generalize to
clients with limited data under the FL scenario. In this paper, we refer to
this novel problem as federated few-shot learning. Nevertheless, the problem
remains challenging due to two major reasons: the global data variance among
clients (i.e., the difference in data distributions among clients) and the
local data insufficiency in each client (i.e., the lack of adequate local data
for training). To overcome these two challenges, we propose a novel federated
few-shot learning framework with two separately updated models and dedicated
training strategies to reduce the adverse impact of global data variance and
local data insufficiency. Extensive experiments on four prevalent datasets that
cover news articles and images validate the effectiveness of our framework
compared with the state-of-the-art baselines. Our code is provided at
https://github.com/SongW-SW/F2L.
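Since the abstract stays at the level of the problem setting, a toy end-to-end sketch may help fix ideas: each client trains on locally sampled N-way, K-shot episodes with a prototypical-network-style loss, and the server combines the resulting models with a FedAvg-style weighted average. This illustrates federated few-shot training in general, not the authors' F2L framework (which uses two separately updated models and dedicated training strategies); the sizes, linear encoder, and finite-difference optimizer are toy assumptions chosen to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, N_WAY, K_SHOT, N_QUERY = 8, 3, 2, 4       # toy episode sizes (hypothetical)

def sample_episode(X, y):
    """Draw an N-way, K-shot episode (support + query) from local data."""
    classes = rng.choice(np.unique(y), N_WAY, replace=False)
    sx, sy, qx, qy = [], [], [], []
    for i, c in enumerate(classes):
        idx = rng.permutation(np.where(y == c)[0])
        sx.append(X[idx[:K_SHOT]]); sy += [i] * K_SHOT
        qx.append(X[idx[K_SHOT:K_SHOT + N_QUERY]]); qy += [i] * N_QUERY
    return np.vstack(sx), np.array(sy), np.vstack(qx), np.array(qy)

def episode_loss(W, sx, sy, qx, qy):
    """Prototypical loss: classify queries by distance to class prototypes."""
    s, q = sx @ W, qx @ W                       # linear encoder as a stand-in
    protos = np.stack([s[sy == c].mean(0) for c in range(N_WAY)])
    logits = -((q[:, None, :] - protos[None]) ** 2).sum(-1)
    logits = logits - logits.max(1, keepdims=True)   # stabilized log-softmax
    logp = logits - np.log(np.exp(logits).sum(1, keepdims=True))
    return -logp[np.arange(len(qy)), qy].mean()

def local_update(W, X, y, lr=0.5, steps=3, eps=1e-4):
    """A few episodic steps; finite differences keep the toy dependency-free."""
    for _ in range(steps):
        ep = sample_episode(X, y)
        base, g = episode_loss(W, *ep), np.zeros_like(W)
        for j in np.ndindex(*W.shape):
            Wp = W.copy(); Wp[j] += eps
            g[j] = (episode_loss(Wp, *ep) - base) / eps
        W = W - lr * g
    return W

# Four clients with small, class-shifted local datasets (few-shot regime).
clients = []
for _ in range(4):
    y = np.repeat(np.arange(5), K_SHOT + N_QUERY)
    clients.append((rng.normal(size=(len(y), DIM)) + y[:, None], y))

W = rng.normal(scale=0.1, size=(DIM, DIM))
weights = np.array([len(y) for _, y in clients], float)
weights /= weights.sum()
for rnd in range(3):                            # FedAvg-style rounds
    updates = [local_update(W, X, y) for X, y in clients]
    W = sum(w * u for w, u in zip(weights, updates))
    losses = [episode_loss(W, *sample_episode(X, y)) for X, y in clients]
    print(f"round {rnd}: mean episode loss = {np.mean(losses):.3f}")
```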
Related papers
- Efficient Federated Unlearning under Plausible Deniability [1.795561427808824]
Machine unlearning removes the influence of a specific data point on a trained model by modifying the model's parameters accordingly.
Recent literature has highlighted that the contribution of a data point can be forged with some other data points in the dataset with probability close to one.
This paper introduces an efficient way to achieve federated unlearning by employing a privacy model that allows the FL server to plausibly deny a client's participation.
arXiv Detail & Related papers (2024-10-13T18:08:24Z)
- FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose FedSampling, a novel data-uniform sampling strategy for federated learning (see the sketch below).
arXiv Detail & Related papers (2023-06-25T13:38:51Z)
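The one-line summary names the goal but not the mechanism. One plausible reading, sketched below under our own assumptions, is to select clients with probability proportional to their local sample counts, so sampling is uniform over data points rather than over clients; the paper's actual method may differ, e.g., in how client sizes are estimated and protected.

```python
import numpy as np

rng = np.random.default_rng(7)
client_sizes = np.array([1200, 300, 80, 2500, 40])   # hypothetical local counts

def sample_clients_data_uniform(sizes, k):
    """Pick k clients with probability proportional to local data size,
    so each individual sample is (approximately) equally likely to be used."""
    p = sizes / sizes.sum()
    return rng.choice(len(sizes), size=k, replace=False, p=p)

print(sample_clients_data_uniform(client_sizes, k=3))
```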
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss (see the sketch below).
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
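A minimal sketch of the kind of objective this describes: an InfoNCE-style contrastive loss that pulls a client's representation of each sample toward a reference representation of the same sample (here called z_server, a stand-in for whatever the peers or server provide) and pushes it away from the other samples in the batch. The shapes, names, and temperature are our assumptions, not the paper's exact formulation.

```python
import numpy as np

def info_nce(z_client, z_server, tau=0.1):
    """Contrastive distillation: row i of z_client should match row i of
    z_server (positive) and mismatch all other rows (negatives)."""
    a = z_client / np.linalg.norm(z_client, axis=1, keepdims=True)
    b = z_server / np.linalg.norm(z_server, axis=1, keepdims=True)
    logits = a @ b.T / tau                      # cosine similarities
    logp = logits - np.log(np.exp(logits).sum(1, keepdims=True))
    return -np.diag(logp).mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(16, 32))
print(info_nce(z + 0.05 * rng.normal(size=z.shape), z))  # aligned: small loss
print(info_nce(rng.normal(size=z.shape), z))             # random: large loss
```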
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- No One Left Behind: Inclusive Federated Learning over Heterogeneous Devices [79.16481453598266]
We propose InclusiveFL, a client-inclusive federated learning method that accommodates devices with heterogeneous computing capabilities.
The core idea of InclusiveFL is to assign models of different sizes to clients with different computing capabilities.
We also propose an effective method to share knowledge among multiple local models of different sizes (see the aggregation sketch below).
arXiv Detail & Related papers (2022-02-16T13:03:27Z)
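The core idea, different model sizes for different device capabilities, can be illustrated with a simple capacity-aware aggregation rule: each client holds a prefix of the full layer stack, and the server averages each layer over exactly the clients that hold it. This is a hedged sketch of one way such aggregation can work; the paper's actual knowledge-sharing mechanism across model sizes is more involved.

```python
import numpy as np

rng = np.random.default_rng(1)
LAYERS = 6                                     # full (server-side) model depth

# Each client holds a prefix of the full model; depth reflects device capacity.
client_depths = [6, 4, 4, 2]
clients = [[rng.normal(size=(8, 8)) for _ in range(d)] for d in client_depths]

def aggregate_heterogeneous(models):
    """Average layer l over exactly those clients whose model contains it."""
    global_model = []
    for l in range(LAYERS):
        holders = [m[l] for m in models if len(m) > l]
        global_model.append(np.mean(holders, axis=0))
    return global_model

agg = aggregate_heterogeneous(clients)
print([len(m) for m in clients], "->", len(agg), "layers aggregated")
```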
- Distributed Unsupervised Visual Representation Learning with Fused Features [13.935997509072669]
Federated learning (FL) enables distributed clients to learn a shared model for prediction while keeping the training data local on each client.
We propose a federated contrastive learning framework consisting of two approaches: feature fusion and neighborhood matching.
It outperforms other methods by 11% on IID data and matches the performance of centralized learning.
arXiv Detail & Related papers (2021-11-21T08:36:31Z)
- Unifying Distillation with Personalization in Federated Learning [1.8262547855491458]
Federated learning (FL) is a decentralized privacy-preserving learning technique in which clients learn a joint collaborative model through a central aggregator without sharing their data.
In this setting, all clients learn a single common predictor (FedAvg), which does not generalize well on each client's local data due to the statistical data heterogeneity among clients.
In this paper, we address this problem with PersFL, a two-stage personalized learning algorithm.
In the first stage, PersFL finds the optimal teacher model of each client during the FL training phase. In the second stage, PersFL distills the useful knowledge from these teacher models into each client's local model (see the distillation sketch below).
arXiv Detail & Related papers (2021-05-31T17:54:29Z)
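Stage two is a standard knowledge-distillation step, so a small sketch of the usual objective may help: the temperature-scaled KL divergence between teacher and student predictive distributions (the supervised cross-entropy term usually added alongside is omitted). Toy logits; this is the generic KD loss, not necessarily PersFL's exact formulation.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)        # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) with temperature T, as in standard KD."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return (p_t * (np.log(p_t) - np.log(p_s))).sum(axis=1).mean() * T * T

rng = np.random.default_rng(0)
teacher = rng.normal(size=(5, 10))
print(distill_loss(teacher + 0.1 * rng.normal(size=teacher.shape), teacher))
print(distill_loss(rng.normal(size=teacher.shape), teacher))
```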
- Federated Few-Shot Learning with Adversarial Learning [30.905239262227]
We propose a federated few-shot learning framework that learns a classification model able to recognize unseen classes from only a few labeled samples.
We show our approaches outperform baselines by more than 10% in learning vision tasks and 5% in language tasks.
arXiv Detail & Related papers (2021-04-01T09:44:57Z)
- A Bayesian Federated Learning Framework with Online Laplace Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side (see the sketch below).
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
arXiv Detail & Related papers (2021-02-03T08:36:58Z)
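As a rough illustration of the Laplace idea: each client's posterior is approximated by a Gaussian whose mean is the local MAP estimate and whose (here diagonal) precision comes from the curvature of the local objective; multiplying these Gaussians gives a server-side aggregate whose precision is the sum of the client precisions and whose mean is the precision-weighted average of the client means. A toy sketch under our own simplifications, not the paper's online variant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-client Laplace posteriors: mean = local MAP estimate,
# precision = diagonal Hessian of the local negative log-posterior.
means = [rng.normal(size=4) for _ in range(3)]
precisions = [rng.uniform(0.5, 2.0, size=4) for _ in range(3)]

def aggregate_gaussians(means, precisions):
    """Product of diagonal Gaussians: precisions add; means combine as a
    precision-weighted average."""
    P = np.sum(precisions, axis=0)
    mu = np.sum([p * m for p, m in zip(precisions, means)], axis=0) / P
    return mu, P

mu, P = aggregate_gaussians(means, precisions)
print("global mean:", np.round(mu, 3), "\nglobal precision:", np.round(P, 3))
```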
- FedProf: Optimizing Federated Learning with Dynamic Data Profiling [9.74942069718191]
Federated Learning (FL) has shown great potential as a privacy-preserving solution to learning from decentralized data.
In practice, a large proportion of clients may hold only low-quality data that is biased, noisy, or even irrelevant.
We propose a novel approach to optimizing FL under such circumstances without breaching data privacy.
arXiv Detail & Related papers (2021-02-02T20:10:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.