LINDT: Tackling Negative Federated Learning with Local Adaptation
- URL: http://arxiv.org/abs/2011.11160v1
- Date: Mon, 23 Nov 2020 01:31:18 GMT
- Title: LINDT: Tackling Negative Federated Learning with Local Adaptation
- Authors: Hong Lin, Lidan Shou, Ke Chen, Gang Chen, Sai Wu
- Abstract summary: We propose a novel framework called LINDT for tackling NFL at run time.
We introduce a metric for detecting NFL from the server.
Experiment results show that the proposed approach can significantly improve the performance of FL on local data.
- Score: 18.33409148798824
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a promising distributed learning paradigm, which
allows a number of data owners (also called clients) to collaboratively learn a
shared model without disclosing each client's data. However, FL may fail to
proceed properly, amid a state that we call negative federated learning (NFL).
This paper addresses the problem of negative federated learning. We formulate a
rigorous definition of NFL and analyze its essential cause. We propose a novel
framework called LINDT for tackling NFL at run time. The framework can
potentially work with any neural-network-based FL systems for NFL detection and
recovery. Specifically, we introduce a metric for detecting NFL from the
server. When NFL recovery is triggered, the framework adapts the
federated model to each client's local data by learning a Layer-wise
Intertwined Dual-model. Experimental results show that the proposed approach can
significantly improve the performance of FL on local data in various NFL
scenarios.
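The abstract describes the recovery mechanism only at a high level. Below is a minimal, illustrative sketch of how a layer-wise dual model might blend frozen federated weights with locally adapted ones through a per-layer mixing gate; the gate `alpha` and the blending rule are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative sketch of a layer-wise intertwined dual-model (NOT the
# paper's exact formulation): each layer blends frozen federated weights
# with locally adapted weights via a mixing gate `alpha` (assumed here).
import numpy as np

rng = np.random.default_rng(0)

class IntertwinedLayer:
    def __init__(self, fed_weight: np.ndarray):
        self.w_fed = fed_weight            # shared federated weights (frozen)
        self.w_loc = fed_weight.copy()     # local weights, adapted on-device
        self.alpha = 0.5                   # mixing gate, tuned locally

    def forward(self, x: np.ndarray) -> np.ndarray:
        # Blend the two models layer by layer instead of picking one globally.
        return self.alpha * (x @ self.w_fed) + (1.0 - self.alpha) * (x @ self.w_loc)

# Toy usage: a two-layer network built from federated weights.
layers = [IntertwinedLayer(rng.normal(size=(8, 16))),
          IntertwinedLayer(rng.normal(size=(16, 4)))]
x = rng.normal(size=(2, 8))
for layer in layers:
    x = np.maximum(layer.forward(x), 0.0)  # ReLU between layers
print(x.shape)  # (2, 4)
```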
Related papers
- FL-GUARD: A Holistic Framework for Run-Time Detection and Recovery of
Negative Federated Learning [20.681802937080523]
Federated learning (FL) is a promising approach for learning a model from data distributed on massive clients without exposing data privacy.
FL may fail to function appropriately when the federation is not ideal, amid an unhealthy state called Negative Federated Learning (NFL).
This paper introduces FL-GUARD, a holistic framework that can be employed on any FL system for tackling NFL in a run-time paradigm.
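As a rough illustration of run-time NFL detection (this summary does not spell out FL-GUARD's actual metric, so the rule below is an assumption), one can flag NFL when the federated model's average accuracy gain over each client's locally trained baseline turns negative:

```python
# Hedged sketch: declare NFL when the federated model's per-client
# accuracy gain over local baselines is negative on average.
# FL-GUARD's real detection metric may differ.
import numpy as np

def detect_nfl(fed_acc: np.ndarray, local_acc: np.ndarray, tol: float = 0.0) -> bool:
    """fed_acc / local_acc: per-client accuracy of the federated model
    and of each client's own locally trained baseline."""
    gain = fed_acc - local_acc
    return gain.mean() < tol               # negative average gain => NFL

fed = np.array([0.61, 0.58, 0.64])
loc = np.array([0.70, 0.66, 0.69])
print(detect_nfl(fed, loc))  # True: the federation hurts clients on average
```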
arXiv Detail & Related papers (2024-03-07T01:52:05Z)
- PFL-GAN: When Client Heterogeneity Meets Generative Models in
Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation scheme.
Empirical results from rigorous experimentation on several well-known datasets demonstrate the effectiveness of PFL-GAN.
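A minimal sketch of similarity-weighted aggregation; plain weight vectors stand in for the GAN models PFL-GAN actually operates on, and the softmax weighting is an assumption:

```python
# Similarity-weighted aggregation sketch: weight every client's model by
# its similarity to the target client, then average. Vectors stand in
# for models; the softmax weighting is illustrative.
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def personalized_aggregate(models: list[np.ndarray], i: int) -> np.ndarray:
    sims = np.array([cosine(models[i], m) for m in models])
    weights = np.exp(sims) / np.exp(sims).sum()  # softmax over similarities
    return sum(w * m for w, m in zip(weights, models))

models = [np.random.default_rng(s).normal(size=10) for s in range(4)]
print(personalized_aggregate(models, i=0)[:3])
```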
arXiv Detail & Related papers (2023-08-23T22:38:35Z)
- Federated Learning of Shareable Bases for Personalization-Friendly Image
Classification [54.72892987840267]
FedBasis learns a small set of shareable "basis" models, which can be linearly combined to form personalized models for clients.
Specifically for a new client, only a small set of combination coefficients, not the model weights, needs to be learned.
To demonstrate the effectiveness and applicability of FedBasis, we also present a more practical PFL testbed for image classification.
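The basis-combination idea lends itself to a short sketch; the shapes and coefficient values below are illustrative only:

```python
# FedBasis-style personalization sketch: a client's model is a linear
# combination of a few shared basis models, so a new client fits only
# the combination coefficients. Shapes and values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
num_bases, dim = 3, 10
bases = rng.normal(size=(num_bases, dim))  # shared, frozen basis models

def personalize(coeffs: np.ndarray) -> np.ndarray:
    # Client-specific model = coefficient-weighted sum of the bases.
    return coeffs @ bases

# A new client learns only `num_bases` coefficients, not `dim` weights.
coeffs = np.array([0.7, 0.2, 0.1])
print(personalize(coeffs).shape)  # (10,)
```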
arXiv Detail & Related papers (2023-04-16T20:19:18Z)
- Client Selection for Generalization in Accelerated Federated Learning: A
Multi-Armed Bandit Approach [20.300740276237523]
Federated learning (FL) is an emerging machine learning (ML) paradigm used to train models across multiple nodes (i.e., clients) holding local data sets.
We develop a novel algorithm to achieve this goal, dubbed Bandit Scheduling for FL (BSFL).
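As a hedged illustration of bandit-style client scheduling (BSFL's actual reward and confidence terms are more involved), a UCB selection rule might look like:

```python
# UCB-style client selection sketch: score each client by observed mean
# utility plus an exploration bonus for rarely sampled clients, then
# pick the top-k for this round. The constant `c` is an assumption.
import numpy as np

def ucb_select(mean_reward: np.ndarray, counts: np.ndarray,
               t: int, k: int, c: float = 1.0) -> np.ndarray:
    # Never-sampled clients get +inf so each is tried at least once.
    bonus = np.where(counts > 0,
                     c * np.sqrt(np.log(t + 1) / np.maximum(counts, 1)),
                     np.inf)
    scores = mean_reward + bonus
    return np.argsort(scores)[-k:]  # indices of the top-k clients

mean_reward = np.array([0.3, 0.5, 0.0, 0.4])
counts = np.array([2, 3, 0, 1])
print(ucb_select(mean_reward, counts, t=6, k=2))
```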
arXiv Detail & Related papers (2023-03-18T09:45:58Z)
- FLamby: Datasets and Benchmarks for Cross-Silo Federated Learning in
Realistic Healthcare Settings [51.09574369310246]
Federated Learning (FL) is a novel approach enabling several clients holding sensitive data to collaboratively train machine learning models.
We propose a novel cross-silo dataset suite focused on healthcare, FLamby, to bridge the gap between theory and practice of cross-silo FL.
Our flexible and modular suite allows researchers to easily download datasets, reproduce results and re-use the different components for their research.
arXiv Detail & Related papers (2022-10-10T12:17:30Z)
- Federated Learning from Only Unlabeled Data with
Class-Conditional-Sharing Clients [98.22390453672499]
Supervised federated learning (FL) enables multiple clients to share the trained model without sharing their labeled data.
We propose federation of unsupervised learning (FedUL), where the unlabeled data are transformed into surrogate labeled data for each of the clients.
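The surrogate-label construction can be illustrated directly; the prior-based recovery of the true classifier that FedUL performs afterwards is omitted here:

```python
# FedUL-style surrogate labels sketch: a client holds several unlabeled
# sets with different (known) class priors; the index of the set a
# sample came from serves as its surrogate label.
import numpy as np

def to_surrogate(unlabeled_sets: list[np.ndarray]):
    xs = np.concatenate(unlabeled_sets)
    ys = np.concatenate([np.full(len(s), i)
                         for i, s in enumerate(unlabeled_sets)])
    return xs, ys  # ordinary supervised data for standard FL training

sets = [np.random.default_rng(i).normal(size=(4, 2)) for i in range(3)]
xs, ys = to_surrogate(sets)
print(xs.shape, ys)  # (12, 2) [0 0 0 0 1 1 1 1 2 2 2 2]
# FedUL then recovers the true classifier from the sets' class priors
# (not shown).
```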
arXiv Detail & Related papers (2022-04-07T09:12:00Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local
Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
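FedReg's own regularizer (built on generated pseudo data) is not reproduced here; the FedProx-style proximal term below only illustrates the generic idea of penalizing drift from the global model during local training:

```python
# Generic forgetting-alleviation sketch (FedProx-style proximal term,
# NOT FedReg's actual method): the penalty mu * (w_local - w_global)
# pulls local weights back toward the global model during local steps.
import numpy as np

def local_step(w_local: np.ndarray, w_global: np.ndarray,
               grad: np.ndarray, lr: float = 0.1, mu: float = 0.01) -> np.ndarray:
    return w_local - lr * (grad + mu * (w_local - w_global))

w_g = np.zeros(5)
w_l = np.ones(5)
print(local_step(w_l, w_g, grad=np.full(5, 0.2)))
```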
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Inference-Time Personalized Federated Learning [17.60724466773559]
In Inference-Time PFL (IT-PFL), a model trained on a set of clients must later be evaluated on novel unlabeled clients at inference time.
We propose a novel approach to this problem, IT-PFL-HN, based on a hypernetwork module and an encoder module.
We find that IT-PFL-HN generalizes better than current FL and PFL methods, especially when the novel client has a large domain shift.
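A rough sketch of the pipeline, with linear stand-ins for the encoder and hypernetwork; all shapes and the mean-pooled client embedding are assumptions:

```python
# IT-PFL-HN pipeline sketch: an encoder embeds a novel client's
# unlabeled data, and a hypernetwork maps that embedding to the weights
# of a personalized model. Linear stand-ins; shapes are illustrative.
import numpy as np

rng = np.random.default_rng(2)
enc = rng.normal(size=(5, 3))        # encoder: feature -> embedding
hyper = rng.normal(size=(3, 5 * 2))  # hypernetwork: embedding -> weights

def personalize(client_x: np.ndarray) -> np.ndarray:
    emb = client_x.mean(axis=0) @ enc  # set-level client embedding
    return (emb @ hyper).reshape(5, 2)  # personalized weight matrix

x_novel = rng.normal(size=(20, 5))   # unlabeled data of a new client
w = personalize(x_novel)
print((x_novel @ w).shape)  # (20, 2): predictions from the generated model
```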
arXiv Detail & Related papers (2021-11-16T10:57:20Z)
- Splitfed learning without client-side synchronization: Analyzing
client-side split network portion size to overall performance [4.689140226545214]
Federated Learning (FL), Split Learning (SL), and SplitFed Learning (SFL) are three recent developments in distributed machine learning.
This paper studies SFL without client-side model synchronization, a variant called Multi-head Split Learning.
Standard SFL provides only 1%-2% better accuracy than this variant on the MNIST test set.
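The split-network forward pass that underlies SL and SFL is easy to sketch; the cut-layer size below is arbitrary:

```python
# Split-network forward pass sketch: the client computes up to the cut
# layer and ships only activations; the server completes the pass.
# Layer sizes are arbitrary; raw inputs never leave the client.
import numpy as np

rng = np.random.default_rng(3)
w_client = rng.normal(size=(8, 6))  # client-side portion of the network
w_server = rng.normal(size=(6, 4))  # server-side portion

def client_forward(x: np.ndarray) -> np.ndarray:
    return np.maximum(x @ w_client, 0.0)  # activations at the cut layer

def server_forward(smashed: np.ndarray) -> np.ndarray:
    return smashed @ w_server

x = rng.normal(size=(2, 8))
print(server_forward(client_forward(x)).shape)  # (2, 4)
```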
arXiv Detail & Related papers (2021-09-19T22:57:23Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
- TiFL: A Tier-based Federated Learning System [17.74678728280232]
Federated Learning (FL) enables learning a shared model across many clients without violating the privacy requirements.
We conduct a case study to show that heterogeneity in resource and data has a significant impact on training time and model accuracy in conventional FL systems.
We propose TiFL, a Tier-based Federated Learning System, which divides clients into tiers based on their training performance and selects clients from the same tier in each training round.
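Tier-based selection can be sketched as follows; the latency bucketing and the uniform tier choice are simplifications (TiFL additionally adapts tier selection using observed accuracy):

```python
# TiFL-style selection sketch: bucket clients into tiers by observed
# training latency, then sample a round's participants from one tier so
# slow clients never straggle fast ones. Simplified uniform tier choice.
import numpy as np

def make_tiers(latencies: np.ndarray, num_tiers: int) -> list[np.ndarray]:
    order = np.argsort(latencies)  # fastest clients first
    return np.array_split(order, num_tiers)

def select_round(tiers, k, rng):
    tier = tiers[rng.integers(len(tiers))]  # TiFL adapts this choice
    return rng.choice(tier, size=min(k, len(tier)), replace=False)

rng = np.random.default_rng(4)
lat = rng.uniform(1, 10, size=12)  # measured per-client latencies
print(select_round(make_tiers(lat, 3), k=3, rng=rng))
```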
arXiv Detail & Related papers (2020-01-25T01:40:42Z)