Joint Local Relational Augmentation and Global Nash Equilibrium for
Federated Learning with Non-IID Data
- URL: http://arxiv.org/abs/2308.11646v1
- Date: Thu, 17 Aug 2023 06:17:51 GMT
- Title: Joint Local Relational Augmentation and Global Nash Equilibrium for
Federated Learning with Non-IID Data
- Authors: Xinting Liao, Chaochao Chen, Weiming Liu, Pengyang Zhou, Huabin Zhu,
Shuheng Shen, Weiqiang Wang, Mengling Hu, Yanchao Tan, and Xiaolin Zheng
- Abstract summary: Federated learning (FL) is a distributed machine learning paradigm that requires collaboration between a server and a set of clients with decentralized data.
We propose FedRANE, which consists of two main modules, local relational augmentation (LRA) and global Nash equilibrium (GNE), to resolve intra- and inter-client inconsistency simultaneously.
- Score: 36.426794300280854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a distributed machine learning paradigm that requires collaboration between a server and a set of clients with decentralized data. To make FL effective in real-world applications, existing work focuses on improving the modeling of decentralized data with non-independent and identically distributed (non-IID) characteristics. In non-IID settings there is intra-client inconsistency, which stems from imbalanced data modeling within each client, and inter-client inconsistency among heterogeneous client distributions; together they not only hinder sufficient representation of minority data but also introduce discrepant model deviations. Previous work, however, does not tackle these two coupled inconsistencies together. In this work, we propose FedRANE, which consists of two main modules, i.e., local relational augmentation (LRA) and global Nash equilibrium (GNE), to resolve intra- and inter-client inconsistency simultaneously. Specifically, on each client, LRA mines similarity relations among data samples and enhances minority sample representations with their neighbors via attentive message passing. On the server, GNE reaches an agreement among the inconsistent and discrepant model deviations uploaded by clients, encouraging the global model to update toward the global optimum without disrupting each client's optimization toward its local optimum. We conduct extensive experiments on four benchmark datasets to demonstrate the superiority of FedRANE in enhancing the performance of FL with non-IID data.
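The LRA description above lends itself to a small illustration. The sketch below shows one way attentive message passing over nearest neighbors could enhance sample representations; the choice of cosine similarity, the neighbor count k, the softmax attention, and the residual mixing are assumptions made for illustration, not details taken from the paper.

```python
# Minimal sketch of neighbor-based attentive message passing in the spirit of LRA.
# Cosine similarity, top-k neighbors, and the residual mixing are illustrative
# assumptions; the paper's exact formulation may differ.
import torch
import torch.nn.functional as F


def attentive_neighbor_augmentation(reps: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Enhance each sample representation with its k most similar neighbors.

    reps: (N, d) batch of sample representations from the local encoder.
    Returns an augmented (N, d) tensor.
    """
    normed = F.normalize(reps, dim=1)            # cosine similarity via normalized dot products
    sim = normed @ normed.t()                    # (N, N) pairwise similarities
    sim.fill_diagonal_(float("-inf"))            # exclude each sample from its own neighbor set
    topk_sim, topk_idx = sim.topk(k, dim=1)      # k nearest neighbors per sample
    attn = F.softmax(topk_sim, dim=1)            # attention weights over the neighbors
    neighbors = reps[topk_idx]                   # (N, k, d) gathered neighbor representations
    message = (attn.unsqueeze(-1) * neighbors).sum(dim=1)  # attentive aggregation
    return reps + message                        # residual mix of original and message
```

In this reading, minority samples benefit because their representations are pulled toward similar (typically majority-class-agnostic) neighbors rather than being left under-trained in isolation.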
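For the GNE module, the abstract's notion of "agreement" among discrepant client deviations suggests, at minimum, an aggregated update whose direction does not conflict with any client's own update. The sketch below is an illustrative stand-in only: it finds simplex weights via Frank-Wolfe for the minimum-norm convex combination of client deviations (an MGDA-style common-descent heuristic), which is not the paper's Nash-equilibrium formulation; the iteration count and flattening convention are assumptions.

```python
# Illustrative stand-in for a server-side "agreement" step: choose aggregation
# weights on the simplex so the combined update is the minimum-norm convex
# combination of client deviations, hence non-conflicting with each of them.
# This is NOT the paper's Nash-equilibrium derivation, only a sketch of the idea.
import numpy as np


def agree_on_update(client_deltas, iters=100):
    """client_deltas: per-client model deviations, each flattened to a 1-D array."""
    D = np.stack(client_deltas)                 # (num_clients, num_params)
    n = D.shape[0]
    w = np.full(n, 1.0 / n)                     # start from uniform averaging (FedAvg)
    G = D @ D.T                                 # Gram matrix of pairwise inner products
    for t in range(iters):                      # Frank-Wolfe on min_w ||sum_i w_i d_i||^2
        grad = G @ w                            # gradient (up to a constant factor)
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0                # simplex vertex minimizing the linearization
        gamma = 2.0 / (t + 2.0)                 # standard Frank-Wolfe step size
        w = (1.0 - gamma) * w + gamma * s
    return w @ D                                # aggregated update the clients "agree" on
```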
Related papers
- FedEP: Tailoring Attention to Heterogeneous Data Distribution with Entropy Pooling for Decentralized Federated Learning [8.576433180938004]
This paper proposes a novel DFL aggregation algorithm, Federated Entropy Pooling (FedEP).
FedEP mitigates the client drift problem by incorporating the statistical characteristics of local distributions instead of any actual data.
Experiments demonstrate that FedEP achieves faster convergence and higher test performance than state-of-the-art approaches.
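The FedEP summary only states that statistics of local distributions, rather than raw data, inform aggregation. Purely as an illustration of that general idea, and not of FedEP's actual algorithm, the sketch below weights each client's update by the Shannon entropy of its local label distribution; the softmax normalization is an assumption.

```python
# Illustration of distribution-statistics-based weighting (NOT FedEP's algorithm):
# clients with more balanced local label distributions receive slightly larger weight.
import numpy as np


def entropy_weighted_average(client_updates, client_label_counts):
    """client_updates: list of flattened parameter updates (1-D arrays).
    client_label_counts: per-class sample counts reported by each client."""
    entropies = []
    for counts in client_label_counts:
        p = np.asarray(counts, dtype=float)
        p = p / p.sum()
        p = p[p > 0]                                   # ignore empty classes
        entropies.append(-(p * np.log(p)).sum())       # Shannon entropy of the label distribution
    weights = np.exp(entropies) / np.sum(np.exp(entropies))  # softmax over entropies
    return sum(w * u for w, u in zip(weights, client_updates))
```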
arXiv Detail & Related papers (2024-10-10T07:39:15Z) - An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z) - Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD, which performs class prototype similarity distillation in a federated framework to align the local and global models.
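The logit-alignment motivation stated above can be illustrated with a generic distillation step: during local training, add a term pulling the local model's logits toward the frozen global model's logits. The temperature, loss weight, and function names below are assumptions; the paper's class-prototype similarity distillation is more specific than this sketch.

```python
# Sketch of generic local-to-global logit distillation (not FedCSD's exact loss).
import torch
import torch.nn.functional as F


def local_step_with_logit_distillation(local_model, global_model, x, y,
                                        optimizer, temperature=2.0, alpha=0.5):
    local_logits = local_model(x)
    with torch.no_grad():                      # the global model serves only as a teacher
        global_logits = global_model(x)
    ce = F.cross_entropy(local_logits, y)      # standard supervised loss
    kd = F.kl_div(                             # distillation toward the global model's logits
        F.log_softmax(local_logits / temperature, dim=1),
        F.softmax(global_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    loss = ce + alpha * kd
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```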
arXiv Detail & Related papers (2023-08-20T04:41:01Z) - Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating the results of local training.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z) - Federated Learning for Semantic Parsing: Task Formulation, Evaluation
Setup, New Algorithms [29.636944156801327]
Multiple clients collaboratively train one global model without sharing their semantic parsing data.
Lorar adjusts each client's contribution to the global model update based on its training loss reduction during each round.
Clients with smaller datasets enjoy larger performance gains.
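The contribution-weighting idea in the Lorar summary can be sketched directly: weight each client's update in proportion to how much its training loss dropped during the round. Normalizing by the total reduction and clipping negative reductions to zero are assumptions made for illustration.

```python
# Minimal sketch of weighting client contributions by per-round training-loss reduction.
import numpy as np


def loss_reduction_weighted_update(client_updates, losses_before, losses_after):
    """client_updates: list of flattened local updates (1-D arrays).
    losses_before / losses_after: each client's training loss at round start / end."""
    reductions = np.clip(np.asarray(losses_before) - np.asarray(losses_after), 0.0, None)
    if reductions.sum() == 0:                       # fall back to plain averaging
        weights = np.full(len(client_updates), 1.0 / len(client_updates))
    else:
        weights = reductions / reductions.sum()     # contribution proportional to loss reduction
    return sum(w * u for w, u in zip(weights, client_updates))
```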
arXiv Detail & Related papers (2023-05-26T19:25:49Z) - FedDC: Federated Learning with Non-IID Data via Local Drift Decoupling
and Correction [48.85303253333453]
Federated learning (FL) allows multiple clients to collectively train a high-performance global model without sharing their private data.
We propose a novel federated learning algorithm with local drift decoupling and correction (FedDC).
Our FedDC only introduces lightweight modifications in the local training phase, in which each client utilizes an auxiliary local drift variable to track the gap between the local model parameters and the global model parameters.
Experimental results and analysis demonstrate that FedDC yields faster convergence and better performance on various image classification tasks.
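The drift-variable bookkeeping mentioned in the FedDC summary can be sketched as follows: each client keeps an auxiliary variable that accumulates the gap between its local parameters and the global parameters each round, and reports a drift-corrected model. FedDC's full local objective and correction terms are richer than this simplified illustration; the class and method names are assumptions.

```python
# Simplified sketch of drift tracking on the client side (not FedDC's full objective).
import numpy as np


class DriftTrackingClient:
    def __init__(self, num_params: int):
        self.drift = np.zeros(num_params)           # auxiliary local drift variable

    def train_round(self, global_params: np.ndarray, local_train_fn) -> np.ndarray:
        # local_train_fn stands in for the client's usual local optimization,
        # started from the current global parameters.
        local_params = local_train_fn(global_params.copy())
        self.drift += local_params - global_params  # accumulate the local-vs-global gap
        return local_params + self.drift            # upload a drift-corrected model
```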
arXiv Detail & Related papers (2022-03-22T14:06:26Z) - Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z) - Toward Understanding the Influence of Individual Clients in Federated
Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, which quantifies each client's influence over the model parameters, and propose an effective and efficient method to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)