FedDisco: Federated Learning with Discrepancy-Aware Collaboration
- URL: http://arxiv.org/abs/2305.19229v1
- Date: Tue, 30 May 2023 17:20:51 GMT
- Title: FedDisco: Federated Learning with Discrepancy-Aware Collaboration
- Authors: Rui Ye, Mingkai Xu, Jianyu Wang, Chenxin Xu, Siheng Chen, Yanfeng Wang
- Abstract summary: We propose a novel aggregation method, Federated Learning with Discrepancy-aware Collaboration (FedDisco).
Our FedDisco outperforms several state-of-the-art methods and can be easily incorporated with many existing methods to further enhance the performance.
- Score: 41.828780724903744
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work considers the category distribution heterogeneity in federated
learning. This issue is due to biased labeling preferences at multiple clients
and is a typical setting of data heterogeneity. To alleviate this issue, most
previous works consider either regularizing local models or fine-tuning the
global model, while they ignore the adjustment of aggregation weights and
simply assign weights based on the dataset size. However, based on our
empirical observations and theoretical analysis, we find that weighting by
dataset size alone is not optimal and that the discrepancy between local and
global category distributions can be a beneficial and complementary indicator
for determining aggregation weights. We thus propose a novel aggregation
method, Federated Learning with Discrepancy-aware Collaboration (FedDisco),
whose aggregation weights incorporate both the dataset size and the
discrepancy value and yield a tighter theoretical upper bound on the
optimization error. FedDisco also promotes privacy preservation,
communication and computation efficiency, and modularity. Extensive
experiments show that our FedDisco
outperforms several state-of-the-art methods and can be easily incorporated
with many existing methods to further enhance the performance. Our code will be
available at https://github.com/MediaBrain-SJTU/FedDisco.
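To make the aggregation idea concrete, below is a minimal NumPy sketch of discrepancy-aware weighting. It is an illustration, not the paper's exact specification: the clipped weight form `max(n_k - a*d_k + b, 0)`, the L2 discrepancy metric, the hyperparameter values `a` and `b`, and the helper names `disco_weights` and `aggregate` are all assumptions chosen to reflect the abstract's description (weights that depend on both dataset size and the local-to-global category-distribution discrepancy).

```python
import numpy as np

def disco_weights(sizes, local_dists, global_dist, a=0.5, b=0.1):
    """Discrepancy-aware aggregation weights (illustrative sketch).

    sizes:       samples per client, shape (K,)
    local_dists: per-client category distributions, shape (K, C)
    global_dist: reference global category distribution, shape (C,)
    a, b:        balancing hyperparameters (illustrative values)
    """
    sizes = np.asarray(sizes, dtype=float)
    n = sizes / sizes.sum()  # FedAvg would stop here: weight = relative dataset size
    # d_k: distance between client k's category distribution and the global one
    d = np.linalg.norm(np.asarray(local_dists) - np.asarray(global_dist), axis=1)
    # Penalize clients whose label distribution deviates from the global one,
    # clipping at zero so weights stay non-negative
    w = np.maximum(n - a * d + b, 0.0)
    return w / w.sum()

def aggregate(client_params, weights):
    """Weighted average of per-client parameter lists (lists of ndarrays)."""
    return [
        sum(w * params[i] for w, params in zip(weights, client_params))
        for i in range(len(client_params[0]))
    ]
```

Setting `a = 0` and `b = 0` recovers plain size-proportional (FedAvg-style) weighting, which is the baseline the abstract argues is suboptimal under category distribution heterogeneity.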
Related papers
- FedLF: Adaptive Logit Adjustment and Feature Optimization in Federated Long-Tailed Learning [5.23984567704876]
Federated learning offers a paradigm for privacy-preserving distributed machine learning.
Traditional approaches fail to address the class-wise bias that arises in globally long-tailed data.
The new method FedLF introduces three modifications in the local training phase: adaptive logit adjustment, continuous class-centered optimization, and feature decorrelation.
arXiv Detail & Related papers (2024-09-18T16:25:29Z)
- Beyond Similarity: Personalized Federated Recommendation with Composite Aggregation [22.359428566363945]
Federated recommendation aims to collect global knowledge by aggregating local models from a massive number of devices.
Current methods mainly leverage aggregation functions invented by the federated vision community to aggregate parameters from similar clients.
We propose a personalized Federated recommendation model with Composite Aggregation (FedCA).
arXiv Detail & Related papers (2024-06-06T10:17:52Z)
- Federated Learning under Partially Class-Disjoint Data via Manifold Reshaping [64.58402571292723]
We propose a manifold reshaping approach called FedMR to calibrate the feature space of local training.
We conduct extensive experiments on a range of datasets to demonstrate that our FedMR achieves much higher accuracy and better communication efficiency.
arXiv Detail & Related papers (2024-05-29T10:56:13Z)
- Exploiting Label Skews in Federated Learning with Model Concatenation [39.38427550571378]
Federated Learning (FL) has emerged as a promising solution to perform deep learning on different data owners without exchanging raw data.
Among the different non-IID types, label skew is common and challenging in image classification and other tasks.
We propose FedConcat, a simple and effective approach that concatenates the local models as the base of the global model.
arXiv Detail & Related papers (2023-12-11T10:44:52Z)
- Tackling Computational Heterogeneity in FL: A Few Theoretical Insights [68.8204255655161]
We introduce and analyze a novel aggregation framework that allows for formalizing and tackling computationally heterogeneous data.
The proposed aggregation algorithms are extensively analyzed from both theoretical and experimental perspectives.
arXiv Detail & Related papers (2023-07-12T16:28:21Z)
- FedGraph: an Aggregation Method from Graph Perspective [3.1236343261481165]
Federated Learning (FL) has become an effective solution for collaboratively training models while preserving each client's privacy.
FedAvg is the standard aggregation algorithm; it uses each client's proportion of the total dataset size as its aggregation weight.
We propose FedGraph, which can adjust the aggregation weights adaptively according to the training condition of local models.
arXiv Detail & Related papers (2022-10-06T07:48:50Z)
- Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning [112.69497636932955]
Federated learning aims to train models across different clients without sharing data, for privacy considerations.
We study how data heterogeneity affects the representations of the globally aggregated models.
We propose FedDecorr, a novel method that can effectively mitigate dimensional collapse in federated learning.
arXiv Detail & Related papers (2022-10-01T09:04:17Z)
- Rethinking Data Heterogeneity in Federated Learning: Introducing a New Notion and Standard Benchmarks [65.34113135080105]
We show that data heterogeneity in current setups is not necessarily a problem and can in fact be beneficial for the FL participants.
Our observations are intuitive.
Our code is available at https://github.com/MMorafah/FL-SC-NIID.
arXiv Detail & Related papers (2022-09-30T17:15:19Z)
- FedDRL: Deep Reinforcement Learning-based Adaptive Aggregation for Non-IID Data in Federated Learning [4.02923738318937]
Uneven distribution of local data across different edge devices (clients) results in slow model training and accuracy reduction in federated learning.
This work introduces a novel non-IID type encountered in real-world datasets, namely cluster-skew.
We propose FedDRL, a novel FL model that employs deep reinforcement learning to adaptively determine each client's impact factor.
arXiv Detail & Related papers (2022-08-04T04:24:16Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)