Cross-Silo Prototypical Calibration for Federated Learning with Non-IID Data
- URL: http://arxiv.org/abs/2308.03457v1
- Date: Mon, 7 Aug 2023 10:25:54 GMT
- Title: Cross-Silo Prototypical Calibration for Federated Learning with Non-IID Data
- Authors: Zhuang Qi, Lei Meng, Zitan Chen, Han Hu, Hui Lin, Xiangxu Meng
- Abstract summary: Federated Learning aims to learn a global model on the server side that generalizes to all clients in a privacy-preserving manner.
To address the biases introduced by heterogeneous data distributions and missing classes, this paper presents a cross-silo prototypical calibration method (FedCSPC).
FedCSPC takes additional prototype information from the clients to learn a unified feature space on the server side.
- Score: 24.3384892417653
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning aims to learn a global model on the server side that
generalizes to all clients in a privacy-preserving manner, by leveraging the
local models from different clients. Existing solutions focus on either
regularizing the objective functions among clients or improving the aggregation
mechanism to strengthen model generalization. However, their performance is
typically limited by dataset biases, such as heterogeneous data distributions
and missing classes. To address this
issue, this paper presents a cross-silo prototypical calibration method
(FedCSPC), which takes additional prototype information from the clients to
learn a unified feature space on the server side. Specifically, FedCSPC first
employs the Data Prototypical Modeling (DPM) module to learn data patterns via
clustering to aid calibration. Subsequently, the cross-silo prototypical
calibration (CSPC) module develops an augmented contrastive learning method to
improve the robustness of the calibration, which can effectively project
cross-source features into a consistent space while maintaining clear decision
boundaries. Moreover, the CSPC module is easy to implement and plug-and-play.
Experiments on four datasets, covering performance comparison, ablation study,
in-depth analysis, and a case study, verified that FedCSPC learns consistent
features for samples of the same class across different data sources under the
guidance of the calibrated model, leading to better performance than
state-of-the-art methods. The source code has been
released at https://github.com/qizhuang-qz/FedCSPC.
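To make the two stages concrete, below is a minimal Python sketch of the pipeline the abstract describes: a DPM-style step that clusters a client's local features into prototypes, and a CSPC-style supervised contrastive loss that pulls same-class prototypes from different clients together on the server. Function names, shapes, and the choice of k-means are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch only; names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def dpm_prototypes(features, n_clusters=5):
    """Client side (DPM-style): cluster local feature vectors and share only
    the cluster centers (prototypes) instead of raw data."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(features)
    return torch.tensor(km.cluster_centers_, dtype=torch.float32)

def cspc_contrastive_loss(protos, labels, temperature=0.5):
    """Server side (CSPC-style): supervised contrastive loss over prototypes
    pooled from all clients, projecting same-class prototypes from different
    sources into a consistent region of feature space."""
    z = F.normalize(protos, dim=1)
    sim = z @ z.T / temperature
    self_mask = torch.eye(len(z), dtype=torch.bool)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    n_pos = pos.sum(dim=1)
    valid = n_pos > 0  # anchors with at least one same-class prototype
    return (-(log_prob * pos).sum(dim=1)[valid] / n_pos[valid]).mean()
```

The abstract mentions an augmented contrastive learning method; any prototype augmentation (e.g., perturbed prototype views) is omitted from this sketch.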
Related papers
- FedReMa: Improving Personalized Federated Learning via Leveraging the Most Relevant Clients [13.98392319567057]
Federated Learning (FL) is a distributed machine learning paradigm that achieves a globally robust model through decentralized computation and periodic model synthesis.
Despite their wide adoption, existing FL and personalized FL (PFL) works have yet to comprehensively address the class-imbalance issue.
We propose FedReMa, an efficient PFL algorithm that can tackle class-imbalance by utilizing an adaptive inter-client co-learning approach.
arXiv Detail & Related papers (2024-11-04T05:44:28Z)
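As a rough illustration of the "most relevant clients" idea, the sketch below ranks peers by cosine similarity of their flattened model updates and averages each client's model with its top-k peers only. The similarity criterion and top-k rule are assumptions for illustration; FedReMa's adaptive co-learning criterion may differ.

```python
# Hypothetical sketch; FedReMa's actual relevance criterion may differ.
import numpy as np

def top_k_peers(updates, k=3):
    """Rank peers for each client by cosine similarity of flattened updates."""
    U = np.stack([u / (np.linalg.norm(u) + 1e-12) for u in updates])
    sim = U @ U.T
    np.fill_diagonal(sim, -np.inf)  # a client is not its own peer
    return np.argsort(-sim, axis=1)[:, :k]

def co_learning_aggregate(models, peers, cid):
    """Average a client's model with its most relevant peers only."""
    idx = [cid, *peers[cid]]
    return np.mean([models[i] for i in idx], axis=0)
```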
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
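For contrast with the aggregation-free idea, here is a minimal sketch of the traditional aggregate-then-adapt cycle the summary describes, using a toy least-squares objective; FedAF's contribution is precisely to avoid this server-side model averaging.

```python
# Sketch of the classic aggregate-then-adapt loop (what FedAF avoids).
import numpy as np

def local_adapt(global_w, data, lr=0.1, steps=5):
    """Client: start from the aggregated global model, adapt on local data
    (a toy least-squares objective stands in for real training)."""
    X, y = data
    w = global_w.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

def fedavg_round(global_w, clients):
    """Server: average the adapted local models, weighted by dataset size."""
    locals_ = [local_adapt(global_w, d) for d in clients]
    sizes = np.array([len(d[1]) for d in clients], dtype=float)
    return np.average(np.stack(locals_), axis=0, weights=sizes)
```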
- Decoupled Federated Learning on Long-Tailed and Non-IID data with Feature Statistics [20.781607752797445]
We propose a two-stage Decoupled Federated Learning framework using Feature Statistics (DFL-FS).
In the first stage, the server estimates each client's class coverage distribution through masked local feature statistics clustering.
In the second stage, DFL-FS employs federated feature regeneration based on global feature statistics to enhance the model's adaptability to long-tailed data distributions.
arXiv Detail & Related papers (2024-03-13T09:24:59Z)
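A minimal sketch of the first stage as summarized above: clients send per-class feature statistics with absent classes masked out, and the server clusters the coverage patterns to estimate which classes each client actually holds. The concrete statistics and the clustering choice are assumptions.

```python
# Stage-one sketch under assumed statistics (per-class means + presence mask).
import numpy as np
from sklearn.cluster import KMeans

def masked_class_stats(features, labels, n_classes):
    """Client: per-class feature means, with a mask for classes it lacks."""
    means = np.zeros((n_classes, features.shape[1]))
    mask = np.zeros(n_classes, dtype=bool)
    for c in range(n_classes):
        rows = features[labels == c]
        if len(rows) > 0:
            means[c], mask[c] = rows.mean(axis=0), True
    return means, mask

def cluster_class_coverage(client_masks, n_groups=2):
    """Server: cluster clients by which classes their statistics cover."""
    signatures = np.stack(client_masks).astype(float)
    return KMeans(n_clusters=n_groups, n_init=10).fit_predict(signatures)
```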
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the non-IID data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
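A hedged sketch of the trajectory idea: project the recent global-model trajectory one step ahead (here by simple linear extrapolation, an assumption on my part) and add a proximal term that pulls each local update toward the projection, limiting client drift on non-IID data.

```python
# Illustrative only; FedPTR's actual projection rule is likely more involved.
import numpy as np

def projected_trajectory(history):
    """Linearly extrapolate the last two global models one step ahead."""
    w_prev, w_curr = history[-2], history[-1]
    return w_curr + (w_curr - w_prev)

def local_objective(w, task_loss, w_traj, lam=0.1):
    """Local loss plus a proximal pull toward the projected trajectory."""
    return task_loss(w) + lam * np.sum((w - w_traj) ** 2)
```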
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
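Consistency regularization in this setting typically means penalizing disagreement between predictions on two views of the same unlabelled target sample. The sketch below uses a weak view as the pseudo-target and a strong view as the student, which is a common pattern rather than this paper's exact recipe.

```python
# Generic consistency-regularization term (common pattern, not the exact method).
import torch
import torch.nn.functional as F

def consistency_loss(model, x_weak, x_strong):
    """KL between predictions on a weakly and a strongly augmented view."""
    with torch.no_grad():
        p_weak = F.softmax(model(x_weak), dim=1)      # pseudo-target
    log_p_strong = F.log_softmax(model(x_strong), dim=1)
    return F.kl_div(log_p_strong, p_weak, reduction="batchmean")
```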
- Universal Domain Adaptation from Foundation Models: A Baseline Study [58.51162198585434]
We make empirical studies of state-of-the-art UniDA methods using foundation models.
We introduce CLIP distillation, a parameter-free method specifically designed to distill target knowledge from CLIP models.
Although simple, our method outperforms previous approaches in most benchmark tasks.
arXiv Detail & Related papers (2023-05-18T16:28:29Z)
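The "parameter-free" distillation can be pictured as using CLIP's zero-shot class probabilities as soft targets for the task model, adding no trainable parameters. In the sketch below, clip_probs is assumed to be precomputed from CLIP image-text similarities; the exact recipe is the paper's.

```python
# Sketch assuming clip_probs were precomputed from CLIP zero-shot scores.
import torch
import torch.nn.functional as F

def clip_distillation_loss(student_logits, clip_probs, T=2.0):
    """Soft-label KL toward CLIP's zero-shot predictions; no new parameters."""
    log_p = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p, clip_probs, reduction="batchmean") * T * T
```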
- A Personalized Federated Learning Algorithm: an Application in Anomaly Detection [0.6700873164609007]
Federated Learning (FL) has recently emerged as a promising method to overcome data privacy and transmission issues.
In FL, datasets collected from different devices or sensors are used to train local models (clients), each of which shares its learning with a centralized model (server).
This paper proposes a novel Personalized FedAvg (PC-FedAvg), which controls weight communication and aggregation, augmented with a tailored learning algorithm that personalizes the resulting model at each client.
arXiv Detail & Related papers (2021-11-04T04:57:11Z)
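One common way to control which weights are communicated and aggregated is to average only a designated shared subset of layers while each client keeps the rest personalized. The sketch below shows that pattern as an illustration, not PC-FedAvg's exact rule.

```python
# Illustrative layer-wise control of communication/aggregation.
import numpy as np

def aggregate_shared(client_models, shared_keys):
    """Server: average only the layers designated for communication."""
    return {k: np.mean([m[k] for m in client_models], axis=0)
            for k in shared_keys}

def apply_shared(local_model, shared_update):
    """Client: overwrite shared layers, keep personalized layers untouched."""
    local_model.update(shared_update)
    return local_model
```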
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
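Zero-shot augmentation here means synthesizing samples for under-represented classes without access to real examples. A simple stand-in is model inversion: optimize noise until a frozen model confidently predicts the target class. This is one plausible mechanism, not necessarily the paper's.

```python
# Model-inversion stand-in for zero-shot data augmentation (assumed mechanism).
import torch
import torch.nn.functional as F

def synthesize(model, target_class, shape, n=8, steps=100, lr=0.1):
    """Optimize random noise so the frozen model predicts target_class."""
    for p in model.parameters():
        p.requires_grad_(False)  # keep the model frozen
    x = torch.randn(n, *shape, requires_grad=True)
    y = torch.full((n,), target_class, dtype=torch.long)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    return x.detach(), y
```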
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We defined a new notion called Influence, quantified this influence over the model parameters, and proposed an effective and efficient model to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
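The notion of a client's influence can be grounded with a brute-force leave-one-out sketch: compare the aggregate with and without a client's update on a validation objective. The paper proposes an efficient estimator; the version below is the naive baseline such an estimator would approximate.

```python
# Naive leave-one-out influence; the paper's estimator avoids this cost.
import numpy as np

def client_influence(global_w, updates, val_loss):
    """Influence of client k = change in validation loss when its update
    is excluded from the average."""
    full = global_w + np.mean(updates, axis=0)
    base = val_loss(full)
    scores = []
    for k in range(len(updates)):
        rest = np.mean([u for i, u in enumerate(updates) if i != k], axis=0)
        scores.append(val_loss(global_w + rest) - base)
    return np.array(scores)
```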