Reliable Federated Disentangling Network for Non-IID Domain Feature
- URL: http://arxiv.org/abs/2301.12798v3
- Date: Tue, 19 Sep 2023 16:38:25 GMT
- Title: Reliable Federated Disentangling Network for Non-IID Domain Feature
- Authors: Meng Wang, Kai Yu, Chun-Mei Feng, Yiming Qian, Ke Zou, Lianyu Wang,
Rick Siow Mong Goh, Yong Liu, Huazhu Fu
- Abstract summary: In this paper, we propose a novel reliable federated disentangling network, termed RFedDis.
To the best of our knowledge, our proposed RFedDis is the first work to develop an FL approach based on evidential uncertainty combined with feature disentangling.
Our proposed RFedDis provides outstanding performance with a high degree of reliability as compared to other state-of-the-art FL approaches.
- Score: 62.73267904147804
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL), as an effective decentralized distributed learning
approach, enables multiple institutions to jointly train a model without
sharing their local data. However, the domain feature shift caused by different
acquisition devices/clients substantially degrades the performance of the FL
model. Furthermore, most existing FL approaches aim to improve accuracy without
considering reliability (e.g., confidence or uncertainty). The predictions are
thus unreliable when deployed in safety-critical applications. Therefore, we
aim to improve the performance of FL under non-IID domain feature shift while
making the model more reliable. In this paper, we propose a novel reliable
federated disentangling network, termed RFedDis, which uses feature
disentangling to capture a global domain-invariant cross-client representation
while preserving local client-specific feature learning. Meanwhile, to
effectively integrate the decoupled features, an
uncertainty-aware decision fusion is also introduced to guide the network for
dynamically integrating the decoupled features at the evidence level, while
producing a reliable prediction with an estimated uncertainty. To the best of
our knowledge, our proposed RFedDis is the first work to develop an FL approach
based on evidential uncertainty combined with feature disentangling, which
enhances the performance and reliability of FL in non-IID domain features.
Extensive experimental results show that our proposed RFedDis provides
outstanding performance with a high degree of reliability as compared to other
state-of-the-art FL approaches.
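The uncertainty-aware decision fusion operates at the evidence level, combining the two branches' outputs into a single prediction with an explicit uncertainty estimate. The paper's exact rule is not reproduced here; the sketch below is an illustrative, assumed formulation using subjective-logic belief masses (Dirichlet strength S = evidence + K, uncertainty u = K / S) combined with Dempster's rule, with the function name and inputs being hypothetical:

```python
import numpy as np

def evidential_fusion(evidence_a, evidence_b):
    """Illustrative evidence-level fusion of two branch outputs (assumed
    formulation, not the paper's exact rule). Inputs are non-negative
    evidence vectors over K classes."""
    K = len(evidence_a)
    S_a = evidence_a.sum() + K   # Dirichlet strength: alpha = evidence + 1
    S_b = evidence_b.sum() + K
    b_a, u_a = evidence_a / S_a, K / S_a   # belief masses and uncertainty
    b_b, u_b = evidence_b / S_b, K / S_b
    # Conflict: belief mass the two branches assign to different classes.
    conflict = float(np.sum(np.outer(b_a, b_b)) - np.dot(b_a, b_b))
    scale = 1.0 / (1.0 - conflict)
    # Dempster's combination rule: renormalize the non-conflicting mass.
    b = scale * (b_a * b_b + b_a * u_b + b_b * u_a)
    u = scale * (u_a * u_b)
    return b, u
```

By construction the fused belief masses and uncertainty sum to one, so `u` can be read directly as the model's confidence deficit for that input.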
Related papers
- TPFL: A Trustworthy Personalized Federated Learning Framework via Subjective Logic [13.079535924498977]
Federated learning (FL) enables collaborative model training across distributed clients while preserving data privacy.
Most FL approaches focusing solely on privacy protection fall short in scenarios where trustworthiness is crucial.
We introduce TPFL, a Trustworthy Personalized Federated Learning framework designed for classification tasks via subjective logic.
arXiv Detail & Related papers (2024-10-16T07:33:29Z)
- Parametric Feature Transfer: One-shot Federated Learning with Foundation Models [14.97955440815159]
In one-shot federated learning, clients collaboratively train a global model in a single round of communication.
This paper introduces FedPFT, a methodology that harnesses the transferability of foundation models to enhance both accuracy and communication efficiency in one-shot FL.
arXiv Detail & Related papers (2024-02-02T19:34:46Z)
- AEDFL: Efficient Asynchronous Decentralized Federated Learning with Heterogeneous Devices [61.66943750584406]
We propose AEDFL, an Asynchronous Efficient Decentralized FL framework for heterogeneous environments.
First, we propose an asynchronous FL system model with an efficient model aggregation method for improving the FL convergence.
Second, we propose a dynamic staleness-aware model update approach to achieve superior accuracy.
Third, we propose an adaptive sparse training method to reduce communication and computation costs without significant accuracy degradation.
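AEDFL's summary describes down-weighting stale asynchronous contributions. As an illustration only (the function name, weighting formula, and parameters below are assumptions, not AEDFL's published rule), a common staleness-aware scheme discounts a client's model by its staleness before mixing it into the global model:

```python
def staleness_aware_update(global_model, client_model, staleness, base_lr=1.0):
    """Illustrative staleness-aware update (assumed scheme): the contribution
    of an asynchronous client is discounted as its model grows stale."""
    weight = base_lr / (1.0 + staleness)  # fresher clients get larger weight
    return [(1.0 - weight) * g + weight * c
            for g, c in zip(global_model, client_model)]
```

With staleness 0 the client contributes at full `base_lr`; a client that is three rounds behind contributes only a quarter as much.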
arXiv Detail & Related papers (2023-12-18T05:18:17Z)
- Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing area, where the model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel privacy-preserving federated primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
Its unique properties and analyses are also presented.
arXiv Detail & Related papers (2023-10-30T14:15:47Z)
- Enabling Quartile-based Estimated-Mean Gradient Aggregation As Baseline for Federated Image Classifications [5.5099914877576985]
Federated Learning (FL) has revolutionized how we train deep neural networks by enabling decentralized collaboration while safeguarding sensitive data and improving model performance.
This paper introduces an innovative solution named Estimated Mean Aggregation (EMA) that not only addresses these challenges but also provides a fundamental reference point as a $\mathsf{baseline}$ for advanced aggregation techniques in FL systems.
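The EMA paper's title points at quartile-based estimation of the mean gradient. The sketch below is an assumed illustration (the function name and the specific estimator are not taken from the paper): it replaces the raw per-coordinate average with the classical quartile-based mean estimate, which damps the influence of outlier clients:

```python
import numpy as np

def quartile_estimated_mean(client_updates):
    """Illustrative quartile-based aggregation (assumed formulation):
    estimate the per-coordinate mean of client gradient updates from
    quartiles rather than the raw average, for robustness to outliers.
    client_updates has shape (n_clients, dim)."""
    q1, q2, q3 = np.percentile(client_updates, [25, 50, 75], axis=0)
    # Classical quartile-based mean estimator: (Q1 + 2*Q2 + Q3) / 4.
    return (q1 + 2.0 * q2 + q3) / 4.0
```

For four clients sending updates 1, 2, 3, and 100 on one coordinate, the raw average is 26.5, while the quartile-based estimate is 8.5, far less distorted by the outlier client.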
arXiv Detail & Related papers (2023-09-21T17:17:28Z)
- PS-FedGAN: An Efficient Federated Learning Framework Based on Partially Shared Generative Adversarial Networks For Data Privacy [56.347786940414935]
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named as PS-FedGAN, this new framework enhances the GAN releasing and training mechanism to address heterogeneous data distributions.
arXiv Detail & Related papers (2023-05-19T05:39:40Z)
- Disentangled Federated Learning for Tackling Attributes Skew via Invariant Aggregation and Diversity Transferring [104.19414150171472]
Attribute skew diverts current federated learning (FL) frameworks from consistent optimization directions among the clients.
We propose disentangled federated learning (DFL) to disentangle the domain-specific and cross-invariant attributes into two complementary branches.
Experiments verify that DFL facilitates FL with higher performance, better interpretability, and faster convergence rate, compared with SOTA FL methods.
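DFL's invariant aggregation means only the cross-invariant branch is averaged across clients, while each domain-specific branch stays local. As a minimal illustrative sketch (the function name, parameter dictionaries, and key names are assumptions, not DFL's actual code), branch-wise aggregation can look like this:

```python
import numpy as np

def aggregate_invariant_branch(client_weights, invariant_keys):
    """Illustrative branch-wise aggregation (assumed formulation): average
    only the invariant-branch parameters across clients; any parameter not
    listed in invariant_keys is treated as client-specific and left local."""
    aggregated = {}
    for key in invariant_keys:
        aggregated[key] = np.mean([w[key] for w in client_weights], axis=0)
    return aggregated
```

The server returns only the averaged invariant parameters; each client then merges them back with its own untouched specific branch.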
arXiv Detail & Related papers (2022-06-14T13:12:12Z)
- SAFARI: Sparsity enabled Federated Learning with Limited and Unreliable Communications [23.78596067797334]
Federated learning (FL) enables edge devices to collaboratively learn a model in a distributed fashion.
We propose a sparsity enabled FL framework with both communication efficiency and bias reduction, termed as SAFARI.
It makes novel use of similarity among client models to rectify and compensate for bias resulting from unreliable communications.
arXiv Detail & Related papers (2022-04-05T16:26:36Z)
- Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees [49.91477656517431]
Quantization-based solvers have been widely adopted in Federated Learning (FL).
No existing methods enjoy all the aforementioned properties.
We propose an intuitively simple yet theoretically sound method based on SIGNSGD to bridge the gap.
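The core building block of sign-based quantization in this setting is a stochastic sign operator. The sketch below shows the standard unbiased form (a minimal illustration; the function name and interface are assumptions, and the paper's full method adds more on top): a coordinate x in [-B, B] is sent as +1 with probability (B + x) / 2B and -1 otherwise, so its expectation equals x / B:

```python
import numpy as np

def stochastic_sign(grad, B, rng):
    """Illustrative stochastic sign quantizer (assumed interface): map each
    clipped coordinate x in [-B, B] to +1 with probability (B + x) / (2B)
    and to -1 otherwise, giving an unbiased estimate up to the scale B."""
    x = np.clip(grad, -B, B)
    p_plus = (B + x) / (2.0 * B)           # probability of emitting +1
    return np.where(rng.random(x.shape) < p_plus, 1.0, -1.0)
```

Each coordinate costs one bit to transmit, and averaging many quantized copies recovers the true gradient direction in expectation.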
arXiv Detail & Related papers (2020-02-25T15:12:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.