Towards Causal Federated Learning For Enhanced Robustness and Privacy
- URL: http://arxiv.org/abs/2104.06557v1
- Date: Wed, 14 Apr 2021 00:08:45 GMT
- Title: Towards Causal Federated Learning For Enhanced Robustness and Privacy
- Authors: Sreya Francis, Irene Tenison, Irina Rish
- Abstract summary: Federated learning is an emerging privacy-preserving distributed machine learning approach.
Data samples across all participating clients are usually not independent and identically distributed.
In this paper, we propose an approach for learning invariant (causal) features common to all participating clients in a federated learning setup.
- Score: 5.858642952428615
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning is an emerging privacy-preserving distributed machine
learning approach to building a shared model by performing distributed training
locally on participating devices (clients) and aggregating the local models
into a global one. As this approach prevents data collection and aggregation,
it helps in reducing associated privacy risks to a great extent. However, the
data samples across all participating clients are usually not independent and
identically distributed (non-iid), and Out of Distribution (OOD) generalization
for the learned models can be poor. Besides this challenge, federated learning
also remains vulnerable to various attacks on security wherein a few malicious
participating entities work towards inserting backdoors, degrading the
generated aggregated model as well as inferring the data owned by participating
entities. In this paper, we propose an approach for learning invariant (causal)
features common to all participating clients in a federated learning setup and
analyze empirically how it enhances the Out of Distribution (OOD) accuracy as
well as the privacy of the final learned model.
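The abstract does not spell out a training objective, so the following is only a rough sketch of the general recipe it hints at: standard FedAvg-style parameter averaging on the server, with an IRM-style invariance penalty added to each client's local loss as one common way of biasing models towards invariant (causal) features. All function names, interfaces, and hyperparameters below are illustrative assumptions, not taken from the paper.

```python
import copy
import torch
import torch.nn.functional as F

def irm_penalty(logits, labels):
    # IRM-v1 style penalty: squared gradient of the risk with respect to a
    # fixed dummy scale multiplying the logits (Arjovsky et al., 2019).
    scale = torch.ones(1, requires_grad=True)
    loss = F.cross_entropy(logits * scale, labels)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return (grad ** 2).sum()

def local_update(global_model, loader, lr=0.01, lam=1.0, local_epochs=1):
    # One client's local training: empirical risk plus invariance penalty.
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(local_epochs):
        for x, y in loader:
            logits = model(x)
            loss = F.cross_entropy(logits, y) + lam * irm_penalty(logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model.state_dict(), len(loader.dataset)

def fedavg(client_states, client_sizes):
    # Server-side FedAvg: average client parameters weighted by dataset size
    # (float parameters assumed; integer buffers would need special handling).
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(n * s[key] for n, s in zip(client_sizes, client_states)) / total
    return avg
```

A single communication round under these assumptions would call `local_update` on each selected client and load the `fedavg` result back into the global model with `load_state_dict`.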
Related papers
- Hierarchical Knowledge Structuring for Effective Federated Learning in Heterogeneous Environments [0.6144680854063939]
Federated learning enables collaborative model training across distributed entities while maintaining individual data privacy.
Recent efforts leverage logit-based knowledge aggregation and distillation to overcome the challenges of heterogeneous client data.
We propose a Hierarchical Knowledge Structuring (HKS) framework that formulates sample logits into a multi-granularity codebook.
arXiv Detail & Related papers (2025-04-04T15:06:02Z)
- Asynchronous Personalized Federated Learning through Global Memorization [16.630360485032163]
Federated Learning offers a privacy-preserving solution by enabling collaborative model training across decentralized devices without centralizing sensitive data.
We propose the Asynchronous Personalized Federated Learning framework, which empowers clients to develop personalized models using a server-side semantic generator.
This generator, trained via data-free knowledge transfer under global model supervision, enhances client data diversity by producing both seen and unseen samples.
To counter the risks of synthetic data impairing training, we introduce a decoupled model method, ensuring robust personalization.
arXiv Detail & Related papers (2025-03-01T09:00:33Z)
- FedGen: Generalizable Federated Learning for Sequential Data [8.784435748969806]
In many real-world distributed settings, spurious correlations exist due to biases and data sampling issues.
We present a generalizable federated learning framework called FedGen, which allows clients to identify and distinguish between spurious and invariant features.
We show that FedGen results in models that achieve significantly better generalization and can outperform the accuracy of current federated learning approaches by over 24%.
arXiv Detail & Related papers (2022-11-03T15:48:14Z)
- Certified Robustness in Federated Learning [54.03574895808258]
We study the interplay between federated training, personalization, and certified robustness.
We find that the simple federated averaging technique is effective in building not only more accurate but also more certifiably robust models.
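The summary does not state how robustness is certified; purely as an illustration of the kind of certification pipeline such a study might evaluate on an aggregated model, the sketch below implements randomized-smoothing-style majority voting over Gaussian-perturbed inputs. The `model_fn` interface and all parameters are assumptions, and the full certification procedure (which adds a binomial confidence bound to the vote margin) is omitted.

```python
import numpy as np

def smoothed_predict(model_fn, x, sigma=0.25, n_samples=1000, n_classes=10, rng=None):
    # Majority vote over Gaussian-perturbed copies of one input; the vote
    # counts are what a certification bound would be derived from.
    # model_fn(x) -> predicted class index for a single input array (assumed API).
    rng = np.random.default_rng() if rng is None else rng
    counts = np.zeros(n_classes, dtype=int)
    for _ in range(n_samples):
        noisy = x + rng.normal(scale=sigma, size=x.shape)
        counts[model_fn(noisy)] += 1
    return int(np.argmax(counts)), counts
```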
arXiv Detail & Related papers (2022-06-06T12:10:53Z)
- Personalization Improves Privacy-Accuracy Tradeoffs in Federated Optimization [57.98426940386627]
We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy.
We illustrate our theoretical results with experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-02-10T20:44:44Z)
- FedRAD: Federated Robust Adaptive Distillation [7.775374800382709]
Collaborative learning frameworks that aggregate model updates are typically vulnerable to model poisoning attacks from adversarial clients.
We propose a novel robust aggregation method, Federated Robust Adaptive Distillation (FedRAD), to detect adversaries and robustly aggregate local models.
The results show that FedRAD outperforms all other aggregators in the presence of adversaries, as well as in heterogeneous data distributions.
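The summary gives no details of how FedRAD scores adversaries, so the snippet below is not FedRAD's rule; it only sketches a generic robust aggregation baseline (coordinate-wise median of client updates) to make the contrast with plain averaging concrete.

```python
import numpy as np

def median_aggregate(client_updates):
    # client_updates: list of 1-D parameter vectors, one per client.
    # Coordinate-wise median is a standard robust alternative to the mean:
    # a minority of poisoned updates cannot drag any coordinate arbitrarily far.
    return np.median(np.stack(client_updates, axis=0), axis=0)
```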
arXiv Detail & Related papers (2021-12-02T16:50:57Z)
- FedH2L: Federated Learning with Model and Statistical Heterogeneity [75.61234545520611]
Federated learning (FL) enables distributed participants to collectively learn a strong global model without sacrificing their individual data privacy.
We introduce FedH2L, which is agnostic to the model architecture and robust to different data distributions across participants.
In contrast to approaches sharing parameters or gradients, FedH2L relies on mutual distillation, exchanging only posteriors on a shared seed set between participants in a decentralized manner.
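Since only posteriors on a shared seed set are exchanged, one step of this kind of mutual distillation can be pictured roughly as below; the temperature, interfaces, and loss weighting are assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def seed_posteriors(model, seed_x, temperature=2.0):
    # What a participant shares: class posteriors on a public seed batch,
    # rather than its parameters or gradients.
    with torch.no_grad():
        return F.softmax(model(seed_x) / temperature, dim=1)

def mutual_distillation_loss(own_logits, peer_probs, temperature=2.0):
    # KL term pulling the local model's predictions on the seed batch
    # towards a peer's shared posteriors.
    log_p = F.log_softmax(own_logits / temperature, dim=1)
    return F.kl_div(log_p, peer_probs, reduction="batchmean") * temperature ** 2
```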
arXiv Detail & Related papers (2021-01-27T10:10:18Z)
- Federated Learning in Adversarial Settings [0.8701566919381224]
Federated learning schemes provide different trade-offs between robustness, privacy, bandwidth efficiency, and model accuracy.
We show that this extension performs as efficiently as the non-private but robust scheme, even with stringent privacy requirements.
This suggests a possible fundamental trade-off between Differential Privacy and robustness.
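The private variant is not specified in the summary; as a rough illustration of how differential privacy is usually grafted onto federated updates, the helper below clips a client's model delta and adds Gaussian noise in the spirit of DP-FedAvg. The clipping norm and noise multiplier are placeholder values, and privacy accounting is omitted.

```python
import numpy as np

def privatize_update(delta, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    # delta: flattened difference between a client's local model and the
    # global model. Clip to a fixed L2 norm, then add Gaussian noise whose
    # scale grows with the noise multiplier; larger noise means stronger
    # privacy but potentially weaker utility and robustness.
    rng = np.random.default_rng() if rng is None else rng
    scale = min(1.0, clip_norm / (np.linalg.norm(delta) + 1e-12))
    clipped = delta * scale
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=delta.shape)
    return clipped + noise
```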
arXiv Detail & Related papers (2020-10-15T14:57:02Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
- Decentralised Learning from Independent Multi-Domain Labels for Person Re-Identification [69.29602103582782]
Deep learning has been successful for many computer vision tasks due to the availability of shared and centralised large-scale training data.
However, increasing awareness of privacy concerns poses new challenges to deep learning, especially for person re-identification (Re-ID).
We propose a novel paradigm called Federated Person Re-Identification (FedReID) to construct a generalisable global model (a central server) by simultaneously learning with multiple privacy-preserved local models (local clients).
This client-server collaborative learning process is iteratively performed under privacy control, enabling FedReID to realise decentralised learning without sharing distributed data or collecting any centralised data.
arXiv Detail & Related papers (2020-06-07T13:32:33Z)
- Survey of Personalization Techniques for Federated Learning [0.08594140167290096]
Federated learning enables machine learning models to learn from private decentralized data without compromising privacy.
This paper highlights the need for personalization and surveys recent research on this topic.
arXiv Detail & Related papers (2020-03-19T10:47:55Z)
- Think Locally, Act Globally: Federated Learning with Local and Global Representations [92.68484710504666]
Federated learning is a method of training models on private data distributed over multiple devices.
We propose a new federated learning algorithm that jointly learns compact local representations on each device and a global model across all devices.
We also evaluate on the task of personalized mood prediction from real-world mobile data where privacy is key.
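The split between what stays on-device and what gets aggregated is the core idea; below is a minimal sketch of such a split, assuming a simple encoder/head factorisation. The actual architecture, and which part is shared, are not given in the summary, so this is illustrative only.

```python
import torch.nn as nn

class LocalGlobalModel(nn.Module):
    # Hypothetical split: a private local encoder that never leaves the device,
    # and a shared head whose parameters are averaged across devices.
    def __init__(self, in_dim=32, hidden_dim=64, n_classes=10):
        super().__init__()
        self.local_encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.global_head = nn.Linear(hidden_dim, n_classes)

    def forward(self, x):
        return self.global_head(self.local_encoder(x))

    def shared_parameters(self):
        # Only these would be sent to the server for aggregation.
        return self.global_head.state_dict()
```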
arXiv Detail & Related papers (2020-01-06T12:40:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.