Federated Bayesian Network Ensembles
- URL: http://arxiv.org/abs/2402.12142v1
- Date: Mon, 19 Feb 2024 13:52:37 GMT
- Title: Federated Bayesian Network Ensembles
- Authors: Florian van Daalen, Lianne Ippel, Andre Dekker, Inigo Bermejo
- Abstract summary: Federated learning allows us to run machine learning algorithms on decentralized data when data sharing is not permitted due to privacy concerns.
We show that FBNE is a potentially useful tool within the federated learning toolbox.
We discuss the advantages and disadvantages of this approach in terms of time complexity, model accuracy, privacy protection, and model interpretability.
- Score: 3.24530181403525
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning allows us to run machine learning algorithms on
decentralized data when data sharing is not permitted due to privacy concerns.
Ensemble-based learning works by training multiple (weak) classifiers whose
output is aggregated. Federated ensembles are ensembles applied to a federated
setting, where each classifier in the ensemble is trained on one data location.
In this article, we explore the use of federated ensembles of Bayesian
networks (FBNE) in a range of experiments and compare their performance with
locally trained models and models trained with VertiBayes, a federated learning
algorithm to train Bayesian networks from decentralized data. Our results show
that FBNE outperforms local models and provides a significant increase in
training speed compared with VertiBayes while maintaining a similar performance
in most settings, among other advantages. We show that FBNE is a potentially
useful tool within the federated learning toolbox, especially when local
populations are heavily biased, or there is a strong imbalance in population
size across parties. We discuss the advantages and disadvantages of this
approach in terms of time complexity, model accuracy, privacy protection, and
model interpretability.
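To make the ensemble idea above concrete, the following minimal sketch shows a federated ensemble in which each party fits a probabilistic classifier on its own local data and only the models' class-probability outputs are combined. It uses scikit-learn's Gaussian naive Bayes as a stand-in for a full Bayesian network learner; the function names and the (optionally weighted) probability-averaging rule are illustrative assumptions, not the exact FBNE procedure from the paper.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def train_local_models(party_datasets):
    """Train one probabilistic classifier per data-holding party.
    Naive Bayes stands in here for a Bayesian network learner; only the
    fitted model (not the raw data) would leave each site."""
    models = []
    for X, y in party_datasets:
        model = GaussianNB()
        model.fit(X, y)
        models.append(model)
    return models

def ensemble_predict_proba(models, X, weights=None):
    """Aggregate the ensemble by (optionally weighted) averaging of the
    per-party class-probability outputs."""
    probs = np.array([m.predict_proba(X) for m in models])
    return np.average(probs, axis=0, weights=weights)

# Toy example: two parties holding differently distributed samples of the same task.
rng = np.random.default_rng(0)
party_a = (rng.normal(0.0, 1.0, (50, 2)), rng.integers(0, 2, 50))
party_b = (rng.normal(1.0, 1.0, (50, 2)), rng.integers(0, 2, 50))
models = train_local_models([party_a, party_b])
print(ensemble_predict_proba(models, rng.normal(0.5, 1.0, (3, 2))))
```

Weighting each party's contribution by its sample size (via the weights argument) is one simple way such an ensemble could account for the strong population imbalance discussed above.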
Related papers
- FedClust: Tackling Data Heterogeneity in Federated Learning through Weight-Driven Client Clustering [26.478852701376294]
Federated learning (FL) is an emerging distributed machine learning paradigm.
One of the major challenges in FL is the presence of uneven data distributions across client devices.
We propose FedClust, a novel approach for clustered federated learning (CFL) that leverages the correlation between local model weights and the data distribution of clients.
arXiv Detail & Related papers (2024-07-09T02:47:16Z)
- Integrating Local Real Data with Global Gradient Prototypes for Classifier Re-Balancing in Federated Long-Tailed Learning [60.41501515192088]
Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively.
Data samples usually follow a long-tailed distribution in the real world, and FL on such decentralized, long-tailed data yields a poorly behaved global model.
In this work, we integrate the local real data with the global gradient prototypes to form the local balanced datasets.
arXiv Detail & Related papers (2023-01-25T03:18:10Z)
- Certified Robustness in Federated Learning [54.03574895808258]
We study the interplay between federated training, personalization, and certified robustness.
We find that the simple federated averaging technique is effective in building not only more accurate, but also more certifiably-robust models.
arXiv Detail & Related papers (2022-06-06T12:10:53Z)
- Comparative assessment of federated and centralized machine learning [0.0]
Federated Learning (FL) is a privacy preserving machine learning scheme, where training happens with data federated across devices.
In this paper, we discuss the various factors that affect federated learning training due to the non-IID nature of the distributed data.
We show that federated learning does have an advantage in cost when the model sizes to be trained are not reasonably large.
arXiv Detail & Related papers (2022-02-03T11:20:47Z)
- Federated Learning from Small Datasets [48.879172201462445]
Federated learning allows multiple parties to collaboratively train a joint model without sharing local data.
We propose a novel approach that intertwines model aggregations with permutations of local models.
The permutations expose each local model to a daisy chain of local datasets resulting in more efficient training in data-sparse domains.
arXiv Detail & Related papers (2021-10-07T13:49:23Z)
- Decentralized federated learning of deep neural networks on non-iid data [0.6335848702857039]
We tackle the problem of learning a personalized deep learning model in a decentralized setting with non-IID data.
We propose a method named Performance-Based Neighbor Selection (PENS) where clients with similar data detect each other and cooperate.
PENS is able to achieve higher accuracies as compared to strong baselines.
arXiv Detail & Related papers (2021-07-18T19:05:44Z)
- FedBN: Federated Learning on Non-IID Features via Local Batch Normalization [23.519212374186232]
The emerging paradigm of federated learning (FL) strives to enable collaborative training of deep models on the network edge without centrally aggregating raw data.
We propose an effective method that uses local batch normalization to alleviate the feature shift before averaging models.
The resulting scheme, called FedBN, outperforms both classical FedAvg and the state-of-the-art for non-iid data (a simplified aggregation sketch appears after this list).
arXiv Detail & Related papers (2021-02-15T16:04:10Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
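The FedBN entry above describes keeping batch-normalization parameters local while averaging the remaining weights across clients. The sketch below is a minimal, assumption-laden illustration of that aggregation step: parameters are plain NumPy arrays keyed by name, and any key containing "bn" is treated as a batch-norm parameter, a naming convention assumed here purely for illustration rather than FedBN's actual implementation.

```python
import numpy as np

def fedavg_excluding_bn(client_params, bn_keyword="bn"):
    """FedAvg-style aggregation that skips batch-norm parameters, in the
    spirit of FedBN: BN parameters stay local, everything else is averaged.

    client_params: list of dicts mapping parameter name -> np.ndarray.
    Returns a list of updated per-client parameter dicts.
    """
    shared_keys = [k for k in client_params[0] if bn_keyword not in k]
    # Average the shared (non-BN) parameters across all clients.
    averaged = {
        k: np.mean([p[k] for p in client_params], axis=0) for k in shared_keys
    }
    # Each client keeps its own BN parameters and receives the averaged rest.
    return [{**p, **averaged} for p in client_params]

# Toy usage with two clients and one "bn" parameter kept local.
clients = [
    {"layer1.weight": np.ones((2, 2)), "bn1.running_mean": np.zeros(2)},
    {"layer1.weight": 3 * np.ones((2, 2)), "bn1.running_mean": np.ones(2)},
]
updated = fedavg_excluding_bn(clients)
print(updated[0]["layer1.weight"])     # averaged -> all 2.0
print(updated[0]["bn1.running_mean"])  # unchanged local BN stats -> zeros
```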
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.