Towards Federated Bayesian Network Structure Learning with Continuous Optimization
- URL: http://arxiv.org/abs/2110.09356v1
- Date: Mon, 18 Oct 2021 14:36:05 GMT
- Title: Towards Federated Bayesian Network Structure Learning with Continuous Optimization
- Authors: Ignavier Ng, Kun Zhang
- Abstract summary: We present a cross-silo federated learning approach to estimate the structure of a Bayesian network.
We develop a distributed structure learning method based on continuous optimization.
- Score: 14.779035801521717
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditionally, Bayesian network structure learning is often carried out at a
central site, in which all data is gathered. However, in practice, data may be
distributed across different parties (e.g., companies, devices) who intend to
collectively learn a Bayesian network, but are not willing to disclose
information related to their data owing to privacy or security concerns. In
this work, we present a cross-silo federated learning approach to estimate the
structure of a Bayesian network from data that is horizontally partitioned across
different parties. We develop a distributed structure learning method based on
continuous optimization, using the alternating direction method of multipliers
(ADMM), such that only the model parameters have to be exchanged during the
optimization process. We demonstrate the flexibility of our approach by
adopting it for both linear and nonlinear cases. Experimental results on
synthetic and real datasets show that it achieves improved performance over
existing methods, especially when the number of clients is relatively large
and each client has a limited sample size.
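The approach combines NOTEARS-style continuous structure learning with consensus ADMM, so only parameter matrices leave each silo. Below is a minimal sketch of the linear case, assuming a least-squares local loss and the trace-exponential acyclicity penalty h(W) = tr(e^{W∘W}) - d; the plain gradient loops, step sizes, threshold, and function names are illustrative stand-ins for the paper's actual solvers, not the authors' exact algorithm.
```python
# Minimal sketch of federated linear structure learning via consensus ADMM.
# Illustrative only: gradient loops stand in for the paper's inner solvers.
import numpy as np
from scipy.linalg import expm

def acyclicity(W):
    """NOTEARS penalty h(W) = tr(exp(W o W)) - d; zero iff W encodes a DAG."""
    E = expm(W * W)                          # elementwise square inside expm
    return np.trace(E) - W.shape[0], E.T * 2 * W   # value, gradient

def local_update(X, Z, U, rho, lr=1e-3, steps=50):
    """Client side: fit the local least-squares loss while staying close to
    the consensus variable Z. Only the resulting W leaves the silo."""
    n = X.shape[0]
    W = Z.copy()
    for _ in range(steps):
        grad = -X.T @ (X - X @ W) / n + rho * (W - Z + U)
        W -= lr * grad
    return W

def federated_admm(datasets, rho=1.0, mu=5.0, rounds=100, lr=1e-3):
    """Server side: alternate client solves, a consensus/acyclicity update
    of Z, and the ADMM dual update."""
    d = datasets[0].shape[1]
    Z = np.zeros((d, d))
    Us = [np.zeros((d, d)) for _ in datasets]
    for _ in range(rounds):
        Ws = [local_update(X, Z, U, rho) for X, U in zip(datasets, Us)]
        M = np.mean([W + U for W, U in zip(Ws, Us)], axis=0)
        Z = M.copy()
        for _ in range(50):                  # push Z toward a DAG
            _, gh = acyclicity(Z)
            Z -= lr * (mu * gh + rho * len(datasets) * (Z - M))
        Us = [U + W - Z for U, W in zip(Us, Ws)]
    Z[np.abs(Z) < 0.3] = 0.0                 # threshold small edge weights
    return Z
```
For the nonlinear case the abstract indicates the same exchange pattern applies, with the least-squares loss replaced by a nonlinear model per client.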
Related papers
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
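The summary names the technique but not its mechanics. Purely as a hypothetical illustration of trajectory-based regularization (not the FedPTR algorithm itself), a local update could penalize deviation from a target extrapolated out of the recent global-model trajectory:
```python
import numpy as np

def trajectory_regularized_step(w, grad_fn, trajectory, lam=0.1, lr=0.05, steps=10):
    """Hypothetical sketch: extrapolate a target from the recent global-model
    trajectory and add a proximal penalty pulling the local model toward it."""
    target = trajectory[-1] + (trajectory[-1] - trajectory[0]) / max(len(trajectory) - 1, 1)
    for _ in range(steps):
        w = w - lr * (grad_fn(w) + lam * (w - target))
    return w
```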
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
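As a rough sketch of the ingredient being varied (FedLALR's actual auto-tuning rule and its analysis are in the paper), each client can run its own AMSGrad state with a client-specific learning rate, while the server only averages the models:
```python
import numpy as np

class LocalAMSGrad:
    """Per-client AMSGrad with a client-specific learning rate. Hypothetical
    shape only; the server averages models periodically, and all optimizer
    state (m, v, vhat) stays local to the client."""
    def __init__(self, dim, lr, b1=0.9, b2=0.99, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m = np.zeros(dim)
        self.v = np.zeros(dim)
        self.vhat = np.zeros(dim)

    def step(self, w, grad):
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        self.vhat = np.maximum(self.vhat, self.v)   # AMSGrad max correction
        return w - self.lr * self.m / (np.sqrt(self.vhat) + self.eps)
```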
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
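The key trick is that simultaneous analog transmissions superpose on the wireless channel, so the server receives the sum of all client updates in a single channel use. A toy simulation of that aggregation step (noise level and scaling are made up):
```python
import numpy as np

def over_the_air_aggregate(updates, noise_std=0.01, rng=None):
    """Toy model: analog superposition delivers the sum of all client
    updates at once; the server only rescales and absorbs receiver noise."""
    rng = rng or np.random.default_rng(0)
    received = np.sum(updates, axis=0) + rng.normal(0.0, noise_std, np.shape(updates[0]))
    return received / len(updates)
```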
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- VertiBayes: Learning Bayesian network parameters from vertically partitioned data with missing values [2.9707233220536313]
Federated learning makes it possible to train a machine learning model on decentralized data.
We propose a novel method called VertiBayes to train Bayesian networks on vertically partitioned data.
We experimentally show our approach produces models comparable to those learnt using traditional algorithms.
arXiv Detail & Related papers (2022-10-31T11:13:35Z)
- A Procedural World Generation Framework for Systematic Evaluation of Continual Learning [2.599882743586164]
We introduce a computer graphics simulation framework that repeatedly renders only upcoming urban scene fragments.
At its core lies a modular parametric generative model with adaptable generative factors.
arXiv Detail & Related papers (2021-06-04T16:31:43Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
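The summary does describe the update pattern: many cheap steps on the low-dimensional local head per round, one step on the shared representation, and server averaging of the representation only. A minimal linear-model sketch under those assumptions (names and step sizes illustrative):
```python
import numpy as np

def client_update(phi, head, X, y, lr=0.1, head_steps=20):
    """Sketch of the alternating scheme: many cheap updates of the
    low-dimensional local head, then one update of the shared representation."""
    for _ in range(head_steps):                 # local head: many steps
        r = X @ phi @ head - y
        head -= lr * phi.T @ X.T @ r / len(y)
    r = X @ phi @ head - y                      # representation: one step
    phi -= lr * np.outer(X.T @ r, head) / len(y)
    return phi, head

def server_round(phi, clients, heads):
    """Heads never leave the clients; only the representation is averaged."""
    new_phis = []
    for (X, y), head in zip(clients, heads):
        p, _ = client_update(phi.copy(), head, X, y)
        new_phis.append(p)
    return np.mean(new_phis, axis=0), heads
```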
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
- Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data [77.88594632644347]
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
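The summary names a momentum-based fix without the details. One plausible reading, sketched below as an assumption rather than the paper's exact update, estimates a quasi-global direction from the worker's own model difference (which already reflects gossip averaging) instead of the heterogeneous local gradient:
```python
import numpy as np

def qgm_like_step(x, grad, m, neighbors, lr=0.05, beta=0.9):
    """Hedged sketch: local momentum step, gossip averaging with neighbor
    models, then a momentum buffer rebuilt from the observed model change."""
    x_half = x - lr * (grad + beta * m)            # local step with momentum
    x_new = np.mean([x_half] + neighbors, axis=0)  # gossip averaging
    m_new = beta * m + (x - x_new) / lr            # direction from model difference
    return x_new, m_new
```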
arXiv Detail & Related papers (2021-02-09T11:27:14Z)
- Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We describe the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
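As a toy illustration of region-based distributed aggregation (the actual Edge-DemLearn mechanism is hierarchical and self-organizing; this only shows the two-level, sample-weighted averaging shape, with made-up names):
```python
def hierarchical_aggregate(regions):
    """Toy two-level aggregation: each edge server averages its region's
    client models, then the top level averages the regional models,
    weighting by sample counts at both levels.
    regions: list of regions, each a list of (model_array, n_samples)."""
    regional, sizes = [], []
    for clients in regions:
        n = sum(n_k for _, n_k in clients)
        regional.append(sum(w * n_k for w, n_k in clients) / n)
        sizes.append(n)
    total = sum(sizes)
    return sum(w * n for w, n in zip(regional, sizes)) / total
```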
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.