Partitioned Variational Inference: A Framework for Probabilistic
Federated Learning
- URL: http://arxiv.org/abs/2202.12275v2
- Date: Fri, 25 Feb 2022 12:04:46 GMT
- Title: Partitioned Variational Inference: A Framework for Probabilistic
Federated Learning
- Authors: Matthew Ashman, Thang D. Bui, Cuong V. Nguyen, Efstratios Markou,
Adrian Weller, Siddharth Swaroop and Richard E. Turner
- Abstract summary: We introduce partitioned variational inference (PVI), a framework for performing VI in the federated setting.
We develop new supporting theory for PVI, demonstrating a number of properties that make it an attractive choice for practitioners.
- Score: 45.9225420256808
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The proliferation of computing devices has brought about an opportunity to
deploy machine learning models on new problem domains using previously
inaccessible data. Traditional algorithms for training such models often
require data to be stored on a single machine with compute performed by a
single node, making them unsuitable for decentralised training on multiple
devices. This deficiency has motivated the development of federated learning
algorithms, which allow multiple data owners to train collaboratively and use a
shared model whilst keeping local data private. However, many of these
algorithms focus on obtaining point estimates of model parameters, rather than
probabilistic estimates capable of capturing model uncertainty, which is
essential in many applications. Variational inference (VI) has become the
method of choice for fitting many modern probabilistic models. In this paper we
introduce partitioned variational inference (PVI), a general framework for
performing VI in the federated setting. We develop new supporting theory for
PVI, demonstrating a number of properties that make it an attractive choice for
practitioners; use PVI to unify a wealth of fragmented, yet related literature;
and provide empirical results that showcase the effectiveness of PVI in a
variety of federated settings.
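To make the PVI update pattern concrete, below is a minimal, runnable sketch using a toy conjugate model (scalar observations y ~ N(theta, sigma^2) with a Gaussian prior), so the local VI step has a closed form. Only the client/server bookkeeping of approximate likelihood factors is the point; the model, the `local_vi` helper, and all constants are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 1.0  # known observation noise (toy assumption)
clients = [rng.normal(2.0, np.sqrt(sigma2), size=n) for n in (30, 50, 20)]

# Natural parameters (eta1, eta2) of a 1-D Gaussian: eta1 = mu/var, eta2 = -1/(2*var).
prior = np.array([0.0, -0.5])       # N(0, 1) prior on theta
t = [np.zeros(2) for _ in clients]  # per-client approximate likelihood factors
q = prior + sum(t)                  # global approximate posterior

def local_vi(cavity, y):
    """Closed-form stand-in for the local VI step: in this conjugate toy
    model, the refined posterior is the cavity plus the exact data factor."""
    data_factor = np.array([y.sum() / sigma2, -len(y) / (2.0 * sigma2)])
    return cavity + data_factor

for _ in range(3):                        # a few synchronous PVI rounds
    for k, y in enumerate(clients):
        cavity = q - t[k]                 # global posterior with client k's factor removed
        q_new = local_vi(cavity, y)       # refine the posterior using local data only
        t[k] = q_new - cavity             # client k's updated approximate likelihood
        q = cavity + t[k]                 # server folds the change back in

mu, var = -0.5 * q[0] / q[1], -0.5 / q[1]
print(f"global posterior: mean {mu:.3f}, variance {var:.4f}")
```

In the non-conjugate case, `local_vi` would become an iterative optimisation of the client's local free energy, but the cavity/update bookkeeping above is unchanged.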
Related papers
- Scalable Vertical Federated Learning via Data Augmentation and Amortized Inference [1.912429179274357]
This paper introduces the first comprehensive framework for fitting Bayesian models in the Vertical Federated Learning setting.
We present an innovative model formulation for specific VFL scenarios where the joint likelihood factorizes into a product of client-specific likelihoods (a structural sketch follows this entry).
Our work paves the way for privacy-preserving, decentralized Bayesian inference in vertically partitioned data scenarios.
arXiv Detail & Related papers (2024-05-07T06:29:06Z)
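A hedged, minimal sketch of the factorisation described in the entry above: in vertical FL, clients hold disjoint feature blocks for the same records, and the joint likelihood is assumed to split into a product of client-specific terms. The product-of-experts-style Gaussian blocks, function names, and data are illustrative assumptions, not details from the paper.

```python
import numpy as np

def log_lik_client(theta_k, X_k, y):
    """Client k's log-likelihood term, computed from its own feature block
    only. Toy choice: an independent Gaussian term y ~ N(X_k @ theta_k, 1)."""
    resid = y - X_k @ theta_k
    return -0.5 * np.sum(resid ** 2)

def joint_log_lik(thetas, feature_blocks, y):
    # Assumed factorisation: log p(y | X, theta) = sum_k log p_k(y | X_k, theta_k),
    # so each term can be evaluated privately on its own client.
    return sum(log_lik_client(t, X, y) for t, X in zip(thetas, feature_blocks))

rng = np.random.default_rng(1)
y = rng.normal(size=8)
blocks = [rng.normal(size=(8, 3)), rng.normal(size=(8, 2))]  # two clients' feature blocks
thetas = [np.zeros(3), np.zeros(2)]
print(joint_log_lik(thetas, blocks, y))
```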
- Adaptive Test-Time Personalization for Federated Learning [51.25437606915392]
We introduce a novel setting called test-time personalized federated learning (TTPFL).
In TTPFL, clients locally adapt a global model in an unsupervised way, without relying on any labeled data at test time.
We propose a novel algorithm called ATP that adaptively learns the adaptation rates for each module in the model from distribution shifts among source domains (see the sketch after this entry).
arXiv Detail & Related papers (2023-10-28T20:42:47Z)
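The summary gives few details, so the following is a speculative sketch of the general idea only: each module of the model gets its own adaptation rate, and unsupervised test-time updates are scaled module-wise. The entropy objective, module granularity, and rate values are assumptions for illustration, not details of the ATP algorithm.

```python
import torch
import torch.nn as nn

# Tiny model whose Linear modules are adapted at test time.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
modules = [m for m in model if isinstance(m, nn.Linear)]
rates = torch.tensor([0.10, 0.05])  # per-module adaptation rates (assumed learned upstream)

def test_time_adapt(x):
    """One unsupervised adaptation step: minimise prediction entropy,
    scaling each module's update by its own adaptation rate."""
    logits = model(x)
    probs = logits.softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
    params = [p for m in modules for p in m.parameters()]
    grads = torch.autograd.grad(entropy, params)
    with torch.no_grad():
        i = 0
        for m, rate in zip(modules, rates):
            for p in m.parameters():
                p -= rate * grads[i]  # module-wise scaled gradient step
                i += 1
    return logits

out = test_time_adapt(torch.randn(8, 16))
```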
- Tackling Computational Heterogeneity in FL: A Few Theoretical Insights [68.8204255655161]
We introduce and analyse a novel aggregation framework that allows for formalizing and tackling computationally heterogeneous data.
The proposed aggregation algorithms are analyzed extensively from both a theoretical and an experimental perspective.
arXiv Detail & Related papers (2023-07-12T16:28:21Z)
- Improving Heterogeneous Model Reuse by Density Estimation [105.97036205113258]
This paper studies multiparty learning, aiming to learn a model using the private data of different participants.
Model reuse is a promising solution for multiparty learning, assuming that a local model has been trained for each party.
arXiv Detail & Related papers (2023-05-23T09:46:54Z)
- Federated Variational Inference Methods for Structured Latent Variable Models [1.0312968200748118]
Federated learning methods enable model training across distributed data sources without data leaving their original locations.
We present a general and elegant solution based on structured variational inference, widely used in Bayesian machine learning.
We also provide a communication-efficient variant analogous to the canonical FedAvg algorithm (sketched below).
arXiv Detail & Related papers (2023-02-07T08:35:04Z)
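For readers unfamiliar with the canonical FedAvg pattern the entry refers to, here is a minimal numpy sketch: clients run a few local gradient steps, and the server averages the resulting parameters weighted by local dataset size. The linear-regression objective and all names are illustrative assumptions; the paper's variant is described only as analogous to this pattern.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """Client side: a few epochs of gradient descent on mean squared error."""
    for _ in range(epochs):
        w = w - lr * 2.0 * X.T @ (X @ w - y) / len(y)
    return w

def fedavg_round(w_global, client_data):
    """Server side: broadcast, collect locally updated models, and take a
    data-size-weighted average (the canonical FedAvg aggregation)."""
    sizes = [len(y) for _, y in client_data]
    local_models = [local_update(w_global.copy(), X, y) for X, y in client_data]
    return np.average(local_models, axis=0, weights=sizes)

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for n in (40, 60):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print(w)  # approaches w_true
```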
- Achieving Transparency in Distributed Machine Learning with Explainable Data Collaboration [5.994347858883343]
A parallel trend has been to train machine learning models in collaboration with other data holders without accessing their data.
This paper presents an Explainable Data Collaboration Framework based on a model-agnostic additive feature attribution algorithm (illustrated after this entry).
arXiv Detail & Related papers (2022-12-06T23:53:41Z)
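As a minimal illustration of what "additive feature attribution" means, the sketch below decomposes a model's output into a baseline value plus one contribution per feature, so the contributions sum to f(x) - f(baseline). The sequential-substitution scheme is deliberately simple and order-dependent (exact methods such as Shapley values average over feature orderings); it is an assumed illustration, not the paper's algorithm.

```python
import numpy as np

def additive_attributions(f, x, baseline):
    """Attribute f(x) - f(baseline) additively by swapping in one feature
    at a time (order-dependent; exact methods average over orderings)."""
    z = baseline.copy()
    prev, phi = f(z), np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        z[i] = x[i]
        cur = f(z)
        phi[i] = cur - prev  # marginal contribution of feature i
        prev = cur
    return phi               # phi.sum() == f(x) - f(baseline)

f = lambda v: 3.0 * v[0] - 2.0 * v[1] + v[0] * v[1]
x, base = np.array([1.0, 2.0]), np.zeros(2)
phi = additive_attributions(f, x, base)
assert np.isclose(phi.sum(), f(x) - f(base))
print(phi)
```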
- VertiBayes: Learning Bayesian network parameters from vertically partitioned data with missing values [2.9707233220536313]
Federated learning makes it possible to train a machine learning model on decentralized data.
We propose a novel method called VertiBayes to train Bayesian networks on vertically partitioned data.
We experimentally show our approach produces models comparable to those learnt using traditional algorithms.
arXiv Detail & Related papers (2022-10-31T11:13:35Z)
- Non-IID data and Continual Learning processes in Federated Learning: A long road ahead [58.720142291102135]
Federated Learning is a novel framework that allows multiple devices or institutions to train a machine learning model collaboratively while keeping their data private.
In this work, we formally classify statistical data heterogeneity and review the most remarkable learning strategies able to face it.
At the same time, we introduce approaches from other machine learning frameworks, such as Continual Learning, that also deal with data heterogeneity and could be easily adapted to the Federated Learning setting.
arXiv Detail & Related papers (2021-11-26T09:57:11Z)
- Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism for building a distributed control and aggregation methodology across regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)