Distributed Bayesian Estimation in Sensor Networks: Consensus on Marginal Densities
- URL: http://arxiv.org/abs/2312.01227v2
- Date: Thu, 7 Dec 2023 17:55:02 GMT
- Title: Distributed Bayesian Estimation in Sensor Networks: Consensus on Marginal Densities
- Authors: Parth Paritosh, Nikolay Atanasov and Sonia Martinez
- Abstract summary: We derive a distributed provably-correct algorithm in the functional space of probability distributions over continuous variables.
We leverage these results to obtain new distributed estimators restricted to subsets of variables observed by individual agents.
This relates to applications such as cooperative localization and federated learning, where the data collected at any agent depends on a subset of all variables of interest.
- Score: 15.038649101409804
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we aim to design and analyze distributed Bayesian estimation
algorithms for sensor networks. The challenges we address are to (i) derive a
distributed provably-correct algorithm in the functional space of probability
distributions over continuous variables, and (ii) leverage these results to
obtain new distributed estimators restricted to subsets of variables observed
by individual agents. This relates to applications such as cooperative
localization and federated learning, where the data collected at any agent
depends on a subset of all variables of interest. We present Bayesian density
estimation algorithms using data from non-linear likelihoods at agents in
centralized, distributed, and marginal distributed settings. After setting up a
distributed estimation objective, we prove almost-sure convergence to the
optimal set of pdfs at each agent. Then, we prove the same for a storage-aware
algorithm estimating densities only over relevant variables at each agent.
Finally, we present a Gaussian version of these algorithms and implement it in
a mapping problem using variational inference to handle non-linear likelihood
models associated with LiDAR sensing.
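As a concrete illustration of the Gaussian setting, the minimal numpy sketch below is an assumption-laden toy, not the authors' algorithm: it alternates a consensus step (a weighted geometric mean of neighbor pdfs, which for Gaussians reduces to averaging in information form) with a local linear-Gaussian Bayesian update, on a complete graph with uniform weights. All names and parameters are illustrative.

```python
import numpy as np

def consensus_step(mus, covs, weights):
    """Weighted geometric mean of Gaussian pdfs (information form)."""
    lams = [w * np.linalg.inv(P) for w, P in zip(weights, covs)]
    etas = [L @ m for L, m in zip(lams, mus)]
    P_new = np.linalg.inv(sum(lams))
    return P_new @ sum(etas), P_new

def bayes_update(mu, P, H, R, z):
    """Standard linear-Gaussian measurement update."""
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return mu + K @ (z - H @ mu), (np.eye(len(mu)) - K @ H) @ P

rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0])                # unknown state
mus = [rng.normal(size=2) for _ in range(3)]  # agent priors
covs = [10.0 * np.eye(2) for _ in range(3)]
H, R = np.eye(2), 0.5 * np.eye(2)

for _ in range(20):
    fused = [consensus_step(mus, covs, [1 / 3] * 3) for _ in range(3)]
    mus, covs = [f[0] for f in fused], [f[1] for f in fused]
    for i in range(3):                        # private measurement updates
        z = H @ x_true + rng.multivariate_normal(np.zeros(2), R)
        mus[i], covs[i] = bayes_update(mus[i], covs[i], H, R, z)

print("agent estimates:", [np.round(m, 2) for m in mus])
```

In the marginal (storage-aware) variant described in the abstract, each agent would maintain such a density only over its own subset of variables.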
Related papers
- Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over the methods based on meta-analysis when heterogeneity increases.
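For context, the classical single-site inverse propensity score estimator that such collaborative methods build on looks like the hedged sketch below (synthetic data, known propensities; the paper's heterogeneous multi-site machinery is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)                  # covariate
e = 1 / (1 + np.exp(-x))                # true propensity P(T=1 | x)
t = rng.binomial(1, e)                  # treatment assignment
y = 2.0 * t + x + rng.normal(size=n)    # outcome; true effect is 2

# Re-weight each unit by the inverse probability of its observed treatment.
ate_hat = np.mean(t * y / e - (1 - t) * y / (1 - e))
print(f"IPW estimate of the average treatment effect: {ate_hat:.2f}")
```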
arXiv Detail & Related papers (2024-04-24T09:04:36Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
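The deterministic consensus-ADMM skeleton underlying such schemes, shown here for distributed least squares, is a hedged sketch only (the paper's sampling layer is omitted):

```python
import numpy as np

# minimize sum_i ||A_i x - b_i||^2 with local copies constrained to a global z.
rng = np.random.default_rng(2)
d, rho, n_agents = 3, 1.0, 4
x_true = rng.normal(size=d)
A = [rng.normal(size=(20, d)) for _ in range(n_agents)]
b = [Ai @ x_true + 0.1 * rng.normal(size=20) for Ai in A]

x = [np.zeros(d) for _ in range(n_agents)]
u = [np.zeros(d) for _ in range(n_agents)]
z = np.zeros(d)
for _ in range(50):
    # Local x-updates: closed form for quadratic local objectives.
    x = [np.linalg.solve(Ai.T @ Ai + rho * np.eye(d),
                         Ai.T @ bi + rho * (z - ui))
         for Ai, bi, ui in zip(A, b, u)]
    z = np.mean([xi + ui for xi, ui in zip(x, u)], axis=0)  # averaging step
    u = [ui + xi - z for ui, xi in zip(u, x)]               # dual updates
print("consensus estimate:", np.round(z, 3), "truth:", np.round(x_true, 3))
```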
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Distributed Variational Inference for Online Supervised Learning [15.038649101409804]
This paper develops a scalable distributed probabilistic inference algorithm.
It applies to continuous variables, intractable posteriors and large-scale real-time data in sensor networks.
arXiv Detail & Related papers (2023-09-05T22:33:02Z)
- Distributional Gaussian Processes Layers for Out-of-Distribution Detection [18.05109901753853]
It is unclear whether out-of-distribution detection models reliant on deep neural networks are suitable for detecting domain shifts in medical imaging.
We propose a parameter efficient Bayesian layer for hierarchical convolutional Gaussian Processes that incorporates Gaussian Processes operating in Wasserstein-2 space.
Our uncertainty estimates result in out-of-distribution detection that outperforms the capabilities of previous Bayesian networks.
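The Wasserstein-2 geometry the layer operates in has a closed form between Gaussians; the snippet below computes that base metric only and assumes nothing about the paper's architecture:

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian(m1, C1, m2, C2):
    """Closed-form 2-Wasserstein distance between two Gaussians."""
    s2 = sqrtm(C2)
    cross = np.real(sqrtm(s2 @ C1 @ s2))
    return np.sqrt(np.sum((m1 - m2) ** 2) + np.trace(C1 + C2 - 2 * cross))

print(f"W2: {w2_gaussian(np.zeros(2), np.eye(2), np.ones(2), 2 * np.eye(2)):.3f}")
```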
arXiv Detail & Related papers (2022-06-27T14:49:48Z)
- An alternative approach for distributed parameter estimation under Gaussian settings [6.624726878647541]
This paper takes a different approach for the distributed linear parameter estimation over a multi-agent network.
The sensor measurements at each agent are linear and corrupted with additive white Gaussian noise.
Under such settings, this paper presents a novel distributed estimation algorithm that fuses the concepts of consensus and innovations.
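An update of this general flavor (a hedged toy in the consensus-plus-innovations style, not the paper's exact algorithm) mixes a consensus term over neighbor estimates with an innovation term from the local measurement:

```python
import numpy as np

rng = np.random.default_rng(3)
theta = np.array([1.0, -0.5])                      # unknown parameter
H = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]]),
     np.array([[1.0, 1.0]])]                       # local observation maps
est = [np.zeros(2) for _ in range(3)]
neighbors = {0: [1], 1: [0, 2], 2: [1]}            # a line graph

for t in range(1, 500):
    alpha, beta = 1.0 / t, 0.3                     # decaying innovation gain
    z = [Hi @ theta + 0.1 * rng.normal(size=1) for Hi in H]
    est = [est[i]
           - beta * sum(est[i] - est[j] for j in neighbors[i])  # consensus
           + alpha * H[i].T @ (z[i] - H[i] @ est[i])            # innovation
           for i in range(3)]
print("agent estimates:", [np.round(e, 2) for e in est])
```

No single agent observes the full parameter here; the consensus term is what lets each local estimate approach the network-wide solution.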
arXiv Detail & Related papers (2022-04-14T03:49:31Z)
- Robust Estimation for Nonparametric Families via Generative Adversarial Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high dimensional robust statistics problems.
Our work extends these to robust mean estimation, second-moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
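For reference, the classical (unsmoothed) two-sample Kolmogorov-Smirnov statistic that these losses generalize can be computed directly; this sketch is illustrative only:

```python
import numpy as np

def ks_distance(x, y):
    """Sup-norm gap between the two empirical CDFs."""
    grid = np.sort(np.concatenate([x, y]))
    Fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    Fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.max(np.abs(Fx - Fy))

rng = np.random.default_rng(4)
clean = rng.normal(size=1000)
corrupted = np.concatenate([rng.normal(size=900), rng.normal(5, 1, size=100)])
print(f"KS(clean, clean'):    {ks_distance(clean, rng.normal(size=1000)):.3f}")
print(f"KS(clean, corrupted): {ks_distance(clean, corrupted):.3f}")
```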
arXiv Detail & Related papers (2022-02-02T20:11:33Z)
- Meta Learning Low Rank Covariance Factors for Energy-Based Deterministic Uncertainty [58.144520501201995]
Bi-Lipschitz regularization of neural network layers preserves relative distances between data instances in the feature spaces of each layer.
With the use of an attentive set encoder, we propose to meta learn either diagonal or diagonal plus low-rank factors to efficiently construct task specific covariance matrices.
We also propose an inference procedure which utilizes scaled energy to achieve a final predictive distribution.
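The parameterization itself is simple; the sketch below shows only the diagonal-plus-low-rank structure (the meta-learned set encoder is out of scope):

```python
import numpy as np

# Sigma = diag(d) + V V^T: storing d (n) and V (n x k) instead of a dense
# n x n matrix keeps memory and matrix-vector cost at O(nk).
rng = np.random.default_rng(5)
n, k = 6, 2
d = np.abs(rng.normal(size=n)) + 0.1   # positive diagonal
V = rng.normal(size=(n, k))            # low-rank factor
Sigma = np.diag(d) + V @ V.T
print("positive definite:", bool(np.all(np.linalg.eigvalsh(Sigma) > 0)))
```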
arXiv Detail & Related papers (2021-10-12T22:04:19Z)
- Unsupervised tree boosting for learning probability distributions [2.8444868155827634]
The paper proposes an unsupervised tree boosting algorithm based on fitting additive tree ensembles.
Integral to the algorithm is a new notion of "residualization", i.e., subtracting a probability distribution from an observation to remove the distributional structure from the sampling distribution of the latter.
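One plausible reading of this residualization (an assumption, not the paper's construction) is the probability integral transform: if x ~ F, then F(x) is uniform, so applying a fitted CDF removes that distribution's structure from the sample:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
x = rng.normal(loc=3.0, scale=2.0, size=2000)
u = stats.norm(loc=3.0, scale=2.0).cdf(x)   # residualize against the truth
# A KS test against the uniform finds nothing left to explain.
print(stats.kstest(u, "uniform"))
```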
arXiv Detail & Related papers (2021-01-26T21:03:27Z)
- Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
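In the simplest mean-field Gaussian case, fusing posteriors reduces to a product of Gaussians in information form; this hedged sketch shows only that base rule, not the paper's full procedure:

```python
import numpy as np

def fuse_gaussians(mus, covs):
    """Product-of-Gaussians fusion: precisions and information vectors add."""
    lams = [np.linalg.inv(P) for P in covs]
    P = np.linalg.inv(sum(lams))
    return P @ sum(L @ m for L, m in zip(lams, mus)), P

mu_f, P_f = fuse_gaussians([np.array([1.0, 0.0]), np.array([0.0, 1.0])],
                           [np.eye(2), 2 * np.eye(2)])
print("fused mean:", np.round(mu_f, 3))
```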
arXiv Detail & Related papers (2020-07-13T03:27:45Z)
- A Distributional Analysis of Sampling-Based Reinforcement Learning Algorithms [67.67377846416106]
We present a distributional approach to theoretical analyses of reinforcement learning algorithms for constant step-sizes.
We show that value-based methods such as TD($\lambda$) and $Q$-Learning have update rules which are contractive in the space of distributions of functions.
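As a minimal instance of such a value-based update (plain tabular TD(0) with a constant step size; eligibility traces are omitted), consider a 2-state Markov reward process:

```python
import numpy as np

rng = np.random.default_rng(7)
P = np.array([[0.9, 0.1], [0.2, 0.8]])   # transition matrix
r = np.array([1.0, -1.0])                # reward per state
gamma, alpha = 0.9, 0.05
V, s = np.zeros(2), 0
for _ in range(20000):
    s_next = rng.choice(2, p=P[s])
    V[s] += alpha * (r[s] + gamma * V[s_next] - V[s])  # TD(0) update
    s = s_next
V_exact = np.linalg.solve(np.eye(2) - gamma * P, r)    # V = r + gamma P V
print("TD estimate:", np.round(V, 2), "exact:", np.round(V_exact, 2))
```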
arXiv Detail & Related papers (2020-03-27T05:13:29Z)