An alternative approach for distributed parameter estimation under
Gaussian settings
- URL: http://arxiv.org/abs/2204.08317v1
- Date: Thu, 14 Apr 2022 03:49:31 GMT
- Title: An alternative approach for distributed parameter estimation under
Gaussian settings
- Authors: Subhro Das
- Abstract summary: This paper takes a different approach to distributed linear parameter estimation over a multi-agent network.
The sensor measurements at each agent are linear and corrupted with additive white Gaussian noise.
Under such settings, this paper presents a novel distributed estimation algorithm that fuses the concepts of consensus and innovations.
- Score: 6.624726878647541
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper takes a different approach to distributed linear parameter
estimation over a multi-agent network. The parameter vector is considered to be
stochastic with a Gaussian distribution. The sensor measurements at each agent
are linear and corrupted with additive white Gaussian noise. Under such
settings, this paper presents a novel distributed estimation algorithm that
fuses the concepts of consensus and innovations by incorporating the
consensus terms (of neighboring estimates) into the innovation terms. Under the
assumption of distributed parameter observability, introduced in this paper, we
design the optimal gain matrices such that the distributed estimates are
consistent and achieve fast convergence.
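As a rough illustration of how consensus and innovation terms combine in such an update, the sketch below runs a constant-gain variant on a small network. The path-graph topology, the weights alpha/beta, and the noise levels are illustrative assumptions; this is not the paper's optimal gain design under distributed parameter observability.

```python
import numpy as np

# Minimal sketch of a consensus+innovations update for distributed
# linear parameter estimation. All constants below (topology, gains,
# noise levels) are illustrative assumptions, not the paper's design.

rng = np.random.default_rng(0)

n_agents, dim = 4, 2
theta = rng.normal(size=dim)                               # unknown Gaussian parameter vector
H = [rng.normal(size=(1, dim)) for _ in range(n_agents)]   # local linear sensing maps
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}         # assumed path graph

x = [np.zeros(dim) for _ in range(n_agents)]               # local estimates, one per agent
alpha, beta = 0.1, 0.2                                     # innovation / consensus weights (assumed)

for k in range(1000):
    # each agent takes a noisy local measurement y_i = H_i @ theta + v_i
    y = [Hi @ theta + 0.1 * rng.normal(size=1) for Hi in H]
    x_next = []
    for i in range(n_agents):
        innovation = H[i].T @ (y[i] - H[i] @ x[i])             # local innovation term
        consensus = sum(x[j] - x[i] for j in neighbors[i])     # neighbor disagreement
        x_next.append(x[i] + alpha * innovation + beta * consensus)
    x = x_next

print("true theta:     ", np.round(theta, 3))
print("agent estimates:")
print(np.round(np.array(x), 3))
```

With the paper's optimal, time-varying gain matrices and the distributed parameter observability assumption, the local estimates would in addition be consistent; the constant weights here only keep the example short.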
Related papers
- Variance-Reducing Couplings for Random Features [57.73648780299374]
Random features (RFs) are a popular technique to scale up kernel methods in machine learning.
We find couplings to improve RFs defined on both Euclidean and discrete input spaces.
We reach surprising conclusions about the benefits and limitations of variance reduction as a paradigm.
arXiv Detail & Related papers (2024-05-26T12:25:09Z) - Distributed Markov Chain Monte Carlo Sampling based on the Alternating
Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z) - Distributed Bayesian Estimation in Sensor Networks: Consensus on
Marginal Densities [15.038649101409804]
We derive a distributed provably-correct algorithm in the functional space of probability distributions over continuous variables.
We leverage these results to obtain new distributed estimators restricted to subsets of variables observed by individual agents.
This relates to applications such as cooperative localization and federated learning, where the data collected at any agent depends on a subset of all variables of interest.
arXiv Detail & Related papers (2023-12-02T21:10:06Z) - Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediate distributions and optimize the bridging distributions so that sampling needs fewer steps (a minimal sketch of plain AIS appears after this list).
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
arXiv Detail & Related papers (2022-09-27T07:58:25Z) - Wrapped Distributions on homogeneous Riemannian manifolds [58.720142291102135]
Control over distributions' properties, such as parameters, symmetry, and modality, yields a family of flexible distributions.
We empirically validate our approach by utilizing our proposed distributions within a variational autoencoder and a latent space network model.
arXiv Detail & Related papers (2022-04-20T21:25:21Z) - Learning Structured Gaussians to Approximate Deep Ensembles [10.055143995729415]
This paper proposes using a sparse-structured multivariate Gaussian to provide a closed-form approximator for dense image prediction tasks.
We capture the uncertainty and structured correlations in the predictions explicitly in a formal distribution, rather than implicitly through sampling alone.
We demonstrate the merits of our approach on monocular depth estimation and show that the advantages of our approach are obtained with comparable quantitative performance.
arXiv Detail & Related papers (2022-03-29T12:34:43Z) - On observability and optimal gain design for distributed linear
filtering and prediction [6.624726878647541]
This paper presents a new approach to distributed linear filtering and prediction.
Inspired by the consensus+innovations type of distributed estimation approaches, this paper proposes a novel algorithm that fuses the concepts of consensus and innovations.
arXiv Detail & Related papers (2022-03-07T17:11:42Z) - A Stochastic Newton Algorithm for Distributed Convex Optimization [62.20732134991661]
We analyze a Newton algorithm for homogeneous distributed convex optimization, where each machine can calculate gradients of the same population objective.
We show that our method can reduce the number and frequency of required communication rounds compared to existing methods without hurting performance.
arXiv Detail & Related papers (2021-10-07T17:51:10Z) - Sampling-free Variational Inference for Neural Networks with
Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z) - Distributed Variational Bayesian Algorithms Over Sensor Networks [6.572330981878818]
We propose two novel distributed VB algorithms for general Bayesian inference problems.
The proposed algorithms perform almost as well as the corresponding centralized VB algorithm, which relies on all data being available at a fusion center.
arXiv Detail & Related papers (2020-11-27T08:12:18Z) - The k-tied Normal Distribution: A Compact Parameterization of Gaussian
Mean Field Posteriors in Bayesian Neural Networks [46.677567663908185]
Variational Bayesian Inference is a popular methodology for approximating posteriors over Bayesian neural network weights.
Recent work has explored ever richer parameterizations of the approximate posterior in the hope of improving performance.
We find that by decomposing these variational parameters into a low-rank factorization, we can make our variational approximation more compact without decreasing the models' performance.
arXiv Detail & Related papers (2020-02-07T07:33:15Z)
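To make the AIS entry above concrete, here is a minimal sketch of plain annealed importance sampling on a one-dimensional toy problem whose normalizing constant is known in closed form. The geometric bridging schedule, the Metropolis kernel, and all constants are illustrative assumptions and do not reflect that paper's optimized parametric bridges.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_prior(x):      # standard normal, normalized: log N(x; 0, 1)
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def log_target(x):     # unnormalized N(2, 0.25); true Z = sqrt(2*pi) * 0.5 ~ 1.2533
    return -0.5 * ((x - 2.0) / 0.5) ** 2

def ais_log_Z(n_chains=2000, n_steps=100):
    """Estimate log(Z_target / Z_prior) with geometric bridges f_b = prior^(1-b) * target^b."""
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    x = rng.normal(size=n_chains)          # exact samples from the prior
    log_w = np.zeros(n_chains)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # importance-weight increment between consecutive bridges
        log_w += (b - b_prev) * (log_target(x) - log_prior(x))
        # one Metropolis step targeting the current bridge f_b
        def log_f(z):
            return (1 - b) * log_prior(z) + b * log_target(z)
        prop = x + 0.5 * rng.normal(size=n_chains)
        accept = np.log(rng.uniform(size=n_chains)) < log_f(prop) - log_f(x)
        x = np.where(accept, prop, x)
    # log-mean-exp of the weights estimates log(Z_target), since Z_prior = 1
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))

print("AIS estimate:", np.exp(ais_log_Z()))   # ~1.25, matching sqrt(2*pi)*0.5
```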
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.