Online Multi-Agent Decentralized Byzantine-robust Gradient Estimation
- URL: http://arxiv.org/abs/2209.15274v1
- Date: Fri, 30 Sep 2022 07:29:49 GMT
- Title: Online Multi-Agent Decentralized Byzantine-robust Gradient Estimation
- Authors: Alexandre Reiffers-Masson (IMT Atlantique - INFO, Lab-STICC_MATHNET),
Isabel Amigo (IMT Atlantique - INFO, Lab-STICC_MATHNET)
- Abstract summary: Our algorithm is based on simultaneous perturbation, secure state estimation and two-timescale stochastic approximations.
We also show the performance of our algorithm through numerical experiments.
- Score: 62.997667081978825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose an iterative scheme for distributed
Byzantine-resilient estimation of a gradient associated with a black-box model.
Our algorithm is based on simultaneous perturbation, secure state estimation
and two-timescale stochastic approximations. We also show the performance of
our algorithm through numerical experiments.
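For intuition only, here is a minimal NumPy sketch of the simultaneous-perturbation idea the abstract builds on (textbook SPSA, not the paper's Byzantine-robust, multi-agent variant; the function name and the quadratic test problem are our own):

```python
import numpy as np

def spsa_gradient(f, x, c=0.1, rng=None):
    """One simultaneous-perturbation estimate of the gradient of a
    black-box function f at x (classic SPSA, not the paper's
    Byzantine-robust scheme)."""
    rng = rng or np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=x.shape)  # Rademacher perturbation
    # Two function evaluations suffice, regardless of the dimension of x.
    return (f(x + c * delta) - f(x - c * delta)) / (2.0 * c) * (1.0 / delta)

# Estimate the gradient of x -> ||x||^2 at (1, 2); the truth is (2, 4).
print(spsa_gradient(lambda x: float(np.sum(x**2)), np.array([1.0, 2.0])))
```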
Related papers
- Differentiating Metropolis-Hastings to Optimize Intractable Densities [51.16801956665228]
We develop an algorithm for automatic differentiation of Metropolis-Hastings samplers.
We apply gradient-based optimization to objectives expressed as expectations over intractable target densities.
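To make the object being differentiated concrete, a minimal random-walk Metropolis-Hastings sampler is sketched below (the differentiation through the accept/reject step, which is the paper's contribution, is not shown; names and the Gaussian target are our own):

```python
import numpy as np

def metropolis_hastings(log_density, x0, n_steps=10_000, step=0.5, rng=None):
    """Random-walk Metropolis-Hastings targeting exp(log_density)."""
    rng = rng or np.random.default_rng()
    x, samples = np.asarray(x0, dtype=float), []
    for _ in range(n_steps):
        proposal = x + step * rng.standard_normal(x.shape)
        # Accept with probability min(1, pi(proposal) / pi(x)).
        if np.log(rng.random()) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x.copy())
    return np.array(samples)

# Target: standard normal in 2D; the sample mean should be near zero.
draws = metropolis_hastings(lambda x: -0.5 * float(x @ x), np.zeros(2))
print(draws.mean(axis=0))
```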
arXiv Detail & Related papers (2023-06-13T17:56:02Z)
- Distributed Bayesian Learning of Dynamic States [65.7870637855531]
The proposed algorithm performs distributed Bayesian filtering for finite-state hidden Markov models.
It can be used for sequential state estimation, as well as for modeling opinion formation over social networks under dynamic environments.
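As a rough single-agent illustration of sequential state estimation in a finite-state HMM (the paper's distributed, multi-agent aspects are not reproduced; the two-state example is our own):

```python
import numpy as np

def hmm_filter(T, E, pi0, obs):
    """Sequential Bayesian filtering for a finite-state HMM.
    T[i, j] = P(next state j | state i); E[i, y] = P(observation y | state i).
    Returns the filtering distribution after each observation."""
    belief = np.asarray(pi0, dtype=float)
    history = []
    for y in obs:
        belief = (belief @ T) * E[:, y]   # predict, then correct
        belief /= belief.sum()            # renormalize
        history.append(belief.copy())
    return np.array(history)

# Two-state chain with noisy binary observations.
T = np.array([[0.9, 0.1], [0.2, 0.8]])
E = np.array([[0.8, 0.2], [0.3, 0.7]])
print(hmm_filter(T, E, [0.5, 0.5], obs=[0, 1, 1, 0]))
```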
arXiv Detail & Related papers (2022-12-05T19:40:17Z)
- Score-based Continuous-time Discrete Diffusion Models [102.65769839899315]
We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain.
We show that an unbiased estimator can be obtained by simply matching the conditional marginal distributions.
We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
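For background only, a minimal Gillespie-style simulation of a continuous-time Markov chain, the process class underlying the paper's reverse-time denoising chain (the generator matrix and states here are toy choices of ours):

```python
import numpy as np

def simulate_ctmc(Q, x0, t_end, rng=None):
    """Simulate a CTMC with generator matrix Q (rows sum to zero)
    via Gillespie's algorithm; returns the jump times and states."""
    rng = rng or np.random.default_rng()
    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        rate = -Q[x, x]                    # total rate of leaving state x
        if rate <= 0:
            break                          # absorbing state
        t += rng.exponential(1.0 / rate)   # exponential holding time
        if t >= t_end:
            break
        probs = Q[x].clip(min=0) / rate    # distribution over next states
        x = int(rng.choice(len(probs), p=probs))
        path.append((t, x))
    return path

Q = np.array([[-1.0, 1.0], [0.5, -0.5]])
print(simulate_ctmc(Q, 0, t_end=10.0))
```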
arXiv Detail & Related papers (2022-11-30T05:33:29Z)
- Fast Estimation of Bayesian State Space Models Using Amortized Simulation-Based Inference [0.0]
This paper presents a fast algorithm for estimating hidden states of Bayesian state space models.
After pretraining, finding the posterior distribution for any dataset takes from hundredths to tenths of a second.
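A toy sketch of the amortization idea (ours, not the paper's architecture): fit an inverse map from data to hidden states once on simulations, then answer new queries with a cheap function evaluation. Least squares stands in for the paper's neural density estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate (hidden state, observation) pairs from a toy model:
# x ~ N(0, 1), y = x + noise.
x = rng.standard_normal(5_000)
y = x + 0.5 * rng.standard_normal(5_000)

# "Pretraining": fit the inverse map y -> E[x | y] once, offline.
w = np.polyfit(y, x, deg=1)

# At test time, inference for any new dataset is nearly instantaneous.
def posterior_mean(y_new):
    return np.polyval(w, y_new)

print(posterior_mean(np.array([0.3, -1.2])))
```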
arXiv Detail & Related papers (2022-10-13T16:37:05Z)
- Exponential Concentration in Stochastic Approximation [0.8192907805418583]
We analyze the behavior of stochastic approximation algorithms whose iterates, in expectation, make progress towards an objective at each step.
We apply our results to several different stochastic approximation algorithms, specifically Projected Gradient Descent, Kiefer-Wolfowitz and Frank-Wolfe algorithms.
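Of the algorithms named above, Kiefer-Wolfowitz is the quickest to sketch: stochastic approximation with finite-difference gradient estimates and decaying gains (the gain sequences and the noisy quadratic below are illustrative choices of ours):

```python
import numpy as np

def kiefer_wolfowitz(f_noisy, x0, n_iter=2_000):
    """Minimize E[f_noisy] using finite-difference gradient estimates
    with decaying step size a_n and perturbation size c_n."""
    x = np.asarray(x0, dtype=float)
    for n in range(1, n_iter + 1):
        a, c = 1.0 / n, 1.0 / n ** (1 / 3)   # standard gain sequences
        g = np.zeros_like(x)
        for i in range(len(x)):              # one coordinate at a time
            e = np.zeros_like(x)
            e[i] = c
            g[i] = (f_noisy(x + e) - f_noisy(x - e)) / (2.0 * c)
        x -= a * g                           # gradient-like step
    return x

# Noisy quadratic: iterates should concentrate near the minimizer (0, 0).
rng = np.random.default_rng(1)
f = lambda z: float(np.sum(z ** 2)) + 0.1 * rng.standard_normal()
print(kiefer_wolfowitz(f, [2.0, -1.0]))
```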
arXiv Detail & Related papers (2022-08-15T14:57:26Z)
- Two-Timescale Stochastic Approximation for Bilevel Optimisation Problems in Continuous-Time Models [0.0]
We analyse the properties of a continuous-time, two-timescale stochastic approximation algorithm designed for bilevel optimisation problems in continuous-time models.
We obtain the weak convergence rate of this algorithm in the form of a central limit theorem.
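A discrete-time caricature of the two-timescale mechanism (the paper works in continuous time; the toy coupled problem below is ours): the fast iterate tracks a moving fixed point of the slow iterate.

```python
import numpy as np

def two_timescale(n_iter=5_000, rng=None):
    """Minimize f(x) = x^2 while a fast variable y tracks y*(x) = x.
    The fast gain decays more slowly than the slow one."""
    rng = rng or np.random.default_rng()
    x, y = 2.0, 0.0
    for n in range(1, n_iter + 1):
        a, b = 1.0 / n, 1.0 / n ** 0.6        # slow and fast gains, a << b
        xi, eta = 0.01 * rng.standard_normal(2)
        y += b * (x - y + xi)                 # fast: chase y*(x) = x
        x -= a * (2.0 * y + eta)              # slow: uses y in place of x
    return x, y

print(two_timescale())  # both iterates end up near zero
```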
arXiv Detail & Related papers (2022-06-14T17:12:28Z)
- Instantaneous Frequency Estimation In Multi-Component Signals Using Stochastic EM Algorithm [12.887899139468177]
This paper addresses the problem of estimating the modes of an observed non-stationary mixture signal in the presence of arbitrarily distributed noise.
A novel Bayesian model is introduced to estimate the model parameters from the spectrogram of the observed signal, resorting to a stochastic version of the EM algorithm to avoid computationally expensive sampling of the parameters from the posterior distribution.
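A minimal stochastic-EM loop on a two-component Gaussian mixture gives the flavor (this stands in for the paper's spectrogram model, which we do not reproduce): the E-step samples labels from their posterior instead of computing exact expectations.

```python
import numpy as np

def stochastic_em(data, n_iter=200, rng=None):
    """Stochastic EM for a two-component unit-variance Gaussian mixture:
    sample labels in the E-step, then re-estimate the means."""
    rng = rng or np.random.default_rng()
    mu = np.array([data.min(), data.max()])    # crude initialization
    for _ in range(n_iter):
        # S-step: sample each point's component from its posterior.
        logp = -0.5 * (data[:, None] - mu[None, :]) ** 2
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        z = (rng.random(len(data)) < p[:, 1]).astype(int)
        # M-step: update each mean from its sampled assignment.
        for k in (0, 1):
            if np.any(z == k):
                mu[k] = data[z == k].mean()
    return mu

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])
print(stochastic_em(data))  # roughly [-2, 3]
```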
arXiv Detail & Related papers (2022-03-28T17:06:11Z)
- Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
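A scalar caricature of the warm-start idea (our toy problem, not the paper's algorithm): each outer step resumes the inner solver from its previous solution rather than restarting it.

```python
def bilevel_warm_start(n_outer=500, k_inner=5):
    """Outer problem: min_x f(x, y*(x)) with f(x, y) = x^2 + (y - x)^2,
    whose inner solution is y*(x) = x. The inner variable y is warm-started."""
    x, y = 2.0, 0.0
    for t in range(1, n_outer + 1):
        for _ in range(k_inner):          # a few inner steps from the warm start
            y -= 0.2 * 2.0 * (y - x)      # inner gradient: d/dy (y - x)^2
        # Approximate hypergradient, using the inexact inner solution y ~ x.
        x -= (1.0 / t) * (2.0 * x - 2.0 * (y - x))
    return x, y

print(bilevel_warm_start())  # both near zero
```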
arXiv Detail & Related papers (2021-11-29T15:10:09Z)
- Path Sample-Analytic Gradient Estimators for Stochastic Binary Networks [78.76880041670904]
In neural networks with binary activations and/or binary weights, training by gradient descent is complicated.
We propose a new method for this estimation problem combining sampling and analytic approximation steps.
We experimentally show higher accuracy in gradient estimation and demonstrate more stable and better-performing training of deep convolutional models.
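For contrast, the common straight-through baseline for binary activations is easy to state (the paper's sample-analytic estimator is more accurate and is not reproduced here; shapes and names are ours):

```python
import numpy as np

def binary_forward_backward(w, x, grad_out):
    """Forward pass through a sign(.) activation and a straight-through
    backward pass: pretend d sign(pre)/d pre = 1 on [-1, 1]."""
    pre = x @ w
    act = np.where(pre >= 0, 1.0, -1.0)         # non-differentiable sign(.)
    grad_pre = grad_out * (np.abs(pre) <= 1.0)  # straight-through gate
    grad_w = x.T @ grad_pre                     # chain rule to the weights
    return act, grad_w

rng = np.random.default_rng(0)
act, grad_w = binary_forward_backward(rng.standard_normal((3, 2)),
                                      rng.standard_normal((4, 3)),
                                      np.ones((4, 2)))
print(act.shape, grad_w.shape)  # (4, 2) (3, 2)
```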
arXiv Detail & Related papers (2020-06-04T21:51:21Z)
- Average-case Acceleration Through Spectral Density Estimation [35.01931431231649]
We develop a framework for the average-case analysis of random quadratic problems.
We derive algorithms that are optimal under this analysis.
We develop explicit algorithms for the uniform, Marchenko-Pastur, and exponential distributions.
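A rough illustration of why the spectral density helps (worst-case step tuning from the Marchenko-Pastur edges stands in for the paper's average-case-optimal methods; the random quadratic instance is ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2_000, 500
A = rng.standard_normal((n, d)) / np.sqrt(n)
H = A.T @ A                                    # Wishart Hessian

# Marchenko-Pastur predicts the support edges of the spectrum from d/n alone.
r = d / n
lam_min, lam_max = (1 - np.sqrt(r)) ** 2, (1 + np.sqrt(r)) ** 2

# Gradient descent on 0.5 * x' H x with the step tuned to those edges.
x = rng.standard_normal(d)
step = 2.0 / (lam_min + lam_max)
for _ in range(200):
    x -= step * (H @ x)
print(float(x @ H @ x))  # near zero after 200 steps
```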
arXiv Detail & Related papers (2020-02-12T01:44:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.