Adaptive Noise Covariance Estimation under Colored Noise using Dynamic
Expectation Maximization
- URL: http://arxiv.org/abs/2308.07797v1
- Date: Tue, 15 Aug 2023 14:21:53 GMT
- Title: Adaptive Noise Covariance Estimation under Colored Noise using Dynamic
Expectation Maximization
- Authors: Ajith Anil Meera and Pablo Lanillos
- Abstract summary: We introduce a novel brain-inspired algorithm that accurately estimates the NCM for dynamic systems subjected to colored noise.
We mathematically prove that our NCM estimator converges to the global optimum of this free energy objective.
Notably, we show that our method outperforms the best baseline (Variational Bayes) in joint noise and state estimation under highly colored noise.
- Score: 1.550120821358415
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The accurate estimation of the noise covariance matrix (NCM) in a dynamic
system is critical for state estimation and control, as it has a major
influence on their optimality. Although a large number of NCM estimation
methods have been developed, most of them assume the noises to be white.
However, in many real-world applications, the noises are colored (i.e., they
exhibit temporal autocorrelations), resulting in suboptimal solutions. Here, we
introduce a novel brain-inspired algorithm that accurately and adaptively
estimates the NCM for dynamic systems subjected to colored noise. Particularly,
we extend the Dynamic Expectation Maximization algorithm to perform both online
noise covariance and state estimation by optimizing the free energy objective.
We mathematically prove that our NCM estimator converges to the global optimum
of this free energy objective. Using randomized numerical simulations, we show
that our estimator outperforms nine baseline methods with minimal noise
covariance estimation error under colored noise conditions. Notably, we show
that our method outperforms the best baseline (Variational Bayes) in joint
noise and state estimation under highly colored noise. We foresee that the accuracy
and the adaptive nature of our estimator make it suitable for online estimation
in real-world applications.
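The failure mode the abstract describes can be made concrete with a short sketch. The following is a minimal illustration, not the paper's DEM algorithm: a standard EM-style estimator of the measurement-noise variance that assumes white noise (Kalman filter E-step, residual-variance M-step) is run on a scalar linear-Gaussian system whose measurement noise is actually colored (AR(1)). All model parameters (`a`, `q`, `r_true`, `phi`) are assumptions chosen for the demo.

```python
import numpy as np

# Sketch only (not the paper's DEM algorithm): white-noise EM covariance
# estimation applied to a system with colored measurement noise.
#   x_t = a x_{t-1} + w_t,   w_t ~ N(0, q)          (process)
#   y_t = x_t + v_t,         v_t = phi v_{t-1} + e_t (AR(1) measurement noise)

rng = np.random.default_rng(0)
T = 5000
a, q, r_true, phi = 0.9, 0.05, 0.5, 0.6  # phi controls how colored the noise is

# Simulate the state and the AR(1) measurement noise.
x = np.zeros(T)
v = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(0.0, np.sqrt(q))
    v[t] = phi * v[t - 1] + rng.normal(0.0, np.sqrt(r_true))
y = x + v

def em_measurement_variance(y, a, q, r0=1.0, n_iter=30):
    """EM-style estimate of the measurement-noise variance r.

    E-step: Kalman filter under the current r (white-noise assumption).
    M-step: r <- mean of E[(y_t - x_t)^2] under the filtering posterior
    (a simplification of full EM, which would use the smoother).
    """
    r = r0
    for _ in range(n_iter):
        m, p = 0.0, 1.0
        means = np.empty(len(y))
        variances = np.empty(len(y))
        for t, yt in enumerate(y):
            m_pred = a * m
            p_pred = a * a * p + q
            k = p_pred / (p_pred + r)      # Kalman gain
            m = m_pred + k * (yt - m_pred)
            p = (1.0 - k) * p_pred
            means[t], variances[t] = m, p
        r = float(np.mean((y - means) ** 2 + variances))
    return r

r_hat = em_measurement_variance(y, a, q)
# The AR(1) noise has stationary variance r_true / (1 - phi^2) ~ 0.78, so an
# estimator that assumes white noise is typically pulled away from r_true = 0.5.
print(f"white-noise EM estimate of r: {r_hat:.3f}")
```

Because the white-noise assumption cannot represent the temporal autocorrelation of `v_t`, the recovered variance absorbs it instead; handling that autocorrelation explicitly is the gap the paper's colored-noise estimator targets.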
Related papers
- An Adaptive Re-evaluation Method for Evolution Strategy under Additive Noise [3.92625489118339]
We propose a novel method to adaptively choose the optimal re-evaluation number for function values corrupted by additive Gaussian white noise.
We experimentally compare our method to the state-of-the-art noise-handling methods for CMA-ES on a set of artificial test functions.
arXiv Detail & Related papers (2024-09-25T09:10:21Z) - Bayesian Inference of General Noise Model Parameters from Surface Code's Syndrome Statistics [0.0]
We propose general noise model Bayesian inference methods that integrate the surface code's tensor network simulator.
For stationary noise, where the noise parameters are constant and do not change, we propose a method based on the Markov chain Monte Carlo.
For time-varying noise, which is a more realistic situation, we introduce another method based on the sequential Monte Carlo.
arXiv Detail & Related papers (2024-06-13T10:26:04Z) - Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation(NCE) has been proposed by formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
arXiv Detail & Related papers (2023-06-13T01:18:16Z) - Optimizing the Noise in Self-Supervised Learning: from Importance
Sampling to Noise-Contrastive Estimation [80.07065346699005]
It is widely assumed that the optimal noise distribution should be made equal to the data distribution, as in Generative Adversarial Networks (GANs).
We turn to Noise-Contrastive Estimation which grounds this self-supervised task as an estimation problem of an energy-based model of the data.
We soberly conclude that the optimal noise may be hard to sample from, and the gain in efficiency can be modest compared to choosing the noise distribution equal to the data's.
arXiv Detail & Related papers (2023-01-23T19:57:58Z) - Tradeoffs between convergence rate and noise amplification for momentum-based accelerated optimization algorithms [8.669461942767098]
We study momentum-based first-order optimization algorithms in which the iterations are subject to an additive white noise.
For strongly convex quadratic problems, we use the steady-state variance of the error in the optimization variable to quantify noise amplification.
We introduce two parameterized families of algorithms that strike a balance between noise amplification and settling time.
arXiv Detail & Related papers (2022-09-24T04:26:30Z) - Greedy versus Map-based Optimized Adaptive Algorithms for
random-telegraph-noise mitigation by spectator qubits [6.305016513788048]
In a scenario where data-storage qubits are kept in isolation as far as possible, noise mitigation can still be done using additional noise probes.
We construct a theoretical model assuming projective measurements on the qubits, and derive the performance of different measurement and control strategies.
We show, analytically and numerically, that MOAAAR outperforms the Greedy algorithm, especially in the regime of high noise sensitivity of the spectator qubit (SQ).
arXiv Detail & Related papers (2022-05-25T08:25:10Z) - The Optimal Noise in Noise-Contrastive Learning Is Not What You Think [80.07065346699005]
We show that deviating from this assumption can actually lead to better statistical estimators.
In particular, the optimal noise distribution is different from the data's and even from a different family.
arXiv Detail & Related papers (2022-03-02T13:59:20Z) - Optimizing Information-theoretical Generalization Bounds via Anisotropic
Noise in SGLD [73.55632827932101]
We optimize the information-theoretical generalization bound by manipulating the noise structure in SGLD.
We prove that, under a constraint guaranteeing low empirical risk, the optimal noise covariance is the square root of the expected gradient covariance.
arXiv Detail & Related papers (2021-10-26T15:02:27Z) - Robust Value Iteration for Continuous Control Tasks [99.00362538261972]
When transferring a control policy from simulation to a physical system, the policy needs to be robust to variations in the dynamics to perform well.
We present Robust Fitted Value Iteration, which uses dynamic programming to compute the optimal value function on the compact state domain.
We show that Robust Fitted Value Iteration is more robust than a deep reinforcement learning algorithm and than the non-robust version of the algorithm.
arXiv Detail & Related papers (2021-05-25T19:48:35Z) - Adaptive Multi-View ICA: Estimation of noise levels for optimal
inference [65.94843987207445]
Adaptive Multi-View ICA (AVICA) is a noisy ICA model where each view is a linear mixture of shared independent sources with additive noise on the sources.
On synthetic data, AVICA yields better source estimates than other group ICA methods thanks to its explicit MMSE estimator.
On real magnetoencephalography (MEG) data, we provide evidence that the decomposition is less sensitive to sampling noise and that the noise variance estimates are biologically plausible.
arXiv Detail & Related papers (2021-02-22T13:10:12Z) - Active Learning for Identification of Linear Dynamical Systems [12.056495277232118]
We show a finite-time bound on the estimation rate our algorithm attains.
We analyze several examples where our algorithm provably improves over the rates obtained by playing noise.
arXiv Detail & Related papers (2020-02-02T21:30:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.