An adaptive ensemble filter for heavy-tailed distributions: tuning-free
inflation and localization
- URL: http://arxiv.org/abs/2310.08741v1
- Date: Thu, 12 Oct 2023 21:56:14 GMT
- Title: An adaptive ensemble filter for heavy-tailed distributions: tuning-free
inflation and localization
- Authors: Mathieu Le Provost, Ricardo Baptista, Jeff D. Eldredge, and Youssef
Marzouk
- Abstract summary: Heavy tails are a common feature of filtering distributions, arising from the nonlinear dynamical and observation processes.
We propose an algorithm to estimate the prior-to-posterior update from samples of the joint forecast distribution of the states and observations.
We demonstrate the benefits of this new ensemble filter on challenging filtering problems.
- Score: 0.3749861135832072
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Heavy tails are a common feature of filtering distributions, resulting from
the nonlinear dynamical and observation processes as well as the uncertainty
from physical sensors. In these settings, the Kalman filter and its ensemble
version - the ensemble Kalman filter (EnKF) - which are designed under
Gaussian assumptions, suffer degraded performance. t-distributions are a
parametric family of distributions whose tail-heaviness is modulated by a
degree of freedom $\nu$. Interestingly, Cauchy and Gaussian distributions
correspond to the extreme cases of a t-distribution for $\nu = 1$ and $\nu =
\infty$, respectively. Leveraging tools from measure transport (Spantini et
al., SIAM Review, 2022), we present a generalization of the EnKF whose
prior-to-posterior update leads to exact inference for t-distributions. We
demonstrate that this filter is less sensitive to outlying synthetic
observations generated by the observation model for small $\nu$. Moreover, it
recovers the Kalman filter for $\nu = \infty$. For nonlinear state-space models
with heavy-tailed noise, we propose an algorithm to estimate the
prior-to-posterior update from samples of the joint forecast distribution of the
states and observations. We rely on a regularized expectation-maximization (EM)
algorithm to estimate the mean, scale matrix, and degree of freedom of
heavy-tailed $t$-distributions from limited samples (Finegold and Drton,
arXiv preprint, 2014). Leveraging the conditional independence of the joint
forecast distribution, we regularize the scale matrix with an $\ell_1$
sparsity-promoting penalization of the log-likelihood at each iteration of the
EM algorithm. By sequentially estimating the degree of freedom at each analysis
step, our filter can adapt its prior-to-posterior update to the tail-heaviness
of the data. We demonstrate the benefits of this new ensemble filter on
challenging filtering problems.
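To make the analysis step above concrete, here is a minimal sketch, in our own notation rather than the authors' code, of the conditional update for a jointly t-distributed state and observation; this conditioning rule is the building block behind the generalized EnKF described in the abstract. Function and variable names are illustrative assumptions.

```python
import numpy as np

def t_conditional_update(mu, Sigma, nu, y_star, n_x):
    """Condition a joint t_nu(mu, Sigma) model of (state x, observation y)
    on y = y_star; returns the conditional location, scale, and dof.

    mu: (n_x + n_y,) joint location, Sigma: joint scale matrix, nu: dof.
    """
    mu_x, mu_y = mu[:n_x], mu[n_x:]
    Sxx, Sxy = Sigma[:n_x, :n_x], Sigma[:n_x, n_x:]
    Syy = Sigma[n_x:, n_x:]
    n_y = len(y_star)

    innov = y_star - mu_y
    Syy_inv = np.linalg.inv(Syy)
    K = Sxy @ Syy_inv                   # analogue of the Kalman gain
    delta = innov @ Syy_inv @ innov     # squared Mahalanobis discrepancy

    mu_cond = mu_x + K @ innov
    # Scale factor > 1 for surprising observations (delta > n_y), < 1 for
    # unsurprising ones, and -> 1 as nu -> infinity (the Kalman limit).
    factor = (nu + delta) / (nu + n_y)
    Sigma_cond = factor * (Sxx - K @ Sxy.T)
    return mu_cond, Sigma_cond, nu + n_y
```

Note how the conditional scale is inflated when the discrepancy delta exceeds its nominal value n_y and deflated otherwise, which is what desensitizes the update to outlying observations; as $\nu \to \infty$ the factor tends to 1 and the update reduces to the standard Kalman conditioning, consistent with the claim above.

The parameter estimation the abstract refers to can be sketched with the standard EM algorithm for multivariate t-distributions. The paper's regularized variant additionally applies an $\ell_1$ penalty to the scale matrix at each iteration, following Finegold and Drton; that penalty is omitted in this hedged, unregularized sketch.

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

def fit_t_em(Z, nu=5.0, n_iter=50):
    """Plain EM for the location mu, scale Sigma, and dof nu of a
    multivariate t-distribution, from samples Z of shape (N, d)."""
    N, d = Z.shape
    mu, Sigma = Z.mean(axis=0), np.cov(Z, rowvar=False)
    for _ in range(n_iter):
        # E-step: weights shrink for samples far from the bulk.
        R = Z - mu
        delta = np.einsum("ij,ij->i", R @ np.linalg.inv(Sigma), R)
        w = (nu + d) / (nu + delta)
        # M-step: weighted location and scale updates.
        mu = w @ Z / w.sum()
        R = Z - mu
        Sigma = (R.T * w) @ R / N
        # Update nu by solving the usual one-dimensional score equation.
        c = 1.0 + np.mean(np.log(w) - w) \
            + digamma((nu + d) / 2) - np.log((nu + d) / 2)
        f = lambda v: np.log(v / 2) - digamma(v / 2) + c
        if f(0.1) * f(1e6) < 0:  # otherwise keep the current nu
            nu = brentq(f, 0.1, 1e6)
    return mu, Sigma, nu
```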
Related papers
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z)
- Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
arXiv Detail & Related papers (2024-02-15T08:51:49Z)
- Unbiased Kinetic Langevin Monte Carlo with Inexact Gradients [0.8749675983608172]
We present an unbiased method for posterior means based on kinetic Langevin dynamics.
Our proposed estimator is unbiased, attains finite variance, and satisfies a central limit theorem.
Our results demonstrate that in large-scale applications, the unbiased algorithm we present can be 2-3 orders of magnitude more efficient than the "gold-standard" randomized Hamiltonian Monte Carlo.
arXiv Detail & Related papers (2023-11-08T21:19:52Z)
- Nonlinear Filtering with Brenier Optimal Transport Maps [4.745059103971596]
This paper is concerned with the problem of nonlinear filtering, i.e., computing the conditional distribution of the state of a dynamical system.
Conventional sequential importance resampling (SIR) particle filters suffer from fundamental limitations in scenarios involving degenerate likelihoods or high-dimensional states.
In this paper, we explore an alternative method, which is based on estimating the Brenier optimal transport (OT) map from the current prior distribution of the state to the posterior distribution at the next time step.
arXiv Detail & Related papers (2023-10-21T01:34:30Z)
- Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
arXiv Detail & Related papers (2023-06-27T08:15:28Z)
- Outlier-Insensitive Kalman Filtering Using NUV Priors [24.413595920205907]
In practice, observations are corrupted by outliers, severely impairing the performance of the Kalman filter (KF).
In this work, an outlier-insensitive KF is proposed, in which robustness is achieved by modeling each potential outlier as a normally distributed random variable with unknown variance (NUV).
The NUV variances are estimated online, using both expectation-maximization (EM) and alternating maximization (AM); a minimal sketch of the EM variant follows below.
arXiv Detail & Related papers (2022-10-12T11:00:13Z)
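A minimal illustration of the NUV mechanism from the entry above, in our own notation and for a scalar measurement: the EM fixed point inflates the effective measurement-noise variance when the innovation is large, so outlying observations are effectively masked. This is a hedged sketch of the idea, not the paper's algorithm (which also develops the AM variant).

```python
import numpy as np

def nuv_kf_update(x, P, y, h, R, em_iters=10):
    """Scalar-measurement Kalman update where the observation may be
    corrupted by an outlier o ~ N(0, q) with unknown variance q (a NUV
    prior). q is estimated online with EM fixed-point iterations.

    x: (n,) prior mean; P: (n, n) prior covariance;
    y: scalar observation; h: (n,) observation vector; R: noise variance.
    """
    innov = y - h @ x
    q = R  # q = 0 is a degenerate fixed point, so start elsewhere
    for _ in range(em_iters):
        S = h @ P @ h + R + q        # innovation variance incl. outlier
        # E-step: posterior mean and variance of the outlier.
        o_mean = (q / S) * innov
        o_var = q - q**2 / S
        # M-step: q <- posterior second moment of the outlier.
        q = o_mean**2 + o_var
    # Standard Kalman update with the inflated noise R + q: large
    # innovations drive q up and effectively mask the measurement.
    S = h @ P @ h + R + q
    K = P @ h / S
    return x + K * innov, P - np.outer(K, h) @ P, q
```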
- Computational Doob's h-transforms for Online Filtering of Discretely Observed Diffusions [65.74069050283998]
We propose a computational framework to approximate Doob's $h$-transforms.
The proposed approach can be orders of magnitude more efficient than state-of-the-art particle filters.
arXiv Detail & Related papers (2022-06-07T15:03:05Z)
- Unrolling Particles: Unsupervised Learning of Sampling Distributions [102.72972137287728]
Particle filtering is used to compute good nonlinear estimates of complex systems.
We show in simulations that the resulting particle filter yields good estimates in a wide range of scenarios.
arXiv Detail & Related papers (2021-10-06T16:58:34Z)
- Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process (MRP).
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
arXiv Detail & Related papers (2021-09-24T14:48:20Z)
- Machine learning-based conditional mean filter: a generalization of the ensemble Kalman filter for nonlinear data assimilation [42.60602838972598]
We propose a machine learning-based ensemble conditional mean filter (ML-EnCMF) for tracking possibly high-dimensional non-Gaussian state models with nonlinear dynamics based on sparse observations.
The proposed filtering method is developed based on the conditional expectation and numerically implemented using machine learning (ML) techniques combined with the ensemble method; a minimal sketch of this update follows below.
arXiv Detail & Related papers (2021-06-15T06:40:32Z)
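The conditional-mean update in the last entry admits a compact sketch. Below, the conditional mean is fit by plain linear least squares as a stand-in for the paper's ML models; with a linear fit the update reduces to a stochastic EnKF. All names are our illustrative assumptions, not the paper's API.

```python
import numpy as np

def encmf_analysis(Xf, y_obs, h, obs_std, rng):
    """Ensemble conditional-mean-filter analysis step (minimal sketch,
    scalar observation). Each member is shifted by the difference of the
    estimated conditional mean at the actual observation and at its own
    perturbed forecast observation:
        x_a = x_f + m(y*) - m(h(x_f) + eps).
    """
    N, _ = Xf.shape
    # Perturbed forecast observations, one per ensemble member.
    Yf = np.array([[h(x)] for x in Xf]) + obs_std * rng.standard_normal((N, 1))
    # Fit m(y) = a + b*y by regressing the states on the observations.
    A = np.hstack([np.ones((N, 1)), Yf])
    coef, *_ = np.linalg.lstsq(A, Xf, rcond=None)
    m = lambda y: np.hstack([1.0, np.atleast_1d(y)]) @ coef
    return Xf + m(y_obs) - np.array([m(y) for y in Yf])
```

For example, encmf_analysis(Xf, y_obs, h=lambda x: x[0], obs_std=0.1, rng=np.random.default_rng(0)) assimilates a noisy observation of the first state component.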