Nonlinear Bayesian Update via Ensemble Kernel Regression with Clustering and Subsampling
- URL: http://arxiv.org/abs/2503.15160v1
- Date: Wed, 19 Mar 2025 12:35:28 GMT
- Title: Nonlinear Bayesian Update via Ensemble Kernel Regression with Clustering and Subsampling
- Authors: Yoonsang Lee
- Abstract summary: We propose to extend traditional ensemble Kalman filtering to settings characterized by non-Gaussian priors and nonlinear measurement operators. In this framework, the observed component is first denoised via a standard Kalman update, while the unobserved component is estimated using a nonlinear regression approach.
- Score: 0.87024326813104
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nonlinear Bayesian update for a prior ensemble is proposed to extend traditional ensemble Kalman filtering to settings characterized by non-Gaussian priors and nonlinear measurement operators. In this framework, the observed component is first denoised via a standard Kalman update, while the unobserved component is estimated using a nonlinear regression approach based on kernel density estimation. The method incorporates a subsampling strategy to ensure stability and, when necessary, employs unsupervised clustering to refine the conditional estimate. Numerical experiments on Lorenz systems and a PDE-constrained inverse problem illustrate that the proposed nonlinear update can reduce estimation errors compared to standard linear updates, especially in highly nonlinear scenarios.
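The two-stage update described in the abstract can be sketched as follows. This is a minimal illustration under simplifying assumptions (scalar observation of the observed component, Gaussian kernel, Nadaraya-Watson regression standing in for the paper's kernel-density-based estimator); all function and variable names are illustrative, and the paper's subsampling and clustering refinements are omitted.

```python
import numpy as np

def nonlinear_ensemble_update(X_obs, X_unobs, y, obs_noise_var, bandwidth=0.5):
    """Two-stage update sketch: Kalman step on the observed component,
    kernel regression for the unobserved component.

    X_obs   : (N,) prior ensemble of the observed component
    X_unobs : (N,) prior ensemble of the unobserved component
    y       : scalar noisy observation of the observed component
    """
    N = X_obs.size

    # Stage 1: standard (scalar) Kalman update denoises the observed component.
    prior_var = np.var(X_obs)
    gain = prior_var / (prior_var + obs_noise_var)
    X_obs_post = X_obs + gain * (y - X_obs)

    # Stage 2: Nadaraya-Watson kernel regression of X_unobs on X_obs,
    # evaluated at the denoised observed values, approximates the
    # conditional expectation E[X_unobs | X_obs = x].
    X_unobs_post = np.empty(N)
    for i, x in enumerate(X_obs_post):
        w = np.exp(-0.5 * ((X_obs - x) / bandwidth) ** 2)
        X_unobs_post[i] = np.dot(w, X_unobs) / w.sum()
    return X_obs_post, X_unobs_post
```

With a nonlinearly coupled ensemble (e.g. `X_unobs = sin(X_obs) + noise`), the kernel-regression step tracks the curved conditional mean, which a purely linear ensemble Kalman update would flatten into a single regression line.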
Related papers
- Ensemble Kalman-Bucy filtering for nonlinear model predictive control [0.32634122554914]
We extend the popular ensemble Kalman filter to receding horizon optimal control problems in the spirit of nonlinear model predictive control. We provide an interacting particle approximation to the forward-backward differential equations arising from Pontryagin's maximum principle. The receding horizon control laws are approximated as linear and are continuously updated as in nonlinear model predictive control.
arXiv Detail & Related papers (2025-03-16T12:04:28Z)
- Relational Conformal Prediction for Correlated Time Series [56.59852921638328]
We propose a novel distribution-free approach based on the conformal prediction framework and quantile regression. We fill this void by introducing a novel conformal prediction method based on graph deep learning operators. Our approach provides accurate coverage and achieves state-of-the-art uncertainty quantification on relevant benchmarks.
arXiv Detail & Related papers (2025-02-13T16:12:17Z)
- Nonlinear Assimilation via Score-based Sequential Langevin Sampling [5.107329143106734]
This paper presents score-based sequential Langevin sampling (SSLS). The proposed method decomposes the assimilation process into alternating prediction and update steps. We provide theoretical guarantees for SSLS convergence in total variation (TV) distance under certain conditions.
arXiv Detail & Related papers (2024-11-20T16:31:46Z)
- Bayesian Inference for Consistent Predictions in Overparameterized Nonlinear Regression [0.0]
This study explores the predictive properties of overparameterized nonlinear regression within the Bayesian framework.
Posterior contraction is established for generalized linear and single-neuron models with Lipschitz continuous activation functions.
The proposed method was validated via numerical simulations and a real data application.
arXiv Detail & Related papers (2024-04-06T04:22:48Z)
- Estimation Sample Complexity of a Class of Nonlinear Continuous-time Systems [0.0]
We present a method of parameter estimation for a large class of nonlinear systems, namely those in which the state consists of output derivatives and the flow is linear in the parameter.
The method, which solves for the unknown parameter by directly inverting the dynamics using regularized linear regression, is based on new design and analysis ideas for differentiation filtering and regularized least squares.
arXiv Detail & Related papers (2023-12-08T21:42:11Z)
- Dynamic selection of p-norm in linear adaptive filtering via online kernel-based reinforcement learning [8.319127681936815]
This study addresses the problem of selecting dynamically, at each time instance, the "optimal" p-norm to combat outliers in linear adaptive filtering.
An online and data-driven framework is designed via kernel-based reinforcement learning (KBRL).
arXiv Detail & Related papers (2022-10-20T14:49:39Z)
- Benign overfitting and adaptive nonparametric regression [71.70323672531606]
We construct an estimator which is a continuous function interpolating the data points with high probability.
We attain minimax optimal rates under mean squared risk on the scale of Hölder classes, adaptively to the unknown smoothness.
arXiv Detail & Related papers (2022-06-27T14:50:14Z)
- A Priori Denoising Strategies for Sparse Identification of Nonlinear Dynamical Systems: A Comparative Study [68.8204255655161]
We investigate and compare the performance of several local and global smoothing techniques to a priori denoise the state measurements.
We show that, in general, global methods, which use the entire measurement data set, outperform local methods, which employ a neighboring data subset around a local point.
arXiv Detail & Related papers (2022-01-29T23:31:25Z)
- Time varying regression with hidden linear dynamics [74.9914602730208]
We revisit a model for time-varying linear regression that assumes the unknown parameters evolve according to a linear dynamical system.
Counterintuitively, we show that when the underlying dynamics are stable, the parameters of this model can be estimated from data by combining just two ordinary least squares estimates.
arXiv Detail & Related papers (2021-12-29T23:37:06Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for BCE is in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- GradientDICE: Rethinking Generalized Offline Estimation of Stationary Values [75.17074235764757]
We present GradientDICE for estimating the density ratio between the state distribution of the target policy and the sampling distribution.
GenDICE is the state-of-the-art for estimating such density ratios.
arXiv Detail & Related papers (2020-01-29T22:10:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.