Ensemble transport smoothing. Part I: Unified framework
- URL: http://arxiv.org/abs/2210.17000v2
- Date: Wed, 22 Nov 2023 15:17:40 GMT
- Title: Ensemble transport smoothing. Part I: Unified framework
- Authors: Maximilian Ramgraber, Ricardo Baptista, Dennis McLaughlin, Youssef
Marzouk
- Abstract summary: We propose a general ensemble framework for transport-based smoothing.
We derive smoothing recursions based on nonlinear transport maps and detail how they exploit the structure of state-space models in fully non-Gaussian settings.
A companion paper explores the implementation of nonlinear ensemble transport smoothers in greater depth.
- Score: 0.40964539027092906
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Smoothers are algorithms for Bayesian time series re-analysis. Most
operational smoothers rely either on affine Kalman-type transformations or on
sequential importance sampling. These strategies occupy opposite ends of a
spectrum that trades computational efficiency and scalability for statistical
generality and consistency: non-Gaussianity renders affine Kalman updates
inconsistent with the true Bayesian solution, while the ensemble size required
for successful importance sampling can be prohibitive. This paper revisits the
smoothing problem from the perspective of measure transport, which offers the
prospect of consistent prior-to-posterior transformations for Bayesian
inference. We leverage this capacity by proposing a general ensemble framework
for transport-based smoothing. Within this framework, we derive a comprehensive
set of smoothing recursions based on nonlinear transport maps and detail how
they exploit the structure of state-space models in fully non-Gaussian
settings. We also describe how many standard Kalman-type smoothing algorithms
emerge as special cases of our framework. A companion paper (Ramgraber et al.,
2023) explores the implementation of nonlinear ensemble transport smoothers in
greater depth.
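To ground the framework's affine special case mentioned above, here is a minimal sketch (not the authors' implementation; the function name, toy AR(1) model, and constants are illustrative assumptions) of one backward ensemble smoothing update. In the affine case the transport map reduces to a linear regression of the time-t ensemble on the time-t+1 forecast ensemble, recovering an ensemble Rauch-Tung-Striebel-type recursion; the paper's nonlinear smoothers generalize this step by fitting a monotone nonlinear map to the joint ensemble instead.

```python
# Minimal sketch of one backward smoothing step in the affine special
# case (an ensemble Rauch-Tung-Striebel-type update). Illustrative
# only -- not the paper's implementation.
import numpy as np

def affine_backward_update(X_t, X_next_fc, X_next_sm):
    """Condition the time-t filtering ensemble on the smoothed time-t+1 ensemble.

    X_t       : (n, d) filtering ensemble at time t
    X_next_fc : (n, d) forecast ensemble at time t+1 (X_t pushed through the dynamics)
    X_next_sm : (n, d) smoothing ensemble at time t+1
    """
    n = X_t.shape[0]
    A_t = X_t - X_t.mean(axis=0)
    A_n = X_next_fc - X_next_fc.mean(axis=0)
    C_tn = A_t.T @ A_n / (n - 1)      # cross-covariance Cov(x_t, x_{t+1})
    C_nn = A_n.T @ A_n / (n - 1)      # forecast covariance Cov(x_{t+1})
    K = C_tn @ np.linalg.pinv(C_nn)   # affine map coefficient (smoother gain)
    # Affine transport: shift each particle by the gain times its time-t+1 update.
    return X_t + (X_next_sm - X_next_fc) @ K.T

# Toy usage on a scalar AR(1) model x_{t+1} = 0.9 x_t + noise.
rng = np.random.default_rng(0)
X_t = rng.normal(size=(500, 1))                          # filtering ensemble at t
X_next_fc = 0.9 * X_t + 0.5 * rng.normal(size=(500, 1))  # forecast at t+1
X_next_sm = X_next_fc + 0.3   # pretend a later observation shifted the ensemble
X_t_sm = affine_backward_update(X_t, X_next_fc, X_next_sm)
```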
Related papers
- Generalized Schrödinger Bridge Matching [54.171931505066]
Generalized Schr"odinger Bridge (GSB) problem setup is prevalent in many scientific areas both within and without machine learning.
We propose Generalized Schr"odinger Bridge Matching (GSBM), a new matching algorithm inspired by recent advances.
We show that such a generalization can be cast as solving conditional optimal control, for which variational approximations can be used.
arXiv Detail & Related papers (2023-10-03T17:42:11Z)
- SIGMA: Scale-Invariant Global Sparse Shape Matching [50.385414715675076]
We propose a novel mixed-integer programming (MIP) formulation for generating precise sparse correspondences for non-rigid shapes.
We show state-of-the-art results for sparse non-rigid matching on several challenging 3D datasets.
arXiv Detail & Related papers (2023-08-16T14:25:30Z)
- Moreau-Yoshida Variational Transport: A General Framework For Solving Regularized Distributional Optimization Problems [3.038642416291856]
We consider a general optimization problem of minimizing a composite objective functional defined over a class of probability distributions.
We propose a novel method, dubbed as Moreau-Yoshida Variational Transport (MYVT), for solving the regularized distributional optimization problem.
arXiv Detail & Related papers (2023-07-31T01:14:42Z)
- Random Smoothing Regularization in Kernel Gradient Descent Learning [24.383121157277007]
We present a framework for random smoothing regularization that can adaptively learn a wide range of ground truth functions belonging to the classical Sobolev spaces.
Our estimator can adapt to the structural assumptions of the underlying data and avoid the curse of dimensionality.
arXiv Detail & Related papers (2023-05-05T13:37:34Z)
- Instance-Dependent Generalization Bounds via Optimal Transport [51.71650746285469]
Existing generalization bounds fail to explain crucial factors that drive the generalization of modern neural networks.
We derive instance-dependent generalization bounds that depend on the local Lipschitz regularity of the learned prediction function in the data space.
We empirically analyze our generalization bounds for neural networks, showing that the bound values are meaningful and capture the effect of popular regularization methods during training.
arXiv Detail & Related papers (2022-11-02T16:39:42Z)
- Ensemble transport smoothing. Part II: Nonlinear updates [0.40964539027092906]
We propose and demonstrate nonlinear backward ensemble transport smoothers.
Our smoothers yield lower estimation error than conventional linear smoothers and state-of-the-art iterative ensemble Kalman smoothers.
arXiv Detail & Related papers (2022-10-31T16:04:14Z)
- Learning Globally Smooth Functions on Manifolds [94.22412028413102]
Learning smooth functions is generally challenging, except in simple cases such as learning linear or kernel models.
This work proposes to overcome these obstacles by combining techniques from semi-infinite constrained learning and manifold regularization.
We prove that, under mild conditions, this method estimates the Lipschitz constant of the solution, learning a globally smooth solution as a byproduct.
arXiv Detail & Related papers (2022-10-01T15:45:35Z)
- A Quadrature Rule combining Control Variates and Adaptive Importance Sampling [0.0]
We show that a simple weighted least squares approach can be used to improve the accuracy of Monte Carlo integration estimates (a minimal sketch of the control-variate idea appears after this list).
Our main result is a non-asymptotic bound on the probabilistic error of the procedure.
The good behavior of the method is illustrated empirically on synthetic examples and real-world data for Bayesian linear regression.
arXiv Detail & Related papers (2022-05-24T08:21:45Z)
- Deep Shells: Unsupervised Shape Correspondence with Optimal Transport [52.646396621449]
We propose a novel unsupervised learning approach to 3D shape correspondence.
We show that the proposed method significantly improves over the state-of-the-art on multiple datasets.
arXiv Detail & Related papers (2020-10-28T22:24:07Z)
- Support recovery and sup-norm convergence rates for sparse pivotal estimation [79.13844065776928]
In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.
We show minimax sup-norm convergence rates for non-smoothed and smoothed, single-task and multitask square-root Lasso-type estimators.
arXiv Detail & Related papers (2020-01-15T16:11:04Z)
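As promised in the quadrature entry above, here is a minimal sketch of the control-variate idea that entry describes: fit mean-zero control functions to the integrand by (weighted) least squares and read the integral estimate off the regression intercept. The function name, the weighting scheme, and the test integrand are assumptions made for illustration, not the paper's estimator.

```python
# Minimal sketch of Monte Carlo integration with control variates fit
# by (weighted) least squares. Illustrative only -- not the paper's method.
import numpy as np

def cv_estimate(f_vals, H, w=None):
    """Least-squares control-variate estimate of E[f(X)].

    f_vals : (n,) integrand evaluations f(x_i)
    H      : (n, m) control functions with known mean zero under the law of X
    w      : (n,) optional importance weights (e.g. from adaptive IS)
    Returns the intercept of the regression f ~ 1 + H, i.e. the estimate.
    """
    n = f_vals.shape[0]
    X = np.column_stack([np.ones(n), H])
    if w is not None:                  # weighted least squares: scale rows by sqrt(w)
        sw = np.sqrt(w)
        X, f_vals = X * sw[:, None], f_vals * sw
    coef, *_ = np.linalg.lstsq(X, f_vals, rcond=None)
    return coef[0]

# Usage: estimate E[exp(X)] = exp(1/2) for X ~ N(0, 1), with mean-zero
# Hermite controls h1(x) = x and h2(x) = x^2 - 1.
rng = np.random.default_rng(1)
x = rng.normal(size=2000)
est = cv_estimate(np.exp(x), np.column_stack([x, x**2 - 1]))
print(est)  # close to exp(0.5) ~= 1.649, with lower variance than the plain mean
```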