Sharp error estimates for target measure diffusion maps with
applications to the committor problem
- URL: http://arxiv.org/abs/2312.14418v1
- Date: Fri, 22 Dec 2023 03:52:17 GMT
- Title: Sharp error estimates for target measure diffusion maps with
applications to the committor problem
- Authors: Shashank Sule, Luke Evans and Maria Cameron
- Abstract summary: We obtain sharp error estimates for the consistency error of the Target Measure Diffusion map (TMDmap) (Banisch et al. 2020).
The resulting convergence rates are consistent with the approximation theory of graph Laplacians.
We use these results to study an important application of TMDmap in the analysis of rare events in systems governed by overdamped Langevin dynamics.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We obtain asymptotically sharp error estimates for the consistency error of
the Target Measure Diffusion map (TMDmap) (Banisch et al. 2020), a variant of
diffusion maps featuring importance sampling and hence allowing input data
drawn from an arbitrary density. The derived error estimates include the bias
error and the variance error. The resulting convergence rates are consistent
with the approximation theory of graph Laplacians. The key novelty of our
results lies in the explicit quantification of all the prefactors on
leading-order terms. We also prove an error estimate for solutions of Dirichlet
BVPs obtained using TMDmap, showing that the solution error is controlled by
the consistency error. We use these results to study an important application of
TMDmap in the analysis of rare events in systems governed by overdamped
Langevin dynamics using the framework of transition path theory (TPT). The
cornerstone ingredient of TPT is the solution of the committor problem, a
boundary value problem for the backward Kolmogorov PDE. Remarkably, we find
that the TMDmap algorithm is particularly well suited as a meshless solver for
the committor problem due to the cancellation of several error terms in the
prefactor formula. Furthermore, significant improvements in bias and variance
errors occur when using a quasi-uniform sampling density. Our numerical
experiments show that these improvements in accuracy are realizable in practice
when using $\delta$-nets as spatially uniform inputs to the TMDmap algorithm.
Related papers
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z)
- Multi-Source and Test-Time Domain Adaptation on Multivariate Signals using Spatio-Temporal Monge Alignment [59.75420353684495]
Machine learning applications on signals such as computer vision or biomedical data often face challenges due to the variability that exists across hardware devices or session recordings.
In this work, we propose Spatio-Temporal Monge Alignment (STMA) to mitigate these variabilities.
We show that STMA leads to significant and consistent performance gains between datasets acquired with very different settings.
arXiv Detail & Related papers (2024-07-19T13:33:38Z)
- Diffusion models for Gaussian distributions: Exact solutions and Wasserstein errors [0.0]
Diffusion or score-based models recently showed high performance in image generation.
We study theoretically the behavior of diffusion models and their numerical implementation when the data distribution is Gaussian.
arXiv Detail & Related papers (2024-05-23T07:28:56Z)
- Flow-based Distributionally Robust Optimization [23.232731771848883]
We present a framework, called FlowDRO, for solving flow-based distributionally robust optimization (DRO) problems with Wasserstein uncertainty sets.
We aim to find continuous worst-case distribution (also called the Least Favorable Distribution, LFD) and sample from it.
We demonstrate its usage in adversarial learning, distributionally robust hypothesis testing, and a new mechanism for data-driven distribution perturbation differential privacy.
arXiv Detail & Related papers (2023-10-30T03:53:31Z)
- Distributed Variational Inference for Online Supervised Learning [15.038649101409804]
This paper develops a scalable distributed probabilistic inference algorithm.
It applies to continuous variables, intractable posteriors and large-scale real-time data in sensor networks.
arXiv Detail & Related papers (2023-09-05T22:33:02Z)
- A Targeted Accuracy Diagnostic for Variational Approximations [8.969208467611896]
Variational Inference (VI) is an attractive alternative to Markov Chain Monte Carlo (MCMC)
Existing methods characterize the quality of the whole variational distribution.
We propose the TArgeted Diagnostic for Distribution Approximation Accuracy (TADDAA)
arXiv Detail & Related papers (2023-02-24T02:50:18Z)
- Combating Mode Collapse in GANs via Manifold Entropy Estimation [70.06639443446545]
Generative Adversarial Networks (GANs) have shown compelling results in various tasks and applications.
We propose a novel training pipeline to address the mode collapse issue of GANs.
arXiv Detail & Related papers (2022-08-25T12:33:31Z)
- Improving Diffusion Models for Inverse Problems using Manifold Constraints [55.91148172752894]
We show that current solvers throw the sample path off the data manifold, and hence the error accumulates.
To address this, we propose an additional correction term inspired by the manifold constraint.
We show that our method is superior to the previous methods both theoretically and empirically.
arXiv Detail & Related papers (2022-06-02T09:06:10Z)
- Meta Learning Low Rank Covariance Factors for Energy-Based Deterministic Uncertainty [58.144520501201995]
Bi-Lipschitz regularization of neural network layers preserve relative distances between data instances in the feature spaces of each layer.
With the use of an attentive set encoder, we propose to meta learn either diagonal or diagonal plus low-rank factors to efficiently construct task specific covariance matrices.
We also propose an inference procedure which utilizes scaled energy to achieve a final predictive distribution.
arXiv Detail & Related papers (2021-10-12T22:04:19Z)
- Differentiable Annealed Importance Sampling and the Perils of Gradient Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
arXiv Detail & Related papers (2021-07-21T17:10:14Z)
- Spectral convergence of diffusion maps: improved error bounds and an alternative normalisation [0.6091702876917281]
This paper uses new approaches to improve the error bounds in the model case where the distribution is supported on a hypertorus.
We match long-standing pointwise error bounds for both the spectral data and the norm convergence of the operator discretisation.
We also introduce an alternative normalisation for diffusion maps based on Sinkhorn weights.
arXiv Detail & Related papers (2020-06-03T04:23:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.