Low Stein Discrepancy via Message-Passing Monte Carlo
- URL: http://arxiv.org/abs/2503.21103v1
- Date: Thu, 27 Mar 2025 02:49:31 GMT
- Title: Low Stein Discrepancy via Message-Passing Monte Carlo
- Authors: Nathan Kirk, T. Konstantin Rusch, Jakob Zech, Daniela Rus
- Abstract summary: Message-Passing Monte Carlo (MPMC) was recently introduced as a novel low-discrepancy sampling approach leveraging tools from geometric deep learning. We extend this framework to sample from general multivariate probability distributions with a known probability density function. Our proposed method, Stein-Message-Passing Monte Carlo (Stein-MPMC), minimizes a kernelized Stein discrepancy, ensuring improved sample quality.
- Score: 50.81061839052459
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Message-Passing Monte Carlo (MPMC) was recently introduced as a novel low-discrepancy sampling approach leveraging tools from geometric deep learning. While originally designed for generating uniform point sets, we extend this framework to sample from general multivariate probability distributions with a known probability density function. Our proposed method, Stein-Message-Passing Monte Carlo (Stein-MPMC), minimizes a kernelized Stein discrepancy, ensuring improved sample quality. Finally, we show that Stein-MPMC outperforms competing methods, such as Stein Variational Gradient Descent and (greedy) Stein Points, by achieving a lower Stein discrepancy.
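To make the objective concrete, here is a minimal NumPy sketch of the squared kernelized Stein discrepancy that Stein-MPMC minimizes, written as a V-statistic over the Langevin-Stein kernel with an inverse multiquadric (IMQ) base kernel. This is an illustration of the discrepancy itself, not the authors' implementation; the function name `ksd_imq` and the kernel parameters `c` and `beta` are illustrative choices, and the target is assumed to be a standard Gaussian whose score is available in closed form.

```python
import numpy as np

def ksd_imq(samples, score, c=1.0, beta=0.5):
    # V-statistic estimate of the squared kernelized Stein discrepancy
    # with the inverse multiquadric kernel k(x, y) = (c^2 + ||x - y||^2)^(-beta).
    n, d = samples.shape
    S = score(samples)                                   # (n, d): grad log p at each sample
    diff = samples[:, None, :] - samples[None, :, :]     # (n, n, d): pairwise x_i - x_j
    r2 = np.sum(diff**2, axis=-1)                        # (n, n): squared distances
    base = c**2 + r2
    k = base**(-beta)                                    # kernel matrix
    grad_x = (-2.0 * beta * base**(-beta - 1))[..., None] * diff   # d k / d x
    grad_y = -grad_x                                                # d k / d y
    # trace of the mixed second derivative d^2 k / (dx dy)
    trace = (-4.0 * beta * (beta + 1) * base**(-beta - 2) * r2
             + 2.0 * beta * d * base**(-beta - 1))
    # Langevin-Stein kernel: four terms combining scores and kernel derivatives
    u = ((S @ S.T) * k
         + np.sum(S[:, None, :] * grad_y, axis=-1)
         + np.sum(S[None, :, :] * grad_x, axis=-1)
         + trace)
    return u.mean()

# Example: for a standard Gaussian target, the score is grad log p(x) = -x.
rng = np.random.default_rng(0)
x_good = rng.standard_normal((200, 2))   # samples roughly from the target
x_bad = x_good + 2.0                     # the same samples, badly shifted
ksd_good = ksd_imq(x_good, lambda x: -x)
ksd_bad = ksd_imq(x_bad, lambda x: -x)
```

Because the Stein kernel is positive semi-definite, the V-statistic is nonnegative, and well-matched samples should score lower than mismatched ones, which is the quantity Stein-MPMC drives down by moving the points themselves.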
Related papers
- Kernel Stein Discrepancy thinning: a theoretical perspective of pathologies and a practical fix with regularization [0.0]
Stein thinning is a promising algorithm proposed by Riabiz et al. (2022) for post-processing outputs of Markov chain Monte Carlo.
In this article, we conduct a theoretical analysis of these pathologies, to clearly identify the mechanisms at stake, and suggest improved strategies.
We then introduce the regularized Stein thinning algorithm to alleviate the identified pathologies.
arXiv Detail & Related papers (2023-01-31T10:23:56Z)
- Nearly Optimal Latent State Decoding in Block MDPs [74.51224067640717]
In episodic Block MDPs, the decision maker has access to rich observations or contexts generated from a small number of latent states.
We are first interested in estimating the latent state decoding function based on data generated under a fixed behavior policy.
We then study the problem of learning near-optimal policies in the reward-free framework.
arXiv Detail & Related papers (2022-08-17T18:49:53Z)
- Jo-SRC: A Contrastive Approach for Combating Noisy Labels [58.867237220886885]
We propose a noise-robust approach named Jo-SRC (Joint Sample Selection and Model Regularization based on Consistency).
Specifically, we train the network in a contrastive learning manner. Predictions from two different views of each sample are used to estimate its "likelihood" of being clean or out-of-distribution.
arXiv Detail & Related papers (2021-03-24T07:26:07Z)
- Stein Variational Model Predictive Control [130.60527864489168]
Decision making under uncertainty is critical to real-world, autonomous systems.
Model Predictive Control (MPC) methods have demonstrated favorable performance in practice, but remain limited when dealing with complex distributions.
We show that this framework leads to successful planning in challenging, non-convex optimal control problems.
arXiv Detail & Related papers (2020-11-15T22:36:59Z)
- Stochastic Stein Discrepancies [29.834557590747572]
Computation of a Stein discrepancy can be prohibitive if the Stein operator is expensive to evaluate.
We show that stochastic Stein discrepancies (SSDs) based on subsampled approximations of the Stein operator inherit the convergence control properties of standard SDs with probability 1.
arXiv Detail & Related papers (2020-07-06T16:15:33Z)
- Sliced Kernelized Stein Discrepancy [17.159499204595527]
Kernelized Stein discrepancy (KSD) is extensively used in goodness-of-fit tests and model learning.
We propose the sliced Stein discrepancy and its scalable and kernelized variants, which employ kernel-based test functions defined on the optimal one-dimensional projections.
For model learning, we show its advantages over existing Stein discrepancy baselines by training independent component analysis models with different discrepancies.
arXiv Detail & Related papers (2020-06-30T04:58:55Z)
- Stochastic Saddle-Point Optimization for Wasserstein Barycenters [69.68068088508505]
We consider the population Wasserstein barycenter problem for random probability measures supported on a finite set of points and generated by an online stream of data.
We employ the structure of the problem and obtain a convex-concave saddle-point reformulation of this problem.
In the setting when the distribution of random probability measures is discrete, we propose an optimization algorithm and estimate its complexity.
arXiv Detail & Related papers (2020-06-11T19:40:38Z)
- A diffusion approach to Stein's method on Riemannian manifolds [65.36007959755302]
We exploit the relationship between the generator of a diffusion on $\mathbf{M}$ with target invariant measure and its characterising Stein operator.
We derive Stein factors, which bound the solution to the Stein equation and its derivatives.
In particular, the bounds for $\mathbb{R}^m$ remain valid when $\mathbf{M}$ is a flat manifold.
arXiv Detail & Related papers (2020-03-25T17:03:58Z)
- The reproducing Stein kernel approach for post-hoc corrected sampling [11.967340182951464]
We prove that Stein importance sampling yields consistent estimators for quantities related to a target distribution of interest.
A universal theory of reproducing Stein kernels is established, which enables the construction of kernelized Stein discrepancy on general Polish spaces.
arXiv Detail & Related papers (2020-01-25T05:33:05Z)
- Stein's Lemma for the Reparameterization Trick with Exponential Family Mixtures [23.941042092067338]
Stein's lemma plays an essential role in Stein's method. We extend Stein's lemma to exponential-family mixture distributions.
arXiv Detail & Related papers (2019-10-29T16:59:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.