Proposal of a Score Based Approach to Sampling Using Monte Carlo
Estimation of Score and Oracle Access to Target Density
- URL: http://arxiv.org/abs/2212.03325v1
- Date: Tue, 6 Dec 2022 20:56:39 GMT
- Title: Proposal of a Score Based Approach to Sampling Using Monte Carlo
Estimation of Score and Oracle Access to Target Density
- Authors: Curtis McDonald and Andrew Barron
- Abstract summary: Score based approaches to sampling have shown much success as a generative method for producing new samples from a target density given a pool of initial samples. We consider the case where we have no initial samples from the target density, but rather $0^{th}$ and $1^{st}$ order oracle access to the log likelihood.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Score based approaches to sampling have shown much success as a generative
algorithm to produce new samples from a target density given a pool of initial
samples. In this work, we consider the setting where we have no initial samples
from the target density, but rather $0^{th}$ and $1^{st}$ order oracle access to the log
likelihood. Such problems may arise in Bayesian posterior sampling, or in
approximate minimization of non-convex functions. Using this knowledge alone,
we propose a Monte Carlo method to estimate the score empirically as a
particular expectation of a random variable. Using this estimator, we can then
run a discrete version of the backward flow SDE to produce samples from the
target density. This approach has the benefit of not relying on a pool of
initial samples from the target density, and it does not rely on a neural
network or other black box model to estimate the score.
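The approach the abstract describes can be sketched in code. The following is a minimal 1-D illustration, not the paper's algorithm: the names `log_target`, `mc_score`, and `sample`, the example mixture target, the importance-sampling estimator built on Tweedie's identity, and the annealed-Langevin discretization with its step-size schedule are all assumptions standing in for the paper's exact estimator and backward-flow discretization (this sketch also uses only $0^{th}$ order oracle calls):

```python
import numpy as np

# Hypothetical 1-D example target: unnormalized log density of a two-mode
# Gaussian mixture, standing in for oracle access to the log likelihood.
def log_target(x):
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

def mc_score(y, sigma, log_target, n=4000, rng=None):
    """Monte Carlo estimate of the score of the smoothed density p_sigma,
    where p_sigma = target convolved with N(0, sigma^2).  By Tweedie's
    identity, grad log p_sigma(y) = (E[x | y] - y) / sigma^2, and E[x | y]
    is estimated by self-normalized importance sampling with proposal
    N(y, sigma^2) and weights proportional to exp(log_target(x))."""
    rng = np.random.default_rng() if rng is None else rng
    x = y + sigma * rng.standard_normal(n)
    log_w = log_target(x)
    log_w -= log_w.max()              # stabilize before exponentiating
    w = np.exp(log_w)
    w /= w.sum()
    posterior_mean = np.sum(w * x)
    return (posterior_mean - y) / sigma ** 2

def sample(log_target, steps=150, sigma_max=5.0, rng=None):
    """Crude discretization of the backward flow: annealed Langevin steps
    over a geometric noise schedule, driven by the Monte Carlo score."""
    rng = np.random.default_rng(0) if rng is None else rng
    y = sigma_max * rng.standard_normal()
    sigmas = np.geomspace(sigma_max, 0.05, steps)
    for s in sigmas:
        eps = 0.1 * (s / sigmas[0]) ** 2    # step-size heuristic (assumption)
        y += eps * mc_score(y, s, log_target, rng=rng)
        y += np.sqrt(2.0 * eps) * rng.standard_normal()
    return y
```

As in the paper's setting, no pool of initial samples and no trained network is involved; each score query is answered by a fresh Monte Carlo estimate from the oracle.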
Related papers
- A Practical Diffusion Path for Sampling [8.174664278172367]
Diffusion models are used in generative modeling to estimate score vectors guiding a Langevin process.
Previous approaches rely on Monte Carlo estimators that are either computationally heavy to implement or sample-inefficient.
We propose a computationally attractive alternative, relying on the so-called dilation path, that yields score vectors that are available in closed-form.
arXiv Detail & Related papers (2024-06-20T07:00:56Z)
- Closed-Form Diffusion Models [14.20871291924173]
Score-based generative models (SGMs) sample from a target distribution by iteratively transforming noise using the score function of the target.
For any finite training set, this score function can be evaluated in closed form, but the resulting SGM memorizes its training data and does not generate novel samples.
We propose an efficient nearest-neighbor-based estimator of its score function.
arXiv Detail & Related papers (2023-10-19T00:45:05Z)
- Sobolev Space Regularised Pre Density Models [51.558848491038916]
We propose a new approach to non-parametric density estimation that is based on regularizing a Sobolev norm of the density.
This method is statistically consistent, and makes the inductive bias of the model clear and interpretable.
arXiv Detail & Related papers (2023-07-25T18:47:53Z)
- Arbitrary Point Cloud Upsampling with Spherical Mixture of Gaussians [1.2375561840897737]
APU-SMOG is a Transformer-based model for Arbitrary Point cloud Upsampling (APU)
APU-SMOG outperforms state-of-the-art fixed-ratio methods.
arXiv Detail & Related papers (2022-08-10T11:10:16Z)
- Convergence for score-based generative modeling with polynomial complexity [9.953088581242845]
We prove the first convergence guarantees for the core mechanic behind Score-based generative modeling.
Compared to previous works, we do not incur error that grows exponentially in time or that suffers from a curse of dimensionality.
We show that a predictor-corrector gives better convergence than using either portion alone.
arXiv Detail & Related papers (2022-06-13T14:57:35Z)
- Boost Test-Time Performance with Closed-Loop Inference [85.43516360332646]
We propose to predict hard-classified test samples in a looped manner to boost the model performance.
We first devise a filtering criterion to identify those hard-classified test samples that need additional inference loops.
For each hard sample, we construct an additional auxiliary learning task based on its original top-$K$ predictions to calibrate the model.
arXiv Detail & Related papers (2022-03-21T10:20:21Z)
- Unrolling Particles: Unsupervised Learning of Sampling Distributions [102.72972137287728]
Particle filtering is used to compute good nonlinear estimates of complex systems.
We show in simulations that the resulting particle filter yields good estimates in a wide range of scenarios.
arXiv Detail & Related papers (2021-10-06T16:58:34Z)
- Sample Efficient Model Evaluation [30.72511219329606]
Given a collection of unlabelled data points, we address how to select which subset to label to best estimate test metrics.
We consider two sampling based approaches: the well-known Importance Sampling, and a novel application of Poisson Sampling.
arXiv Detail & Related papers (2021-09-24T16:03:58Z)
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
- Learning a Unified Sample Weighting Network for Object Detection [113.98404690619982]
Region sampling or weighting is significantly important to the success of modern region-based object detectors.
We argue that sample weighting should be data-dependent and task-dependent.
We propose a unified sample weighting network to predict a sample's task weights.
arXiv Detail & Related papers (2020-06-11T16:19:16Z)
- Distributionally Robust Bayesian Quadrature Optimization [60.383252534861136]
We study BQO under distributional uncertainty in which the underlying probability distribution is unknown except for a limited set of its i.i.d. samples.
A standard BQO approach maximizes the Monte Carlo estimate of the true expected objective given the fixed sample set.
We propose a novel posterior sampling based algorithm, namely distributionally robust BQO (DRBQO) for this purpose.
arXiv Detail & Related papers (2020-01-19T12:00:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.