Variational autoencoder with weighted samples for high-dimensional
non-parametric adaptive importance sampling
- URL: http://arxiv.org/abs/2310.09194v1
- Date: Fri, 13 Oct 2023 15:40:55 GMT
- Title: Variational autoencoder with weighted samples for high-dimensional
non-parametric adaptive importance sampling
- Authors: Julien Demange-Chryst, François Bachoc, Jérôme Morio, Timothé Krauth
- Abstract summary: We extend the existing framework to the case of weighted samples by introducing a new objective function.
In order to add flexibility to the model and to be able to learn multimodal distributions, we consider a learnable prior distribution.
We exploit the proposed procedure in existing adaptive importance sampling algorithms to draw points from a target distribution and to estimate a rare event probability in high dimension.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Probability density function estimation with weighted samples is the main
foundation of all adaptive importance sampling algorithms. Classically, a
target distribution is approximated either by a non-parametric model or within
a parametric family. However, these models suffer from the curse of
dimensionality or from their lack of flexibility. In this contribution, we
suggest using a distribution parameterised by a variational autoencoder as the
approximating model. We extend the existing framework to the case of
weighted samples by introducing a new objective function. The flexibility of
the obtained family of distributions makes it as expressive as a non-parametric
model, and despite the very high number of parameters to estimate, this family
is much more efficient in high dimension than the classical Gaussian or
Gaussian mixture families. Moreover, in order to add flexibility to the model
and to be able to learn multimodal distributions, we consider a learnable prior
distribution for the variational autoencoder latent variables. We also
introduce a new pre-training procedure for the variational autoencoder to find
good starting weights for the neural networks and to prevent, as much as
possible, the posterior collapse phenomenon. Finally, we make explicit how the
resulting distribution can be combined with importance sampling, and we exploit the
proposed procedure in existing adaptive importance sampling algorithms to draw
points from a target distribution and to estimate a rare event probability in
high dimension on two multimodal problems.
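For concreteness, here is a minimal sketch of the central idea as stated in the abstract: per-sample ELBO terms reweighted by normalised importance weights. This is an illustrative reading, not the authors' code; in particular, the paper's learnable prior and pre-training procedure are not reproduced, and a fixed standard-normal prior stands in.
```python
# Sketch of a VAE trained on weighted samples: each sample's ELBO term is
# multiplied by its normalised importance weight. A plausible reading of the
# abstract, not the authors' implementation; the paper uses a learnable prior,
# whereas a fixed N(0, I) prior is used here for simplicity.
import math
import torch
import torch.nn as nn

class WeightedVAE(nn.Module):
    def __init__(self, x_dim, z_dim, h_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.enc_mu = nn.Linear(h_dim, z_dim)
        self.enc_logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU())
        self.dec_mu = nn.Linear(h_dim, x_dim)
        self.dec_logvar = nn.Linear(h_dim, x_dim)

    def elbo(self, x):
        # Encode, then sample z with the reparameterisation trick.
        h = self.enc(x)
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        # Decode to a diagonal Gaussian over x and evaluate log p(x|z).
        hd = self.dec(z)
        dmu, dlogvar = self.dec_mu(hd), self.dec_logvar(hd)
        log_px = -0.5 * ((x - dmu) ** 2 * torch.exp(-dlogvar)
                         + dlogvar + math.log(2 * math.pi)).sum(-1)
        # Closed-form KL(q(z|x) || N(0, I)).
        kl = 0.5 * (torch.exp(logvar) + mu ** 2 - 1.0 - logvar).sum(-1)
        return log_px - kl  # per-sample ELBO, shape (batch,)

def weighted_elbo_loss(model, x, w):
    """Negative ELBO averaged with normalised importance weights w."""
    w = w / w.sum()
    return -(w * model.elbo(x)).sum()
```
Once such a model q is fitted, the rare-event step mentioned in the abstract is standard importance sampling, roughly P(F) ≈ (1/N) Σ_i 1_F(x_i) p(x_i)/q(x_i) with x_i drawn from q; for a VAE, the density q(x) is itself typically approximated, e.g. by Monte Carlo over the latent space.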
Related papers
- Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
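For readers new to the setting, the models in question are ridge estimators of the standard form below (background only, not a result of the paper):
```latex
% Background: the standard ridge estimator whose high-dimensional
% training/generalization behaviour such analyses characterise.
\hat{\beta}_\lambda = (X^\top X + \lambda I)^{-1} X^\top y
```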
arXiv Detail & Related papers (2024-05-01T15:59:00Z)
- Variational Bayesian surrogate modelling with application to robust design optimisation [0.9626666671366836]
Surrogate models provide a quick-to-evaluate approximation to complex computational models.
We consider Bayesian inference for constructing statistical surrogates with input uncertainties and dimensionality reduction.
We demonstrate the approach on intrinsic and robust structural optimisation problems where cost functions depend on a weighted sum of the mean and standard deviation of model outputs.
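The cost functions described above have the weighted mean-plus-deviation form common in robust design, for instance (with kappa a user-chosen risk weight; notation assumed here):
```latex
% Robust design objective of the weighted-sum form described above.
J(d) = \mathbb{E}\!\left[f(d, \theta)\right]
       + \kappa \, \mathrm{Std}\!\left[f(d, \theta)\right]
```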
arXiv Detail & Related papers (2024-04-23T09:22:35Z)
- Differentiating Metropolis-Hastings to Optimize Intractable Densities [51.16801956665228]
We develop an algorithm for automatic differentiation of Metropolis-Hastings samplers.
We apply gradient-based optimization to objectives expressed as expectations over intractable target densities.
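For reference, a minimal random-walk Metropolis-Hastings sampler of the kind being differentiated; illustrative background only, as the paper's automatic-differentiation machinery is not shown:
```python
# Minimal random-walk Metropolis-Hastings over an unnormalised log-density.
import numpy as np

def metropolis_hastings(log_density, x0, n_steps, step=0.5, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    logp = log_density(x)
    samples = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)  # symmetric proposal
        logp_prop = log_density(prop)
        # Accept with probability min(1, p(prop)/p(x)).
        if np.log(rng.uniform()) < logp_prop - logp:
            x, logp = prop, logp_prop
        samples.append(x.copy())
    return np.array(samples)

# Example: sample a standard 2-D Gaussian.
# draws = metropolis_hastings(lambda x: -0.5 * (x ** 2).sum(), np.zeros(2), 5000)
```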
arXiv Detail & Related papers (2023-06-13T17:56:02Z)
- Variational Classification [51.2541371924591]
We derive a variational objective to train the model, analogous to the evidence lower bound (ELBO) used to train variational auto-encoders.
Treating inputs to the softmax layer as samples of a latent variable, our abstracted perspective reveals a potential inconsistency.
We induce a chosen latent distribution, instead of the implicit assumption found in a standard softmax layer.
arXiv Detail & Related papers (2023-05-17T17:47:19Z)
- Estimating Regression Predictive Distributions with Sample Networks [17.935136717050543]
A common approach to model uncertainty is to choose a parametric distribution and fit the data to it using maximum likelihood estimation.
The chosen parametric form can be a poor fit to the data-generating distribution, resulting in unreliable uncertainty estimates.
We propose SampleNet, a flexible and scalable architecture for modeling uncertainty that avoids specifying a parametric form on the output distribution.
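One generic way to realise such a sample-based output is sketched below under assumptions: the head and the energy-score loss are stand-ins, not necessarily the paper's architecture or training objective.
```python
# Hypothetical sketch: a network emitting K samples per input instead of
# distribution parameters, trained with the energy score, a proper scoring
# rule (for scalar targets it coincides with a CRPS estimator).
import torch
import torch.nn as nn

class SampleHead(nn.Module):
    def __init__(self, x_dim, k=32, h=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, h), nn.ReLU(), nn.Linear(h, k))

    def forward(self, x):  # (B, x_dim) -> (B, K) samples of the target y
        return self.net(x)

def energy_score(samples, y):
    # samples: (B, K), y: (B,). Lower is better.
    term1 = (samples - y.unsqueeze(1)).abs().mean(dim=1)
    term2 = (samples.unsqueeze(2) - samples.unsqueeze(1)).abs().mean(dim=(1, 2))
    return (term1 - 0.5 * term2).mean()
```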
arXiv Detail & Related papers (2022-11-24T17:23:29Z)
- Low-rank Characteristic Tensor Density Estimation Part II: Compression and Latent Density Estimation [31.631861197477185]
Learning generative probabilistic models is a core problem in machine learning.
This paper proposes a joint dimensionality reduction and non-parametric density estimation framework.
We demonstrate that the proposed model achieves very promising results on regression tasks, sampling, and anomaly detection.
arXiv Detail & Related papers (2021-06-20T00:38:56Z)
- Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z)
- Achieving Efficiency in Black Box Simulation of Distribution Tails with Self-structuring Importance Samplers [1.6114012813668934]
The paper presents a novel Importance Sampling (IS) scheme for estimating the distribution tails of performance measures modeled with a rich set of tools such as linear programs, integer linear programs, piecewise linear/quadratic objectives, feature maps specified with deep neural networks, etc.
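For background, the generic self-normalised importance-sampling tail estimator that such schemes refine; the paper's self-structuring proposal family is not reproduced, and the function signatures below are assumptions:
```python
# Self-normalised IS estimate of P(f(X) > t) with X ~ p, using draws from a
# proposal q. sample_q, logpdf_p and logpdf_q are assumed, vectorised callables.
import numpy as np

def is_tail_probability(f, t, sample_q, logpdf_p, logpdf_q, n=100_000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x = sample_q(n, rng)                  # draws from the proposal q
    logw = logpdf_p(x) - logpdf_q(x)      # importance log-weights
    w = np.exp(logw - logw.max())         # stabilised; ratio is scale-invariant
    ind = f(x) > t
    return (w * ind).sum() / w.sum()      # self-normalised estimator
```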
arXiv Detail & Related papers (2021-02-14T03:37:22Z)
- Variational Mixture of Normalizing Flows [0.0]
Deep generative models, such as generative adversarial networks (GANs), variational autoencoders (VAEs), and their variants, have seen wide adoption for the task of modelling complex data distributions.
Normalizing flows overcome the lack of an exact, tractable likelihood in such models by leveraging the change-of-variables formula for probability density functions.
The present work addresses the limited expressiveness of a single flow by using normalizing flows as components in a mixture model and devising an end-to-end training procedure for such a model.
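In standard notation, the change-of-variables formula referenced above, together with the mixture construction (notation assumed here):
```latex
% Change of variables for an invertible flow f with base density p_Z,
% and a K-component mixture of flow densities p_{X,k}.
p_X(x) = p_Z\!\big(f^{-1}(x)\big)
         \left| \det \frac{\partial f^{-1}(x)}{\partial x} \right|,
\qquad
p(x) = \sum_{k=1}^{K} \pi_k \, p_{X,k}(x),
\quad \pi_k \ge 0, \ \sum_{k} \pi_k = 1.
```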
arXiv Detail & Related papers (2020-09-01T17:20:08Z)
- Bayesian Deep Learning and a Probabilistic Perspective of Generalization [56.69671152009899]
We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization.
We also propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction.
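The marginalization being approximated is the Bayesian model average, which an ensemble of M independently trained networks with weights θ_m estimates as:
```latex
% Bayesian model average approximated by an M-member deep ensemble.
p(y \mid x, \mathcal{D})
  = \int p(y \mid x, \theta)\, p(\theta \mid \mathcal{D})\, d\theta
  \approx \frac{1}{M} \sum_{m=1}^{M} p(y \mid x, \theta_m)
```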
arXiv Detail & Related papers (2020-02-20T15:13:27Z)
- Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.