Annealing Flow Generative Model Towards Sampling High-Dimensional and Multi-Modal Distributions
- URL: http://arxiv.org/abs/2409.20547v2
- Date: Mon, 25 Nov 2024 23:05:27 GMT
- Title: Annealing Flow Generative Model Towards Sampling High-Dimensional and Multi-Modal Distributions
- Authors: Dongze Wu, Yao Xie
- Abstract summary: Annealing Flow (AF) is a continuous normalizing flow-based approach designed to sample from high-dimensional and multi-modal distributions.
AF ensures effective and balanced mode exploration, achieves linear complexity in sample size and dimension, and circumvents inefficient mixing times.
- Score: 6.992239210938067
- Abstract: Sampling from high-dimensional, multi-modal distributions remains a fundamental challenge across domains such as statistical Bayesian inference and physics-based machine learning. In this paper, we propose Annealing Flow (AF), a continuous normalizing flow-based approach designed to sample from high-dimensional and multi-modal distributions. The key idea is to learn a continuous normalizing flow-based transport map, guided by annealing, that transitions samples from an easy-to-sample distribution to the target distribution, facilitating effective exploration of modes in high-dimensional spaces. Unlike many existing methods, AF training does not rely on samples from the target distribution. AF ensures effective and balanced mode exploration, achieves linear complexity in sample size and dimension, and circumvents inefficient mixing times. We demonstrate the superior performance of AF compared to state-of-the-art methods through extensive experiments on various challenging distributions and real-world datasets, particularly in high-dimensional and multi-modal settings. We also highlight the potential of AF for sampling the least favorable distributions.
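To make the annealing idea concrete, here is a minimal, illustrative sketch (not the authors' implementation): samples from an easy base distribution are transported by a learned velocity field toward intermediate densities on a geometric annealing path between the base and the target. The bimodal target, network architecture, and step sizes below are placeholder choices, and the paper's actual training objective for the velocity field is omitted.

```python
import torch
import torch.nn as nn

def log_base(x):
    # Standard Gaussian base density (easy to sample from).
    return -0.5 * (x ** 2).sum(dim=-1)

def log_target(x):
    # Placeholder bimodal target in 2D: a mixture of two Gaussians.
    return torch.logsumexp(torch.stack([
        -0.5 * ((x - 3.0) ** 2).sum(dim=-1),
        -0.5 * ((x + 3.0) ** 2).sum(dim=-1),
    ]), dim=0)

def log_annealed(x, beta):
    # Geometric annealing path between base and target:
    # the intermediate densities a velocity field would be trained against.
    return (1.0 - beta) * log_base(x) + beta * log_target(x)

# Velocity field v(x, beta); its training against log_annealed is omitted here.
velocity = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 2))

def transport(x, beta, steps=20, dt=0.05):
    # Forward-Euler integration of the velocity field at annealing level beta.
    for _ in range(steps):
        b = torch.full((x.shape[0], 1), beta)
        x = x + dt * velocity(torch.cat([x, b], dim=-1))
    return x

x = torch.randn(512, 2)                    # draw from the base distribution
for beta in torch.linspace(0.1, 1.0, 10):  # push samples along the annealing path
    x = transport(x, beta.item())
```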
Related papers
- Learned Reference-based Diffusion Sampling for multi-modal distributions [2.1383136715042417]
We introduce Learned Reference-based Diffusion Sampler (LRDS), a methodology specifically designed to leverage prior knowledge on the location of the target modes.
LRDS proceeds in two steps, first learning a reference diffusion model on samples located in high-density regions of the space.
We experimentally demonstrate that LRDS best exploits prior knowledge on the target distribution compared to competing algorithms on a variety of challenging distributions.
arXiv Detail & Related papers (2024-10-25T10:23:34Z) - Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee with explicit dimensional dependence for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z) - Adaptive teachers for amortized samplers [76.88721198565861]
Amortized inference is the task of training a parametric model, such as a neural network, to approximate a distribution with a given unnormalized density where exact sampling is intractable.
Off-policy RL training facilitates the discovery of diverse, high-reward candidates, but existing methods still face challenges in efficient exploration.
We propose an adaptive training distribution (the Teacher) to guide the training of the primary amortized sampler (the Student) by prioritizing high-loss regions.
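As a rough, hypothetical illustration of loss-prioritized training distributions (the region discretization and names below are ours, not the paper's API), a Teacher can maintain per-region loss estimates and propose training points where the Student's loss is currently high:

```python
import numpy as np

rng = np.random.default_rng(0)
num_regions = 10
region_loss = np.ones(num_regions)  # running loss estimate per region

def teacher_sample():
    # Propose a region with probability proportional to its recent loss.
    probs = region_loss / region_loss.sum()
    return rng.choice(num_regions, p=probs)

def student_loss(region):
    # Stand-in for evaluating the amortized sampler's training loss there.
    return float(abs(region - 7) + rng.normal(scale=0.1))

for step in range(1000):
    r = teacher_sample()
    loss = student_loss(r)  # the student's own parameter update is omitted
    # EMA of observed loss, clipped to keep the sampling weights positive.
    region_loss[r] = max(1e-3, 0.9 * region_loss[r] + 0.1 * loss)
```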
arXiv Detail & Related papers (2024-10-02T11:33:13Z) - Deep Generative Sampling in the Dual Divergence Space: A Data-efficient & Interpretative Approach for Generative AI [29.13807697733638]
We build on the remarkable achievements in generative sampling of natural images.
We propose an innovative, and potentially overly ambitious, challenge: generating samples that resemble images.
The statistical challenge lies in the small sample size, sometimes consisting of a few hundred subjects.
arXiv Detail & Related papers (2024-04-10T22:35:06Z) - Boosting Diffusion Models with Moving Average Sampling in Frequency Domain [101.43824674873508]
Diffusion models rely on the current sample to denoise the next one, possibly resulting in denoising instability.
In this paper, we reinterpret the iterative denoising process as model optimization and leverage a moving average mechanism to ensemble all the prior samples.
We name the complete approach "Moving Average Sampling in Frequency domain (MASF)".
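A rough sketch of the moving-average idea follows (inspired by the summary above; the actual per-frequency weighting schedule is the paper's, and the denoiser output here is a random stand-in):

```python
import torch

def moving_average_freq(avg, x0_new, alpha=0.9):
    # Exponential moving average of predicted clean samples, in the Fourier domain.
    f_avg = torch.fft.fft2(avg)
    f_new = torch.fft.fft2(x0_new)
    return torch.fft.ifft2(alpha * f_avg + (1.0 - alpha) * f_new).real

avg = None
for step in range(50):                   # stand-in for the reverse diffusion loop
    x0_pred = torch.randn(1, 3, 32, 32)  # a real denoiser's x0 estimate goes here
    avg = x0_pred if avg is None else moving_average_freq(avg, x0_pred)
```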
arXiv Detail & Related papers (2024-03-26T16:57:55Z) - Space-Time Diffusion Bridge [0.4527270266697462]
We introduce a novel method for generating new synthetic samples that are independent and identically distributed (i.i.d.) draws from real probability distributions.
We use space-time mixing strategies that extend across temporal and spatial dimensions.
We validate the efficacy of our space-time diffusion approach with numerical experiments.
arXiv Detail & Related papers (2024-02-13T23:26:11Z) - Iterated Denoising Energy Matching for Sampling from Boltzmann Densities [109.23137009609519]
We propose Iterated Denoising Energy Matching (iDEM), which alternates between (I) sampling regions of high model density from a diffusion-based sampler and (II) using these samples in our matching objective.
We show that the proposed approach achieves state-of-the-art performance on all metrics and trains $2$-$5\times$ faster.
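Schematically, the alternation could be organized as in the sketch below; `sample_from_model` and `matching_loss` are placeholders standing in for the paper's diffusion-based sampler and its energy-matching objective:

```python
import torch
import torch.nn as nn

energy_net = nn.Sequential(nn.Linear(2, 64), nn.SiLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(energy_net.parameters(), lr=1e-3)

def sample_from_model(n):
    # Placeholder for step (I): draw from the current diffusion-based sampler.
    return torch.randn(n, 2)

def matching_loss(x):
    # Placeholder for step (II): the paper's denoising energy-matching objective.
    return energy_net(x).pow(2).mean()

for outer in range(10):  # alternate (I) sampling and (II) matching
    x = sample_from_model(512).detach()
    opt.zero_grad()
    matching_loss(x).backward()
    opt.step()
```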
arXiv Detail & Related papers (2024-02-09T01:11:23Z) - Improved off-policy training of diffusion samplers [93.66433483772055]
We study the problem of training diffusion models to sample from a distribution with an unnormalized density or energy function.
We benchmark several diffusion-structured inference methods, including simulation-based variational approaches and off-policy methods.
Our results shed light on the relative advantages of existing algorithms while bringing into question some claims from past work.
arXiv Detail & Related papers (2024-02-07T18:51:49Z) - Semi-Implicit Denoising Diffusion Models (SIDDMs) [50.30163684539586]
Existing models such as Denoising Diffusion Probabilistic Models (DDPM) deliver high-quality, diverse samples but are slowed by an inherently high number of iterative steps.
We introduce a novel approach that tackles the problem by matching implicit and explicit factors.
We demonstrate that our proposed method obtains comparable generative performance to diffusion-based models and vastly superior results to models with a small number of sampling steps.
arXiv Detail & Related papers (2023-06-21T18:49:22Z) - Efficient Multimodal Sampling via Tempered Distribution Flow [11.36635610546803]
We develop a new type of transport-based sampling method called TemperFlow.
Various experiments demonstrate the superior performance of this novel sampler compared to traditional methods.
We show its applications in modern deep learning tasks such as image generation.
arXiv Detail & Related papers (2023-04-08T06:40:06Z) - Fast Inference in Denoising Diffusion Models via MMD Finetuning [23.779985842891705]
We present MMD-DDM, a novel method for fast sampling of diffusion models.
Our approach is based on the idea of using the Maximum Mean Discrepancy (MMD) to finetune the learned distribution with a given budget of timesteps.
Our findings show that the proposed method is able to produce high-quality samples in a fraction of the time required by widely-used diffusion models.
arXiv Detail & Related papers (2023-01-19T09:48:07Z)
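For reference, a standard (biased, V-statistic) RBF-kernel estimate of squared MMD between generated and real samples looks like the sketch below; MMD-DDM's exact feature space and finetuning setup differ, and the tensors here are random stand-ins:

```python
import torch

def mmd2_rbf(x, y, sigma=1.0):
    # Biased estimate of squared MMD with an RBF kernel.
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

real = torch.randn(256, 16)                      # stand-in for real features
fake = torch.randn(256, 16, requires_grad=True)  # stand-in for generated features
loss = mmd2_rbf(fake, real)  # minimized w.r.t. the generator during finetuning
loss.backward()
```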