DiverseFlow: Sample-Efficient Diverse Mode Coverage in Flows
- URL: http://arxiv.org/abs/2504.07894v1
- Date: Thu, 10 Apr 2025 16:09:50 GMT
- Title: DiverseFlow: Sample-Efficient Diverse Mode Coverage in Flows
- Authors: Mashrur M. Morshed, Vishnu Boddeti
- Abstract summary: DiverseFlow is a training-free approach to improve the diversity of flow models. We demonstrate the efficacy of our method for tasks where sample-efficient diversity is desirable.
- Score: 0.6138671548064355
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many real-world applications of flow-based generative models desire a diverse set of samples that cover multiple modes of the target distribution. However, the predominant approach for obtaining diverse sets is not sample-efficient, as it involves independently obtaining many samples from the source distribution and mapping them through the flow until the desired mode coverage is achieved. As an alternative to repeated sampling, we introduce DiverseFlow: a training-free approach to improve the diversity of flow models. Our key idea is to employ a determinantal point process to induce a coupling between the samples that drives diversity under a fixed sampling budget. In essence, DiverseFlow allows exploration of more variations in a learned flow model with fewer samples. We demonstrate the efficacy of our method for tasks where sample-efficient diversity is desirable, such as text-guided image generation with polysemous words, inverse problems like large-hole inpainting, and class-conditional image synthesis.
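As a concrete illustration of the key idea in the abstract, here is a minimal sketch (not the authors' released implementation) of DPP-style diversity guidance during flow sampling: a batch is integrated through a pretrained velocity field, and each Euler step adds the gradient of the log-determinant of a similarity kernel over the batch, which grows as the samples spread apart. `velocity_model`, the kernel bandwidth, and the guidance scale are illustrative placeholders.

```python
import torch

def dpp_logdet(x, bandwidth=1.0):
    """Log-determinant of an RBF similarity kernel over a batch.

    The log-determinant grows as the batch spreads out, so ascending
    its gradient encourages diverse samples (the DPP intuition).
    """
    flat = x.flatten(start_dim=1)
    d2 = torch.cdist(flat, flat).pow(2)
    K = torch.exp(-d2 / (2.0 * bandwidth**2))
    # Small jitter keeps the Cholesky factorization stable.
    K = K + 1e-4 * torch.eye(len(K), device=x.device)
    return 2.0 * torch.linalg.cholesky(K).diagonal().log().sum()

@torch.no_grad()
def diverse_flow_sample(velocity_model, x, steps=50, scale=0.1):
    """Euler integration of a flow ODE with a diversity nudge.

    velocity_model(x, t) stands in for any pretrained flow (e.g. a
    flow-matching / rectified-flow network); scale trades fidelity
    for diversity. Setting scale=0 recovers plain batched sampling.
    """
    ts = torch.linspace(0.0, 1.0, steps + 1)
    for i in range(steps):
        t = ts[i].expand(len(x))
        v = velocity_model(x, t)
        with torch.enable_grad():
            xg = x.detach().requires_grad_(True)
            grad = torch.autograd.grad(dpp_logdet(xg), xg)[0]
        x = x + (ts[i + 1] - ts[i]) * (v + scale * grad)
    return x
```

The guidance scale trades sample fidelity against diversity; the coupling between samples enters only through the batch-wide kernel term.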
Related papers
- Accelerated Diffusion Models via Speculative Sampling [89.43940130493233]
Speculative sampling is a popular technique for accelerating inference in Large Language Models. We extend speculative sampling to diffusion models, which generate samples via continuous, vector-valued Markov chains. We propose various drafting strategies, including a simple and effective approach that does not require training a draft model.
arXiv Detail & Related papers (2025-01-09T16:50:16Z)
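For context, the sketch below shows the standard accept/reject rule of speculative sampling in the discrete LLM setting that the paper above generalizes from; the extension to continuous, vector-valued diffusion chains is the paper's contribution and is not reproduced here. Names and arguments are illustrative.

```python
import torch

def speculative_step(p_target, p_draft, draft_token, generator=None):
    """One accept/reject step of discrete speculative sampling.

    p_target and p_draft are probability vectors from the large model
    and the cheap draft model at the same position. The drafted token
    is accepted with probability min(1, p_target/p_draft); on
    rejection we resample from the clipped residual, which makes the
    output exactly distributed according to the target model.
    """
    t, d = p_target[draft_token], p_draft[draft_token]
    if torch.rand((), generator=generator) < torch.clamp(t / d, max=1.0):
        return draft_token, True
    residual = torch.clamp(p_target - p_draft, min=0.0)
    residual = residual / residual.sum().clamp_min(1e-12)
    return torch.multinomial(residual, 1).item(), False
```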
- Efficient Diversity-Preserving Diffusion Alignment via Gradient-Informed GFlowNets [65.42834731617226]
We propose a reinforcement learning method for diffusion model finetuning, dubbed Nabla-GFlowNet.
We show that our proposed method achieves fast yet diversity- and prior-preserving finetuning of Stable Diffusion, a large-scale text-conditioned image diffusion model.
arXiv Detail & Related papers (2024-12-10T18:59:58Z)
- Fast Samplers for Inverse Problems in Iterative Refinement Models [19.099632445326826]
We propose a plug-and-play framework for constructing efficient samplers for inverse problems.
Our method can generate high-quality samples in as few as 5 conditional sampling steps and outperforms competing baselines requiring 20-1000 steps.
arXiv Detail & Related papers (2024-05-27T21:50:16Z)
- Improved off-policy training of diffusion samplers [93.66433483772055]
We study the problem of training diffusion models to sample from a distribution with an unnormalized density or energy function. We benchmark several diffusion-structured inference methods, including simulation-based variational approaches and off-policy methods. Our results shed light on the relative advantages of existing algorithms while bringing into question some claims from past work.
arXiv Detail & Related papers (2024-02-07T18:51:49Z)
- Touring sampling with pushforward maps [3.5897534810405403]
This paper takes a theoretical stance to review and organize many sampling approaches in the generative-modeling setting.
The resulting organization may help overcome some current challenges in sampling with diffusion models.
arXiv Detail & Related papers (2023-11-23T08:23:43Z)
- Efficient Multimodal Sampling via Tempered Distribution Flow [11.36635610546803]
We develop a new type of transport-based sampling method called TemperFlow.
Various experiments demonstrate the superior performance of this novel sampler compared to traditional methods.
We show its applications in modern deep learning tasks such as image generation.
arXiv Detail & Related papers (2023-04-08T06:40:06Z)
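For intuition on what "tempered" means here, the standard construction (not necessarily the paper's exact formulation) raises the target density to a power beta in (0, 1]:

```latex
p_\beta(x) \;\propto\; p(x)^{\beta}, \qquad 0 < \beta \le 1 .
```

At small beta the energy barriers between modes are shallower, so a sampler moves between modes easily; annealing beta up to 1 recovers the original multimodal target.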
- Unite and Conquer: Plug & Play Multi-Modal Synthesis using Diffusion Models [54.1843419649895]
We propose a solution based on denoising diffusion probabilistic models (DDPMs).
Our motivation for choosing diffusion models over other generative models comes from the flexible internal structure of diffusion models.
Our method can unite multiple diffusion models trained on multiple sub-tasks and conquer the combined task.
arXiv Detail & Related papers (2022-12-01T18:59:55Z)
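One simple way to realize the "unite" step of the paper above, sketched under the assumption of epsilon-prediction DDPMs: blend the noise predictions of the sub-task models at each reverse step. The weighting scheme is an illustrative placeholder, not the paper's exact composition rule.

```python
import torch

def combined_eps(models, weights, x_t, t):
    """Blend noise predictions from several pretrained diffusion models.

    Each model handles a different sub-task (e.g. a different
    conditioning modality); mixing their predicted noise at every
    reverse step is one plug-and-play way to address the combined
    task without retraining.
    """
    eps = torch.zeros_like(x_t)
    for model, w in zip(models, weights):
        eps = eps + w * model(x_t, t)
    return eps / sum(weights)
```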
- Diverse Human Motion Prediction via Gumbel-Softmax Sampling from an Auxiliary Space [34.83587750498361]
Diverse human motion prediction aims at predicting multiple possible future pose sequences from a sequence of observed poses.
Previous approaches usually employ deep generative networks to model the conditional distribution of data, and then randomly sample outcomes from the distribution.
We propose a novel strategy for sampling highly diverse results from an imbalanced multimodal distribution.
arXiv Detail & Related papers (2022-07-15T09:03:57Z)
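The Gumbel-Softmax trick named in the title above is standard and sketched below; the paper's full method (sampling from a learned auxiliary space) is not reproduced. PyTorch also ships this as `torch.nn.functional.gumbel_softmax`.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=1.0, generator=None):
    """Differentiable approximate sample from a categorical distribution.

    Adds Gumbel(0, 1) noise to the logits and applies a temperature-
    scaled softmax; as tau -> 0 the output approaches a one-hot sample,
    while gradients still flow through the relaxation.
    """
    u = torch.rand(logits.shape, generator=generator).clamp_(1e-9, 1.0)
    gumbel = -torch.log(-torch.log(u))  # Gumbel(0, 1) noise
    return F.softmax((logits + gumbel) / tau, dim=-1)
```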
- Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation [110.09855163856326]
This paper addresses the problem of learning a policy that generates an object through a sequence of actions.
We propose GFlowNet, based on a view of the generative process as a flow network.
We prove that any global minimum of the proposed objectives yields a policy which samples from the desired distribution.
arXiv Detail & Related papers (2021-06-08T14:21:10Z)
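The flow-network view of the paper above can be stated precisely. Writing F(s -> s') for the flow assigned to an edge and R(s') for the terminal reward, the flow-matching condition from the GFlowNet paper requires inflow to equal outflow at every non-terminal state, with reward as the terminal flow:

```latex
\sum_{s \,:\, s \to s'} F(s \to s')
  \;=\; \sum_{s'' \,:\, s' \to s''} F(s' \to s''),
\qquad
F(s' \to s_f) = R(s') .
```

The induced policy samples each edge in proportion to its flow, P(s' | s) = F(s -> s') / sum_{s''} F(s -> s''), and when the condition holds everywhere, terminal objects are sampled with probability proportional to their reward.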
- Diverse Semantic Image Synthesis via Probability Distribution Modeling [103.88931623488088]
We propose a novel diverse semantic image synthesis framework.
Our method can achieve superior diversity and comparable quality compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-03-11T18:59:25Z)
- DLow: Diversifying Latent Flows for Diverse Human Motion Prediction [32.22704734791378]
We propose a novel sampling method, Diversifying Latent Flows (DLow), to produce a diverse set of samples from a pretrained deep generative model.
During training, DLow uses a diversity-promoting prior over samples as an objective, optimizing the latent mappings to improve sample diversity.
Our experiments demonstrate that DLow outperforms state-of-the-art baseline methods in terms of sample diversity and accuracy.
arXiv Detail & Related papers (2020-03-18T17:58:11Z)
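Below is a minimal sketch of a diversity-promoting term of the kind DLow optimizes; the paper's exact energy and its prior-matching term differ, and all names here are illustrative.

```python
import torch

def diversity_energy(samples, scale=1.0):
    """RBF energy over pairwise distances of a set of samples.

    Minimizing this term pushes the K samples apart from one another;
    DLow balances a term like this against a prior term so that the
    latent mappings stay consistent with the pretrained generator.
    """
    flat = samples.flatten(start_dim=1)
    d2 = torch.cdist(flat, flat).pow(2)
    off_diag = ~torch.eye(len(flat), dtype=torch.bool, device=flat.device)
    return torch.exp(-d2[off_diag] / scale).mean()
```

Roughly, DLow draws one latent variable, maps it through K learned transformations, decodes each through the pretrained generative model, and minimizes an energy of this kind on the decoded set.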