Multisample Flow Matching: Straightening Flows with Minibatch Couplings
- URL: http://arxiv.org/abs/2304.14772v2
- Date: Wed, 24 May 2023 18:17:17 GMT
- Title: Multisample Flow Matching: Straightening Flows with Minibatch Couplings
- Authors: Aram-Alexandre Pooladian, Heli Ben-Hamu, Carles Domingo-Enrich,
Brandon Amos, Yaron Lipman, and Ricky T. Q. Chen
- Abstract summary: Simulation-free methods for training continuous-time generative models construct probability paths that go between noise distributions and individual data samples.
We propose Multisample Flow Matching, a more general framework that uses non-trivial couplings between data and noise samples.
We show that our proposed methods improve sample consistency on downsampled ImageNet data sets, and lead to better low-cost sample generation.
- Score: 38.82598694134521
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Simulation-free methods for training continuous-time generative models
construct probability paths that go between noise distributions and individual
data samples. Recent works, such as Flow Matching, derived paths that are
optimal for each data sample. However, these algorithms rely on independent
data and noise samples, and do not exploit underlying structure in the data
distribution for constructing probability paths. We propose Multisample Flow
Matching, a more general framework that uses non-trivial couplings between data
and noise samples while satisfying the correct marginal constraints. At very
small overhead costs, this generalization allows us to (i) reduce gradient
variance during training, (ii) obtain straighter flows for the learned vector
field, which allows us to generate high-quality samples using fewer function
evaluations, and (iii) obtain transport maps with lower cost in high
dimensions, which has applications beyond generative modeling. Importantly, we
do so in a completely simulation-free manner with a simple minimization
objective. We show that our proposed methods improve sample consistency on
downsampled ImageNet data sets, and lead to better low-cost sample generation.
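To make the coupling idea concrete, below is a minimal PyTorch-style sketch, not the authors' code: it pairs each minibatch of Gaussian noise samples with data samples via an optimal-transport assignment (here solved with scipy's linear_sum_assignment as a stand-in for the paper's minibatch couplings) and then regresses a velocity network onto the straight-line path between the matched pairs. The network interface velocity_net(x, t), the batch construction, and the few-step Euler sampler are illustrative assumptions.

```python
import torch
from scipy.optimize import linear_sum_assignment

def minibatch_coupling(x0, x1):
    """Pair noise samples x0 with data samples x1 by solving an assignment
    problem under squared Euclidean cost (one choice of minibatch coupling)."""
    cost = torch.cdist(x0.flatten(1), x1.flatten(1)) ** 2  # (B, B) pairwise costs
    row, col = linear_sum_assignment(cost.cpu().numpy())   # optimal permutation
    return x0[row], x1[col]

def flow_matching_loss(velocity_net, x1):
    """One training step: couple noise with data, then regress the velocity
    network onto the straight-line path between the matched pairs."""
    x0 = torch.randn_like(x1)              # noise batch; its marginal is unchanged
    x0, x1 = minibatch_coupling(x0, x1)    # non-trivial joint, correct marginals
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)), device=x1.device)
    xt = (1 - t) * x0 + t * x1             # point along the interpolating path
    target = x1 - x0                       # velocity of the straight path
    return ((velocity_net(xt, t) - target) ** 2).mean()

@torch.no_grad()
def euler_sample(velocity_net, shape, steps=8, device="cpu"):
    """Few-step Euler integration of the learned field; straighter flows are
    what make small step counts viable (low-cost sample generation)."""
    x = torch.randn(shape, device=device)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((shape[0], *([1] * (len(shape) - 1))), i * dt, device=device)
        x = x + dt * velocity_net(x, t)
    return x
```

Without the coupling step, this reduces to an independent-coupling Flow Matching objective with straight conditional paths; the per-batch assignment is the only extra cost, matching the abstract's claim of very small overhead.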
Related papers
- Single-Step Consistent Diffusion Samplers [8.758218443992467]
Existing sampling algorithms typically require many iterative steps to produce high-quality samples.
We introduce consistent diffusion samplers, a new class of samplers designed to generate high-fidelity samples in a single step.
We show that our approach yields high-fidelity samples using less than 1% of the network evaluations required by traditional diffusion samplers.
arXiv Detail & Related papers (2025-02-11T14:25:52Z)
- Neural Flow Samplers with Shortcut Models [19.81513273510523]
Flow-based samplers generate samples by learning a velocity field that satisfies the continuity equation.
While importance sampling provides an approximation, it suffers from high variance.
arXiv Detail & Related papers (2025-02-11T07:55:41Z)
- Distributional Diffusion Models with Scoring Rules [83.38210785728994]
Diffusion models generate high-quality synthetic data, but generating high-quality outputs requires many discretization steps.
We propose to accomplish sample generation by learning the posterior distribution of clean data samples.
arXiv Detail & Related papers (2025-02-04T16:59:03Z)
- Local Flow Matching Generative Models [19.859984725284896]
Local Flow Matching is a computational framework for density estimation based on flow-based generative models.
$\texttt{LFM}$ employs a simulation-free scheme and incrementally learns a sequence of Flow Matching sub-models.
We demonstrate the improved training efficiency and competitive generative performance of $\texttt{LFM}$ compared to FM.
arXiv Detail & Related papers (2024-10-03T14:53:10Z)
- Diffusion Generative Flow Samplers: Improving learning signals through partial trajectory optimization [87.21285093582446]
Diffusion Generative Flow Samplers (DGFS) is a sampling-based framework where the learning process can be tractably broken down into short partial trajectory segments.
Our method takes inspiration from the theory developed for generative flow networks (GFlowNets).
arXiv Detail & Related papers (2023-10-04T09:39:05Z)
- ScoreMix: A Scalable Augmentation Strategy for Training GANs with Limited Data [93.06336507035486]
Generative Adversarial Networks (GANs) typically suffer from overfitting when limited training data is available.
We present ScoreMix, a novel and scalable data augmentation approach for various image synthesis tasks.
arXiv Detail & Related papers (2022-10-27T02:55:15Z)
- POODLE: Improving Few-shot Learning via Penalizing Out-of-Distribution Samples [19.311470287767385]
We propose to use out-of-distribution samples, i.e., unlabeled samples coming from outside the target classes, to improve few-shot learning.
Our approach is simple to implement, agnostic to feature extractors, lightweight without any additional cost for pre-training, and applicable to both inductive and transductive settings.
arXiv Detail & Related papers (2022-06-08T18:59:21Z)
- Learn from Unpaired Data for Image Restoration: A Variational Bayes Approach [18.007258270845107]
We propose LUD-VAE, a deep generative method to learn the joint probability density function from data sampled from marginal distributions.
We apply our method to real-world image denoising and super-resolution tasks and train the models using the synthetic data generated by the LUD-VAE.
arXiv Detail & Related papers (2022-04-21T13:27:17Z)
- Unrolling Particles: Unsupervised Learning of Sampling Distributions [102.72972137287728]
Particle filtering is used to compute good nonlinear estimates of complex systems.
We show in simulations that the resulting particle filter yields good estimates in a wide range of scenarios.
arXiv Detail & Related papers (2021-10-06T16:58:34Z)
- Bandit Samplers for Training Graph Neural Networks [63.17765191700203]
Several sampling algorithms with variance reduction have been proposed for accelerating the training of Graph Convolution Networks (GCNs).
These sampling algorithms are not applicable to more general graph neural networks (GNNs) where the message aggregator contains learned weights rather than fixed weights, such as Graph Attention Networks (GATs).
arXiv Detail & Related papers (2020-06-10T12:48:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.