Augmented Bridge Matching
- URL: http://arxiv.org/abs/2311.06978v1
- Date: Sun, 12 Nov 2023 22:42:34 GMT
- Title: Augmented Bridge Matching
- Authors: Valentin De Bortoli, Guan-Horng Liu, Tianrong Chen, Evangelos A.
Theodorou, Weili Nie
- Abstract summary: Flow and bridge matching processes can interpolate between arbitrary data distributions.
We show that a simple modification of the matching process recovers this coupling by augmenting the velocity field.
We illustrate the efficiency of our augmentation in learning mixtures of image translation tasks.
- Score: 32.668433085737036
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Flow and bridge matching are a novel class of processes that
encompasses diffusion models. One of the main aspects of their increased
flexibility is that these models can interpolate between arbitrary data
distributions, i.e., they generalize beyond generative modeling and can be
applied to learning stochastic (and deterministic) processes for arbitrary
transfer tasks between two given distributions. In this paper, we highlight
that while flow and bridge matching processes preserve the information of the
marginal distributions, they do \emph{not} necessarily preserve the coupling
information unless additional, stronger optimality conditions are met. This
can be problematic if one aims at preserving the original empirical pairing.
We show that a simple modification of the matching process recovers this
coupling by augmenting the velocity field (or drift) with the information of
the initial sample point. In doing so, we lose the Markovian property of the
process but preserve the coupling information between distributions. We
illustrate the efficiency of our augmentation in learning mixtures of image
translation tasks.
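To make the mechanism concrete, here is a minimal PyTorch sketch of the augmentation described above: a bridge-matching drift network that, in addition to (x_t, t), is conditioned on the initial sample x_0. The Brownian-bridge interpolant, network architecture, and toy data are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn

# Hypothetical drift network: the augmented variant also sees x0.
class AugmentedDrift(nn.Module):
    def __init__(self, dim, hidden=128):
        super().__init__()
        # Input: x_t, the initial point x0 (the augmentation), and t.
        self.net = nn.Sequential(
            nn.Linear(2 * dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x_t, x0, t):
        return self.net(torch.cat([x_t, x0, t], dim=-1))

def bridge_matching_loss(model, x0, x1, sigma=1.0):
    """One augmented bridge-matching step on a paired batch (x0, x1)."""
    t = torch.rand(x0.shape[0], 1)
    # Brownian bridge between the paired samples.
    mean = (1 - t) * x0 + t * x1
    x_t = mean + sigma * torch.sqrt(t * (1 - t)) * torch.randn_like(x0)
    # Regression target: conditional drift of the bridge toward x1.
    target = (x1 - x_t) / (1 - t).clamp(min=1e-3)
    # Conditioning on x0 is what retains the empirical pairing.
    return ((model(x_t, x0, t) - target) ** 2).mean()

model = AugmentedDrift(dim=2)
x0, x1 = torch.randn(64, 2), torch.randn(64, 2) + 3.0  # toy paired data
loss = bridge_matching_loss(model, x0, x1)
loss.backward()
```

Dropping x0 from the network input recovers standard (Markovian) bridge matching, which preserves the marginals but not necessarily the pairing.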
Related papers
- Information-Theoretic Proofs for Diffusion Sampling [13.095978794717007]
This paper provides an elementary, self-contained analysis of diffusion-based sampling methods for generative modeling.
We show that, if the diffusion step sizes are chosen sufficiently small, then the sampling distribution is provably close to the target distribution.
Our results also provide a transparent view on how to accelerate convergence by introducing additional randomness in each step to match higher order moments in the comparison process.
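As generic background on how step size drives sampling accuracy, the NumPy sketch below runs Euler-Maruyama on the reverse SDE of an Ornstein-Uhlenbeck forward process, using a toy Gaussian target whose score is known in closed form; these choices are illustrative assumptions, not constructions from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, s2, T = 3.0, 0.25, 5.0          # toy target N(m, s2), horizon T

def score(x, t):
    # Closed-form score of the OU-diffused marginal p_t (toy target only).
    mean_t = m * np.exp(-t)
    var_t = s2 * np.exp(-2 * t) + 1.0 - np.exp(-2 * t)
    return -(x - mean_t) / var_t

def sample(n_steps, n=20000):
    h = T / n_steps
    x = rng.standard_normal(n)     # start from the N(0, 1) prior
    for k in range(n_steps):
        t = T - k * h
        # Euler-Maruyama step of the reverse SDE dx = [-x - 2*score] dt + sqrt(2) dW
        x = x - h * (-x - 2.0 * score(x, t)) + np.sqrt(2.0 * h) * rng.standard_normal(n)
    return x

# Smaller steps => sampled mean/variance closer to the target (m, s2).
for n_steps in (10, 100, 1000):
    xs = sample(n_steps)
    print(n_steps, xs.mean().round(3), xs.var().round(3))
```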
arXiv Detail & Related papers (2025-02-04T13:19:21Z)
- Flow Matching: Markov Kernels, Stochastic Processes and Transport Plans [1.9766522384767222]
We show how flow matching techniques can be used to solve inverse problems.
We briefly address continuous normalizing flows and score matching techniques.
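For readers new to the technique, a minimal conditional flow matching objective looks roughly as follows; the straight-line path and small regression network are textbook choices, not details from this paper.

```python
import torch
import torch.nn as nn

# Generic velocity network v(x_t, t); architecture is an illustrative choice.
velocity = nn.Sequential(nn.Linear(3, 64), nn.SiLU(), nn.Linear(64, 2))

def flow_matching_loss(x0, x1):
    """Conditional flow matching with a straight-line interpolation path."""
    t = torch.rand(x0.shape[0], 1)
    x_t = (1 - t) * x0 + t * x1        # point on the path at time t
    target = x1 - x0                   # constant velocity of the straight path
    pred = velocity(torch.cat([x_t, t], dim=-1))
    return ((pred - target) ** 2).mean()

x0 = torch.randn(128, 2)               # source distribution sample
x1 = torch.randn(128, 2) * 0.5 + 2.0   # target distribution sample
flow_matching_loss(x0, x1).backward()
```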
arXiv Detail & Related papers (2025-01-28T10:28:17Z)
- Flowing from Words to Pixels: A Framework for Cross-Modality Evolution [14.57591222028278]
We present a general and simple framework, CrossFlow, for cross-modal flow matching.
We show the importance of applying Variational Encoders to the input data, and introduce a method to enable classifier-free guidance.
To demonstrate the generalizability of our approach, we also show that CrossFlow is on par with or outperforms the state-of-the-art for various cross-modal / intramodal mapping tasks.
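The pipeline can be caricatured as: a variational encoder maps source-modality features into the target latent space, and flow matching then connects the two latent sets directly instead of starting from noise. The sketch below is a toy rendering under assumed shapes and architectures, not CrossFlow's actual models.

```python
import torch
import torch.nn as nn

latent_dim = 16

# Hypothetical variational encoder mapping source-modality features
# (e.g. pooled text embeddings) into the shared latent space.
class VarEncoder(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mu = nn.Linear(in_dim, out_dim)
        self.logvar = nn.Linear(in_dim, out_dim)

    def forward(self, h):
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return z, mu, logvar

encoder = VarEncoder(in_dim=32, out_dim=latent_dim)
velocity = nn.Sequential(nn.Linear(latent_dim + 1, 64), nn.SiLU(),
                         nn.Linear(64, latent_dim))

text_feats = torch.randn(8, 32)            # stand-in source-modality features
img_latents = torch.randn(8, latent_dim)   # stand-in target latents

z0, mu, logvar = encoder(text_feats)
t = torch.rand(8, 1)
x_t = (1 - t) * z0 + t * img_latents       # straight path between modalities
target = img_latents - z0
fm_loss = ((velocity(torch.cat([x_t, t], -1)) - target) ** 2).mean()
kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()  # VE regularizer
(fm_loss + 1e-3 * kl).backward()
```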
arXiv Detail & Related papers (2024-12-19T18:59:56Z)
- A Complete Decomposition of KL Error using Refined Information and Mode Interaction Selection [11.994525728378603]
We revisit the classical formulation of the log-linear model with a focus on higher-order mode interactions.
We find that our learned distributions make more efficient use of the finite data available in practice.
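As background on the model class being refined, a log-linear model assigns probabilities through an exponential of selected interaction terms; the sketch below enumerates a three-variable example with one pairwise and one third-order interaction. The parameter values are arbitrary.

```python
import itertools
import numpy as np

# Log-linear model over 3 binary variables with chosen mode interactions:
# log p(x) = theta_12 * x1*x2 + theta_123 * x1*x2*x3 - log Z
theta_12, theta_123 = 0.8, -0.5
states = np.array(list(itertools.product([0, 1], repeat=3)))
logits = theta_12 * states[:, 0] * states[:, 1] \
       + theta_123 * states[:, 0] * states[:, 1] * states[:, 2]
p = np.exp(logits) / np.exp(logits).sum()   # normalize over all 8 states
print(dict(zip(map(tuple, states), p.round(4))))
```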
arXiv Detail & Related papers (2024-10-15T18:08:32Z)
- InterHandGen: Two-Hand Interaction Generation via Cascaded Reverse Diffusion [53.90516061351706]
We present InterHandGen, a novel framework that learns the generative prior of two-hand interaction.
For sampling, we combine anti-penetration and classifier-free guidance to enable plausible generation.
Our method significantly outperforms baseline generative models in terms of plausibility and diversity.
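Combining guidance signals during reverse diffusion typically amounts to adding their contributions to the model output. The sketch below mixes classifier-free guidance with the gradient of an auxiliary penalty (standing in for an anti-penetration term); it is a generic recipe, not the paper's exact procedure.

```python
import torch

def guided_noise_pred(model, x_t, t, cond, null_cond, penalty,
                      w_cfg=2.0, w_pen=0.1):
    """Generic multi-guidance step: classifier-free guidance plus the
    gradient of an auxiliary penalty (e.g. an anti-penetration score)."""
    eps_c = model(x_t, t, cond)
    eps_u = model(x_t, t, null_cond)
    eps = eps_u + w_cfg * (eps_c - eps_u)   # classifier-free guidance
    with torch.enable_grad():
        x = x_t.detach().requires_grad_(True)
        grad = torch.autograd.grad(penalty(x).sum(), x)[0]
    return eps + w_pen * grad               # push away from penalized states
```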
arXiv Detail & Related papers (2024-03-26T06:35:55Z)
- Theoretical Insights for Diffusion Guidance: A Case Study for Gaussian Mixture Models [59.331993845831946]
Diffusion models benefit from instillation of task-specific information into the score function to steer the sample generation towards desired properties.
This paper provides the first theoretical study towards understanding the influence of guidance on diffusion models in the context of Gaussian mixture models.
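As background for the Gaussian-mixture setting, recall that the score of a mixture is a posterior-weighted average of per-component scores; the NumPy sketch below computes it for a two-component 1-D mixture. This is textbook material, not the paper's derivation.

```python
import numpy as np
from scipy.stats import norm

# Two-component 1-D Gaussian mixture: p(x) = sum_i w_i N(x; mu_i, s_i^2)
w = np.array([0.3, 0.7])
mu = np.array([-2.0, 1.5])
s = np.array([1.0, 0.5])

def gmm_score(x):
    """Score = posterior-weighted sum of per-component Gaussian scores."""
    dens = w * norm.pdf(x, mu, s)          # w_i N(x; mu_i, s_i)
    resp = dens / dens.sum()               # posterior responsibilities
    return np.sum(resp * (mu - x) / s**2)  # component score: (mu - x)/s^2

print(gmm_score(0.0))
```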
arXiv Detail & Related papers (2024-03-03T23:15:48Z)
- Structural Pruning for Diffusion Models [65.02607075556742]
We present Diff-Pruning, an efficient compression method tailored for learning lightweight diffusion models from pre-existing ones.
Our empirical assessment, undertaken across several datasets, highlights two primary benefits of our proposed method.
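Structural pruning in general removes whole channels ranked by an importance score. The sketch below performs magnitude-based channel pruning of a convolution layer as a generic illustration; Diff-Pruning's own Taylor-expansion criterion over diffusion timesteps is not reproduced here.

```python
import torch
import torch.nn as nn

def prune_conv_channels(conv: nn.Conv2d, keep_ratio: float) -> nn.Conv2d:
    """Drop output channels with the smallest L1 weight magnitude.
    (A generic criterion; Diff-Pruning ranks weights differently.)"""
    importance = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # per channel
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    keep = importance.topk(n_keep).indices.sort().values
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       conv.stride, conv.padding, bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned

layer = nn.Conv2d(16, 32, 3, padding=1)
slim = prune_conv_channels(layer, keep_ratio=0.5)
print(slim)  # Conv2d with 16 output channels remaining
```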
arXiv Detail & Related papers (2023-05-18T12:38:21Z)
- Kernel-Whitening: Overcome Dataset Bias with Isotropic Sentence Embedding [51.48582649050054]
We propose a representation normalization method which aims at disentangling the correlations between features of encoded sentences.
We also propose Kernel-Whitening, a Nystrom kernel approximation method to achieve more thorough debiasing on nonlinear spurious correlations.
Experiments show that Kernel-Whitening significantly improves the performance of BERT on out-of-distribution datasets while maintaining in-distribution accuracy.
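Plain (linear) whitening, the starting point for the kernelized variant, centers the embeddings and rescales them so their covariance becomes the identity; the NumPy sketch below shows this baseline operation only, not the paper's Nystrom approximation.

```python
import numpy as np

def whiten(X, eps=1e-6):
    """ZCA-style whitening: output features are decorrelated, unit variance."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return Xc @ W

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 8)) @ rng.standard_normal((8, 8))  # correlated
Xw = whiten(X)
print(np.round(np.cov(Xw.T), 2))  # approximately the identity matrix
```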
arXiv Detail & Related papers (2022-10-14T05:56:38Z)
- Contrastive learning of strong-mixing continuous-time stochastic processes [53.82893653745542]
Contrastive learning is a family of self-supervised methods where a model is trained to solve a classification task constructed from unlabeled data.
We show that a properly constructed contrastive learning task can be used to estimate the transition kernel for small-to-mid-range intervals in the diffusion case.
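The construction can be pictured as an InfoNCE-style task: tell the true successor x_{t+dt} of x_t apart from shuffled negatives. The sketch below sets up such a loss on simulated Ornstein-Uhlenbeck pairs; the process, network, and pairing scheme are toy assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Score head f(x, y), loosely a log transition-kernel ratio up to constants.
f = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))

# Simulate pairs (x_t, x_{t+dt}) from a toy Ornstein-Uhlenbeck process.
n, dt = 256, 0.1
x = torch.randn(n, 1)
x_next = x - x * dt + (2 * dt) ** 0.5 * torch.randn(n, 1)

# InfoNCE: the true successor must beat all shuffled negatives.
logits = f(torch.cat([x.repeat_interleave(n, 0),
                      x_next.repeat(n, 1)], dim=-1)).view(n, n)
loss = F.cross_entropy(logits, torch.arange(n))  # diagonal = positive pairs
loss.backward()
```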
arXiv Detail & Related papers (2021-03-03T23:06:47Z)
- Robust model training and generalisation with Studentising flows [22.757298187704745]
We discuss how flow-based models can be further improved based on insights from robust (in particular, resistant) statistics.
We propose to endow flow-based models with fat-tailed latent distributions as a simple drop-in replacement for the Gaussian distribution.
Experiments on several different datasets confirm the efficacy of the proposed approach.
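The drop-in nature of the proposal is easy to see: in a normalizing flow, the base log-density enters the likelihood as a single term, so swapping in a heavier-tailed base changes one line. The sketch below contrasts Gaussian and Student-t bases for a toy affine flow; all details are illustrative.

```python
import torch
import torch.distributions as D

# Toy invertible flow: an affine map x = a*z + b, so z = (x - b)/a.
a, b = torch.tensor(2.0), torch.tensor(0.5)

def flow_log_prob(x, base):
    z = (x - b) / a
    # Change of variables: log p(x) = log p_base(z) - log|det dx/dz|
    return base.log_prob(z) - torch.log(a.abs())

x = torch.tensor([0.0, 10.0])              # one inlier, one outlier
gauss = D.Normal(0.0, 1.0)
student = D.StudentT(df=3.0)               # fat-tailed drop-in replacement
print(flow_log_prob(x, gauss))    # outlier gets an extreme penalty
print(flow_log_prob(x, student))  # heavier tails keep penalties bounded
```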
arXiv Detail & Related papers (2020-06-11T16:47:01Z)
- Embedding Propagation: Smoother Manifold for Few-Shot Classification [131.81692677836202]
We propose to use embedding propagation as an unsupervised non-parametric regularizer for manifold smoothing in few-shot classification.
We empirically show that embedding propagation yields a smoother embedding manifold.
We show that embedding propagation consistently improves the accuracy of the models in multiple semi-supervised learning scenarios by up to 16 percentage points.
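Embedding propagation follows the familiar label-propagation recipe applied to features: build a similarity graph over the batch and smooth each embedding toward its neighbors with a closed-form propagator. The sketch below is a generic version; the paper's exact similarity and normalization may differ.

```python
import numpy as np

def propagate_embeddings(X, alpha=0.5, sigma=1.0):
    """Smooth embeddings over a similarity graph: Z = (I - alpha*S)^-1 X."""
    d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)   # pairwise sq. distances
    A = np.exp(-d2 / (2 * sigma**2))
    np.fill_diagonal(A, 0.0)                        # no self-loops
    Dinv = np.diag(1.0 / np.sqrt(A.sum(1)))
    S = Dinv @ A @ Dinv                             # symmetric normalization
    return np.linalg.solve(np.eye(len(X)) - alpha * S, X)

X = np.random.default_rng(0).standard_normal((10, 4))
Z = propagate_embeddings(X)   # same shape, smoothed over the manifold
print(Z.shape)
```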
arXiv Detail & Related papers (2020-03-09T13:51:09Z)