Densely connected normalizing flows
- URL: http://arxiv.org/abs/2106.04627v1
- Date: Tue, 8 Jun 2021 18:24:41 GMT
- Title: Densely connected normalizing flows
- Authors: Matej Grcić, Ivan Grubišić, Siniša Šegvić
- Abstract summary: Normalizing flows are mappings between inputs and latent representations with a fully factorized distribution.
We address this issue by incrementally padding intermediate representations with noise.
Our invertible glow-like modules express intra-unit affine coupling as a fusion of a densely connected block and Nyström self-attention.
Experiments show significant improvements due to the proposed contributions, and reveal state-of-the-art density estimation among all generative models under moderate computing budgets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Normalizing flows are bijective mappings between inputs and latent
representations with a fully factorized distribution. They are very attractive
due to exact likelihood evaluation and efficient sampling. However, their
effective capacity is often insufficient since the bijectivity constraint
limits the model width. We address this issue by incrementally padding
intermediate representations with noise. We precondition the noise in
accordance with previous invertible units, which we describe as cross-unit
coupling. Our invertible glow-like modules express intra-unit affine coupling
as a fusion of a densely connected block and Nyström self-attention. We refer
to our architecture as DenseFlow since both cross-unit and intra-unit couplings
rely on dense connectivity. Experiments show significant improvements due to
the proposed contributions, and reveal state-of-the-art density estimation
among all generative models under moderate computing budgets.
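As a rough illustration of the noise-padding idea, the following PyTorch sketch (the module name, layer sizes, and single-conv preconditioner are assumptions, not the authors' implementation) concatenates noise channels whose mean and scale are predicted from the previous unit's output; the likelihood bookkeeping for the augmented dimensions is omitted.

```python
import torch
import torch.nn as nn

class NoisePadding(nn.Module):
    """Hypothetical sketch of cross-unit coupling: widen the representation by
    concatenating noise channels whose mean and log-scale are predicted from
    the output of the previous invertible unit."""

    def __init__(self, in_channels: int, pad_channels: int):
        super().__init__()
        # Small conv layer that preconditions the injected noise.
        self.precondition = nn.Conv2d(in_channels, 2 * pad_channels,
                                      kernel_size=3, padding=1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        mu, log_sigma = self.precondition(h).chunk(2, dim=1)
        eps = torch.randn_like(mu)
        noise = mu + eps * log_sigma.exp()       # preconditioned noise
        return torch.cat([h, noise], dim=1)      # widened representation

# Usage: pad a 16-channel feature map with 8 extra noise channels.
x = torch.randn(4, 16, 32, 32)
print(NoisePadding(16, 8)(x).shape)              # torch.Size([4, 24, 32, 32])
```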
Related papers
- Rectified Diffusion Guidance for Conditional Generation [62.00207951161297]
We revisit the theory behind classifier-free guidance (CFG) and rigorously confirm that an improper configuration of the combination coefficients (i.e., the widely used summing-to-one version) causes an expectation shift of the generative distribution.
We propose ReCFG with a relaxation on the guidance coefficients such that denoising with ReCFG strictly aligns with the diffusion theory.
That way, the rectified coefficients can be readily pre-computed by traversing the observed data, leaving the sampling speed barely affected.
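For context, a minimal sketch of the standard classifier-free guidance combination next to a relaxed two-coefficient variant; the function names are illustrative, and the actual ReCFG coefficients are derived from the observed data in the paper.

```python
import torch

def cfg_combine(eps_cond: torch.Tensor, eps_uncond: torch.Tensor, w: float) -> torch.Tensor:
    # Standard classifier-free guidance: coefficients (1 + w) and (-w) sum to one.
    return (1.0 + w) * eps_cond - w * eps_uncond

def relaxed_combine(eps_cond: torch.Tensor, eps_uncond: torch.Tensor,
                    w_cond: float, w_uncond: float) -> torch.Tensor:
    # Relaxed combination with independent coefficients, in the spirit of ReCFG;
    # the paper derives the coefficient values so sampling aligns with diffusion theory.
    return w_cond * eps_cond + w_uncond * eps_uncond
```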
arXiv Detail & Related papers (2024-10-24T13:41:32Z)
- Augmented Bridge Matching [32.668433085737036]
Flow and bridge matching processes can interpolate between arbitrary data distributions.
However, the matching procedure does not in general preserve the coupling used to construct the bridge; we show that a simple modification of the matching process recovers this coupling by augmenting the velocity field.
We illustrate the efficiency of our augmentation in learning mixture of image translation tasks.
arXiv Detail & Related papers (2023-11-12T22:42:34Z)
- DistractFlow: Improving Optical Flow Estimation via Realistic Distractions and Pseudo-Labeling [49.46842536813477]
We propose a novel data augmentation approach, DistractFlow, for training optical flow estimation models.
We combine one of the frames in the pair with a distractor image depicting a similar domain, which induces visual perturbations congruent with natural objects and scenes.
Our approach allows increasing the number of available training pairs significantly without requiring additional annotations.
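A minimal sketch of this style of augmentation; the blending weight and function name are assumptions rather than the paper's exact formulation.

```python
import numpy as np

def distract_frame(frame2: np.ndarray, distractor: np.ndarray, alpha: float = 0.7) -> np.ndarray:
    """Blend the second frame of a training pair with a distractor image from a
    similar domain; the flow annotation of the original pair is kept unchanged."""
    assert frame2.shape == distractor.shape
    return alpha * frame2 + (1.0 - alpha) * distractor

# Usage: perturb frame2 of a (frame1, frame2) pair without extra annotations.
frame2 = np.random.rand(256, 256, 3)
distractor = np.random.rand(256, 256, 3)
frame2_aug = distract_frame(frame2, distractor)
```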
arXiv Detail & Related papers (2023-03-24T15:42:54Z)
- Estimating Higher-Order Mixed Memberships via the $\ell_{2,\infty}$ Tensor Perturbation Bound [8.521132000449766]
We propose the tensor mixed-membership blockmodel, a generalization of the tensor blockmodel.
We establish the identifiability of our model and propose a computationally efficient estimation procedure.
We apply our methodology to real and simulated data, demonstrating some effects not identifiable from the model with discrete community memberships.
arXiv Detail & Related papers (2022-12-16T18:32:20Z)
- Integrative Feature and Cost Aggregation with Transformers for Dense Correspondence [63.868905184847954]
The current state-of-the-art methods are Transformer-based approaches that focus on either feature descriptors or cost volume aggregation.
We propose a novel Transformer-based network that interleaves both forms of aggregations in a way that exploits their complementary information.
We evaluate the effectiveness of the proposed method on dense matching tasks and achieve state-of-the-art performance on all the major benchmarks.
arXiv Detail & Related papers (2022-09-19T03:33:35Z)
- Subspace Diffusion Generative Models [4.310834990284412]
Score-based models generate samples by mapping noise to data (and vice versa) via a high-dimensional diffusion process.
We restrict the diffusion via projections onto subspaces as the data distribution evolves toward noise.
Our framework is fully compatible with continuous-time diffusion and retains its flexible capabilities.
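A toy sketch of restricting diffusing samples to a linear subspace by orthogonal projection; the basis and dimensions are placeholders, and the paper's construction of nested subspaces and schedules is more involved.

```python
import torch

def project_to_subspace(x: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
    """Project flattened samples x (batch, d) onto the span of an orthonormal
    basis (d, k), so later diffusion steps can run in k dimensions instead of d."""
    coords = x @ basis        # (batch, k) subspace coordinates
    return coords @ basis.T   # (batch, d) projection back to the ambient space

# Usage: project 64-dimensional noisy samples onto a 16-dimensional subspace.
d, k = 64, 16
basis, _ = torch.linalg.qr(torch.randn(d, k))   # orthonormal columns
x_t = torch.randn(8, d)
x_t_proj = project_to_subspace(x_t, basis)
```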
arXiv Detail & Related papers (2022-05-03T13:43:47Z)
- Attentive Contractive Flow with Lipschitz-constrained Self-Attention [25.84621883831624]
We introduce a novel approach called Attentive Contractive Flow (ACF).
ACF utilizes a special category of flow-based generative models, namely contractive flows.
We demonstrate that ACF can be introduced into a variety of state-of-the-art flow models in a plug-and-play manner.
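One standard ingredient for keeping a residual branch contractive, and hence invertible, is to bound the spectral norm of its linear maps; the snippet below shows only that generic mechanism and is not the paper's Lipschitz-constrained attention construction.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Spectrally normalized layers bound the Lipschitz constant of the residual
# branch g, so x -> x + g(x) stays invertible via fixed-point iteration; the
# attention-specific constraints in the paper go beyond this sketch.
g = nn.Sequential(
    spectral_norm(nn.Linear(64, 64)),
    nn.ELU(),                     # 1-Lipschitz activation
    spectral_norm(nn.Linear(64, 64)),
)

x = torch.randn(10, 64)
y = x + 0.9 * g(x)                # scale < 1 keeps the branch a contraction
```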
arXiv Detail & Related papers (2021-09-24T18:02:49Z)
- Generative Flows with Invertible Attentions [135.23766216657745]
We introduce two types of invertible attention mechanisms for generative flow models.
We exploit split-based attention mechanisms to learn the attention weights and input representations on every two splits of flow feature maps.
Our method provides invertible attention modules with tractable Jacobian determinants, enabling seamless integration at any position of flow-based models.
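A hypothetical sketch of a split-based attention coupling in this spirit: attention statistics computed from one split modulate the other split affinely, so the Jacobian determinant stays tractable (layer sizes and names are assumptions, not the paper's modules).

```python
import torch
import torch.nn as nn

class AttentionCoupling(nn.Module):
    """Split-based attention coupling sketch: self-attention runs on one split of
    the features and produces an affine modulation of the other split, keeping a
    triangular Jacobian whose log-determinant is the sum of the log-scales."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.to_scale_shift = nn.Linear(channels, 2 * channels)

    def forward(self, x: torch.Tensor):
        # x: (batch, tokens, 2 * channels); split the feature channels in two.
        x1, x2 = x.chunk(2, dim=-1)
        h, _ = self.attn(x1, x1, x1)                       # attention on the first split only
        log_s, t = self.to_scale_shift(h).chunk(2, dim=-1)
        y2 = x2 * torch.exp(log_s) + t                     # affine modulation of the second split
        log_det = log_s.sum(dim=(1, 2))                    # tractable Jacobian log-determinant
        return torch.cat([x1, y2], dim=-1), log_det

# Usage: 2 samples, 49 tokens (e.g. a flattened 7x7 map), 64 feature channels.
layer = AttentionCoupling(channels=32)
y, log_det = layer(torch.randn(2, 49, 64))
```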
arXiv Detail & Related papers (2021-06-07T20:43:04Z)
- Learning Optical Flow from a Few Matches [67.83633948984954]
We show that the dense correlation volume representation is redundant and that accurate flow estimation can be achieved with only a fraction of its elements.
Experiments show that our method can reduce computational cost and memory use significantly, while maintaining high accuracy.
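A toy illustration of evaluating correlations only at a small set of candidate matches instead of materializing the full cost volume; the shapes and the way candidates are chosen are placeholders, not the paper's selection strategy.

```python
import torch

def correlation_at_matches(f1: torch.Tensor, f2: torch.Tensor,
                           matches: torch.Tensor) -> torch.Tensor:
    """f1, f2: (N, C) per-pixel features of two frames (flattened grids).
    matches: (M, 2) candidate index pairs. Only M correlations are computed,
    a small fraction of the full N x N volume."""
    i, j = matches[:, 0], matches[:, 1]
    return (f1[i] * f2[j]).sum(dim=1)

# Usage: 1024 pixels, 128-dim features, 64 candidate pairs.
f1, f2 = torch.randn(1024, 128), torch.randn(1024, 128)
matches = torch.randint(0, 1024, (64, 2))
corr = correlation_at_matches(f1, f2, matches)   # shape: (64,)
```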
arXiv Detail & Related papers (2021-04-05T21:44:00Z)
- Learning Likelihoods with Conditional Normalizing Flows [54.60456010771409]
Conditional normalizing flows (CNFs) are efficient in sampling and inference.
We present a study of CNFs in which the mapping from the base density to the output space is conditioned on an input x, in order to model conditional densities p(y|x).
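A minimal sketch of a one-layer conditional flow where a network maps the conditioning input x to the parameters of an affine map on y, giving a tractable p(y|x); the architecture and dimensions are illustrative only.

```python
import math
import torch
import torch.nn as nn

class ConditionalAffineFlow(nn.Module):
    """Single conditional affine layer: x determines the shift and log-scale of the
    transformation applied to y, so log p(y|x) follows from the change of variables."""

    def __init__(self, x_dim: int, y_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * y_dim))

    def log_prob(self, y: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        shift, log_scale = self.net(x).chunk(2, dim=-1)
        z = (y - shift) * torch.exp(-log_scale)            # map y to the standard-normal base
        base_logp = (-0.5 * z ** 2 - 0.5 * math.log(2.0 * math.pi)).sum(-1)
        return base_logp - log_scale.sum(-1)               # change-of-variables correction

# Usage: conditional density of a 3-dim y given a 5-dim x.
model = ConditionalAffineFlow(x_dim=5, y_dim=3)
print(model.log_prob(torch.randn(8, 3), torch.randn(8, 5)).shape)   # torch.Size([8])
```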
arXiv Detail & Related papers (2019-11-29T19:17:58Z)