Jet: A Modern Transformer-Based Normalizing Flow
- URL: http://arxiv.org/abs/2412.15129v1
- Date: Thu, 19 Dec 2024 18:09:42 GMT
- Title: Jet: A Modern Transformer-Based Normalizing Flow
- Authors: Alexander Kolesnikov, André Susano Pinto, Michael Tschannen
- Abstract summary: We revisit the design of the coupling-based normalizing flow models by carefully ablating prior design choices.
We achieve state-of-the-art quantitative and qualitative performance with a much simpler architecture.
- Abstract: In the past, normalizing generative flows emerged as a promising class of generative models for natural images. This type of model has many modeling advantages: the ability to efficiently compute log-likelihood of the input data, fast generation, and a simple overall structure. Normalizing flows remained a topic of active research but later fell out of favor, as the visual quality of their samples was not competitive with other model classes, such as GANs, VQ-VAE-based approaches, or diffusion models. In this paper we revisit the design of coupling-based normalizing flow models by carefully ablating prior design choices and using computational blocks based on the Vision Transformer architecture instead of convolutional neural networks. As a result, we achieve state-of-the-art quantitative and qualitative performance with a much simpler architecture. While the overall visual quality is still behind current state-of-the-art models, we argue that strong normalizing flow models can help advance the research frontier by serving as building blocks of more powerful generative models.
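The abstract's claims about efficient log-likelihood and fast generation follow from the affine coupling construction that the paper revisits. A minimal NumPy sketch of such a layer is given below; this is an illustration of the generic technique, not the paper's actual code, and the tiny MLP conditioner here stands in for the ViT blocks the paper uses.

```python
# Sketch of an affine coupling layer: half the dimensions condition a
# scale/shift applied to the other half, so the map is exactly invertible
# and log|det J| is just the sum of the predicted log-scales.
import numpy as np

rng = np.random.default_rng(0)

class AffineCoupling:
    def __init__(self, dim, hidden=64):
        self.half = dim // 2
        # Tiny MLP conditioner (the paper swaps such blocks for ViT blocks).
        self.w1 = rng.normal(0, 0.1, (self.half, hidden))
        self.w2 = rng.normal(0, 0.1, (hidden, 2 * (dim - self.half)))

    def _conditioner(self, x1):
        h = np.tanh(x1 @ self.w1)
        log_s, t = np.split(h @ self.w2, 2, axis=-1)
        return np.tanh(log_s), t  # bounded log-scale for stability

    def forward(self, x):
        # x -> z with tractable log|det J| = sum(log_s).
        x1, x2 = x[..., :self.half], x[..., self.half:]
        log_s, t = self._conditioner(x1)
        z2 = x2 * np.exp(log_s) + t
        return np.concatenate([x1, z2], axis=-1), log_s.sum(axis=-1)

    def inverse(self, z):
        # Exact inverse: same conditioner input, inverted affine map.
        z1, z2 = z[..., :self.half], z[..., self.half:]
        log_s, t = self._conditioner(z1)
        x2 = (z2 - t) * np.exp(-log_s)
        return np.concatenate([z1, x2], axis=-1)

layer = AffineCoupling(dim=4)
x = rng.normal(size=(8, 4))
z, logdet = layer.forward(x)
print(np.allclose(x, layer.inverse(z)))  # exact invertibility
```

Because the Jacobian is triangular, training by maximum likelihood only needs the cheap `log_s.sum` term, which is the efficiency advantage the abstract refers to.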
Related papers
- Flow Generator Matching [35.371071097381346]
Flow Generator Matching (FGM) is designed to accelerate the sampling of flow-matching models into a one-step generation.
On the CIFAR10 unconditional generation benchmark, our one-step FGM model achieves a new record Fréchet Inception Distance (FID) score of 3.08.
MM-DiT-FGM one-step text-to-image model demonstrates outstanding industry-level performance.
arXiv Detail & Related papers (2024-10-25T05:41:28Z)
- Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models [54.132297393662654]
We introduce a hybrid method that fine-tunes cutting-edge diffusion models by optimizing reward models through RL.
We demonstrate the capability of our approach to outperform the best designs in offline data, leveraging the extrapolation capabilities of reward models.
arXiv Detail & Related papers (2024-05-30T03:57:29Z)
- Guided Flows for Generative Modeling and Decision Making [55.42634941614435]
We show that Guided Flows significantly improves sample quality in conditional image generation and zero-shot text-to-speech synthesis.
Notably, we are the first to apply flow models to plan generation in the offline reinforcement learning setting, with a speedup compared to diffusion models.
arXiv Detail & Related papers (2023-11-22T15:07:59Z)
- Re-parameterizing Your Optimizers rather than Architectures [119.08740698936633]
We propose a novel paradigm of incorporating model-specific prior knowledge into optimizers and using them to train generic (simple) models.
As an implementation, we propose a novel methodology to add prior knowledge by modifying the gradients according to a set of model-specific hyper-parameters.
We focus on a VGG-style plain model and showcase that such a simple model, trained with a RepOptimizer and referred to as RepOpt-VGG, performs on par with recent well-designed models.
arXiv Detail & Related papers (2022-05-30T16:55:59Z)
- Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy.
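The parameter reduction above relies on network pruning. As a hedged illustration of the generic mechanism (the paper's own method targets continuous-depth neural ODEs; this sketch only shows the basic magnitude-pruning step, with all names being illustrative):

```python
# Global magnitude pruning: zero out the smallest-magnitude entries
# across all weight arrays until the target sparsity is reached.
import numpy as np

def magnitude_prune(weights, sparsity):
    """Return copies of `weights` with the smallest |w| entries zeroed."""
    all_vals = np.concatenate([np.abs(w).ravel() for w in weights])
    k = int(sparsity * all_vals.size)
    threshold = np.sort(all_vals)[k] if k > 0 else 0.0
    return [np.where(np.abs(w) >= threshold, w, 0.0) for w in weights]

rng = np.random.default_rng(0)
weights = [rng.normal(size=(16, 16)), rng.normal(size=(16, 4))]
pruned = magnitude_prune(weights, sparsity=0.98)
kept = sum(int((w != 0).sum()) for w in pruned)
total = sum(w.size for w in weights)
print(f"kept {kept}/{total} parameters")
```

In practice such pruning is interleaved with retraining so the remaining weights can compensate, which is how accuracy is preserved at high sparsity.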
arXiv Detail & Related papers (2021-06-24T01:40:17Z)
- Distilling the Knowledge from Normalizing Flows [22.578033953780697]
Normalizing flows are a powerful class of generative models demonstrating strong performance in several speech and vision problems.
We propose a simple distillation approach and demonstrate its effectiveness on state-of-the-art conditional flow-based models for image super-resolution and speech synthesis.
arXiv Detail & Related papers (2021-06-24T00:10:22Z)
- Normalizing Flows with Multi-Scale Autoregressive Priors [131.895570212956]
We introduce channel-wise dependencies in the latent space of normalizing flows through multi-scale autoregressive priors (mAR).
Our mAR prior for models with split coupling flow layers (mAR-SCF) can better capture dependencies in complex multimodal data.
We show that mAR-SCF allows for improved image generation quality, with gains in FID and Inception scores compared to state-of-the-art flow-based models.
arXiv Detail & Related papers (2020-04-08T09:07:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.