Free-form Flows: Make Any Architecture a Normalizing Flow
- URL: http://arxiv.org/abs/2310.16624v2
- Date: Wed, 24 Apr 2024 10:05:18 GMT
- Title: Free-form Flows: Make Any Architecture a Normalizing Flow
- Authors: Felix Draxler, Peter Sorrenson, Lea Zimmermann, Armand Rousselot, Ullrich Köthe
- Abstract summary: We develop a training procedure that uses an efficient estimator for the gradient of the change of variables formula.
This enables any dimension-preserving neural network to serve as a generative model through maximum likelihood training.
We achieve excellent results in molecule generation benchmarks utilizing $E(n)$-equivariant networks.
- Score: 8.163244519983298
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Normalizing Flows are generative models that directly maximize the likelihood. Previously, the design of normalizing flows was largely constrained by the need for analytical invertibility. We overcome this constraint by a training procedure that uses an efficient estimator for the gradient of the change of variables formula. This enables any dimension-preserving neural network to serve as a generative model through maximum likelihood training. Our approach allows placing the emphasis on tailoring inductive biases precisely to the task at hand. Specifically, we achieve excellent results in molecule generation benchmarks utilizing $E(n)$-equivariant networks. Moreover, our method is competitive in an inverse problem benchmark, while employing off-the-shelf ResNet architectures.
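To make the training procedure concrete, here is a minimal PyTorch sketch in the spirit of this objective: an unconstrained encoder f is paired with a learned approximate inverse g, the gradient of the change-of-variables log-determinant is estimated with a single Hutchinson probe, and a reconstruction term keeps g close to the inverse of f. The function name, the weight beta, the standard-normal latent, and the use of torch.func (which assumes f supports forward-mode AD) are illustrative assumptions, not the authors' reference implementation.

```python
import math
import torch

def fff_surrogate_loss(f, g, x, beta=10.0):
    """Free-form-flow style objective for a dimension-preserving encoder f
    and a learned approximate inverse g (latent and data share x's shape)."""
    v = torch.randn_like(x)                    # Hutchinson probe vector
    z, u = torch.func.jvp(f, (x,), (v,))       # z = f(x), u = J_f(x) v
    _, vjp_g = torch.func.vjp(g, z.detach())   # VJP of g at z, graph to f cut
    (w,) = vjp_g(v)                            # w = J_g(z)^T v
    w = w.detach()                             # stop-gradient on J_g: only f
                                               # receives the log-det signal
    # gradient of v^T J_g J_f v estimates the gradient of log|det J_f(x)|
    logdet_surrogate = (w * u).flatten(1).sum(-1)
    # negative log-likelihood under a standard-normal latent
    d = z.flatten(1).shape[-1]
    nll = 0.5 * z.flatten(1).pow(2).sum(-1) + 0.5 * d * math.log(2 * math.pi)
    # reconstruction term trains g toward the inverse of f
    recon = (g(z) - x).flatten(1).pow(2).sum(-1)
    return (nll - logdet_surrogate + beta * recon).mean()
```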
Related papers
- Training Deep Learning Models with Norm-Constrained LMOs [56.00317694850397]
We study optimization methods that leverage the linear minimization oracle (LMO) over a norm-ball.
We propose a new family of algorithms that uses the LMO to adapt to the geometry of the problem and, perhaps surprisingly, show that they can be applied to unconstrained problems.
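For orientation, a linear minimization oracle over a norm ball returns the ball element most aligned with the negative gradient. A minimal sketch for the Euclidean ball; the radius, step size, and update rule below are illustrative, not the paper's algorithm:

```python
import torch

def lmo_l2_ball(grad, radius=1.0):
    # argmin over ||s|| <= radius of <grad, s>  =  -radius * grad / ||grad||
    return -radius * grad / grad.norm().clamp_min(1e-12)

def lmo_step(param, lr=0.1, radius=1.0):
    # illustrative unconstrained update that steps along the LMO direction
    param.data.add_(lr * lmo_l2_ball(param.grad, radius))
```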
arXiv Detail & Related papers (2025-02-11T13:10:34Z)
- Jet: A Modern Transformer-Based Normalizing Flow [62.2573739835562]
We revisit the design of the coupling-based normalizing flow models by carefully ablating prior design choices.
We achieve state-of-the-art quantitative and qualitative performance with a much simpler architecture.
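The coupling construction this entry revisits is easy to state; below is the classic affine coupling layer with its exact inverse and cheap log-determinant (a generic textbook version, not Jet's transformer-based variant):

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Classic affine coupling: exact inverse and cheap log-determinant."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.d = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.d, hidden), nn.GELU(),
            nn.Linear(hidden, 2 * (dim - self.d)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                 # keep scales well-behaved
        z2 = x2 * s.exp() + t
        log_det = s.sum(-1)               # log|det J| = sum of log-scales
        return torch.cat([x1, z2], dim=-1), log_det

    def inverse(self, z):
        z1, z2 = z[:, :self.d], z[:, self.d:]
        s, t = self.net(z1).chunk(2, dim=-1)
        s = torch.tanh(s)
        x2 = (z2 - t) * (-s).exp()
        return torch.cat([z1, x2], dim=-1)
```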
arXiv Detail & Related papers (2024-12-19T18:09:42Z)
- Guided Flows for Generative Modeling and Decision Making [55.42634941614435]
We show that Guided Flows significantly improves the sample quality in conditional image generation and zero-shot text-to-speech synthesis.
Notably, we are the first to apply flow models for plan generation in the offline reinforcement learning setting, exhibiting a speedup in computation compared to diffusion models.
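Guidance for flows mirrors classifier-free guidance for diffusion: the sampler follows a weighted blend of conditional and unconditional velocity predictions. A minimal sketch; the model signature and guidance weight are illustrative assumptions:

```python
import torch

def guided_velocity(v_cond, v_uncond, w=1.5):
    # classifier-free-style guidance on a velocity field
    return (1.0 + w) * v_cond - w * v_uncond

def euler_step(x, t, dt, model, y):
    # one illustrative Euler step of the guided ODE; model(x, t, None)
    # stands in for the unconditional prediction
    v = guided_velocity(model(x, t, y), model(x, t, None))
    return x + dt * v
```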
arXiv Detail & Related papers (2023-11-22T15:07:59Z)
- Kernelised Normalising Flows [10.31916245015817]
Normalising Flows are non-parametric statistical models characterised by their dual capabilities of density estimation and generation.
We present Ferumal flow, a novel paradigm that integrates kernels into the normalising flow framework.
arXiv Detail & Related papers (2023-07-27T13:18:52Z)
- Training Energy-Based Normalizing Flow with Score-Matching Objectives [36.0810550035231]
We present a new flow-based modeling approach called energy-based normalizing flow (EBFlow).
We demonstrate that by optimizing EBFlow with score-matching objectives, the computation of Jacobian determinants for linear transformations can be entirely bypassed.
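Score matching fits the gradient of the log-density rather than the normalized likelihood, which is why determinant computations can drop out. A generic denoising-score-matching sketch (not the EBFlow-specific objective); the noise level sigma is illustrative:

```python
import torch

def denoising_score_matching_loss(score_net, x, sigma=0.1):
    # perturb the data and regress the score of the Gaussian perturbation
    # kernel: for x_t = x + sigma * eps, the target score is -eps / sigma
    eps = torch.randn_like(x)
    pred = score_net(x + sigma * eps)
    target = -eps / sigma
    return (pred - target).pow(2).flatten(1).sum(-1).mean()
```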
arXiv Detail & Related papers (2023-05-24T15:54:29Z)
- Normalizing flow neural networks by JKO scheme [22.320632565424745]
We develop a neural ODE flow network called JKO-iFlow, inspired by the Jordan-Kinderlehrer-Otto scheme.
The proposed method stacks residual blocks one after another, allowing efficient block-wise training of the residual blocks.
Experiments with synthetic and real data show that the proposed JKO-iFlow network achieves competitive performance.
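One way to picture block-wise JKO training: each residual block minimizes a target-energy term plus a movement (proximal) penalty that keeps the block close to the identity. A rough sketch under that reading; the names and step size are illustrative, and the entropy/log-det term of the pushforward, which a full implementation also needs, is omitted for brevity:

```python
import torch

def jko_block_loss(block, x, neg_log_target, step_size=0.5):
    # one proximal (JKO-style) step for a single residual block:
    # push samples toward the target density while penalizing movement
    y = x + block(x)                                  # residual block output
    movement = (y - x).pow(2).flatten(1).sum(-1)
    return (neg_log_target(y) + movement / (2 * step_size)).mean()
```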
arXiv Detail & Related papers (2022-12-29T18:55:00Z)
- Invertible Monotone Operators for Normalizing Flows [7.971699294672282]
Normalizing flows model probability distributions by learning invertible transformations that transform a simple base distribution into a complex one.
We propose a monotone formulation that uses monotone operators to overcome the restrictive Lipschitz constraints on such transformations.
The resulting model, Monotone Flows, exhibits an excellent performance on multiple density estimation benchmarks.
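Residual-style invertible layers in this family have no closed-form inverse; they are inverted numerically. A minimal fixed-point inversion of y = x + g(x), which converges when g is contractive (the Lipschitz-constrained regime that the monotone formulation is designed to relax):

```python
import torch

def invert_residual(g, y, n_iters=50):
    # solve y = x + g(x) for x by Banach fixed-point iteration:
    # x <- y - g(x); converges when Lip(g) < 1
    x = y.clone()
    for _ in range(n_iters):
        x = y - g(x)
    return x
```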
arXiv Detail & Related papers (2022-10-15T03:40:46Z)
- Deep Equilibrium Optical Flow Estimation [80.80992684796566]
Recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms.
These RNNs impose large computation and memory overheads, and are not directly trained to model such stable estimation.
We propose deep equilibrium (DEQ) flow estimators, an approach that directly solves for the flow as the infinite-level fixed point of an implicit layer.
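A minimal sketch of the deep-equilibrium pattern: iterate the update operator to a fixed point without building a graph, then reattach gradients through one differentiable application at the solution (a cheap one-step approximation; full implicit differentiation instead solves a linear system in the backward pass):

```python
import torch

def deq_solve(f, x, z0, n_iters=100, tol=1e-4):
    # forward: find z* with z* = f(z*, x) by plain iteration, graph-free
    z = z0
    with torch.no_grad():
        for _ in range(n_iters):
            z_next = f(z, x)
            if (z_next - z).norm() < tol * z.norm().clamp_min(1e-8):
                z = z_next
                break
            z = z_next
    # backward: one differentiable application at the fixed point
    return f(z, x)
```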
arXiv Detail & Related papers (2022-04-18T17:53:44Z)
- Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
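The normalized maximum likelihood idea can be sketched in a few lines: for a query point, refit the model once per candidate outcome with that outcome appended to the data, then normalize the resulting likelihoods into a predictive distribution. `fit` and `likelihood` are hypothetical placeholders for a training routine and a pointwise density, and this brute-force loop is exactly what the paper's tractable approximation avoids:

```python
def cnml_probs(fit, likelihood, dataset, x_query, y_grid):
    # conditioned-NML sketch: one refit per candidate value y
    scores = []
    for y in y_grid:
        model = fit(dataset + [(x_query, y)])
        scores.append(likelihood(model, x_query, y))
    total = sum(scores)
    return [s / total for s in scores]
```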
arXiv Detail & Related papers (2021-02-16T06:04:27Z)
- Self Normalizing Flows [65.73510214694987]
We propose a flexible framework for training normalizing flows by replacing expensive terms in the gradient by learned approximate inverses at each layer.
This reduces the computational complexity of each layer's exact update from $\mathcal{O}(D^3)$ to $\mathcal{O}(D^2)$.
We show experimentally that such models are remarkably stable and optimize to similar data likelihood values as their exact gradient counterparts.
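The trick can be shown for a single linear flow layer z = Wx: exact maximum likelihood needs the gradient of $\log|\det W|$, which is $W^{-T}$ (an $\mathcal{O}(D^3)$ inverse), and the idea is to substitute the transpose of a learned inverse matrix R, trained by reconstruction. A hedged sketch; the surrogate term below has gradient exactly R^T with respect to W, and the weight lam is illustrative:

```python
import torch

def self_normalizing_linear_loss(W, R, x, lam=1.0):
    # z = W x for each row of x
    z = x @ W.t()
    nll = 0.5 * z.pow(2).sum(-1).mean()             # standard-normal latent
    # d/dW of (W * R^T).sum() is R^T, the cheap stand-in for W^{-T}
    logdet_surrogate = (W * R.t().detach()).sum()
    recon = (z @ R.t() - x).pow(2).sum(-1).mean()   # trains R toward W^{-1}
    return nll - logdet_surrogate + lam * recon
```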
arXiv Detail & Related papers (2020-11-14T09:51:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.