Comparative Study of Coupling and Autoregressive Flows through Robust
Statistical Tests
- URL: http://arxiv.org/abs/2302.12024v2
- Date: Tue, 16 Jan 2024 13:49:40 GMT
- Title: Comparative Study of Coupling and Autoregressive Flows through Robust
Statistical Tests
- Authors: Andrea Coccaro and Marco Letizia and Humberto Reyes-Gonzalez and
Riccardo Torre
- Abstract summary: We propose an in-depth comparison of coupling and autoregressive flows, both of the affine and rational quadratic type.
We focus on a set of multimodal target distributions of increasing dimensionality, ranging from 4 to 400.
Our results indicate that the A-RQS algorithm stands out both in terms of accuracy and training speed.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Normalizing Flows have emerged as a powerful brand of generative models, as
they not only allow for efficient sampling of complicated target distributions,
but also deliver density estimation by construction. We propose here an
in-depth comparison of coupling and autoregressive flows, both of the affine
and rational quadratic spline type, considering four different architectures:
Real-valued Non-Volume Preserving (RealNVP), Masked Autoregressive Flow (MAF),
Coupling Rational Quadratic Spline (C-RQS), and Autoregressive Rational
Quadratic Spline (A-RQS). We focus on a set of multimodal target distributions
of increasing dimensionality ranging from 4 to 400. The performances are
compared by means of different test-statistics for two-sample tests, built from
known distance measures: the sliced Wasserstein distance, the
dimension-averaged one-dimensional Kolmogorov-Smirnov test, and the Frobenius
norm of the difference between correlation matrices. Furthermore, we include
estimations of the variance of both the metrics and the trained models. Our
results indicate that the A-RQS algorithm stands out both in terms of accuracy
and training speed. Nonetheless, all the algorithms are generally able, without
too much fine-tuning, to learn complicated distributions with limited training
data and in a reasonable time, of the order of hours on a Tesla A40 GPU. The
only exception is the C-RQS, which takes significantly longer to train, does
not always provide good accuracy, and becomes unstable for large
dimensionalities. All algorithms have been implemented using TensorFlow2 and
TensorFlow Probability and made available on
GitHub at https://github.com/NF4HEP/NormalizingFlowsHD.
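The three two-sample test statistics named in the abstract (dimension-averaged one-dimensional Kolmogorov-Smirnov, Frobenius norm of the correlation-matrix difference, and sliced Wasserstein distance) can be sketched with NumPy and SciPy roughly as below; function names and the number of random slices are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from scipy import stats

def mean_ks(x, y):
    # Dimension-averaged 1D Kolmogorov-Smirnov: run a two-sample
    # KS test per coordinate and average the statistics.
    return np.mean([stats.ks_2samp(x[:, d], y[:, d]).statistic
                    for d in range(x.shape[1])])

def corr_frobenius(x, y):
    # Frobenius norm of the difference between correlation matrices.
    return np.linalg.norm(np.corrcoef(x, rowvar=False)
                          - np.corrcoef(y, rowvar=False), ord="fro")

def sliced_wasserstein(x, y, n_slices=128, rng=None):
    # Average the 1D Wasserstein distance over random unit
    # projection directions on the sphere.
    rng = np.random.default_rng(rng)
    dirs = rng.normal(size=(n_slices, x.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return np.mean([stats.wasserstein_distance(x @ v, y @ v) for v in dirs])
```

For two samples drawn from the same distribution, all three statistics should be close to zero, growing as the samples diverge.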
Related papers
- RecFlow: An Industrial Full Flow Recommendation Dataset [66.06445386541122]
Industrial recommendation systems rely on the multi-stage pipeline to balance effectiveness and efficiency when delivering items to users.
We introduce RecFlow, an industrial full flow recommendation dataset designed to bridge the gap between offline RS benchmarks and the real online environment.
Our dataset comprises 38M interactions from 42K users across nearly 9M items, with an additional 1.9B stage samples collected from 9.3M online requests over 37 days, spanning 6 stages.
arXiv Detail & Related papers (2024-10-28T09:36:03Z)
- Iterative Methods for Full-Scale Gaussian Process Approximations for Large Spatial Data [9.913418444556486]
We show how iterative methods can be used to reduce the computational costs for calculating likelihoods, gradients, and predictive distributions with FSAs.
We also present a novel, accurate, and fast way to calculate predictive variances relying on estimations and iterative methods.
All methods are implemented in a free C++ software library with high-level Python and R packages.
arXiv Detail & Related papers (2024-05-23T12:25:22Z)
- Boundary-aware Decoupled Flow Networks for Realistic Extreme Rescaling [49.215957313126324]
Recently developed generative methods, including invertible rescaling network (IRN) based and generative adversarial network (GAN) based methods, have demonstrated exceptional performance in image rescaling.
However, IRN-based methods tend to produce over-smoothed results, while GAN-based methods easily generate fake details.
We propose Boundary-aware Decoupled Flow Networks (BDFlow) to generate realistic and visually pleasing results.
arXiv Detail & Related papers (2024-05-05T14:05:33Z)
- PaddingFlow: Improving Normalizing Flows with Padding-Dimensional Noise [4.762593660623934]
We propose PaddingFlow, a novel dequantization method, which improves normalizing flows with padding-dimensional noise.
We validate our method on the main benchmarks of unconditional density estimation.
The results show that PaddingFlow can perform better in all experiments in this paper.
arXiv Detail & Related papers (2024-03-13T03:28:39Z)
- Generative Modeling with Flow-Guided Density Ratio Learning [12.192867460641835]
Flow-Guided Density Ratio Learning (FDRL) is a simple and scalable approach to generative modeling.
We show that FDRL can generate images of dimensions as high as $128\times128$, as well as outperform existing gradient flow baselines on quantitative benchmarks.
arXiv Detail & Related papers (2023-03-07T07:55:52Z)
- Deep Equilibrium Optical Flow Estimation [80.80992684796566]
Recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms.
These RNNs impose large computation and memory overheads, and are not directly trained to model such stable estimation.
We propose deep equilibrium (DEQ) flow estimators, an approach that directly solves for the flow as the infinite-level fixed point of an implicit layer.
arXiv Detail & Related papers (2022-04-18T17:53:44Z)
- StreaMRAK: a Streaming Multi-Resolution Adaptive Kernel Algorithm [60.61943386819384]
Existing implementations of KRR require that all the data is stored in the main memory.
We propose StreaMRAK - a streaming version of KRR.
We present a showcase study on two synthetic problems and the prediction of the trajectory of a double pendulum.
arXiv Detail & Related papers (2021-08-23T21:03:09Z)
- Learning Optical Flow from a Few Matches [67.83633948984954]
We show that the dense correlation volume representation is redundant and accurate flow estimation can be achieved with only a fraction of elements in it.
Experiments show that our method can reduce computational cost and memory use significantly, while maintaining high accuracy.
arXiv Detail & Related papers (2021-04-05T21:44:00Z)
- Self Normalizing Flows [65.73510214694987]
We propose a flexible framework for training normalizing flows by replacing expensive terms in the gradient by learned approximate inverses at each layer.
This reduces the computational complexity of each layer's exact update from $\mathcal{O}(D^3)$ to $\mathcal{O}(D^2)$.
We show experimentally that such models are remarkably stable and optimize to similar data likelihood values as their exact gradient counterparts.
arXiv Detail & Related papers (2020-11-14T09:51:51Z)
- OT-Flow: Fast and Accurate Continuous Normalizing Flows via Optimal Transport [8.468007443062751]
A normalizing flow is an invertible mapping between an arbitrary probability distribution and a standard normal distribution.
OT-Flow tackles two critical computational challenges that limit a more widespread use of CNFs.
On five high-dimensional density estimation and generative modeling tasks, OT-Flow performs competitively to state-of-the-art CNFs.
arXiv Detail & Related papers (2020-05-29T22:31:10Z)
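The invertible-mapping definition of a normalizing flow given above rests on the change-of-variables rule, which a single affine layer is enough to illustrate; the scale and shift values below are arbitrary stand-ins for learned parameters, not any particular model's.

```python
import numpy as np

# One affine flow layer: z = (x - shift) / scale maps data to a
# standard normal base distribution. The density then follows from
# the change of variables: log p(x) = log N(z; 0, I) - sum(log|scale|).
shift = np.array([1.0, -2.0])
scale = np.array([0.5, 2.0])

def forward(x):
    # data -> base (used for density evaluation)
    return (x - shift) / scale

def inverse(z):
    # base -> data (used for sampling)
    return z * scale + shift

def log_prob(x):
    z = forward(x)
    log_base = -0.5 * np.sum(z**2 + np.log(2 * np.pi), axis=-1)
    return log_base - np.sum(np.log(np.abs(scale)))
```

Coupling and autoregressive architectures differ only in how the scale and shift of each layer are conditioned on the other coordinates, which keeps the Jacobian triangular and the log-determinant cheap.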
This list is automatically generated from the titles and abstracts of the papers in this site.