Conditional Permutation Invariant Flows
- URL: http://arxiv.org/abs/2206.09021v1
- Date: Fri, 17 Jun 2022 21:43:38 GMT
- Title: Conditional Permutation Invariant Flows
- Authors: Berend Zwartsenberg, Adam Ścibior, Matthew Niedoba, Vasileios
Lioutas, Yunpeng Liu, Justice Sefas, Setareh Dabiri, Jonathan Wilder
Lavington, Trevor Campbell, Frank Wood
- Abstract summary: We present a conditional generative probabilistic model of set-valued data with a tractable log density.
The model is a continuous normalizing flow whose permutation equivariant dynamics are driven by a learnable per-set-element term and pairwise interactions, both parametrized by deep neural networks.
We illustrate the utility of this model via applications including (1) complex traffic scene generation conditioned on visually specified map information, and (2) object bounding box generation conditioned directly on images.
- Score: 23.740061786510417
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present a novel, conditional generative probabilistic model of set-valued
data with a tractable log density. This model is a continuous normalizing flow
governed by permutation equivariant dynamics. These dynamics are driven by a
learnable per-set-element term and pairwise interactions, both parametrized by
deep neural networks. We illustrate the utility of this model via applications
including (1) complex traffic scene generation conditioned on visually
specified map information, and (2) object bounding box generation conditioned
directly on images. We train our model by maximizing the expected likelihood of
labeled conditional data under our flow, with the aid of a penalty that ensures
the dynamics are smooth and hence efficiently solvable. Our method
significantly outperforms non-permutation invariant baselines in terms of log
likelihood and domain-specific metrics (offroad, collision, and combined
infractions), yielding realistic samples that are difficult to distinguish from
real data.
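
A minimal sketch of dynamics of this form (PyTorch; the layer sizes, names, and time conditioning below are illustrative assumptions, not the authors' released code): a shared per-element network plus mean-pooled pairwise interactions. Because every element passes through the same networks and the pairwise term is pooled symmetrically over the set, permuting the input set simply permutes the output velocities, which is the equivariance the flow requires. A full continuous normalizing flow would integrate this vector field with an ODE solver, accumulate its divergence to obtain the log density, and add the paper's smoothness penalty as a regularizer on these dynamics.

```python
import torch
import torch.nn as nn

class EquivariantDynamics(nn.Module):
    """Permutation-equivariant ODE dynamics: a shared per-set-element
    term plus mean-pooled pairwise interactions (names and sizes are
    illustrative, not the authors' exact architecture)."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim + 1, hidden), nn.Tanh(),
                               nn.Linear(hidden, dim))   # per-element term
        self.g = nn.Sequential(nn.Linear(2 * dim, hidden), nn.Tanh(),
                               nn.Linear(hidden, dim))   # pairwise term

    def forward(self, t, x):                 # t: scalar tensor, x: (n, dim)
        n, d = x.shape
        tcol = t.reshape(1, 1).expand(n, 1)  # broadcast time to each element
        per_elem = self.f(torch.cat([x, tcol], dim=-1))
        xi = x.unsqueeze(1).expand(n, n, d)  # x_i repeated over j
        xj = x.unsqueeze(0).expand(n, n, d)  # x_j repeated over i
        pair = self.g(torch.cat([xi, xj], dim=-1)).mean(dim=1)
        return per_elem + pair               # dx/dt for every set element
```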
Related papers
- Dynamical Measure Transport and Neural PDE Solvers for Sampling [77.38204731939273]
We tackle sampling from a probability density by transporting a tractable density function to the target.
We employ physics-informed neural networks (PINNs) to approximate the respective partial differential equations (PDEs) solutions.
PINNs allow for simulation- and discretization-free optimization and can be trained very efficiently.
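
As a hedged illustration of what such a PDE residual looks like in one dimension (the network names, the log-density form of the continuity equation, and the loss terms below are assumptions, not details from the paper):

```python
import torch
import torch.nn as nn

# Hypothetical 1-D toy networks: log_rho(t, x) models the evolving
# log-density and v_net(t, x) the velocity field that transports it.
log_rho = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
v_net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))

def continuity_residual(t, x):
    """Residual of the 1-D continuity equation in log form:
    d(log rho)/dt + v * d(log rho)/dx + dv/dx = 0."""
    tx = torch.stack([t, x], dim=-1).requires_grad_(True)
    lr = log_rho(tx).squeeze(-1)
    vel = v_net(tx).squeeze(-1)
    d_lr = torch.autograd.grad(lr.sum(), tx, create_graph=True)[0]
    d_v = torch.autograd.grad(vel.sum(), tx, create_graph=True)[0]
    return d_lr[..., 0] + vel * d_lr[..., 1] + d_v[..., 1]

# A PINN-style objective penalizes this residual at random collocation
# points; boundary terms tying t=0 to the tractable base density and
# t=1 to the target would complete the loss.
loss = continuity_residual(torch.rand(256), torch.randn(256)).pow(2).mean()
```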
arXiv Detail & Related papers (2024-07-10T17:39:50Z)
- On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
arXiv Detail & Related papers (2024-05-18T15:59:41Z)
- Conditional Pseudo-Reversible Normalizing Flow for Surrogate Modeling in Quantifying Uncertainty Propagation [11.874729463016227]
We introduce a conditional pseudo-reversible normalizing flow for constructing surrogate models of a physical model polluted by additive noise.
The training process uses a dataset of input-output pairs, without requiring prior knowledge of the noise or the function.
Our model, once trained, can generate samples from any conditional probability density functions whose high probability regions are covered by the training set.
arXiv Detail & Related papers (2024-03-31T00:09:58Z)
- Stochastic interpolants with data-dependent couplings [31.854717378556334]
We use the framework of stochastic interpolants to formalize how to couple the base and the target densities.
We show that these transport maps can be learned by solving a simple square loss regression problem analogous to the standard independent setting.
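
As a hedged sketch of that regression for the simplest (linear) interpolant, with `v_net` a hypothetical velocity network taking the interpolated point and the time:

```python
import torch

def interpolant_loss(v_net, x0, x1):
    """Square-loss velocity regression for x_t = (1 - t) * x0 + t * x1.
    With a data-dependent coupling, (x0, x1) are sampled jointly rather
    than independently from base and target."""
    t = torch.rand(x0.shape[0], 1)            # one time per pair
    xt = (1 - t) * x0 + t * x1                # point on the interpolant
    target = x1 - x0                          # d x_t / dt along the path
    pred = v_net(torch.cat([xt, t], dim=-1))  # assumed input convention
    return ((pred - target) ** 2).mean()
```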
arXiv Detail & Related papers (2023-10-05T17:46:31Z)
- Nonlinear Isometric Manifold Learning for Injective Normalizing Flows [58.720142291102135]
We use isometries to separate manifold learning and density estimation.
We also employ autoencoders to design embeddings with explicit inverses that do not distort the probability distribution.
arXiv Detail & Related papers (2022-03-08T08:57:43Z)
- Discrete Denoising Flows [87.44537620217673]
We introduce a new discrete flow-based model for categorical random variables: Discrete Denoising Flows (DDFs).
In contrast with other discrete flow-based models, our model can be locally trained without introducing gradient bias.
We show that DDFs outperform Discrete Flows on modeling a toy example, binary MNIST and Cityscapes segmentation maps, measured in log-likelihood.
arXiv Detail & Related papers (2021-07-24T14:47:22Z)
- Low-Rank Hankel Tensor Completion for Traffic Speed Estimation [7.346671461427793]
We propose a purely data-driven and model-free solution to the traffic state estimation problem.
By imposing a low-rank assumption on this tensor structure, we can approximately characterize both global patterns and the unknown complex local dynamics.
We conduct numerical experiments on both synthetic simulation data and real-world high-resolution data, and our results demonstrate the effectiveness and superiority of the proposed model.
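
As a hedged, simplified matrix analogue of the construction (NumPy; the paper works with a Hankel tensor and a completion objective over missing entries, so this only sketches the low-rank idea):

```python
import numpy as np

def hankel_matrix(series, window):
    """Delay-embed a 1-D speed series into overlapping windows,
    stacked as rows of a Hankel matrix."""
    n = len(series) - window + 1
    return np.stack([series[i:i + window] for i in range(n)])

def low_rank_approx(H, rank):
    """Truncated SVD: the leading components capture the global,
    repeating structure that the low-rank assumption targets."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

speeds = np.sin(np.linspace(0, 20, 200)) + 0.1 * np.random.randn(200)
H_hat = low_rank_approx(hankel_matrix(speeds, window=30), rank=3)
```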
arXiv Detail & Related papers (2021-05-21T00:08:06Z)
- Autoregressive Score Matching [113.4502004812927]
We propose autoregressive conditional score models (AR-CSM), where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores).
For AR-CSM models, the divergence between data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
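
A hedged sketch of the core quantity: given a hypothetical autoregressive module `log_cond` that maps x of shape (B, D) to a (B, D) tensor whose i-th column is log p(x_i | x_<i), the univariate scores come directly from autograd:

```python
import torch

def conditional_scores(log_cond, x):
    """Per-dimension conditional scores s_i = d/dx_i log p(x_i | x_<i)."""
    x = x.detach().requires_grad_(True)
    lc = log_cond(x)                                      # (B, D)
    scores = []
    for i in range(lc.shape[1]):
        g = torch.autograd.grad(lc[:, i].sum(), x, create_graph=True)[0]
        scores.append(g[:, i])   # the i-th conditional is scored in x_i only
    return torch.stack(scores, dim=1)                     # (B, D)
```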
arXiv Detail & Related papers (2020-10-24T07:01:24Z)
- Variational Mixture of Normalizing Flows [0.0]
Deep generative models, such as generative adversarial networks, variational autoencoders, and their variants, have seen wide adoption for the task of modelling complex data distributions, but they typically do not provide tractable exact likelihoods.
Normalizing flows have overcome this limitation by leveraging the change-of-variables formula for probability density functions.
The present work goes further by using normalizing flows as components in a mixture model and devising an end-to-end training procedure for such a model.
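
A hedged sketch of the mixture density itself (the flow components are hypothetical objects exposing a log_prob method, as e.g. transformed distributions in torch.distributions do):

```python
import torch

def mixture_log_prob(flows, logits, x):
    """log p(x) = logsumexp_k [ log w_k + log p_k(x) ]; the component
    flows and the weight logits are trained jointly, end to end."""
    log_w = torch.log_softmax(logits, dim=0)             # (K,) mixture weights
    comp = torch.stack([f.log_prob(x) for f in flows])   # (K, B) log-densities
    return torch.logsumexp(log_w[:, None] + comp, dim=0) # (B,)
```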
arXiv Detail & Related papers (2020-09-01T17:20:08Z)
- Modeling Continuous Stochastic Processes with Dynamic Normalizing Flows [40.9137348900942]
We propose a novel type of flow driven by a differential deformation of the Wiener process.
As a result, we obtain a rich time series model whose observable process inherits many of the appealing properties of its base process.
arXiv Detail & Related papers (2020-02-24T20:13:43Z)
- Learning Likelihoods with Conditional Normalizing Flows [54.60456010771409]
Conditional normalizing flows (CNFs) are efficient in sampling and inference.
We present a study of CNFs where the base density to output space mapping is conditioned on an input x, to model conditional densities p(y|x).
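
As a hedged sketch of one standard way to build such a conditional mapping (an affine coupling layer whose scale and shift depend on the conditioning input x; the layer choices are illustrative, not the paper's exact blocks):

```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """One coupling step of a conditional flow for p(y|x): half of y is
    scaled and shifted using the other half together with x."""

    def __init__(self, y_dim, x_dim, hidden=64):
        super().__init__()
        self.half = y_dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + x_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (y_dim - self.half)))

    def forward(self, y, x):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(torch.cat([y1, x], dim=-1)).chunk(2, dim=-1)
        s = torch.tanh(s)                  # keep scales numerically tame
        z2 = y2 * torch.exp(s) + t
        return torch.cat([y1, z2], dim=-1), s.sum(dim=-1)  # z, log|det J|
```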
arXiv Detail & Related papers (2019-11-29T19:17:58Z)