Nonparametric Generative Modeling with Conditional Sliced-Wasserstein
Flows
- URL: http://arxiv.org/abs/2305.02164v3
- Date: Tue, 25 Jul 2023 09:11:48 GMT
- Title: Nonparametric Generative Modeling with Conditional Sliced-Wasserstein
Flows
- Authors: Chao Du, Tianbo Li, Tianyu Pang, Shuicheng Yan, Min Lin
- Abstract summary: Sliced-Wasserstein Flow (SWF) is a promising approach to nonparametric generative modeling but has not been widely adopted due to its suboptimal generative quality and lack of conditional modeling capabilities.
We propose Conditional Sliced-Wasserstein Flow (CSWF), a simple yet effective extension of SWF that enables nonparametric conditional modeling.
- Score: 101.31862036510701
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sliced-Wasserstein Flow (SWF) is a promising approach to nonparametric
generative modeling but has not been widely adopted due to its suboptimal
generative quality and lack of conditional modeling capabilities. In this work,
we make two major contributions to bridging this gap. First, based on a
pleasant observation that (under certain conditions) the SWFs of joint
distributions coincide with those of conditional distributions, we propose
Conditional Sliced-Wasserstein Flow (CSWF), a simple yet effective extension of
SWF that enables nonparametric conditional modeling. Second, we introduce
appropriate inductive biases of images into SWF with two techniques inspired by
local connectivity and multiscale representation in vision research, which
greatly improve the efficiency and quality of modeling images. With all the
improvements, we achieve generative performance comparable with many deep
parametric generative models on both conditional and unconditional tasks in a
purely nonparametric fashion, demonstrating its great potential.
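As background for the method the abstract builds on: an SWF transports particles by following gradients of the sliced-Wasserstein distance, which averages one-dimensional Wasserstein distances over random projection directions. Below is a minimal NumPy sketch of the standard Monte Carlo sliced-W2 estimate between two sample sets; the function name and parameters are illustrative and not taken from the paper.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, seed=0):
    """Monte Carlo estimate of the sliced 2-Wasserstein distance between
    two empirical distributions given as (n, d) sample arrays X and Y."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Draw random directions uniformly on the unit sphere in R^d.
    thetas = rng.normal(size=(n_projections, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    total = 0.0
    for theta in thetas:
        # Project both sample sets onto the direction theta.
        px = np.sort(X @ theta)
        py = np.sort(Y @ theta)
        # 1D W2^2 between equal-size empirical distributions reduces to
        # the mean squared difference of the sorted projections.
        total += np.mean((px - py) ** 2)
    return np.sqrt(total / n_projections)
```

Because each projected problem is one-dimensional, the inner distance is computed exactly by sorting, which is what makes the sliced formulation tractable in high dimensions.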
Related papers
- EMR-Merging: Tuning-Free High-Performance Model Merging [55.03509900949149]
We show that Elect, Mask & Rescale-Merging (EMR-Merging) shows outstanding performance compared to existing merging methods.
EMR-Merging is tuning-free, thus requiring no data availability or any additional training while showing impressive performance.
arXiv Detail & Related papers (2024-05-23T05:25:45Z)
- Poisson flow consistency models for low-dose CT image denoising [3.6218104434936658]
We introduce a novel image denoising technique that combines the flexibility afforded by Poisson flow generative models (PFGM++) with the high-quality, single-step sampling of consistency models.
Our results indicate that the added flexibility of tuning the hyperparameter D, the dimensionality of the augmentation variables in PFGM++, allows us to outperform consistency models.
arXiv Detail & Related papers (2024-02-13T01:39:56Z)
- Guided Flows for Generative Modeling and Decision Making [55.42634941614435]
We show that Guided Flows significantly improve sample quality in conditional image generation and zero-shot text-to-speech synthesis.
Notably, we are the first to apply flow models for plan generation in the offline reinforcement learning setting, with a speedup in computation compared to diffusion models.
arXiv Detail & Related papers (2023-11-22T15:07:59Z)
- A Bayesian Non-parametric Approach to Generative Models: Integrating Variational Autoencoder and Generative Adversarial Networks using Wasserstein and Maximum Mean Discrepancy [2.966338139852619]
Generative adversarial networks (GANs) and variational autoencoders (VAEs) are two of the most prominent and widely studied generative models.
We employ a Bayesian non-parametric (BNP) approach to merge GANs and VAEs.
By fusing the discriminative power of GANs with the reconstruction capabilities of VAEs, our novel model achieves superior performance in various generative tasks.
arXiv Detail & Related papers (2023-08-27T08:58:31Z)
- PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with Dual-Discriminators [120.06891448820447]
Obtaining clear and visually pleasing images is a common concern, and the task of underwater image enhancement (UIE) has emerged to meet this need.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z)
- Conditional Generation from Unconditional Diffusion Models using Denoiser Representations [94.04631421741986]
We propose adapting pre-trained unconditional diffusion models to new conditions using the learned internal representations of the denoiser network.
We show that augmenting the Tiny ImageNet training set with synthetic images generated by our approach improves the classification accuracy of ResNet baselines by up to 8%.
arXiv Detail & Related papers (2023-06-02T20:09:57Z)
- DiffuseVAE: Efficient, Controllable and High-Fidelity Generation from Low-Dimensional Latents [26.17940552906923]
We present DiffuseVAE, a novel generative framework that integrates VAE within a diffusion model framework.
We show that the proposed model can generate high-resolution samples and exhibits quality comparable to state-of-the-art models on standard benchmarks.
arXiv Detail & Related papers (2022-01-02T06:44:23Z)
- Normalizing Flows with Multi-Scale Autoregressive Priors [131.895570212956]
We introduce channel-wise dependencies in the latent space of normalizing flows through multi-scale autoregressive priors (mAR).
Our mAR prior for models with split coupling flow layers (mAR-SCF) can better capture dependencies in complex multimodal data.
We show that mAR-SCF allows for improved image generation quality, with gains in FID and Inception scores compared to state-of-the-art flow-based models.
arXiv Detail & Related papers (2020-04-08T09:07:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.