RG-Flow: A hierarchical and explainable flow model based on
renormalization group and sparse prior
- URL: http://arxiv.org/abs/2010.00029v5
- Date: Mon, 15 Aug 2022 09:50:27 GMT
- Title: RG-Flow: A hierarchical and explainable flow model based on
renormalization group and sparse prior
- Authors: Hong-Ye Hu, Dian Wu, Yi-Zhuang You, Bruno Olshausen, Yubei Chen
- Abstract summary: Flow-based generative models have become an important class of unsupervised learning approaches.
In this work, we incorporate the key ideas of renormalization group (RG) and sparse prior distribution to design a hierarchical flow-based generative model, RG-Flow.
Our proposed method has $O(\log L)$ complexity for inpainting of an image with edge length $L$, compared to previous generative models with $O(L^2)$ complexity.
- Score: 2.274915755738124
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Flow-based generative models have become an important class of unsupervised
learning approaches. In this work, we incorporate the key ideas of
renormalization group (RG) and sparse prior distribution to design a
hierarchical flow-based generative model, RG-Flow, which can separate
information at different scales of images and extract disentangled
representations at each scale. We demonstrate our method on synthetic
multi-scale image datasets and the CelebA dataset, showing that the
disentangled representations enable semantic manipulation and style mixing of
the images at different scales. To visualize the latent representations, we
introduce receptive fields for flow-based models and show that the receptive
fields of RG-Flow are similar to those of convolutional neural networks. In
addition, we replace the widely adopted isotropic Gaussian prior distribution
by the sparse Laplacian distribution to further enhance the disentanglement of
representations. From a theoretical perspective, our proposed method has
$O(\log L)$ complexity for inpainting of an image with edge length $L$,
compared to previous generative models with $O(L^2)$ complexity.
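To make the abstract concrete, here is a minimal sketch, not the paper's implementation: a fixed Haar-wavelet step stands in for one learned RG layer (RG-Flow learns invertible networks for each step, per the abstract's hierarchical design), recursion over $\log_2 L$ scales produces the hierarchical latents, and the prior over those latents can be swapped from the usual isotropic Gaussian to the sparse Laplacian the paper advocates. The 1-D setting and all names are illustrative assumptions.

```python
import numpy as np

def haar_step(x):
    """One toy 'RG step': split neighbouring pairs into coarse averages and
    fine details. The map is orthonormal, so its log|det Jacobian| is 0."""
    a, b = x[0::2], x[1::2]
    return (a + b) / np.sqrt(2.0), (a - b) / np.sqrt(2.0)

def encode(x):
    """Apply RG steps recursively; collect the detail latents peeled off at each scale."""
    latents = []
    while x.size > 1:
        x, detail = haar_step(x)
        latents.append(detail)
    latents.append(x)  # deepest coarse component
    return latents

def log_prob_gaussian(latents):
    """Isotropic Gaussian prior, the common default for flow models."""
    z = np.concatenate(latents)
    return float(np.sum(-0.5 * z**2 - 0.5 * np.log(2.0 * np.pi)))

def log_prob_laplace(latents, b=1.0):
    """Sparse Laplacian prior p(z) = exp(-|z|/b) / (2b), the swap the paper advocates."""
    z = np.concatenate(latents)
    return float(np.sum(-np.abs(z) / b - np.log(2.0 * b)))

rng = np.random.default_rng(0)
x = rng.normal(size=64)                         # a 1-D "image" with edge length L = 64
zs = encode(x)
print("hierarchy depth:", len(zs))              # log2(64) detail scales + 1 coarse = 7
print("Gaussian prior log p:", log_prob_gaussian(zs))
print("Laplacian prior log p:", log_prob_laplace(zs))
```

Since each input sample contributes to only one detail coefficient per scale, a local image region couples to only $O(\log L)$ latents, which is the intuition behind the abstract's $O(\log L)$ inpainting claim and its receptive-field visualization.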
Related papers
- Structured Generations: Using Hierarchical Clusters to guide Diffusion Models [12.618079575423868]
This paper introduces Diffuse-TreeVAE, a deep generative model that integrates hierarchical clustering into the framework of Denoising Diffusion Probabilistic Models (DDPMs).
The proposed approach generates new images by sampling from the root embedding of a learned latent-tree VAE structure, propagating through hierarchical paths, and using a second-stage DDPM to refine and generate distinct, high-quality images for each data cluster.
arXiv Detail & Related papers (2024-07-08T17:00:28Z)
- Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff) for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z)
- Semantic Image Synthesis via Diffusion Models [159.4285444680301]
Denoising Diffusion Probabilistic Models (DDPMs) have achieved remarkable success in various image generation tasks.
Recent work on semantic image synthesis mainly follows the de facto Generative Adversarial Nets (GANs) approach.
arXiv Detail & Related papers (2022-06-30T18:31:51Z)
- Motion Estimation for Large Displacements and Deformations [7.99536002595393]
Variational optical flow techniques based on a coarse-to-fine scheme interpolate sparse matches and locally optimize an energy model conditioned on colour, gradient and smoothness, a scheme that struggles with large displacements and deformations.
This paper addresses this problem and presents HybridFlow, a variational motion estimation framework for large displacements and deformations.
arXiv Detail & Related papers (2022-06-24T18:53:22Z)
- A new perspective on probabilistic image modeling [92.89846887298852]
We present a new probabilistic approach for image modeling capable of density estimation, sampling and tractable inference.
The proposed Deep Convolutional Gaussian Mixture Models (DCGMMs) can be trained end-to-end by SGD from random initial conditions, much like CNNs.
We show that DCGMMs compare favorably to several recent probabilistic circuit (PC) and sum-product network (SPN) models in terms of inference, classification and sampling.
arXiv Detail & Related papers (2022-03-21T14:53:57Z)
- Self-Supervised Graph Representation Learning via Topology Transformations [61.870882736758624]
We present the Topology Transformation Equivariant Representation learning, a general paradigm of self-supervised learning for node representations of graph data.
In experiments, we apply the proposed model to the downstream node and graph classification tasks, and results show that the proposed method outperforms the state-of-the-art unsupervised approaches.
arXiv Detail & Related papers (2021-05-25T06:11:03Z)
- Normalizing Flows with Multi-Scale Autoregressive Priors [131.895570212956]
We introduce channel-wise dependencies in the latent space of normalizing flows through multi-scale autoregressive priors (mAR).
Our mAR prior for models with split coupling flow layers (mAR-SCF) can better capture dependencies in complex multimodal data.
We show that mAR-SCF allows for improved image generation quality, with gains in FID and Inception scores compared to state-of-the-art flow-based models (a toy sketch of such a channel-wise prior follows this entry).
arXiv Detail & Related papers (2020-04-08T09:07:11Z)
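As a toy illustration of the mAR idea above, and not the paper's architecture, the sketch below factorizes the prior autoregressively across latent channels, predicting each channel's Gaussian parameters from the previous channels with placeholder linear maps; mAR-SCF instead learns such conditional distributions inside split-coupling flows.

```python
import numpy as np

rng = np.random.default_rng(1)
C, D = 4, 16    # latent channels and spatial size per channel (illustrative)
# Placeholder linear predictors: channel c's Gaussian parameters from channels < c.
W = [rng.normal(scale=0.1, size=(c * D, 2 * D)) for c in range(C)]

def ar_log_prob(z):
    """z has shape (C, D); p(z) = prod_c N(z_c ; mu_c(z_<c), sigma_c(z_<c)^2)."""
    logp = 0.0
    for c in range(C):
        ctx = z[:c].reshape(-1)                       # all previous channels as context
        params = ctx @ W[c] if c > 0 else np.zeros(2 * D)
        mu, log_sigma = params[:D], params[D:]
        logp += np.sum(-0.5 * ((z[c] - mu) / np.exp(log_sigma)) ** 2
                       - log_sigma - 0.5 * np.log(2.0 * np.pi))
    return float(logp)

z = rng.normal(size=(C, D))
print("channel-wise AR prior log p:", ar_log_prob(z))
```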
- Semi-Supervised Learning with Normalizing Flows [54.376602201489995]
FlowGMM is an end-to-end approach to generative semi-supervised learning with normalizing flows, which places a Gaussian mixture prior on the latent space.
We show promising results on a wide range of applications, including AG-News and Yahoo Answers text data (a minimal latent-mixture sketch follows this entry).
arXiv Detail & Related papers (2019-12-30T17:36:33Z)
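FlowGMM's name reflects its mechanism: the flow maps data to a latent space with a Gaussian-mixture prior, one component per class, so labeled points are scored under their class component and unlabeled points under the marginal mixture. The sketch below shows only that latent-space objective with placeholder means; the flow and its Jacobian term are omitted, and nothing here is the authors' code.

```python
import numpy as np

rng = np.random.default_rng(2)
K, D = 3, 2                                  # classes and latent dimension (illustrative)
means = 4.0 * rng.normal(size=(K, D))        # placeholder per-class component means

def comp_logpdf(z, mu):
    """Log density of an isotropic unit-variance Gaussian component."""
    return np.sum(-0.5 * (z - mu) ** 2 - 0.5 * np.log(2.0 * np.pi), axis=-1)

def labeled_logp(z, y):
    """Labeled latents are scored under their class's component (uniform class prior)."""
    return comp_logpdf(z, means[y]) + np.log(1.0 / K)

def unlabeled_logp(z):
    """Unlabeled latents marginalize over components: log sum_k (1/K) N(z; mu_k, I)."""
    comps = np.stack([comp_logpdf(z, means[k]) for k in range(K)])
    m = comps.max(axis=0)                    # log-sum-exp for numerical stability
    return m + np.log(np.mean(np.exp(comps - m), axis=0))

z = rng.normal(size=(5, D))                  # stand-ins for flow outputs f(x)
print("labeled   log p:", labeled_logp(z, y=0))
print("unlabeled log p:", unlabeled_logp(z))
```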
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.