Unifying Generative Models with GFlowNets
- URL: http://arxiv.org/abs/2209.02606v1
- Date: Tue, 6 Sep 2022 15:52:51 GMT
- Title: Unifying Generative Models with GFlowNets
- Authors: Dinghuai Zhang, Ricky T. Q. Chen, Nikolay Malkin, Yoshua Bengio
- Abstract summary: We present a short note on the connections between existing deep generative models and the GFlowNet framework, shedding light on their overlapping traits.
This provides a means for unifying training and inference algorithms and a route to constructing agglomerations of generative models.
- Score: 85.38102320953551
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There are many frameworks for deep generative modeling, each often presented with its own specific training algorithms and inference methods. We present a short note on the connections between existing deep generative models and the GFlowNet framework, shedding light on their overlapping traits and providing a unifying viewpoint through the lens of learning with Markovian trajectories. This provides a means for unifying training and inference algorithms and a route to constructing agglomerations of generative models.
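Since the note's unifying lens is learning with Markovian trajectories, a minimal sketch of the trajectory-balance objective commonly used to train GFlowNets may make the idea concrete. Everything below (the bit-string environment, the reward, and the network sizes) is an illustrative assumption, not a detail taken from the paper:

```python
import torch
import torch.nn as nn

N = 4  # build binary strings of length N, one bit per step

def encode(prefix):
    # Fixed-size encoding: bit values plus a mask of filled positions.
    v = torch.zeros(2 * N)
    for i, b in enumerate(prefix):
        v[i] = float(b)
        v[N + i] = 1.0
    return v

def reward(x):
    # Illustrative reward: strings with more 1s get a larger reward.
    return float(1 + sum(x)) ** 2

policy = nn.Sequential(nn.Linear(2 * N, 64), nn.ReLU(), nn.Linear(64, 2))
log_Z = nn.Parameter(torch.zeros(()))  # learned log-partition function
opt = torch.optim.Adam(list(policy.parameters()) + [log_Z], lr=1e-2)

for step in range(2000):
    prefix, log_pf = [], torch.zeros(())
    for _ in range(N):  # roll out one Markovian trajectory s_0 -> ... -> x
        dist = torch.distributions.Categorical(logits=policy(encode(prefix)))
        a = dist.sample()
        log_pf = log_pf + dist.log_prob(a)
        prefix.append(int(a))
    # This DAG is a tree (one parent per state), so log P_B(tau) = 0 and
    # trajectory balance reduces to: log Z + log P_F(tau) = log R(x).
    loss = (log_Z + log_pf - torch.log(torch.tensor(reward(prefix)))) ** 2
    opt.zero_grad(); loss.backward(); opt.step()

print(log_Z.exp())  # at optimum, Z equals the sum of R over all 2^N strings
```

Because each complete string here has exactly one construction path, the backward policy drops out and the loss simply pushes Z * P_F(tau) toward the terminal reward R(x).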
Related papers
- JanusFlow: Harmonizing Autoregression and Rectified Flow for Unified Multimodal Understanding and Generation [36.93638123812204]
We present JanusFlow, a powerful framework that unifies image understanding and generation in a single model.
JanusFlow integrates autoregressive language models with rectified flow, a state-of-the-art method in generative modeling.
arXiv Detail & Related papers (2024-11-12T17:55:10Z)
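As background on the second ingredient, here is a minimal sketch of the rectified-flow objective: a velocity field is regressed onto x1 - x0 along straight-line interpolations between noise and data. The toy 2-D dataset, network, and Euler integrator below are assumptions for illustration, not JanusFlow's actual design:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def sample_data(n):
    # Stand-in "dataset": points on a circle of radius 2.
    theta = 2 * torch.pi * torch.rand(n, 1)
    return torch.cat([2 * torch.cos(theta), 2 * torch.sin(theta)], dim=1)

for step in range(1000):
    x1, x0 = sample_data(256), torch.randn(256, 2)  # data and noise
    t = torch.rand(256, 1)
    xt = (1 - t) * x0 + t * x1                      # straight-line path
    v = net(torch.cat([xt, t], dim=1))
    loss = ((v - (x1 - x0)) ** 2).mean()            # match the velocity
    opt.zero_grad(); loss.backward(); opt.step()

# Generation: integrate dx/dt = v(x, t) from t=0 (noise) to t=1 (data).
with torch.no_grad():
    x = torch.randn(512, 2)
    for i in range(100):
        t = torch.full((512, 1), i / 100)
        x = x + 0.01 * net(torch.cat([x, t], dim=1))
print(x.norm(dim=1).mean())  # should approach the data radius of 2
```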
- Grounding and Enhancing Grid-based Models for Neural Fields [52.608051828300106]
This paper introduces a theoretical framework for grid-based models.
The framework shows that these models' approximation and generalization behaviors are determined by the grid tangent kernel (GTK).
The introduced framework motivates the development of a novel grid-based model, the Multiplicative Fourier Adaptive Grid (MulFAGrid).
arXiv Detail & Related papers (2024-03-29T06:33:13Z)
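Roughly speaking, the grid tangent kernel is the neural tangent kernel specialized to a grid-based model's parameters. A hedged toy computation for a 1-D grid with linear interpolation (the grid size and query points are made up; the paper's definition covers far richer models):

```python
import torch

G = 8  # grid resolution (an assumption for this toy)
theta = torch.randn(G, requires_grad=True)  # one parameter per grid node

def grid_model(x):
    # Linear interpolation of grid values at a position x in [0, 1).
    pos = x * (G - 1)
    i0 = pos.floor().long().clamp(max=G - 2)
    w = pos - i0.float()
    return (1 - w) * theta[i0] + w * theta[i0 + 1]

def tangent_kernel(xa, xb):
    # GTK(xa, xb) = <df/dtheta(xa), df/dtheta(xb)>: nonzero only when
    # the two inputs touch shared grid nodes.
    ga = torch.autograd.grad(grid_model(xa), theta)[0]
    gb = torch.autograd.grad(grid_model(xb), theta)[0]
    return (ga * gb).sum()

print(tangent_kernel(torch.tensor(0.30), torch.tensor(0.32)))  # large
print(tangent_kernel(torch.tensor(0.30), torch.tensor(0.90)))  # zero
```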
- On the Role of Edge Dependency in Graph Generative Models [28.203109773986167]
We introduce a novel evaluation framework for generative models of graphs.
We focus on the overlap among model-generated graphs as a means to ensure both accuracy and edge diversity.
Our results indicate that our simple, interpretable models provide competitive baselines to popular generative models.
arXiv Detail & Related papers (2023-12-06T18:54:27Z)
- Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC [102.64648158034568]
Diffusion models have quickly become the prevailing approach to generative modeling in many domains.
We propose an energy-based parameterization of diffusion models which enables the use of new compositional operators.
We find these samplers lead to notable improvements in compositional generation across a wide set of problems.
arXiv Detail & Related papers (2023-02-22T18:48:46Z)
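The compositional operators rest on the fact that multiplying densities corresponds to adding energies. A hedged sketch, with hand-written quadratic energies standing in for learned diffusion energies, sampled with plain unadjusted Langevin dynamics rather than the paper's more careful MCMC:

```python
import torch

def energy_a(x):  # pulls samples toward (+1, +1)
    return 0.5 * ((x - 1.0) ** 2).sum(-1)

def energy_b(x):  # pulls the second coordinate toward -1
    return 0.5 * ((x[..., 1] + 1.0) ** 2)

def composed_energy(x):
    # Product of densities p_a * p_b  <=>  sum of energies E_a + E_b.
    return energy_a(x) + energy_b(x)

def langevin_sample(energy, n=256, dim=2, steps=200, eta=0.05):
    x = torch.randn(n, dim)
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        grad = torch.autograd.grad(energy(x).sum(), x)[0]
        x = x - eta * grad + (2 * eta) ** 0.5 * torch.randn_like(x)
    return x.detach()

samples = langevin_sample(composed_energy)
# Coordinate 0 is governed by E_a alone (mean near 1); the two energies
# compromise on coordinate 1 (mean near 0).
print(samples.mean(0))
```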
- Internal Representations of Vision Models Through the Lens of Frames on Data Manifolds [8.67467876089153]
We present a new approach to studying the internal representations of vision models, inspired by the idea of a frame on the tangent bundle of a manifold.
Our construction, which we call a neural frame, is formed by assembling a set of vectors representing specific types of perturbations of a data point.
Using neural frames, we make observations about the way that models process, layer by layer, specific modes of variation within a small neighborhood of a data point.
arXiv Detail & Related papers (2022-11-19T01:48:19Z)
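A rough illustration of the neural-frame construction, under strong simplifying assumptions: a tiny MLP stands in for a vision model, coordinate axes stand in for semantically meaningful perturbations, and finite differences assemble the per-layer frame:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
layers = nn.ModuleList([nn.Linear(4, 8), nn.Linear(8, 8), nn.Linear(8, 3)])

@torch.no_grad()
def activations(x):
    acts = []
    for layer in layers:
        x = torch.tanh(layer(x))
        acts.append(x)
    return acts

x = torch.randn(4)
eps = 1e-3
directions = torch.eye(4)  # coordinate axes as stand-in perturbations

base = activations(x)
for k, act in enumerate(base):
    # The frame at layer k: one finite-difference vector per perturbation,
    # recording how that mode of variation moves this layer's activations.
    frame = torch.stack(
        [(activations(x + eps * d)[k] - act) / eps for d in directions]
    )
    # Singular values show which modes of variation survive up to layer k.
    print(k, torch.linalg.svdvals(frame))
```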
- Improving Label Quality by Jointly Modeling Items and Annotators [68.8204255655161]
We propose a fully Bayesian framework for learning ground-truth labels from noisy annotators.
Our framework ensures scalability by factoring a generative Bayesian soft clustering model over label distributions into the classic Dawid and Skene joint annotator-data model.
arXiv Detail & Related papers (2021-06-20T02:15:20Z)
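For context on the factorization above, a minimal EM sketch of the classic Dawid and Skene annotator-data model (the toy binary votes and shapes are assumptions; the paper's Bayesian soft-clustering extension is not reproduced here):

```python
import numpy as np

# votes[i, a] = label annotator a gave item i (values in {0, 1})
votes = np.array([
    [0, 0, 1],
    [1, 1, 1],
    [0, 1, 0],
    [1, 1, 0],
])
n_items, n_annot = votes.shape
K = 2  # number of classes

# Initialize the posterior over true labels from majority vote.
q = np.zeros((n_items, K))
for i in range(n_items):
    counts = np.bincount(votes[i], minlength=K)
    q[i] = counts / counts.sum()

for _ in range(50):
    # M-step: class prior and per-annotator confusion matrices.
    prior = q.mean(axis=0)
    conf = np.zeros((n_annot, K, K))  # conf[a, true, given]
    for a in range(n_annot):
        for k in range(K):
            for l in range(K):
                conf[a, k, l] = q[votes[:, a] == l, k].sum()
        conf[a] /= conf[a].sum(axis=1, keepdims=True)
    # E-step: posterior over each item's true label.
    for i in range(n_items):
        log_p = np.log(prior)
        for a in range(n_annot):
            log_p += np.log(conf[a, :, votes[i, a]] + 1e-12)
        q[i] = np.exp(log_p - log_p.max())
        q[i] /= q[i].sum()

print(q.argmax(axis=1))  # inferred ground-truth labels
```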
- Deep Generative Modelling: A Comparative Review of VAEs, GANs, Normalizing Flows, Energy-Based and Autoregressive Models [7.477211792460795]
Deep generative modelling is a class of techniques that train deep neural networks to model the distribution of training samples.
This compendium covers energy-based models, variational autoencoders, generative adversarial networks, autoregressive models, and normalizing flows.
arXiv Detail & Related papers (2021-03-08T17:34:03Z)
- Explanation-Guided Training for Cross-Domain Few-Shot Classification [96.12873073444091]
The cross-domain few-shot classification (CD-FSC) task combines few-shot classification with the requirement to generalize across domains represented by different datasets.
We introduce a novel training approach for existing FSC models.
We show that explanation-guided training effectively improves the model generalization.
arXiv Detail & Related papers (2020-07-17T07:28:08Z)
- CoSE: Compositional Stroke Embeddings [52.529172734044664]
We present a generative model for complex free-form structures such as stroke-based drawings.
Our approach is suitable for interactive use cases such as auto-completing diagrams.
arXiv Detail & Related papers (2020-06-17T15:22:54Z)
- Network Bending: Expressive Manipulation of Deep Generative Models [0.2062593640149624]
We introduce a new framework for manipulating and interacting with deep generative models that we call network bending.
We show how it allows direct manipulation of semantically meaningful aspects of the generative process and enables a broad range of expressive outcomes.
arXiv Detail & Related papers (2020-05-25T21:48:45Z)