Deep Generative Modelling: A Comparative Review of VAEs, GANs,
Normalizing Flows, Energy-Based and Autoregressive Models
- URL: http://arxiv.org/abs/2103.04922v1
- Date: Mon, 8 Mar 2021 17:34:03 GMT
- Title: Deep Generative Modelling: A Comparative Review of VAEs, GANs,
Normalizing Flows, Energy-Based and Autoregressive Models
- Authors: Sam Bond-Taylor, Adam Leach, Yang Long, Chris G. Willcocks
- Abstract summary: Deep generative modelling is a class of techniques that train deep neural networks to model the distribution of training samples.
This compendium covers energy-based models, variational autoencoders, generative adversarial networks, autoregressive models, and normalizing flows.
- Score: 7.477211792460795
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep generative modelling is a class of techniques that train deep neural
networks to model the distribution of training samples. Research has fragmented
into various interconnected approaches, each of which makes trade-offs
including run-time, diversity, and architectural restrictions. In particular,
this compendium covers energy-based models, variational autoencoders,
generative adversarial networks, autoregressive models, normalizing flows, in
addition to numerous hybrid approaches. These techniques are drawn together
under a single cohesive framework, compared and contrasted to explain the
premises behind each, while current state-of-the-art advances and
implementations are reviewed.
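Of the model families the review covers, autoregressive models have the simplest probabilistic core: the chain rule factorizes the joint density as p(x) = Π_i p(x_i | x_<i), which permits exact likelihoods and ancestral sampling. The toy example below is a minimal, hypothetical sketch of that factorization over binary sequences of length 3 (the conditional table and names are illustrative, not taken from the paper):

```python
import numpy as np

# Toy autoregressive model over binary sequences of length 3.
# The conditional table p(x_i = 1 | x_{<i}) is keyed by the prefix;
# its values are arbitrary and purely illustrative.
cond = {
    (): 0.5,
    (0,): 0.3, (1,): 0.7,
    (0, 0): 0.2, (0, 1): 0.6, (1, 0): 0.4, (1, 1): 0.8,
}

def log_prob(x):
    """Chain-rule log-likelihood: sum of log p(x_i | x_{<i})."""
    lp = 0.0
    for i, xi in enumerate(x):
        p1 = cond[tuple(x[:i])]
        lp += np.log(p1 if xi == 1 else 1.0 - p1)
    return lp

rng = np.random.default_rng(0)

def sample():
    """Ancestral sampling: draw each x_i from p(x_i | x_{<i}) in order."""
    x = []
    for _ in range(3):
        x.append(int(rng.random() < cond[tuple(x)]))
    return x

# Sanity check: probabilities of all eight sequences sum to one.
total = sum(np.exp(log_prob([a, b, c]))
            for a in (0, 1) for b in (0, 1) for c in (0, 1))
```

Deep autoregressive models such as PixelCNN replace the lookup table with a neural network, but the factorization and sampling loop are the same, which is also the source of their slow, sequential generation.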
Related papers
- Learnable & Interpretable Model Combination in Dynamic Systems Modeling [0.0]
We discuss which types of models are usually combined and propose a model interface capable of expressing a variety of mixed equation-based models.
We propose a new wildcard topology that can describe the generic connection between two combined models in an easy-to-interpret fashion.
The contributions of this paper are highlighted in a proof of concept: different connection topologies between two models are learned, interpreted, and compared.
arXiv Detail & Related papers (2024-06-12T11:17:11Z)
- FusionBench: A Comprehensive Benchmark of Deep Model Fusion [78.80920533793595]
Deep model fusion is a technique that unifies the predictions or parameters of several deep neural networks into a single model.
FusionBench is the first comprehensive benchmark dedicated to deep model fusion.
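The simplest instance of fusing the parameters of several networks is uniform weight averaging across models that share an architecture (a "model soup"-style merge). The sketch below is a hypothetical illustration of that idea with toy weight arrays; actual fusion methods benchmarked in FusionBench are considerably richer:

```python
import numpy as np

def fuse_parameters(models):
    """Uniformly average corresponding weight arrays across models.

    `models` is a list of models, each represented as a list of
    NumPy arrays with matching shapes (a toy stand-in for real
    network parameters).
    """
    return [np.mean(stack, axis=0) for stack in zip(*models)]

# Three toy "models", each with one 2x2 weight matrix and one bias vector.
rng = np.random.default_rng(0)
models = [[rng.normal(size=(2, 2)), rng.normal(size=2)] for _ in range(3)]
fused = fuse_parameters(models)
```

Uniform averaging only makes sense when the models lie in a shared loss basin (e.g. fine-tuned from one common checkpoint); otherwise permutation alignment or task-arithmetic-style merging is typically needed first.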
arXiv Detail & Related papers (2024-06-05T13:54:28Z)
- Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models [54.132297393662654]
We introduce a hybrid method that fine-tunes cutting-edge diffusion models by optimizing reward models through RL.
We demonstrate the capability of our approach to outperform the best designs in offline data, leveraging the extrapolation capabilities of reward models.
arXiv Detail & Related papers (2024-05-30T03:57:29Z)
- Model Synthesis for Zero-Shot Model Attribution [26.835046772924258]
Generative models are shaping various fields such as art, design, and human-computer interaction.
We propose a model synthesis technique, which generates numerous synthetic models mimicking the fingerprint patterns of real-world generative models.
Our experiments demonstrate that this fingerprint extractor, trained solely on synthetic models, achieves impressive zero-shot generalization on a wide range of real-world generative models.
arXiv Detail & Related papers (2023-07-29T13:00:42Z)
- A Survey on Generative Diffusion Model [75.93774014861978]
Diffusion models are an emerging class of deep generative models.
They have certain limitations, including a time-consuming iterative generation process and confinement to high-dimensional Euclidean space.
This survey presents a plethora of advanced techniques aimed at enhancing diffusion models.
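The time-consuming iterative generation noted above stems from diffusion models' two-process design: a fixed forward process gradually noises the data, and a learned reverse process must undo it step by step. The forward marginal, however, has a closed form, q(x_t | x_0) = N(√ᾱ_t x_0, (1 − ᾱ_t)I), which the sketch below illustrates with an assumed linear schedule (schedule values are illustrative, not from any cited paper):

```python
import numpy as np

# DDPM-style forward (noising) process with a linear beta schedule.
T = 1000
betas = np.linspace(1e-4, 0.02, T)       # illustrative noise schedule
alpha_bar = np.cumprod(1.0 - betas)      # cumulative signal retention

def noise(x0, t, rng):
    """Sample x_t from q(x_t | x_0) in closed form, returning (x_t, eps)."""
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

rng = np.random.default_rng(0)
x0 = np.ones(4)                  # toy "data" vector
xT, _ = noise(x0, T - 1, rng)    # by the final step, nearly pure noise
```

Training regresses a network against the sampled `eps`; generation then requires inverting all T steps one at a time, which is exactly the iterative cost that fast-sampling techniques in this survey aim to reduce.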
arXiv Detail & Related papers (2022-09-06T16:56:21Z)
- Unifying Generative Models with GFlowNets [85.38102320953551]
We present a short note on the connections between existing deep generative models and the GFlowNet framework, shedding light on their overlapping traits.
This provides a means for unifying training and inference algorithms, and provides a route to construct an agglomeration of generative models.
arXiv Detail & Related papers (2022-09-06T15:52:51Z)
- Diffusion Models: A Comprehensive Survey of Methods and Applications [10.557289965753437]
Diffusion models are a class of deep generative models that have shown impressive results on various tasks with dense theoretical founding.
Recent studies have shown great enthusiasm for improving the performance of diffusion models.
arXiv Detail & Related papers (2022-09-02T02:59:10Z)
- Sequential Bayesian Neural Subnetwork Ensembles [4.6354120722975125]
We propose an approach for the sequential ensembling of dynamic Bayesian neural subnetworks that consistently maintains reduced model complexity throughout the training process.
Our proposed approach outperforms traditional dense and sparse deterministic and Bayesian ensemble models in terms of prediction accuracy, uncertainty estimation, out-of-distribution detection, and adversarial robustness.
arXiv Detail & Related papers (2022-06-01T22:57:52Z)
- Deep Variational Models for Collaborative Filtering-based Recommender Systems [63.995130144110156]
Deep learning provides accurate collaborative filtering models to improve recommender system results.
Our proposed models apply the variational concept to inject stochasticity in the latent space of the deep architecture.
Results show the superiority of the proposed approach in scenarios where the variational enrichment exceeds the injected noise effect.
arXiv Detail & Related papers (2021-07-27T08:59:39Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
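The posterior penalty mentioned in the last entry above is, in its best-known beta-VAE form, the ELBO's KL term between a diagonal-Gaussian posterior q(z|x) = N(μ, diag(σ²)) and a standard-normal prior, up-weighted by a factor β > 1. The sketch below is a hedged illustration of that penalized objective with toy values (function names and β are assumptions, not from the cited paper):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(N(mu, sigma^2) || N(0, I)), summed over latent dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def penalized_loss(recon_error, mu, log_var, beta=4.0):
    """beta-VAE-style objective: reconstruction error + beta * KL penalty."""
    return recon_error + beta * kl_to_standard_normal(mu, log_var)

# Toy posterior statistics for a 2-dimensional latent space.
mu = np.array([0.5, -0.2])
log_var = np.array([0.0, 0.1])
loss = penalized_loss(recon_error=1.0, mu=mu, log_var=log_var)
```

Scaling the KL term up is what trades reconstruction quality for disentanglement; the multi-stage approach above keeps that penalty for the first stage and recovers the lost reconstruction fidelity with a second generative model.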
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.