Unsupervised multi-modal Styled Content Generation
- URL: http://arxiv.org/abs/2001.03640v2
- Date: Mon, 27 Apr 2020 07:14:03 GMT
- Title: Unsupervised multi-modal Styled Content Generation
- Authors: Omry Sendik, Dani Lischinski, Daniel Cohen-Or
- Abstract summary: UMMGAN is a novel architecture designed to better model multi-modal distributions in an unsupervised fashion.
We show that UMMGAN effectively disentangles between modes and style, thereby providing an independent degree of control over the generated content.
- Score: 61.040392094140245
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The emergence of deep generative models has recently enabled the automatic
generation of massive amounts of graphical content, both in 2D and in 3D.
Generative Adversarial Networks (GANs) and style control mechanisms, such as
Adaptive Instance Normalization (AdaIN), have proved particularly effective in
this context, culminating in the state-of-the-art StyleGAN architecture. While
such models are able to learn diverse distributions, provided a sufficiently
large training set, they are not well-suited for scenarios where the
distribution of the training data exhibits a multi-modal behavior. In such
cases, reshaping a uniform or normal distribution over the latent space into a
complex multi-modal distribution in the data domain is challenging, and the
generator might fail to sample the target distribution well. Furthermore,
existing unsupervised generative models are not able to control the mode of the
generated samples independently of the other visual attributes, despite the
fact that they are typically disentangled in the training data.
In this paper, we introduce UMMGAN, a novel architecture designed to better
model multi-modal distributions, in an unsupervised fashion. Building upon the
StyleGAN architecture, our network learns multiple modes, in a completely
unsupervised manner, and combines them using a set of learned weights. We
demonstrate that this approach is capable of effectively approximating a
complex distribution as a superposition of multiple simple ones. We further
show that UMMGAN effectively disentangles between modes and style, thereby
providing an independent degree of control over the generated content.
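To make the two ingredients in the abstract concrete, here is a minimal, illustrative sketch (not the authors' released code) of (a) a set of learned mode embeddings combined by learned weights and (b) AdaIN-style modulation as used in StyleGAN-like generators. All module and parameter names (ModeMixer, num_modes, weight_net, ...) are assumptions for illustration, and the way the mixed mode code and the per-sample latent are combined is a simplification of the paper's design.

```python
# Illustrative sketch only -- module names and the "z + mixed modes" combination
# are assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModeMixer(nn.Module):
    """Learns K mode embeddings and per-sample weights for mixing them."""

    def __init__(self, latent_dim: int = 512, num_modes: int = 4):
        super().__init__()
        self.modes = nn.Parameter(torch.randn(num_modes, latent_dim))  # K learned mode embeddings
        self.weight_net = nn.Linear(latent_dim, num_modes)             # predicts mixing weights from z

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        w = F.softmax(self.weight_net(z), dim=-1)  # (B, K) convex mixing weights
        mixed = w @ self.modes                     # (B, D) superposition of simple modes
        return mixed + z                           # simplified: style code still varies with z


class AdaIN(nn.Module):
    """Adaptive Instance Normalization: a style code sets per-channel scale and bias."""

    def __init__(self, style_dim: int, channels: int):
        super().__init__()
        self.affine = nn.Linear(style_dim, 2 * channels)

    def forward(self, x: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        scale, bias = self.affine(style).chunk(2, dim=-1)  # (B, C) each
        x = F.instance_norm(x)                             # normalize content features
        return x * (1 + scale[..., None, None]) + bias[..., None, None]


if __name__ == "__main__":
    z = torch.randn(8, 512)              # latents from a simple prior
    style = ModeMixer()(z)               # mode-aware style codes, shape (8, 512)
    feats = torch.randn(8, 64, 16, 16)   # stand-in for intermediate generator features
    out = AdaIN(512, 64)(feats, style)   # style-modulated features, shape (8, 64, 16, 16)
    print(out.shape)
```

In this reading, sampling draws z, forms a style code as a weighted superposition of the learned modes, and modulates generator features with AdaIN; steering the mixing weights directly would correspond to the independent mode control described in the abstract.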
Related papers
- Aggregation of Multi Diffusion Models for Enhancing Learned Representations [4.126721111013567]
This paper introduces a novel algorithm, Aggregation of Multi Diffusion Models (AMDM).
AMDM synthesizes features from multiple diffusion models into a specified model, enhancing its learned representations to activate specific features for fine-grained control.
Experimental results demonstrate that AMDM significantly improves fine-grained control without additional training or inference time.
arXiv Detail & Related papers (2024-10-02T06:16:06Z)
- Diffusion Models For Multi-Modal Generative Modeling [32.61765315067488]
We propose a principled way to define a diffusion model by constructing a unified multi-modal diffusion model in a common diffusion space.
We propose several multimodal generation settings to verify our framework, including image transition, masked-image training, joint image-label and joint image-representation generative modeling.
arXiv Detail & Related papers (2024-07-24T18:04:17Z)
- Decentralized Transformers with Centralized Aggregation are Sample-Efficient Multi-Agent World Models [106.94827590977337]
We propose a novel world model for Multi-Agent RL (MARL) that learns decentralized local dynamics for scalability.
We also introduce a Perceiver Transformer as an effective solution to enable centralized representation aggregation.
Results on the StarCraft Multi-Agent Challenge (SMAC) show that it outperforms strong model-free approaches and existing model-based methods in both sample efficiency and overall performance.
arXiv Detail & Related papers (2024-06-22T12:40:03Z)
- Training Implicit Generative Models via an Invariant Statistical Loss [3.139474253994318]
Implicit generative models have the capability to learn arbitrary complex data distributions.
On the downside, training requires distinguishing real data from artificially generated samples using adversarial discriminators.
We develop a discriminator-free method for training one-dimensional (1D) generative implicit models.
arXiv Detail & Related papers (2024-02-26T09:32:28Z)
- Phasic Content Fusing Diffusion Model with Directional Distribution Consistency for Few-Shot Model Adaption [73.98706049140098]
We propose a novel phasic content fusing few-shot diffusion model with directional distribution consistency loss.
Specifically, we design a phasic training strategy with phasic content fusion to help our model learn content and style information when the diffusion timestep t is large.
Finally, we propose a cross-domain structure guidance strategy that enhances structure consistency during domain adaptation.
arXiv Detail & Related papers (2023-09-07T14:14:11Z)
- Improving Out-of-Distribution Robustness of Classifiers via Generative Interpolation [56.620403243640396]
Deep neural networks achieve superior performance for learning from independent and identically distributed (i.i.d.) data.
However, their performance deteriorates significantly when handling out-of-distribution (OoD) data.
We develop a simple yet effective method called Generative Interpolation to fuse generative models trained from multiple domains for synthesizing diverse OoD samples.
arXiv Detail & Related papers (2023-07-23T03:53:53Z)
- Style-Hallucinated Dual Consistency Learning: A Unified Framework for Visual Domain Generalization [113.03189252044773]
We propose a unified framework, Style-HAllucinated Dual consistEncy learning (SHADE), to handle domain shift in various visual tasks.
Our versatile SHADE can significantly enhance the generalization in various visual recognition tasks, including image classification, semantic segmentation and object detection.
arXiv Detail & Related papers (2022-12-18T11:42:51Z)
- Can Push-forward Generative Models Fit Multimodal Distributions? [3.8615905456206256]
We show that the Lipschitz constant of generative networks has to be large in order to fit multimodal distributions.
We validate our findings on one-dimensional and image datasets and empirically show that generative models consisting of stacked networks with input at each step do not suffer from such limitations (a short sketch of the underlying Lipschitz argument appears after this list).
arXiv Detail & Related papers (2022-06-29T09:03:30Z)
- Learning more expressive joint distributions in multimodal variational methods [0.17188280334580194]
We introduce a method that improves the representational capacity of multimodal variational methods using normalizing flows.
We demonstrate that the model improves on state-of-the-art multimodal methods based on variational inference on various computer vision tasks.
We also show that learning more powerful approximate joint distributions improves the quality of the generated samples.
arXiv Detail & Related papers (2020-09-08T11:45:27Z)
- S2RMs: Spatially Structured Recurrent Modules [105.0377129434636]
We take a step towards models that are capable of simultaneously exploiting both modular and spatiotemporal structures.
We find our models to be robust to the number of available views and better able to generalize to novel tasks without additional training.
arXiv Detail & Related papers (2020-07-13T17:44:30Z)
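The Lipschitz claim in the push-forward entry above can be restated with a standard one-line bound; the sketch below is an illustration consistent with that claim, not the paper's exact proof, with d denoting the gap between two modes and z_1, z_2 latent points mapped into different modes (notation chosen here for illustration).

```latex
% Standard bound, not the paper's exact argument: if the generator G is
% L-Lipschitz and G(z_1), G(z_2) land in two modes whose supports are
% separated by a gap d, then
\[
  d \;\le\; \|G(z_1) - G(z_2)\| \;\le\; L\,\|z_1 - z_2\|
  \qquad\Longrightarrow\qquad
  L \;\ge\; \frac{d}{\|z_1 - z_2\|} .
\]
% A unimodal prior cannot keep the preimages of the two modes far apart
% everywhere, so some pairs with small \|z_1 - z_2\| must map to different
% modes, forcing L to be large.
```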
This list is automatically generated from the titles and abstracts of the papers in this site.