ARTEMIS: Using GANs with Multiple Discriminators to Generate Art
- URL: http://arxiv.org/abs/2311.08278v1
- Date: Tue, 14 Nov 2023 16:19:29 GMT
- Title: ARTEMIS: Using GANs with Multiple Discriminators to Generate Art
- Authors: James Baker
- Abstract summary: We propose a novel method for generating abstract art.
First an autoencoder is trained to encode and decode the style representations of images, which are extracted from source images with a pretrained VGG network.
The decoder component of the autoencoder is extracted and used as a generator in a GAN.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel method for generating abstract art. First an autoencoder
is trained to encode and decode the style representations of images, which are
extracted from source images with a pretrained VGG network. Then, the decoder
component of the autoencoder is extracted and used as a generator in a GAN. The
generator works with an ensemble of discriminators. Each discriminator takes
different style representations of the same images, and the generator is
trained to produce images whose style representations are convincing enough
to deceive all of the discriminators. The generator is also trained to
maximize a diversity term. The resulting images had a surreal, geometric
quality. We call
our approach ARTEMIS (ARTistic Encoder-Multi-Discriminators Including
Self-Attention), as it uses self-attention layers and an encoder-decoder
architecture.
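The pipeline in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the Gram matrix stands in for the style representations a pretrained VGG layer would produce, the non-saturating adversarial loss and the mean-pairwise-distance diversity term are assumed forms, and the function names are hypothetical.

```python
import numpy as np

def gram_matrix(features):
    """Style representation of a feature map of shape (C, H, W).

    Stands in for the style features extracted with a pretrained VGG
    network (assumption: Gram matrices of layer activations, as in
    neural style transfer).
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)  # (C, C), normalized

def generator_loss(disc_scores, fake_batch, diversity_weight=1.0):
    """Generator objective against an ensemble of discriminators.

    `disc_scores` is a list of per-discriminator logit arrays on the
    generated images; each discriminator would see a different style
    representation. The adversarial part is a standard non-saturating
    GAN loss summed over all discriminators; the diversity term
    (assumed form) is the mean pairwise distance between generated
    samples, which the generator maximizes.
    """
    # softplus(-s) = -log(sigmoid(s)): non-saturating generator loss
    adv = sum(np.mean(np.log1p(np.exp(-s))) for s in disc_scores)
    n = len(fake_batch)
    pairwise = [np.linalg.norm(fake_batch[i] - fake_batch[j])
                for i in range(n) for j in range(i + 1, n)]
    diversity = float(np.mean(pairwise))
    return adv - diversity_weight * diversity

# Toy usage: 3 discriminators scoring a batch of 5 generated "images"
rng = np.random.default_rng(0)
fakes = [rng.random((4, 8, 8)) for _ in range(5)]    # fake feature maps
styles = [gram_matrix(f) for f in fakes]             # one (4, 4) Gram each
scores = [rng.standard_normal(5) for _ in range(3)]  # logits per discriminator
loss = generator_loss(scores, fakes)
```

The decoder-as-generator and the VGG feature extraction are omitted; the sketch only shows how per-discriminator losses and the diversity term would combine into a single generator objective.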
Related papers
- Progressive Energy-Based Cooperative Learning for Multi-Domain
Image-to-Image Translation [53.682651509759744]
We study a novel energy-based cooperative learning framework for multi-domain image-to-image translation.
The framework consists of four components: descriptor, translator, style encoder, and style generator.
arXiv Detail & Related papers (2023-06-26T06:34:53Z)
- Towards Accurate Image Coding: Improved Autoregressive Image Generation
with Dynamic Vector Quantization [73.52943587514386]
Existing vector quantization (VQ) based autoregressive models follow a two-stage generation paradigm.
We propose a novel two-stage framework: (1) Dynamic-Quantization VAE (DQ-VAE), which encodes image regions into variable-length codes based on their information densities for accurate representation.
arXiv Detail & Related papers (2023-05-19T14:56:05Z)
- Generative Steganography Network [37.182458848616754]
We propose an advanced generative steganography network (GSN) that can generate realistic stego images without using cover images.
A module named secret block is designed delicately to conceal secret data in the feature maps during image generation.
arXiv Detail & Related papers (2022-07-28T03:34:37Z)
- A Shared Representation for Photorealistic Driving Simulators [83.5985178314263]
We propose to improve the quality of generated images by rethinking the discriminator architecture.
The focus is on the class of problems where images are generated given semantic inputs, such as scene segmentation maps or human body poses.
We aim to learn a shared latent representation that encodes enough information to jointly perform semantic segmentation and content reconstruction, along with coarse-to-fine-grained adversarial reasoning.
arXiv Detail & Related papers (2021-12-09T18:59:21Z)
- AE-StyleGAN: Improved Training of Style-Based Auto-Encoders [21.51697087024866]
StyleGANs have shown impressive results on data generation and manipulation in recent years.
In this paper, we focus on style-based generators asking a scientific question: Does forcing such a generator to reconstruct real data lead to more disentangled latent space and make the inversion process from image to latent space easy?
We describe a new methodology to train a style-based autoencoder where the encoder and generator are optimized end-to-end.
arXiv Detail & Related papers (2021-10-17T04:25:51Z)
- Free-Form Image Inpainting via Contrastive Attention Network [64.05544199212831]
In image inpainting tasks, masks of arbitrary shape can appear anywhere in an image, forming complex patterns.
It is difficult for encoders to capture powerful representations in such a complex situation.
We propose a self-supervised Siamese inference network to improve the robustness and generalization.
arXiv Detail & Related papers (2020-10-29T14:46:05Z)
- Generate High Resolution Images With Generative Variational Autoencoder [0.0]
We present a novel neural network to generate high resolution images.
We replace the decoder of the VAE with a discriminator while keeping the encoder as it is.
We evaluate our network on 3 different datasets: MNIST, LSUN, and CelebA.
arXiv Detail & Related papers (2020-08-12T20:15:34Z)
- Swapping Autoencoder for Deep Image Manipulation [94.33114146172606]
We propose the Swapping Autoencoder, a deep model designed specifically for image manipulation.
The key idea is to encode an image with two independent components and enforce that any swapped combination maps to a realistic image.
Experiments on multiple datasets show that our model produces better results and is substantially more efficient compared to recent generative models.
arXiv Detail & Related papers (2020-07-01T17:59:57Z)
- Adversarial Latent Autoencoders [7.928094304325116]
We introduce an autoencoder that tackles these issues jointly, which we call the Adversarial Latent Autoencoder (ALAE).
ALAE is the first autoencoder able to match, and go beyond, the capabilities of a generator-only type of architecture.
arXiv Detail & Related papers (2020-04-09T10:33:44Z)
- OneGAN: Simultaneous Unsupervised Learning of Conditional Image
Generation, Foreground Segmentation, and Fine-Grained Clustering [100.32273175423146]
We present a method for simultaneously learning, in an unsupervised manner, a conditional image generator, foreground extraction and segmentation, and object removal and background completion.
The method combines a Generative Adversarial Network and a Variational Auto-Encoder, with multiple encoders, generators and discriminators, and benefits from solving all tasks at once.
arXiv Detail & Related papers (2019-12-31T18:15:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.