GANILLA: Generative Adversarial Networks for Image to Illustration Translation
- URL: http://arxiv.org/abs/2002.05638v2
- Date: Fri, 14 Feb 2020 09:46:35 GMT
- Title: GANILLA: Generative Adversarial Networks for Image to Illustration Translation
- Authors: Samet Hicsonmez, Nermin Samet, Emre Akbas, Pinar Duygulu
- Abstract summary: We show that although the current state-of-the-art image-to-image translation models successfully transfer either the style or the content, they fail to transfer both at the same time.
We propose a new generator network to address this issue and show that the resulting network strikes a better balance between style and content.
- Score: 12.55972766570669
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we explore illustrations in children's books as a new domain
in unpaired image-to-image translation. We show that although the current
state-of-the-art image-to-image translation models successfully transfer either
the style or the content, they fail to transfer both at the same time. We
propose a new generator network to address this issue and show that the
resulting network strikes a better balance between style and content.
There are no well-defined or agreed-upon evaluation metrics for unpaired
image-to-image translation. So far, the success of image translation models has
been based on subjective, qualitative visual comparison on a limited number of
images. To address this problem, we propose a new framework for the
quantitative evaluation of image-to-illustration models, where both content and
style are taken into account using separate classifiers. In this new evaluation
framework, our proposed model performs better than the current state-of-the-art
models on the illustrations dataset. Our code and pretrained models can be
found at https://github.com/giddyyupp/ganilla.
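As a rough illustration only, the snippet below sketches such a two-classifier evaluation in PyTorch: a content classifier (trained on natural-image categories) checks that the input content survives translation, while a style classifier (trained to recognize illustrators) checks that the target style was acquired. The names `generator`, `content_clf`, `style_clf`, `loader` and `style_id` are assumed placeholders, not the released code at the repository above.

```python
import torch

@torch.no_grad()
def evaluate(generator, content_clf, style_clf, loader, style_id, device="cpu"):
    """Score translated images with separate content and style classifiers."""
    content_hits, style_hits, total = 0, 0, 0
    for images, content_labels in loader:
        images = images.to(device)
        fakes = generator(images)                       # photo -> illustration
        # content preserved: the content classifier still recognizes the input class
        content_hits += (content_clf(fakes).argmax(1).cpu()
                         == content_labels).sum().item()
        # style transferred: the style classifier predicts the target illustrator
        style_hits += (style_clf(fakes).argmax(1) == style_id).sum().item()
        total += images.size(0)
    return content_hits / total, style_hits / total     # content / style accuracy
```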
Related papers
- Image Regeneration: Evaluating Text-to-Image Model via Generating Identical Image with Multimodal Large Language Models [54.052963634384945]
We introduce the Image Regeneration task to assess text-to-image models.
We use GPT4V to bridge the gap between the reference image and the text input for the T2I model.
We also present the ImageRepainter framework to enhance the quality of generated images (a hypothetical sketch of the regenerate-and-compare loop follows below).
arXiv Detail & Related papers (2024-11-14T13:52:43Z) - Unsupervised Image-to-Image Translation with Generative Prior [103.54337984566877]
- Unsupervised Image-to-Image Translation with Generative Prior [103.54337984566877]
Unsupervised image-to-image translation aims to learn the translation between two visual domains without paired data.
We present a novel framework, Generative Prior-guided UNsupervised Image-to-image Translation (GP-UNIT), to improve the overall quality and applicability of the translation algorithm.
arXiv Detail & Related papers (2022-04-07T17:59:23Z) - Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic [72.60554897161948]
Recent text-to-image matching models apply contrastive learning to large corpora of uncurated pairs of images and sentences.
In this work, we repurpose such models to generate a descriptive text given an image at inference time.
The resulting captions are much less restrictive than those obtained by supervised captioning methods (a simplified reranking sketch follows below).
arXiv Detail & Related papers (2021-11-29T11:01:49Z)
- Fine-Tuning StyleGAN2 For Cartoon Face Generation [0.0]
We propose a novel image-to-image translation method that generates images of the target domain by fine-tuning a pretrained StyleGAN2 model.
The StyleGAN2 model is suitable for unsupervised I2I translation on unbalanced datasets (a generic fine-tuning sketch follows below).
arXiv Detail & Related papers (2021-06-22T14:00:10Z) - toon2real: Translating Cartoon Images to Realistic Images [1.4419517737536707]
- toon2real: Translating Cartoon Images to Realistic Images [1.4419517737536707]
We apply several state-of-the-art models to perform this task; however, they fail to produce good-quality translations.
We propose a method based on the CycleGAN model for image translation from the cartoon domain to the photo-realistic domain.
We demonstrate experimentally that our proposed model achieves the lowest Fréchet Inception Distance score and outperforms another state-of-the-art technique, UNIT (the FID formula is sketched below).
arXiv Detail & Related papers (2021-02-01T20:22:05Z) - Unsupervised Image-to-Image Translation via Pre-trained StyleGAN2
- Unsupervised Image-to-Image Translation via Pre-trained StyleGAN2 Network [73.5062435623908]
We propose a new I2I translation method that generates a new model in the target domain via a series of model transformations.
By feeding the latent vector into the generated model, we can perform I2I translation between the source domain and the target domain (see the inference sketch below).
arXiv Detail & Related papers (2020-10-12T13:51:40Z) - Retrieval Guided Unsupervised Multi-domain Image-to-Image Translation [59.73535607392732]
- Retrieval Guided Unsupervised Multi-domain Image-to-Image Translation [59.73535607392732]
Image-to-image translation aims to learn a mapping that transforms an image from one visual domain to another.
We propose the use of an image retrieval system to assist the image-to-image translation task (a retrieval sketch follows below).
arXiv Detail & Related papers (2020-08-11T20:11:53Z) - COCO-FUNIT: Few-Shot Unsupervised Image Translation with a Content
- COCO-FUNIT: Few-Shot Unsupervised Image Translation with a Content Conditioned Style Encoder [70.23358875904891]
Unsupervised image-to-image translation aims to learn a mapping of an image in a given domain to an analogous image in a different domain.
We propose a new few-shot image translation model, COCO-FUNIT, which computes the style embedding of the example images conditioned on the input image.
Our model is effective at addressing the content loss problem (a toy sketch of the conditioning follows below).
arXiv Detail & Related papers (2020-07-15T02:01:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.