toon2real: Translating Cartoon Images to Realistic Images
- URL: http://arxiv.org/abs/2102.01143v1
- Date: Mon, 1 Feb 2021 20:22:05 GMT
- Title: toon2real: Translating Cartoon Images to Realistic Images
- Authors: K. M. Arefeen Sultan, Mohammad Imrul Jubair, MD. Nahidul Islam, Sayed
Hossain Khan
- Abstract summary: We apply several state-of-the-art models to perform this task; however, they fail to produce good-quality translations.
We propose a CycleGAN-based method for image translation from the cartoon domain to the photo-realistic domain.
We demonstrate experimental results showing that our proposed model achieves the lowest Fréchet Inception Distance score and better results than another state-of-the-art technique, UNIT.
- Score: 1.4419517737536707
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In image-to-image translation, Generative Adversarial Networks
(GANs) have achieved great success, even on unsupervised datasets. In this
work, we aim to translate cartoon images to photo-realistic images using a
GAN. We apply several state-of-the-art models to perform this task; however,
they fail to produce good-quality translations. We observe that the shallow
difference between these two domains causes this issue. Based on this
observation, we propose a CycleGAN-based method for image translation from
the cartoon domain to the photo-realistic domain. To make our model
efficient, we implement Spectral Normalization, which adds stability to
training. We demonstrate our experimental results and show that our proposed
model achieves the lowest Fréchet Inception Distance score and better results
than another state-of-the-art technique, UNIT.
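
For context, the CycleGAN framework the method builds on optimizes a cycle-consistent adversarial objective. The abstract does not spell out the exact losses used here, so the following is the standard formulation from the original CycleGAN paper, with generators G: X→Y and F: Y→X and discriminators D_X, D_Y:

```latex
% Standard CycleGAN objective (Zhu et al., 2017); not necessarily the
% exact losses of toon2real, which the abstract does not specify.
\mathcal{L}(G, F, D_X, D_Y) =
      \mathcal{L}_{\mathrm{GAN}}(G, D_Y, X, Y)
    + \mathcal{L}_{\mathrm{GAN}}(F, D_X, Y, X)
    + \lambda\, \mathcal{L}_{\mathrm{cyc}}(G, F),
\qquad
\mathcal{L}_{\mathrm{cyc}}(G, F) =
      \mathbb{E}_{x \sim p(x)}\!\big[\lVert F(G(x)) - x \rVert_1\big]
    + \mathbb{E}_{y \sim p(y)}\!\big[\lVert G(F(y)) - y \rVert_1\big]
```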
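
The paper does not include code, so here is a minimal PyTorch sketch of the stabilization idea: wrapping each convolution of a CycleGAN-style PatchGAN discriminator in Spectral Normalization. The layer widths and the `PatchDiscriminator` name are illustrative assumptions, not the authors' exact architecture:

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm


def sn_conv(in_ch, out_ch, stride):
    # Spectral normalization bounds each layer's Lipschitz constant,
    # which is the stability mechanism the abstract refers to.
    return spectral_norm(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=stride, padding=1)
    )


class PatchDiscriminator(nn.Module):
    """70x70 PatchGAN discriminator (CycleGAN-style) with spectral norm.

    Hypothetical sketch: widths follow the common C64-C128-C256-C512 layout.
    """

    def __init__(self, in_ch=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            sn_conv(in_ch, base, 2), nn.LeakyReLU(0.2, inplace=True),
            sn_conv(base, base * 2, 2), nn.LeakyReLU(0.2, inplace=True),
            sn_conv(base * 2, base * 4, 2), nn.LeakyReLU(0.2, inplace=True),
            sn_conv(base * 4, base * 8, 1), nn.LeakyReLU(0.2, inplace=True),
            sn_conv(base * 8, 1, 1),  # one real/fake logit per image patch
        )

    def forward(self, x):
        return self.net(x)
```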
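
Likewise, the Fréchet Inception Distance used for evaluation compares the Gaussian statistics (mean and covariance) of Inception-v3 activations for real and generated images. A hedged NumPy/SciPy sketch, assuming the activation matrices have already been extracted (the function name is mine, not the paper's):

```python
import numpy as np
from scipy import linalg


def frechet_inception_distance(act_real, act_fake):
    """FID between two (N, D) matrices of Inception-v3 pool3 activations.

    FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^(1/2))
    """
    mu_r, mu_f = act_real.mean(axis=0), act_fake.mean(axis=0)
    cov_r = np.cov(act_real, rowvar=False)
    cov_f = np.cov(act_fake, rowvar=False)
    # Matrix square root of the covariance product; small imaginary
    # components from numerical error are discarded.
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    covmean = covmean.real if np.iscomplexobj(covmean) else covmean
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```

Lower is better, which is why the paper reports achieving the lowest score.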
Related papers
- SCONE-GAN: Semantic Contrastive learning-based Generative Adversarial
Network for an end-to-end image translation [18.93434486338439]
SCONE-GAN is shown to be effective for learning to generate realistic and diverse scenery images.
For more realistic and diverse image generation, we introduce a style reference image.
We validate the proposed algorithm for image-to-image translation and stylizing outdoor images.
arXiv Detail & Related papers (2023-11-07T10:29:16Z)
- Improving Diffusion-based Image Translation using Asymmetric Gradient
Guidance [51.188396199083336]
We present an approach that guides the reverse process of diffusion sampling by applying asymmetric gradient guidance.
Our model's adaptability allows it to be implemented with both image-diffusion and latent-diffusion models.
Experiments show that our method outperforms various state-of-the-art models in image translation tasks.
arXiv Detail & Related papers (2023-06-07T12:56:56Z)
- NeRFInvertor: High Fidelity NeRF-GAN Inversion for Single-shot Real
Image Animation [66.0838349951456]
NeRF-based generative models have shown impressive capacity in generating high-quality images with consistent 3D geometry.
We propose a universal method to surgically fine-tune these NeRF-GAN models in order to achieve high-fidelity animation of real subjects from only a single image.
arXiv Detail & Related papers (2022-11-30T18:36:45Z)
- InvGAN: Invertible GANs [88.58338626299837]
InvGAN, short for Invertible GAN, successfully embeds real images to the latent space of a high quality generative model.
This allows us to perform image inpainting, merging, and online data augmentation.
arXiv Detail & Related papers (2021-12-08T21:39:00Z)
- Fine-Tuning StyleGAN2 For Cartoon Face Generation [0.0]
We propose a novel image-to-image translation method that generates images of the target domain by fine-tuning a pretrained StyleGAN2 model.
The StyleGAN2 model is suitable for unsupervised I2I translation on unbalanced datasets.
arXiv Detail & Related papers (2021-06-22T14:00:10Z)
- cGANs for Cartoon to Real-life Images [0.4724825031148411]
The project aims to evaluate the robustness of the Pix2Pix model by applying it to datasets consisting of cartoonized images.
It should be possible to train the network to generate real-life images from the cartoonized images.
arXiv Detail & Related papers (2021-01-24T20:26:31Z)
- Unsupervised Image-to-Image Translation via Pre-trained StyleGAN2
Network [73.5062435623908]
We propose a new I2I translation method that generates a new model in the target domain via a series of model transformations.
By feeding the latent vector into the generated model, we can perform I2I translation between the source domain and target domain.
arXiv Detail & Related papers (2020-10-12T13:51:40Z)
- COCO-FUNIT: Few-Shot Unsupervised Image Translation with a Content
Conditioned Style Encoder [70.23358875904891]
Unsupervised image-to-image translation aims to learn a mapping of an image in a given domain to an analogous image in a different domain.
We propose a new few-shot image translation model, COCO-FUNIT, which computes the style embedding of the example images conditioned on the input image.
Our model shows effectiveness in addressing the content loss problem.
arXiv Detail & Related papers (2020-07-15T02:01:14Z)
- Domain Adaptation for Image Dehazing [72.15994735131835]
Most existing methods train a dehazing model on synthetic hazy images; such models generalize poorly to real hazy images due to domain shift.
We propose a domain adaptation paradigm, which consists of an image translation module and two image dehazing modules.
Experimental results on both synthetic and real-world images demonstrate that our model performs favorably against the state-of-the-art dehazing algorithms.
arXiv Detail & Related papers (2020-05-10T13:54:56Z)
- GANILLA: Generative Adversarial Networks for Image to Illustration
Translation [12.55972766570669]
We show that although the current state-of-the-art image-to-image translation models successfully transfer either the style or the content, they fail to transfer both at the same time.
We propose a new generator network to address this issue and show that the resulting network strikes a better balance between style and content.
arXiv Detail & Related papers (2020-02-13T17:12:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.