Anime-to-Real Clothing: Cosplay Costume Generation via Image-to-Image Translation
- URL: http://arxiv.org/abs/2008.11479v1
- Date: Wed, 26 Aug 2020 10:34:46 GMT
- Title: Anime-to-Real Clothing: Cosplay Costume Generation via Image-to-Image Translation
- Authors: Koya Tango, Marie Katsurai, Hayato Maki, Ryosuke Goto
- Abstract summary: This paper presents an automatic costume image generation method based on image-to-image translation.
We present a novel architecture for generative adversarial networks (GANs) to facilitate high-quality cosplay image generation.
Experiments using two types of evaluation metrics demonstrated that the proposed GAN outperforms existing methods.
- Score: 2.4660652494309936
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cosplay has grown from its origins at fan conventions into a billion-dollar
global dress phenomenon. To facilitate imagination and reinterpretation from
animated images to real garments, this paper presents an automatic costume
image generation method based on image-to-image translation. Cosplay items can
be significantly diverse in their styles and shapes, and conventional methods
cannot be directly applied to the wide variation in clothing images that are
the focus of this study. To solve this problem, our method starts by collecting
and preprocessing web images to prepare a cleaned, paired dataset of the anime
and real domains. Then, we present a novel architecture for generative
adversarial networks (GANs) to facilitate high-quality cosplay image
generation. Our GAN incorporates several effective techniques to bridge the gap between the two domains and improve both the global and local consistency of the generated images. Experiments using two types of evaluation metrics demonstrated that the proposed GAN outperforms existing methods. We also showed that the images generated by the proposed method are more realistic than those generated by conventional methods. Our code and pretrained model are available on the web.
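The abstract only names the ingredients (a cleaned anime/real paired dataset and a GAN designed for global and local consistency) without implementation detail. Below is a minimal, hypothetical PyTorch sketch of a paired anime-to-real translation step in that spirit: a conditional generator trained against both an image-level discriminator and a patch-level discriminator, plus an L1 term toward the paired real image. All module names, layer sizes, and loss weights are illustrative assumptions, not the authors' released architecture.

```python
# Hypothetical sketch of paired anime-to-real training; not the paper's exact model.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy encoder-decoder mapping anime clothing images to real-clothing images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class PatchDiscriminator(nn.Module):
    """Scores overlapping patches of an (anime, real) pair for local consistency."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # one logit per patch
        )

    def forward(self, anime, real):
        return self.net(torch.cat([anime, real], dim=1))

class GlobalDiscriminator(nn.Module):
    """Scores the whole (anime, real) pair for global consistency."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )

    def forward(self, anime, real):
        return self.net(torch.cat([anime, real], dim=1))

def train_step(G, D_local, D_global, opt_G, opt_D, anime, real, lambda_l1=100.0):
    """One adversarial step on a paired (anime, real) mini-batch."""
    bce = nn.BCEWithLogitsLoss()
    fake = G(anime)

    # Discriminator update: real pairs -> 1, generated pairs -> 0.
    opt_D.zero_grad()
    d_loss = 0.0
    for D in (D_local, D_global):
        real_logits = D(anime, real)
        fake_logits = D(anime, fake.detach())
        d_loss = d_loss + bce(real_logits, torch.ones_like(real_logits)) \
                        + bce(fake_logits, torch.zeros_like(fake_logits))
    d_loss.backward()
    opt_D.step()

    # Generator update: fool both discriminators and stay close to the paired target.
    opt_G.zero_grad()
    g_loss = lambda_l1 * nn.functional.l1_loss(fake, real)
    for D in (D_local, D_global):
        fake_logits = D(anime, fake)
        g_loss = g_loss + bce(fake_logits, torch.ones_like(fake_logits))
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```

A plausible setup would pair one Adam optimizer for the generator with a second optimizer covering both discriminators; lambda_l1 controls how strongly the paired real image constrains the output relative to the adversarial terms.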
Related papers
- FashionR2R: Texture-preserving Rendered-to-Real Image Translation with Diffusion Models [14.596090302381647]
This paper studies photorealism enhancement of rendered images, leveraging generative power from diffusion models on the controlled basis of rendering.
We introduce a novel framework to translate rendered images into their realistic counterparts, which consists of two stages: Domain Knowledge Injection (DKI) and Realistic Image Generation (RIG).
arXiv Detail & Related papers (2024-10-18T12:48:22Z)
- Texture-Preserving Diffusion Models for High-Fidelity Virtual Try-On [29.217423805933727]
Diffusion model-based approaches have recently become popular, as they are excellent at image synthesis tasks.
We propose a Texture-Preserving Diffusion (TPD) model for virtual try-on, which enhances the fidelity of the results.
We also propose a novel diffusion-based method that predicts a precise inpainting mask based on the person and reference garment images.
arXiv Detail & Related papers (2024-04-01T12:43:22Z)
- Improving Diffusion Models for Authentic Virtual Try-on in the Wild [53.96244595495942]
This paper considers image-based virtual try-on, which renders an image of a person wearing a curated garment.
We propose a novel diffusion model that improves garment fidelity and generates authentic virtual try-on images.
We present a customization method using a pair of person-garment images, which significantly improves fidelity and authenticity.
arXiv Detail & Related papers (2024-03-08T08:12:18Z)
- Scenimefy: Learning to Craft Anime Scene via Semi-Supervised Image-to-Image Translation [75.91455714614966]
We propose Scenimefy, a novel semi-supervised image-to-image translation framework.
Our approach guides the learning with structure-consistent pseudo paired data.
A patch-wise contrastive style loss is introduced to improve stylization and fine details; a rough sketch of this type of loss appears after this list.
arXiv Detail & Related papers (2023-08-24T17:59:50Z)
- Weakly Supervised High-Fidelity Clothing Model Generation [67.32235668920192]
We propose a cheap yet scalable weakly-supervised method called Deep Generative Projection (DGP) to address this specific scenario.
We show that projecting the rough alignment of clothing and body onto the StyleGAN space can yield photo-realistic wearing results.
arXiv Detail & Related papers (2021-12-14T07:15:15Z)
- AniGAN: Style-Guided Generative Adversarial Networks for Unsupervised Anime Face Generation [84.52819242283852]
We propose a novel framework to translate a portrait photo-face into an anime appearance.
Our aim is to synthesize anime-faces which are style-consistent with a given reference anime-face.
Existing methods often fail to transfer the styles of reference anime-faces, or introduce noticeable artifacts/distortions in the local shapes of their generated faces.
arXiv Detail & Related papers (2021-02-24T22:47:38Z)
- Style and Pose Control for Image Synthesis of Humans from a Single Monocular View [78.6284090004218]
StylePoseGAN extends a non-controllable generator to accept conditioning of pose and appearance separately.
Our network can be trained in a fully supervised way with human images to disentangle pose, appearance and body parts.
StylePoseGAN achieves state-of-the-art image generation fidelity on common perceptual metrics.
arXiv Detail & Related papers (2021-02-22T18:50:47Z)
- Domain Adaptation for Image Dehazing [72.15994735131835]
Most existing methods train a dehazing model on synthetic hazy images, and such models generalize poorly to real hazy images due to the domain shift.
We propose a domain adaptation paradigm, which consists of an image translation module and two image dehazing modules.
Experimental results on both synthetic and real-world images demonstrate that our model performs favorably against the state-of-the-art dehazing algorithms.
arXiv Detail & Related papers (2020-05-10T13:54:56Z)
- GarmentGAN: Photo-realistic Adversarial Fashion Transfer [0.0]
GarmentGAN performs image-based garment transfer through generative adversarial methods.
The framework allows users to virtually try-on items before purchase and generalizes to various apparel types.
arXiv Detail & Related papers (2020-03-04T05:01:15Z)
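As referenced in the Scenimefy entry above, a patch-wise contrastive style loss pulls each patch of the stylized output toward the patch at the same location in the reference image and pushes it away from the other patches. The following is a minimal sketch of such an InfoNCE-style patch loss; the patch embeddings, feature shapes, and temperature are assumptions for illustration, not that paper's exact formulation.

```python
# Hypothetical patch-wise contrastive (InfoNCE-style) loss; illustrative only.
import torch
import torch.nn.functional as F

def patch_contrastive_loss(out_feats, ref_feats, temperature=0.07):
    """out_feats, ref_feats: (B, N, C) patch embeddings from a shared encoder,
    where patch i of the output should match patch i of the reference."""
    b, n, c = out_feats.shape
    out = F.normalize(out_feats, dim=-1)
    ref = F.normalize(ref_feats, dim=-1)
    # Similarity of every output patch to every reference patch in the same image.
    logits = torch.bmm(out, ref.transpose(1, 2)) / temperature  # (B, N, N)
    # Positive pairs sit on the diagonal: output patch i <-> reference patch i.
    targets = torch.arange(n, device=out_feats.device).expand(b, n)
    return F.cross_entropy(logits.reshape(b * n, n), targets.reshape(b * n))
```

In practice the patch embeddings would typically be sampled from intermediate feature maps of the generator's encoder at matching spatial locations.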
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.