CA-GAN: Weakly Supervised Color Aware GAN for Controllable Makeup Transfer
- URL: http://arxiv.org/abs/2008.10298v1
- Date: Mon, 24 Aug 2020 10:11:17 GMT
- Title: CA-GAN: Weakly Supervised Color Aware GAN for Controllable Makeup Transfer
- Authors: Robin Kips, Pietro Gori, Matthieu Perrot, Isabelle Bloch
- Abstract summary: We propose a new formulation for the makeup style transfer task, with the objective to learn a color controllable makeup style synthesis.
We introduce CA-GAN, a generative model that learns to modify the color of specific objects in the image to an arbitrary target color.
We present for the first time a quantitative analysis of makeup style transfer and color control performance.
- Score: 10.086015702323971
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While existing makeup style transfer models perform an image synthesis whose
results cannot be explicitly controlled, the ability to modify makeup color
continuously is a desirable property for virtual try-on applications. We
propose a new formulation for the makeup style transfer task, with the
objective to learn a color controllable makeup style synthesis. We introduce
CA-GAN, a generative model that learns to modify the color of specific objects
(e.g. lips or eyes) in the image to an arbitrary target color while preserving
the background. Since color labels are rare and costly to acquire, our method
leverages weakly supervised learning for conditional GANs. This enables
learning a controllable synthesis of complex objects, and only requires a weak
proxy of the image attribute that we wish to modify. Finally, we present for
the first time a quantitative analysis of makeup style transfer and color
control performance.
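To make the weak-supervision idea concrete, below is a minimal sketch, assuming segmentation masks from an off-the-shelf face parser, of how a weak color label could be derived; it is an illustration, not the authors' code, and the function name is hypothetical.

```python
# Illustrative sketch (not the authors' code): derive a weak color label
# from a segmented region, e.g. the lips, to condition the generator on.
# The mask is assumed to come from an off-the-shelf face parser.
import numpy as np

def weak_color_label(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Median RGB of the masked region as a weak proxy for makeup color.

    image: (H, W, 3) uint8 array; mask: (H, W) boolean array.
    Returns a (3,) float array in [0, 1] usable as a conditioning vector.
    """
    region = image[mask].astype(np.float32) / 255.0  # (N, 3) masked pixels
    return np.median(region, axis=0)

# Usage: the label replaces costly human color annotations during training.
img = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
lip_mask = np.zeros((256, 256), dtype=bool)
lip_mask[150:180, 100:160] = True
print(weak_color_label(img, lip_mask))  # e.g. [0.49 0.51 0.50]
```

Taking the median rather than the mean makes such a proxy robust to specular highlights inside the mask.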
Related papers
- IReNe: Instant Recoloring of Neural Radiance Fields [54.94866137102324]
We introduce IReNe, enabling swift, near real-time color editing in NeRF.
We leverage a pre-trained NeRF model and a single training image with user-applied color edits.
This adjustment allows the model to generate new scene views that accurately represent the color changes from the training image.
arXiv Detail & Related papers (2024-05-30T09:30:28Z)
- Automatic Controllable Colorization via Imagination [55.489416987587305]
We propose a framework for automatic colorization that allows for iterative editing and modification.
By understanding the content within a grayscale image, we utilize a pre-trained image generation model to generate multiple images that contain the same content.
These images serve as references for coloring, mimicking the process of human experts.
arXiv Detail & Related papers (2024-04-08T16:46:07Z)
- Control Color: Multimodal Diffusion-based Interactive Image Colorization [81.68817300796644]
Control Color (Ctrl Color) is a multi-modal colorization method that leverages the pre-trained Stable Diffusion (SD) model.
We present an effective way to encode user strokes to enable precise local color manipulation.
We also introduce a novel module based on self-attention and a content-guided deformable autoencoder to address the long-standing issues of color overflow and inaccurate coloring.
arXiv Detail & Related papers (2024-02-16T17:51:13Z)
- Diffusing Colors: Image Colorization with Text Guided Diffusion [11.727899027933466]
We present a novel image colorization framework that utilizes image diffusion techniques with granular text prompts.
Our method provides a balance between automation and control, outperforming existing techniques in terms of visual quality and semantic coherence.
Our approach holds potential particularly for color enhancement and historical image colorization.
arXiv Detail & Related papers (2023-12-07T08:59:20Z)
- Incorporating Ensemble and Transfer Learning For An End-To-End Auto-Colorized Image Detection Model [0.0]
This paper presents a novel approach that combines transfer and ensemble learning to reduce training time and resource requirements.
The proposed model shows promising results, with accuracy ranging from 94.55% to 99.13%.
arXiv Detail & Related papers (2023-09-25T19:22:57Z)
- Dequantization and Color Transfer with Diffusion Models [5.228564799458042]
Quantized images offer an easy abstraction for patch-based edits and palette transfer.
We show that our model can generate natural images that respect the color palette the user asked for.
Our method can be usefully extended to another practical edit: recoloring patches of an image while respecting the source texture.
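As a rough illustration of the quantized abstraction mentioned above (not the paper's diffusion-based method), a k-means palette can be extracted and an image snapped to it:

```python
# Illustration only: k-means palette extraction and hard quantization,
# the kind of abstraction the entry describes; the paper's diffusion-based
# dequantization and palette transfer are not reproduced here.
import numpy as np
from sklearn.cluster import KMeans

def extract_palette(image: np.ndarray, k: int = 8) -> np.ndarray:
    """Cluster pixels of an (H, W, 3) uint8 image into a (k, 3) palette."""
    pixels = image.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(pixels)
    return km.cluster_centers_.astype(np.uint8)

def quantize(image: np.ndarray, palette: np.ndarray) -> np.ndarray:
    """Snap every pixel to its nearest palette color."""
    pixels = image.reshape(-1, 1, 3).astype(np.float32)
    dists = np.linalg.norm(pixels - palette[None].astype(np.float32), axis=2)
    return palette[dists.argmin(axis=1)].reshape(image.shape)
```

Swapping in a different palette before `quantize` is the simplest form of the palette transfer the entry refers to.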
arXiv Detail & Related papers (2023-07-06T00:07:32Z)
- Towards Vivid and Diverse Image Colorization with Generative Color Prior [17.087464490162073]
Recent deep-learning-based methods can automatically colorize images at a low cost.
We aim at recovering vivid colors by leveraging the rich and diverse color priors encapsulated in a pretrained Generative Adversarial Network (GAN).
Thanks to the powerful generative color prior and delicate designs, our method can produce vivid colors with a single forward pass.
arXiv Detail & Related papers (2021-08-19T17:49:21Z)
- Enjoy Your Editing: Controllable GANs for Image Editing via Latent Space Navigation [136.53288628437355]
Controllable semantic image editing enables a user to change entire image attributes with few clicks.
Current approaches often suffer from attribute edits that are entangled, global image identity changes, and diminished photo-realism.
We propose quantitative evaluation strategies for measuring controllable editing performance, unlike prior work, which primarily focuses on qualitative evaluation.
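For intuition, here is a hedged sketch of latent-space navigation, the mechanism this entry builds on: shift a latent code along an attribute direction and decode. The toy generator and random direction are placeholders for the pretrained GAN and the disentangled directions the method learns.

```python
# Sketch of latent-space editing; the linear layer stands in for a real
# pretrained GAN generator, and the direction is a placeholder for a
# learned, disentangled attribute direction.
import torch

generator = torch.nn.Linear(128, 3 * 32 * 32)  # toy stand-in generator

def edit(z: torch.Tensor, direction: torch.Tensor, strength: float):
    """Decode z shifted by `strength` along a unit-norm attribute direction."""
    d = direction / direction.norm()
    return generator(z + strength * d)

z = torch.randn(1, 128)
attr_dir = torch.randn(128)        # hypothetical learned direction
weaker = edit(z, attr_dir, -2.0)   # attenuate the attribute
stronger = edit(z, attr_dir, 2.0)  # amplify it, identity ideally unchanged
```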
arXiv Detail & Related papers (2021-02-01T21:38:36Z)
- HistoGAN: Controlling Colors of GAN-Generated and Real Images via Color Histograms [52.77252727786091]
HistoGAN is a color histogram-based method for controlling GAN-generated images' colors.
We show how to expand HistoGAN to recolor real images.
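As a simplified illustration of histogram conditioning (HistoGAN itself uses a differentiable histogram in a log-chroma RGB-uv space, not the plain RGB binning shown here), a target color histogram can be flattened into a conditioning vector:

```python
# Simplified illustration: a normalized RGB histogram as a conditioning
# vector; HistoGAN's actual feature is a differentiable log-chroma
# (RGB-uv) histogram, which this plain binning only approximates in spirit.
import numpy as np

def color_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """(H, W, 3) uint8 image -> flattened (bins**3,) normalized histogram."""
    hist, _ = np.histogramdd(image.reshape(-1, 3).astype(np.float64),
                             bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return (hist / hist.sum()).ravel()
```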
arXiv Detail & Related papers (2020-11-23T21:14:19Z)
- Semantic Photo Manipulation with a Generative Image Prior [86.01714863596347]
GANs are able to synthesize images conditioned on inputs such as a user sketch, text, or semantic labels.
However, it is hard for GANs to precisely reproduce an input image.
In this paper, we address these issues by adapting the image prior learned by GANs to image statistics of an individual image.
Our method can accurately reconstruct the input image and synthesize new content, consistent with the appearance of the input image.
arXiv Detail & Related papers (2020-05-15T18:22:05Z)
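The adaptation idea above can be pictured as briefly fine-tuning the generator until it reproduces the single input image; the toy networks and loop below only emulate that step and are not the paper's implementation.

```python
# Toy emulation of adapting a generative prior to one image: fine-tune a
# small generator to reconstruct a single target; the paper adapts a real
# pretrained GAN, which this sketch does not include.
import torch

gen = torch.nn.Sequential(torch.nn.Linear(64, 256), torch.nn.Tanh(),
                          torch.nn.Linear(256, 3 * 16 * 16))
z = torch.randn(1, 64)               # latent code, e.g. from GAN inversion
target = torch.rand(1, 3 * 16 * 16)  # the input image, flattened
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
for _ in range(200):                 # short per-image adaptation loop
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(gen(z), target)
    loss.backward()
    opt.step()
# After adaptation, content decoded by `gen` stays consistent with the input.
```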