Efficient Hair Style Transfer with Generative Adversarial Networks
- URL: http://arxiv.org/abs/2210.12524v1
- Date: Sat, 22 Oct 2022 18:56:16 GMT
- Title: Efficient Hair Style Transfer with Generative Adversarial Networks
- Authors: Muhammed Pektas, Baris Gecer, Aybars Ugur
- Abstract summary: We propose a novel hairstyle transfer method, called EHGAN, which reduces computational costs to enable real-time processing.
To achieve this goal, we train an encoder and a low-resolution generator to transfer the hairstyle, and then increase the resolution of the results with a pre-trained super-resolution model.
EHGAN runs around 2.7 times faster than the state-of-the-art MichiGAN and over 10,000 times faster than LOHO.
- Score: 7.312180925669325
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Despite the recent success of image generation and style transfer with
Generative Adversarial Networks (GANs), hair synthesis and style transfer
remain challenging due to the shape and style variability of human hair in
in-the-wild conditions. The current state-of-the-art hair synthesis approaches
struggle to maintain global composition of the target style and cannot be used
in real-time applications due to their high running costs on high-resolution
portrait images. Therefore, we propose a novel hairstyle transfer method,
called EHGAN, which reduces computational costs to enable real-time processing
while improving the transfer of hairstyle with better global structure compared
to the other state-of-the-art hair synthesis methods. To achieve this goal, we
train an encoder and a low-resolution generator to transfer the hairstyle, and
then increase the resolution of the results with a pre-trained super-resolution model.
We utilize Adaptive Instance Normalization (AdaIN) and design our novel Hair
Blending Block (HBB) to maximize the generator's performance. EHGAN runs
around 2.7 times faster than the state-of-the-art MichiGAN and over 10,000
times faster than LOHO while obtaining better photorealism and structural
similarity to the desired style than its competitors.
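As a rough illustration of the pipeline described above, the following PyTorch sketch wires together a hairstyle encoder, a low-resolution AdaIN-conditioned generator, and a frozen pre-trained super-resolution model. AdaIN itself is the standard operation AdaIN(x, y) = sigma(y) * (x - mu(x)) / sigma(x) + mu(y) with channel-wise statistics; here the scale and shift come from a learned affine map of the style code, StyleGAN-style. The Hair Blending Block's internals are not given in the abstract, so it is omitted, and every module name and shape below is a hypothetical stand-in rather than the authors' code.

```python
# Minimal sketch of the two-stage pattern the abstract describes; all modules
# are hypothetical stand-ins, not EHGAN's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaIN(nn.Module):
    """Adaptive Instance Normalization: normalize content features per channel,
    then re-scale/shift them with parameters predicted from a style code."""
    def __init__(self, style_dim: int, num_channels: int):
        super().__init__()
        self.affine = nn.Linear(style_dim, num_channels * 2)

    def forward(self, x, style):
        gamma, beta = self.affine(style).chunk(2, dim=1)           # (B, C) each
        x = F.instance_norm(x)                                     # per-channel whitening
        return gamma[:, :, None, None] * x + beta[:, :, None, None]

class LowResGenerator(nn.Module):
    """Tiny AdaIN-conditioned generator kept at low resolution for speed."""
    def __init__(self, style_dim: int = 256, ch: int = 64):
        super().__init__()
        self.conv_in = nn.Conv2d(3, ch, 3, padding=1)
        self.adain = AdaIN(style_dim, ch)
        self.conv_out = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, face, style):
        h = F.relu(self.adain(self.conv_in(face), style))
        return torch.tanh(self.conv_out(h))

class HairStyleTransfer(nn.Module):
    """Encoder + low-res generator do the transfer; a frozen, pre-trained
    super-resolution network lifts the result to portrait resolution."""
    def __init__(self, encoder: nn.Module, generator: nn.Module, sr_model: nn.Module):
        super().__init__()
        self.encoder, self.generator = encoder, generator
        self.sr_model = sr_model.eval()
        for p in self.sr_model.parameters():
            p.requires_grad_(False)                                # only encoder/generator train

    def forward(self, face_lr, hair_ref_lr):
        style = self.encoder(hair_ref_lr)                          # hairstyle code (B, style_dim)
        coarse = self.generator(face_lr, style)                    # low-res transfer result
        return self.sr_model(coarse)                               # fixed upsampling stage
```

The speed argument, as the abstract frames it, is that only the small encoder and low-resolution generator do per-style work, while the expensive high-resolution synthesis is delegated to a fixed, reusable super-resolution pass.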
Related papers
- HairFastGAN: Realistic and Robust Hair Transfer with a Fast Encoder-Based Approach [3.737361598712633]
We present the HairFast model, which achieves high resolution, near real-time performance, and superior reconstruction.
Our solution includes a new architecture operating in the FS latent space of StyleGAN.
In the most difficult scenario of transferring both shape and color of a hairstyle from different images, our method performs in less than a second on the Nvidia V100.
arXiv Detail & Related papers (2024-04-01T12:59:49Z)
- E$^{2}$GAN: Efficient Training of Efficient GANs for Image-to-Image Translation [69.72194342962615]
We introduce and address a novel research direction: can the process of distilling GANs from diffusion models be made significantly more efficient?
First, we construct a base GAN model with generalized features, adaptable to different concepts through fine-tuning, eliminating the need for training from scratch.
Second, we identify crucial layers within the base GAN model and employ Low-Rank Adaptation (LoRA) with a simple yet effective rank search process, rather than fine-tuning the entire base model (a generic LoRA sketch appears after this list).
Third, we investigate the minimal amount of data necessary for fine-tuning, further reducing the overall training time.
arXiv Detail & Related papers (2024-01-11T18:59:14Z)
- Neural Haircut: Prior-Guided Strand-Based Hair Reconstruction [4.714310894654027]
This work proposes an approach capable of accurate hair geometry reconstruction at a strand level from a monocular video or multi-view images captured in uncontrolled conditions.
The combined system, named Neural Haircut, achieves high realism and personalization of the reconstructed hairstyles.
arXiv Detail & Related papers (2023-06-09T13:08:34Z)
- StyleSwap: Style-Based Generator Empowers Robust Face Swapping [90.05775519962303]
We introduce a concise and effective framework named StyleSwap.
Our core idea is to leverage a style-based generator to empower high-fidelity and robust face swapping.
We identify that with only minimal modifications, a StyleGAN2 architecture can successfully handle the desired information from both source and target.
arXiv Detail & Related papers (2022-09-27T16:35:16Z)
- Realistic Hair Synthesis with Generative Adversarial Networks [1.8275108630751844]
This thesis proposes a generative adversarial network method for the hair synthesis problem.
The method aims at real-time hair synthesis with visual outputs that compete with the best methods in the literature.
arXiv Detail & Related papers (2022-09-13T11:48:26Z)
- Style Your Hair: Latent Optimization for Pose-Invariant Hairstyle Transfer via Local-Style-Aware Hair Alignment [29.782276472922398]
We propose a pose-invariant hairstyle transfer model equipped with latent optimization and a newly presented local-style-matching loss.
Our model has strengths in transferring a hairstyle under larger pose differences and preserving local hairstyle textures.
arXiv Detail & Related papers (2022-08-16T14:23:54Z)
- CCPL: Contrastive Coherence Preserving Loss for Versatile Style Transfer [58.020470877242865]
We devise a universally versatile style transfer method capable of performing artistic, photo-realistic, and video style transfer jointly.
We make a mild and reasonable assumption that global inconsistency is dominated by local inconsistencies and devise a generic Contrastive Coherence Preserving Loss (CCPL) applied to local patches.
CCPL can preserve the coherence of the content source during style transfer without degrading stylization.
arXiv Detail & Related papers (2022-07-11T12:09:41Z)
- Drafting and Revision: Laplacian Pyramid Network for Fast High-Quality Artistic Style Transfer [115.13853805292679]
Artistic style transfer aims at migrating the style from an example image to a content image.
Inspired by the common painting process of drawing a draft and revising the details, we introduce a novel feed-forward method named Laplacian Pyramid Network (LapStyle).
Our method can synthesize high quality stylized images in real time, where holistic style patterns are properly transferred.
arXiv Detail & Related papers (2021-04-12T11:53:53Z)
- LOHO: Latent Optimization of Hairstyles via Orthogonalization [20.18175263304822]
We propose an optimization-based approach using GAN inversion to infill missing hair structure details in latent space during hairstyle transfer.
Our approach decomposes hair into three attributes: perceptual structure, appearance, and style, and includes tailored losses to model each of these attributes independently.
arXiv Detail & Related papers (2021-03-05T19:00:33Z)
- MichiGAN: Multi-Input-Conditioned Hair Image Generation for Portrait Editing [122.82964863607938]
MichiGAN is a novel conditional image generation method for interactive portrait hair manipulation.
We provide user control over every major hair visual factor, including shape, structure, appearance, and background.
We also build an interactive portrait hair editing system that enables straightforward manipulation of hair by projecting intuitive and high-level user inputs.
arXiv Detail & Related papers (2020-10-30T17:59:10Z)
- Real-time Universal Style Transfer on High-resolution Images via Zero-channel Pruning [74.09149955786367]
ArtNet can achieve universal, real-time, and high-quality style transfer on high-resolution images simultaneously.
By using ArtNet and S2, our method is 2.3 to 107.4 times faster than state-of-the-art approaches.
arXiv Detail & Related papers (2020-06-16T09:50:14Z)
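The E$^{2}$GAN entry above relies on Low-Rank Adaptation (LoRA) to fine-tune only crucial layers of a base GAN. As a generic sketch of that technique (not the paper's implementation), LoRA freezes a pretrained weight matrix W and learns a low-rank update BA scaled by alpha / r:

```python
# Generic LoRA sketch (not E^2GAN's code): freeze the pretrained weight and
# learn a rank-r update, so each adapted layer trains only
# r * (in_features + out_features) extra parameters.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)               # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # base(x) + scale * x A^T B^T  ==  x (W + scale * B A)^T + bias
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())
```

Because B is zero-initialized, the wrapped layer starts out identical to the frozen base model; wrapping only the "crucial" layers found by the rank search keeps fine-tuning cheap, which is the efficiency lever that entry describes.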