HairFIT: Pose-Invariant Hairstyle Transfer via Flow-based Hair Alignment
and Semantic-Region-Aware Inpainting
- URL: http://arxiv.org/abs/2206.08585v1
- Date: Fri, 17 Jun 2022 06:55:20 GMT
- Title: HairFIT: Pose-Invariant Hairstyle Transfer via Flow-based Hair Alignment
and Semantic-Region-Aware Inpainting
- Authors: Chaeyeon Chung, Taewoo Kim, Hyelin Nam, Seunghwan Choi, Gyojung Gu,
Sunghyun Park, Jaegul Choo
- Abstract summary: We propose a novel framework for pose-invariant hairstyle transfer, HairFIT.
Our model consists of two stages: 1) flow-based hair alignment and 2) hair synthesis.
Our SIM estimator divides the occluded regions in the source image into different semantic regions to reflect their distinct features during the inpainting.
- Score: 26.688276902813495
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hairstyle transfer is the task of modifying a source hairstyle to a target
one. Although recent hairstyle transfer models can reflect the delicate
features of hairstyles, they still have two major limitations. First, the
existing methods fail to transfer hairstyles when a source and a target image
have different poses (e.g., viewing direction or face size), which is prevalent
in the real world. Second, the previous models generate unrealistic images when
a non-trivial portion of the source image is occluded by the original hair. For
example, when modifying long hair to short hair, shoulders or backgrounds
occluded by the long hair need to be inpainted. To address these issues, we
propose a novel framework for pose-invariant hairstyle transfer, HairFIT. Our
model consists of two stages: 1) flow-based hair alignment and 2) hair
synthesis. In the hair alignment stage, we leverage a keypoint-based optical
flow estimator to align a target hairstyle with a source pose. Then, we
generate a final hairstyle-transferred image in the hair synthesis stage based
on a Semantic-region-aware Inpainting Mask (SIM) estimator. Our SIM estimator
divides the occluded regions in the source image into different semantic
regions to reflect their distinct features during the inpainting. To
demonstrate the effectiveness of our model, we conduct quantitative and
qualitative evaluations using multi-view datasets, K-hairstyle and VoxCeleb.
The results indicate that HairFIT achieves state-of-the-art performance by
successfully transferring hairstyles between images of different poses, which
has never been achieved before.
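The two-stage pipeline described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the learned optical-flow and SIM estimators are assumed to have already produced a dense flow field and semantic masks, backward warping is done with nearest-neighbor sampling, and the semantic-region-aware inpainting is crudely approximated by a constant fill value.

```python
import numpy as np

def warp_backward(image, flow):
    """Nearest-neighbor backward warp of `image` (H, W, C) by a dense
    flow field (H, W, 2); flow[y, x] = (dy, dx) means output pixel
    (y, x) samples image[y + dy, x + dx], clipped to the image bounds."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    sy = np.clip(ys + np.rint(flow[..., 0]).astype(int), 0, h - 1)
    sx = np.clip(xs + np.rint(flow[..., 1]).astype(int), 0, w - 1)
    return image[sy, sx]

def hairstyle_transfer(source, target_hair, flow, new_hair_mask,
                       old_hair_mask, region_fill):
    """Two-stage sketch: (1) align the target hairstyle to the source
    pose using the flow; (2) composite it onto the source, filling
    pixels that were hidden by the original hair but are left uncovered
    by the new hairstyle (a stand-in for the semantic-region-aware
    inpainting the paper performs per region)."""
    aligned = warp_backward(target_hair, flow)                 # stage 1
    out = np.where(new_hair_mask[..., None], aligned, source)  # stage 2
    uncovered = old_hair_mask & ~new_hair_mask                 # needs inpainting
    out[uncovered] = region_fill                               # crude fill
    return out
```

In the actual model, `flow` would come from the keypoint-based optical flow estimator and the per-region fill would be produced by an inpainting network conditioned on the SIM estimator's semantic labels.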
Related papers
- HairDiffusion: Vivid Multi-Colored Hair Editing via Latent Diffusion [43.3852998245422]
We introduce Multi-stage Hairstyle Blend (MHB), effectively separating control of hair color and hairstyle in diffusion latent space.
We also train a warping module to align the hair color with the target region.
Our method not only tackles the complexity of multi-color hairstyles but also addresses the challenge of preserving original colors.
arXiv Detail & Related papers (2024-10-29T06:51:52Z)
- What to Preserve and What to Transfer: Faithful, Identity-Preserving Diffusion-based Hairstyle Transfer [35.80645300182437]
Existing hairstyle transfer approaches rely on StyleGAN, which is pre-trained on cropped and aligned face images.
We propose a one-stage hairstyle transfer diffusion model, HairFusion, that applies to real-world scenarios.
Our method achieves state-of-the-art performance compared to the existing methods in preserving the integrity of both the transferred hairstyle and the surrounding features.
arXiv Detail & Related papers (2024-08-29T11:30:21Z)
- Stable-Hair: Real-World Hair Transfer via Diffusion Model [23.500330976568296]
Current hair transfer methods struggle to handle diverse and intricate hairstyles, thus limiting their applicability in real-world scenarios.
We propose a novel diffusion-based hair transfer framework, named Stable-Hair, which robustly transfers a wide range of real-world hairstyles onto user-provided faces for virtual hair try-on.
arXiv Detail & Related papers (2024-07-19T07:14:23Z)
- HairFastGAN: Realistic and Robust Hair Transfer with a Fast Encoder-Based Approach [3.737361598712633]
We present the HairFast model, which achieves high resolution, near real-time performance, and superior reconstruction.
Our solution includes a new architecture operating in the FS latent space of StyleGAN.
In the most difficult scenario of transferring both shape and color of a hairstyle from different images, our method performs in less than a second on the Nvidia V100.
arXiv Detail & Related papers (2024-04-01T12:59:49Z)
- HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles [85.12672855502517]
We present HAAR, a new strand-based generative model for 3D human hairstyles.
Based on textual inputs, HAAR produces 3D hairstyles that could be used as production-level assets in modern computer graphics engines.
arXiv Detail & Related papers (2023-12-18T19:19:32Z)
- Text-Guided Generation and Editing of Compositional 3D Avatars [59.584042376006316]
Our goal is to create a realistic 3D facial avatar with hair and accessories using only a text description.
Existing methods either lack realism, produce unrealistic shapes, or do not support editing.
arXiv Detail & Related papers (2023-09-13T17:59:56Z)
- StyleGAN Salon: Multi-View Latent Optimization for Pose-Invariant Hairstyle Transfer [8.712040236361926]
The paper seeks to transfer the hairstyle of a reference image to an input photo for virtual hair try-on.
We propose a multi-view optimization framework that uses "two different views" of reference composites to semantically guide occluded or ambiguous regions.
Our framework produces high-quality results and outperforms prior work in a user study that consists of significantly more challenging hair transfer scenarios.
arXiv Detail & Related papers (2023-04-05T20:49:55Z)
- HairStep: Transfer Synthetic to Real Using Strand and Depth Maps for Single-View 3D Hair Modeling [55.57803336895614]
We tackle the challenging problem of learning-based single-view 3D hair modeling.
We first propose a novel intermediate representation, termed as HairStep, which consists of a strand map and a depth map.
It is found that HairStep not only provides sufficient information for accurate 3D hair modeling, but also is feasible to be inferred from real images.
arXiv Detail & Related papers (2023-03-05T15:28:13Z)
- StyleMask: Disentangling the Style Space of StyleGAN2 for Neural Face Reenactment [47.27033282706179]
We propose a framework that learns to disentangle the identity characteristics of the face from its pose.
We show that the proposed method produces higher quality results even on extreme pose variations.
arXiv Detail & Related papers (2022-09-27T13:22:35Z)
- Style Your Hair: Latent Optimization for Pose-Invariant Hairstyle Transfer via Local-Style-Aware Hair Alignment [29.782276472922398]
We propose a pose-invariant hairstyle transfer model equipped with latent optimization and a newly presented local-style-matching loss.
Our model has strengths in transferring a hairstyle under larger pose differences and preserving local hairstyle textures.
arXiv Detail & Related papers (2022-08-16T14:23:54Z)
- MichiGAN: Multi-Input-Conditioned Hair Image Generation for Portrait Editing [122.82964863607938]
MichiGAN is a novel conditional image generation method for interactive portrait hair manipulation.
We provide user control over every major hair visual factor, including shape, structure, appearance, and background.
We also build an interactive portrait hair editing system that enables straightforward manipulation of hair by projecting intuitive and high-level user inputs.
arXiv Detail & Related papers (2020-10-30T17:59:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.