Style Your Hair: Latent Optimization for Pose-Invariant Hairstyle Transfer via Local-Style-Aware Hair Alignment
- URL: http://arxiv.org/abs/2208.07765v1
- Date: Tue, 16 Aug 2022 14:23:54 GMT
- Title: Style Your Hair: Latent Optimization for Pose-Invariant Hairstyle Transfer via Local-Style-Aware Hair Alignment
- Authors: Taewoo Kim, Chaeyeon Chung, Yoonseo Kim, Sunghyun Park, Kangyeol Kim,
Jaegul Choo
- Abstract summary: We propose a pose-invariant hairstyle transfer model equipped with latent optimization and a newly presented local-style-matching loss.
Our model has strengths in transferring a hairstyle under larger pose differences and preserving local hairstyle textures.
- Score: 29.782276472922398
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Editing hairstyles is uniquely challenging due to their complexity and
delicacy. Although recent approaches have significantly improved hair details,
these models often produce undesirable outputs when the pose of the source
image differs considerably from that of the target hair image, limiting their
real-world applicability. HairFIT, a pose-invariant hairstyle transfer model,
alleviates this limitation but still shows unsatisfactory quality in
preserving delicate hair textures. To address these limitations, we
propose a high-performing pose-invariant hairstyle transfer model equipped with
latent optimization and a newly presented local-style-matching loss. In the
StyleGAN2 latent space, we first explore a pose-aligned latent code of a target
hair with the detailed textures preserved based on local style matching. Then,
our model inpaints the occlusions of the source considering the aligned target
hair and blends both images to produce a final output. The experimental results
demonstrate that our model has strengths in transferring a hairstyle under
larger pose differences and preserving local hairstyle textures.
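The local-style-matching idea described above can be illustrated with a toy sketch: compare Gram-matrix texture statistics over local windows of two feature maps, as one might when optimizing a latent code so the aligned target hair keeps its fine-grained textures. This is a minimal, hypothetical illustration in NumPy; the paper's actual loss operates on StyleGAN2 feature maps during latent optimization, and the function names here are made up for the sketch.

```python
import numpy as np

def gram(feat):
    """Gram matrix of channel correlations for a (C, H, W) feature patch."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (h * w)

def local_style_loss(feat_a, feat_b, win=4):
    """Compare Gram matrices over non-overlapping local windows so that
    texture statistics are matched region by region, not just globally."""
    c, h, w = feat_a.shape
    loss = 0.0
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            ga = gram(feat_a[:, y:y + win, x:x + win])
            gb = gram(feat_b[:, y:y + win, x:x + win])
            loss += np.mean((ga - gb) ** 2)
    return loss
```

In an actual implementation this loss would be differentiable (e.g. in PyTorch) and backpropagated through the generator to update the latent code; the windows are non-overlapping here only for brevity. Matching Gram statistics per window, rather than over the whole image, is what lets local hair textures survive the pose alignment.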
Related papers
- What to Preserve and What to Transfer: Faithful, Identity-Preserving Diffusion-based Hairstyle Transfer [35.80645300182437]
Existing hairstyle transfer approaches rely on StyleGAN, which is pre-trained on cropped and aligned face images.
We propose a one-stage hairstyle transfer diffusion model, HairFusion, that applies to real-world scenarios.
Our method achieves state-of-the-art performance compared to the existing methods in preserving the integrity of both the transferred hairstyle and the surrounding features.
arXiv Detail & Related papers (2024-08-29T11:30:21Z)
- Stable-Hair: Real-World Hair Transfer via Diffusion Model [23.500330976568296]
Current hair transfer methods struggle to handle diverse and intricate hairstyles, thus limiting their applicability in real-world scenarios.
We propose a novel diffusion-based hair transfer framework, named Stable-Hair, which robustly transfers a wide range of real-world hairstyles onto user-provided faces for virtual hair try-on.
arXiv Detail & Related papers (2024-07-19T07:14:23Z)
- HairFastGAN: Realistic and Robust Hair Transfer with a Fast Encoder-Based Approach [3.737361598712633]
We present the HairFast model, which achieves high resolution, near real-time performance, and superior reconstruction.
Our solution includes a new architecture operating in the FS latent space of StyleGAN.
In the most difficult scenario of transferring both shape and color of a hairstyle from different images, our method performs in less than a second on the Nvidia V100.
arXiv Detail & Related papers (2024-04-01T12:59:49Z)
- HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles [85.12672855502517]
We present HAAR, a new strand-based generative model for 3D human hairstyles.
Based on textual inputs, HAAR produces 3D hairstyles that could be used as production-level assets in modern computer graphics engines.
arXiv Detail & Related papers (2023-12-18T19:19:32Z)
- StyleGAN Salon: Multi-View Latent Optimization for Pose-Invariant Hairstyle Transfer [8.712040236361926]
The paper seeks to transfer the hairstyle of a reference image to an input photo for virtual hair try-on.
We propose a multi-view optimization framework that uses "two different views" of reference composites to semantically guide occluded or ambiguous regions.
Our framework produces high-quality results and outperforms prior work in a user study that consists of significantly more challenging hair transfer scenarios.
arXiv Detail & Related papers (2023-04-05T20:49:55Z)
- HairStep: Transfer Synthetic to Real Using Strand and Depth Maps for Single-View 3D Hair Modeling [55.57803336895614]
We tackle the challenging problem of learning-based single-view 3D hair modeling.
We first propose a novel intermediate representation, termed as HairStep, which consists of a strand map and a depth map.
It is found that HairStep not only provides sufficient information for accurate 3D hair modeling, but can also be feasibly inferred from real images.
arXiv Detail & Related papers (2023-03-05T15:28:13Z)
- HairFIT: Pose-Invariant Hairstyle Transfer via Flow-based Hair Alignment and Semantic-Region-Aware Inpainting [26.688276902813495]
We propose a novel framework for pose-invariant hairstyle transfer, HairFIT.
Our model consists of two stages: 1) flow-based hair alignment and 2) hair synthesis.
Our SIM estimator divides the occluded regions in the source image into different semantic regions to reflect their distinct features during the inpainting.
arXiv Detail & Related papers (2022-06-17T06:55:20Z)
- Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer [103.54337984566877]
Recent studies on StyleGAN show high performance on artistic portrait generation by transfer learning with limited data.
We introduce a novel DualStyleGAN with flexible control of dual styles of the original face domain and the extended artistic portrait domain.
Experiments demonstrate the superiority of DualStyleGAN over state-of-the-art methods in high-quality portrait style transfer and flexible style control.
arXiv Detail & Related papers (2022-03-24T17:57:11Z)
- Controllable Person Image Synthesis with Spatially-Adaptive Warped Normalization [72.65828901909708]
Controllable person image generation aims to produce realistic human images with desirable attributes.
We introduce a novel Spatially-Adaptive Warped Normalization (SAWN), which integrates a learned flow-field to warp modulation parameters.
We propose a novel self-training part replacement strategy to refine the pretrained model for the texture-transfer task.
arXiv Detail & Related papers (2021-05-31T07:07:44Z)
- Learning Semantic Person Image Generation by Region-Adaptive Normalization [81.52223606284443]
We propose a new two-stage framework to handle the pose and appearance translation.
In the first stage, we predict the target semantic parsing maps to eliminate the difficulties of pose transfer.
In the second stage, we suggest a new person image generation method by incorporating the region-adaptive normalization.
arXiv Detail & Related papers (2021-04-14T06:51:37Z)
- MichiGAN: Multi-Input-Conditioned Hair Image Generation for Portrait Editing [122.82964863607938]
MichiGAN is a novel conditional image generation method for interactive portrait hair manipulation.
We provide user control over every major hair visual factor, including shape, structure, appearance, and background.
We also build an interactive portrait hair editing system that enables straightforward manipulation of hair by projecting intuitive and high-level user inputs.
arXiv Detail & Related papers (2020-10-30T17:59:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.