HairShifter: Consistent and High-Fidelity Video Hair Transfer via Anchor-Guided Animation
- URL: http://arxiv.org/abs/2507.12758v1
- Date: Thu, 17 Jul 2025 03:22:39 GMT
- Title: HairShifter: Consistent and High-Fidelity Video Hair Transfer via Anchor-Guided Animation
- Authors: Wangzheng Shi, Yinglin Zheng, Yuxin Lin, Jianmin Bao, Ming Zeng, Dong Chen,
- Abstract summary: HairShifter is a novel framework that unifies high-quality image hair transfer with smooth and coherent video animation. Our method maintains hairstyle fidelity across frames while preserving non-hair regions. HairShifter achieves state-of-the-art performance in video hairstyle transfer, combining superior visual quality, temporal consistency, and scalability.
- Score: 29.404225620335193
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hair transfer is increasingly valuable across domains such as social media, gaming, advertising, and entertainment. While significant progress has been made in single-image hair transfer, video-based hair transfer remains challenging due to the need for temporal consistency, spatial fidelity, and dynamic adaptability. In this work, we propose HairShifter, a novel "Anchor Frame + Animation" framework that unifies high-quality image hair transfer with smooth and coherent video animation. At its core, HairShifter integrates an Image Hair Transfer (IHT) module for precise per-frame transformation and a Multi-Scale Gated SPADE Decoder to ensure seamless spatial blending and temporal coherence. Our method maintains hairstyle fidelity across frames while preserving non-hair regions. Extensive experiments demonstrate that HairShifter achieves state-of-the-art performance in video hairstyle transfer, combining superior visual quality, temporal consistency, and scalability. The code will be publicly available. We believe this work will open new avenues for video-based hairstyle transfer and establish a robust baseline in this field.
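The abstract names a "Multi-Scale Gated SPADE Decoder" for blending transferred-hair features with preserved non-hair regions. The paper's exact architecture is not given here, so the following is only a minimal PyTorch sketch of one plausible gated SPADE-style block: the layer sizes, gating formulation, and blending scheme are illustrative assumptions, not the authors' design.

```python
# Hypothetical sketch of a gated SPADE-style block: mask-conditioned modulation of
# transferred-hair features, gated blending with features from the original frame.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedSPADEBlock(nn.Module):
    def __init__(self, channels: int, mask_channels: int = 1, hidden: int = 64):
        super().__init__()
        # Parameter-free normalization; modulation parameters come from the mask branch.
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(mask_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.gamma = nn.Conv2d(hidden, channels, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, channels, kernel_size=3, padding=1)
        # Spatial gate deciding how much transferred-hair content to keep per location.
        self.gate = nn.Conv2d(hidden, channels, kernel_size=3, padding=1)
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, hair_feat, source_feat, hair_mask):
        # Resize the hair mask to the current feature resolution (multi-scale use).
        mask = F.interpolate(hair_mask, size=hair_feat.shape[-2:], mode="nearest")
        ctx = self.shared(mask)
        # SPADE-style spatially adaptive modulation of the transferred features.
        modulated = self.norm(hair_feat) * (1 + self.gamma(ctx)) + self.beta(ctx)
        # Gated blend: hair-region detail from the transfer branch, source elsewhere.
        g = torch.sigmoid(self.gate(ctx))
        blended = g * modulated + (1 - g) * source_feat
        return F.leaky_relu(self.conv(blended), 0.2)

# Usage sketch at one decoder scale (shapes are arbitrary examples).
block = GatedSPADEBlock(channels=128)
hair_feat = torch.randn(1, 128, 64, 64)    # features from the image hair-transfer branch
source_feat = torch.randn(1, 128, 64, 64)  # features from the original video frame
hair_mask = torch.rand(1, 1, 256, 256)     # soft hair segmentation mask
out = block(hair_feat, source_feat, hair_mask)  # (1, 128, 64, 64)
```

Applied at several decoder resolutions, a block of this kind would let hair detail dominate inside the mask while non-hair regions pass through largely unchanged, which is consistent with the fidelity and preservation claims above.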
Related papers
- What to Preserve and What to Transfer: Faithful, Identity-Preserving Diffusion-based Hairstyle Transfer [35.80645300182437]
Existing hairstyle transfer approaches rely on StyleGAN. We propose a one-stage hairstyle transfer diffusion model, HairFusion, that applies to real-world scenarios. Our method achieves state-of-the-art performance compared to the existing methods in preserving the integrity of both the transferred hairstyle and the surrounding features.
arXiv Detail & Related papers (2024-08-29T11:30:21Z) - Stable-Hair: Real-World Hair Transfer via Diffusion Model [26.880396643803998]
Current hair transfer methods struggle to handle diverse and intricate hairstyles, limiting their applicability in real-world scenarios. We propose a novel diffusion-based hair transfer framework, named Stable-Hair, which robustly transfers a wide range of real-world hairstyles to user-provided faces for virtual hair try-on.
arXiv Detail & Related papers (2024-07-19T07:14:23Z) - Zero-shot High-fidelity and Pose-controllable Character Animation [89.74818983864832]
Image-to-video (I2V) generation aims to create a video sequence from a single image.
Existing approaches suffer from inconsistency of character appearances and poor preservation of fine details.
We propose PoseAnimate, a novel zero-shot I2V framework for character animation.
arXiv Detail & Related papers (2024-04-21T14:43:31Z) - HairFastGAN: Realistic and Robust Hair Transfer with a Fast Encoder-Based Approach [3.737361598712633]
We present the HairFast model, which achieves high resolution, near real-time performance, and superior reconstruction.
Our solution includes a new architecture operating in the FS latent space of StyleGAN.
In the most difficult scenario of transferring both shape and color of a hairstyle from different images, our method performs in less than a second on the Nvidia V100.
arXiv Detail & Related papers (2024-04-01T12:59:49Z) - MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model [74.84435399451573]
This paper studies the human image animation task, which aims to generate a video of a certain reference identity following a particular motion sequence.
Existing animation works typically employ the frame-warping technique to animate the reference image towards the target motion.
We introduce MagicAnimate, a diffusion-based framework that aims at enhancing temporal consistency, preserving reference image faithfully, and improving animation fidelity.
arXiv Detail & Related papers (2023-11-27T18:32:31Z) - DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors [63.43133768897087]
We propose a method to convert open-domain images into animated videos.
The key idea is to utilize the motion prior of text-to-video diffusion models by incorporating the image into the generative process as guidance.
Our proposed method can produce visually convincing and more logical & natural motions, as well as higher conformity to the input image.
arXiv Detail & Related papers (2023-10-18T14:42:16Z) - Automatic Animation of Hair Blowing in Still Portrait Photos [61.54919805051212]
We propose a novel approach to animate human hair in a still portrait photo.
Considering the complexity of hair structure, we innovatively treat hair wisp extraction as an instance segmentation problem.
We propose a wisp-aware animation module that animates hair wisps with pleasing motions without noticeable artifacts.
arXiv Detail & Related papers (2023-09-25T15:11:40Z) - NeuWigs: A Neural Dynamic Model for Volumetric Hair Capture and Animation [23.625243364572867]
The capture and animation of human hair are two of the major challenges in the creation of realistic avatars for virtual reality.
We present a two-stage approach that models hair independently from the head to address these challenges in a data-driven manner.
Our model outperforms the state of the art in novel view synthesis and is capable of creating novel hair animations without having to rely on hair observations as a driving signal.
arXiv Detail & Related papers (2022-12-01T16:09:54Z) - Style Your Hair: Latent Optimization for Pose-Invariant Hairstyle Transfer via Local-Style-Aware Hair Alignment [29.782276472922398]
We propose a pose-invariant hairstyle transfer model equipped with latent optimization and a newly presented local-style-matching loss.
Our model has strengths in transferring a hairstyle under larger pose differences and preserving local hairstyle textures.
arXiv Detail & Related papers (2022-08-16T14:23:54Z) - MichiGAN: Multi-Input-Conditioned Hair Image Generation for Portrait Editing [122.82964863607938]
MichiGAN is a novel conditional image generation method for interactive portrait hair manipulation.
We provide user control over every major hair visual factor, including shape, structure, appearance, and background.
We also build an interactive portrait hair editing system that enables straightforward manipulation of hair by projecting intuitive and high-level user inputs.
arXiv Detail & Related papers (2020-10-30T17:59:10Z)