GHOST 2.0: generative high-fidelity one shot transfer of heads
- URL: http://arxiv.org/abs/2502.18417v3
- Date: Thu, 27 Feb 2025 11:45:45 GMT
- Authors: Alexander Groshev, Anastasiia Iashchenko, Pavel Paramonov, Denis Dimitrov, Andrey Kuznetsov
- Abstract summary: Head swapping poses extra challenges, such as the need to preserve structural information of the whole head during synthesis and to inpaint gaps between the swapped head and the background. In this paper, we address these concerns with GHOST 2.0, which consists of two problem-specific modules. First, we introduce an enhanced Aligner model for head reenactment, which preserves identity information at multiple scales. Second, we use a Blender module that seamlessly integrates the reenacted head into the target background by transferring skin color and inpainting mismatched regions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While the task of face swapping has recently gained attention in the research community, the related problem of head swapping remains largely unexplored. In addition to skin color transfer, head swapping poses extra challenges, such as the need to preserve structural information of the whole head during synthesis and to inpaint gaps between the swapped head and the background. In this paper, we address these concerns with GHOST 2.0, which consists of two problem-specific modules. First, we introduce an enhanced Aligner model for head reenactment, which preserves identity information at multiple scales and is robust to extreme pose variations. Second, we use a Blender module that seamlessly integrates the reenacted head into the target background by transferring skin color and inpainting mismatched regions. Both modules outperform the baselines on their respective tasks, achieving state-of-the-art results in head swapping. We also handle complex cases, such as a large difference between the hairstyles of the source and target. Code is available at https://github.com/ai-forever/ghost-2.0
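As a rough illustration of the two-module design described in the abstract, here is a minimal sketch in Python. The Aligner and Blender interfaces below are hypothetical placeholders, not the paper's API; the actual implementations live in the linked ai-forever/ghost-2.0 repository.
```python
# Minimal sketch of the two-stage pipeline from the abstract: an Aligner
# reenacts the source head with the target's pose, then a Blender merges
# it into the target frame. Both classes are hypothetical stand-ins for
# the trained models in the ai-forever/ghost-2.0 repository.
import numpy as np


class Aligner:
    """Head reenactment: preserve source identity, match target pose."""

    def reenact(self, source_head: np.ndarray, target_frame: np.ndarray) -> np.ndarray:
        # A trained network would inject identity features at multiple
        # scales here; returning the source unchanged is only a placeholder.
        return source_head


class Blender:
    """Seamless integration of the reenacted head into the background."""

    def blend(self, reenacted: np.ndarray, target_frame: np.ndarray,
              head_mask: np.ndarray) -> np.ndarray:
        # A trained network would also transfer skin color and inpaint
        # regions where source and target head shapes disagree.
        out = target_frame.copy()
        out[head_mask] = reenacted[head_mask]
        return out


def head_swap(source_head, target_frame, head_mask):
    reenacted = Aligner().reenact(source_head, target_frame)    # stage 1
    return Blender().blend(reenacted, target_frame, head_mask)  # stage 2
```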
Related papers
- Zero-Shot Head Swapping in Real-World Scenarios
We propose a novel head swapping method, HID, that is robust on images containing the full head and upper body.
For automatic mask generation, we introduce the IOMask, which enables seamless blending of the head and body.
Our experiments demonstrate that the proposed approach achieves state-of-the-art performance in head swapping.
arXiv Detail & Related papers (2025-03-02T11:44:23Z) - Realistic and Efficient Face Swapping: A Unified Approach with Diffusion Models [69.50286698375386]
We propose a novel approach that better harnesses diffusion models for face-swapping.
We introduce a mask shuffling technique during inpainting training, which allows us to create a so-called universal model for swapping (a hedged sketch of this idea follows the entry).
Our approach is relatively unified, which makes it resilient to errors in other off-the-shelf models.
arXiv Detail & Related papers (2024-09-11T13:43:53Z) - MeGA: Hybrid Mesh-Gaussian Head Avatar for High-Fidelity Rendering and Head Editing [34.31657241047574]
- MeGA: Hybrid Mesh-Gaussian Head Avatar for High-Fidelity Rendering and Head Editing
We propose a Hybrid Mesh-Gaussian Head Avatar (MeGA) that models different head components with more suitable representations.
MeGA generates higher-fidelity renderings for the whole head and naturally supports more downstream tasks.
Experiments on the NeRSemble dataset demonstrate the effectiveness of our designs.
arXiv Detail & Related papers (2024-04-29T18:10:12Z) - AniPortraitGAN: Animatable 3D Portrait Generation from 2D Image
Collections [78.81539337399391]
We present an animatable 3D-aware GAN that generates portrait images with controllable facial expression, head pose, and shoulder movements.
It is a generative model trained on unstructured 2D image collections without using 3D or video data.
A dual-camera rendering and adversarial learning scheme is proposed to improve the quality of the generated faces.
arXiv Detail & Related papers (2023-09-05T12:44:57Z) - HS-Diffusion: Semantic-Mixing Diffusion for Head Swapping [150.06405071177048]
We propose a semantic-mixing diffusion model for head swapping (HS-Diffusion).
We blend the semantic layouts of the source head and source body, then inpaint the transition region with a semantic layout generator (a rough sketch of this blending step follows the entry).
We construct a new image-based head swapping benchmark and design two tailored metrics.
arXiv Detail & Related papers (2022-12-13T10:04:01Z) - Learning to regulate 3D head shape by removing occluding hair from
in-the-wild images [0.0]
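As a rough sketch of the semantic layout blending described in the HS-Diffusion entry above: head labels come from the head layout, everything else from the body layout, and a thin seam is marked for the layout generator to inpaint. The class ids and band width here are assumptions, not values from the paper.
```python
# Hedged sketch of semantic layout blending: mix two label maps and mark
# the seam as UNKNOWN for a layout generator to fill in.
import numpy as np

HEAD_CLASSES = {1, 2, 3}   # e.g. skin, hair, face parts (hypothetical ids)
UNKNOWN = 255              # label for the region the generator must fill

def blend_layouts(head_layout: np.ndarray, body_layout: np.ndarray,
                  band: int = 5) -> np.ndarray:
    """Mix two (H, W) integer label maps and mark the seam as UNKNOWN."""
    head_mask = np.isin(head_layout, list(HEAD_CLASSES))
    blended = np.where(head_mask, head_layout, body_layout)
    # Crude transition band: pixels within `band` steps of the head
    # boundary (a real implementation would use proper morphological dilation).
    edge = np.zeros_like(head_mask)
    for shift in range(1, band + 1):
        for axis in (0, 1):
            edge |= head_mask ^ np.roll(head_mask, shift, axis=axis)
            edge |= head_mask ^ np.roll(head_mask, -shift, axis=axis)
    blended[edge] = UNKNOWN
    return blended
```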
- Learning to regulate 3D head shape by removing occluding hair from in-the-wild images
We present a novel approach for modeling the upper head by removing occluding hair and reconstructing the skin.
Our unsupervised 3DMM model achieves state-of-the-art results on popular benchmarks.
arXiv Detail & Related papers (2022-08-25T13:18:26Z) - Few-Shot Head Swapping in the Wild [79.78228139171574]
The head swapping task aims at flawlessly placing a source head onto a target body, which is of great importance to various entertainment scenarios.
It is inherently challenging due to its unique needs in head modeling and background blending.
We present the Head Swapper (HeSer), which achieves few-shot head swapping in the wild through two delicately designed modules.
arXiv Detail & Related papers (2022-04-27T17:52:51Z) - Head2HeadFS: Video-based Head Reenactment with Few-shot Learning [64.46913473391274]
Head reenactment is a challenging task, which aims at transferring the entire head pose from a source person to a target.
We propose head2headFS, a novel easily adaptable pipeline for head reenactment.
Our video-based rendering network is fine-tuned with a few-shot learning strategy, using only a few samples (a generic sketch of such adaptation follows this entry).
arXiv Detail & Related papers (2021-03-30T10:19:41Z)