Graph-based Generative Face Anonymisation with Pose Preservation
- URL: http://arxiv.org/abs/2112.05496v1
- Date: Fri, 10 Dec 2021 12:58:17 GMT
- Title: Graph-based Generative Face Anonymisation with Pose Preservation
- Authors: Nicola Dall'Asen, Yiming Wang, Hao Tang, Luca Zanella and Elisa Ricci
- Abstract summary: AnonyGAN is a GAN-based solution for face anonymisation.
It replaces the visual information corresponding to a source identity with a condition identity provided as any single image.
- Score: 49.18049578591058
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose AnonyGAN, a GAN-based solution for face anonymisation which
replaces the visual information corresponding to a source identity with a
condition identity provided as any single image. To maintain the geometric
attributes of the source face, i.e., the facial pose and expression, and to
promote more natural face generation, we propose to exploit a Bipartite Graph
to explicitly model, through a deep model, the relations between the facial
landmarks of the source identity and those of the condition identity. We
further propose a landmark attention model to relax the manual selection of
facial landmarks, allowing the network to weight the landmarks for the best
visual naturalness and pose preservation. Finally, to facilitate the appearance
learning, we propose a hybrid training strategy to address the challenge caused
by the lack of direct pixel-level supervision. We evaluate our method and its
variants on two public datasets, CelebA and LFW, in terms of visual
naturalness, facial pose preservation, and impact on face detection and
re-identification. We show that AnonyGAN significantly outperforms
state-of-the-art methods in terms of visual naturalness, face detection and
pose preservation.
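The bipartite-graph idea can be read as cross-attention between the two landmark sets, with a learned per-landmark weighting standing in for manual landmark selection. Below is a minimal PyTorch sketch of that reading; it assumes 68 2D landmarks per face, and every module name and dimension is illustrative rather than taken from the paper:

```python
# Minimal sketch of the bipartite landmark-relation idea (assumed
# implementation, not the authors' code). Each of the 68 source
# landmarks attends over all 68 condition landmarks, and a learned
# attention weighting replaces manual landmark selection.
import torch
import torch.nn as nn

class BipartiteLandmarkAttention(nn.Module):
    def __init__(self, n_landmarks=68, dim=64):
        super().__init__()
        self.embed = nn.Linear(2, dim)          # (x, y) -> feature
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.score = nn.Linear(dim, 1)          # per-landmark importance

    def forward(self, src_lmk, cond_lmk):
        # src_lmk, cond_lmk: (B, n_landmarks, 2) normalized coordinates
        q = self.embed(src_lmk)                 # source side of the graph
        kv = self.embed(cond_lmk)               # condition side of the graph
        fused, _ = self.attn(q, kv, kv)         # bipartite message passing
        w = torch.softmax(self.score(fused), dim=1)  # learned landmark weights
        return (w * fused).sum(dim=1)           # pooled geometry code

geo = BipartiteLandmarkAttention()
code = geo(torch.rand(2, 68, 2), torch.rand(2, 68, 2))
print(code.shape)  # torch.Size([2, 64])
```

The pooled geometry code would then condition the generator alongside the appearance features of the condition identity.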
Related papers
- OSDFace: One-Step Diffusion Model for Face Restoration [72.5045389847792]
Diffusion models have demonstrated impressive performance in face restoration.
We propose OSDFace, a novel one-step diffusion model for face restoration.
Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z)
- AniFaceDiff: High-Fidelity Face Reenactment via Facial Parametric Conditioned Diffusion Models [33.39336530229545]
Face reenactment refers to the process of transferring the pose and facial expressions from a reference (driving) video onto a static facial (source) image.
Previous research in this domain has made significant progress by training controllable deep generative models to generate faces.
This paper proposes a new method based on Stable Diffusion, called AniFaceDiff, incorporating a new conditioning module for high-fidelity face reenactment.
arXiv Detail & Related papers (2024-06-19T07:08:48Z)
- Attribute-preserving Face Dataset Anonymization via Latent Code Optimization [64.4569739006591]
We present a task-agnostic anonymization procedure that directly optimizes the images' latent representation in the latent space of a pre-trained GAN.
We demonstrate through a series of experiments that our method is capable of anonymizing the identity of the images while, crucially, better preserving the facial attributes.
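A minimal sketch of this latent-optimization recipe, assuming frozen pretrained `generator` and `id_encoder` callables (names and loss terms are illustrative, not the paper's):

```python
# Hedged sketch of anonymization via latent code optimization (assumed
# setup, not the paper's code): push the image's identity embedding away
# from the original while an L2 prior on the latent preserves attributes.
import torch

def anonymize(w0, generator, id_encoder, steps=200, lr=0.01, lam=0.1):
    w = w0.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    with torch.no_grad():
        id0 = id_encoder(generator(w0))  # identity code of the source image
    for _ in range(steps):
        img = generator(w)
        # cosine similarity to the original identity is minimized, so the
        # optimized face drifts away from the source identity
        id_sim = torch.cosine_similarity(id_encoder(img), id0, dim=-1).mean()
        loss = id_sim + lam * (w - w0).pow(2).mean()  # stay near w0 -> keep attributes
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(w).detach()
```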
arXiv Detail & Related papers (2023-03-20T17:34:05Z)
- Finding Directions in GAN's Latent Space for Neural Face Reenactment [45.67273942952348]
This paper is on face/head reenactment where the goal is to transfer the facial pose (3D head orientation and expression) of a target face to a source face.
We take a different approach, bypassing the training of task-specific reenactment networks, by using (fine-tuned) pre-trained GANs.
We show that by embedding real images in the GAN latent space, our method can be successfully used for the reenactment of real-world faces.
arXiv Detail & Related papers (2022-01-31T19:14:03Z)
- Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo Collection [65.92058628082322]
Non-parametric face modeling aims to reconstruct a 3D face from images alone, without shape assumptions.
This paper presents a novel Learning to Aggregate and Personalize framework for unsupervised robust 3D face modeling.
arXiv Detail & Related papers (2021-06-15T03:10:17Z)
- Face Anonymization by Manipulating Decoupled Identity Representation [5.26916168336451]
We propose a novel approach which protects the identity information of facial images from leakage with minimal modification.
Specifically, we disentangle identity representation from other facial attributes leveraging the power of generative adversarial networks.
We evaluate the disentanglement ability of our model and propose an effective method for identity anonymization, namely Anonymous Identity Generation (AIG).
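As a toy illustration of the decoupling idea (all architecture choices below are assumptions for exposition, not the paper's design), separate encoders produce an identity code and an attribute code, and anonymization swaps in a new identity code before decoding:

```python
# Toy sketch of decoupled identity/attribute representations (assumed
# architecture, not the paper's): anonymize by replacing the identity
# code while keeping the attribute code of the source image.
import torch
import torch.nn as nn

class DecoupledAnonymizer(nn.Module):
    def __init__(self, id_dim=128, attr_dim=128):
        super().__init__()
        def encoder(out_dim):
            return nn.Sequential(
                nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, out_dim))
        self.id_enc = encoder(id_dim)      # who the person is
        self.attr_enc = encoder(attr_dim)  # pose, expression, lighting...
        self.dec = nn.Sequential(nn.Linear(id_dim + attr_dim, 3 * 64 * 64), nn.Tanh())

    def forward(self, x, anonymous_id=None):
        ident = self.id_enc(x) if anonymous_id is None else anonymous_id
        attrs = self.attr_enc(x)
        return self.dec(torch.cat([ident, attrs], dim=1)).view(-1, 3, 64, 64)
```

Passing `anonymous_id=torch.randn(b, 128)` would decode the same attributes under a random identity.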
arXiv Detail & Related papers (2021-05-24T07:39:54Z)
- FaR-GAN for One-Shot Face Reenactment [20.894596219099164]
We present a one-shot face reenactment model, FaR-GAN, that takes only one face image of any given source identity and a target expression as input.
The proposed method makes no assumptions about the source identity, facial expression, head pose, or even image background.
arXiv Detail & Related papers (2020-05-13T16:15:37Z)
- VAE/WGAN-Based Image Representation Learning For Pose-Preserving Seamless Identity Replacement In Facial Images [15.855376604558977]
We present a novel variational generative adversarial network (VGAN) based on Wasserstein loss.
We show that our network can be used to perform pose-preserving identity morphing and identity-preserving pose morphing.
arXiv Detail & Related papers (2020-03-02T03:35:59Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)
- Dual-Attention GAN for Large-Pose Face Frontalization [59.689836951934694]
We present a novel Dual-Attention Generative Adversarial Network (DA-GAN) for photo-realistic face frontalization.
Specifically, a self-attention-based generator is introduced to integrate local features with their long-range dependencies.
A novel face-attention-based discriminator is applied to emphasize local features of face regions.
arXiv Detail & Related papers (2020-02-17T20:00:56Z)
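The self-attention generator described in the DA-GAN entry above is commonly realized with SAGAN-style attention blocks; below is a minimal sketch of such a block (an assumed design, not the authors' code), which mixes each local feature with its long-range dependencies through a residual attention map:

```python
# Minimal SAGAN-style self-attention block (assumed sketch): every spatial
# position attends over all other positions, so local features are fused
# with their long-range dependencies.
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.q = nn.Conv2d(ch, ch // 8, 1)
        self.k = nn.Conv2d(ch, ch // 8, 1)
        self.v = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # starts as an identity map

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # (B, HW, C/8)
        k = self.k(x).flatten(2)                   # (B, C/8, HW)
        v = self.v(x).flatten(2)                   # (B, C, HW)
        attn = torch.softmax(q @ k, dim=-1)        # pairwise spatial affinities
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out                # residual long-range mixing
```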
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.