Identity-preserving Editing of Multiple Facial Attributes by Learning
Global Edit Directions and Local Adjustments
- URL: http://arxiv.org/abs/2309.14267v1
- Date: Mon, 25 Sep 2023 16:28:39 GMT
- Title: Identity-preserving Editing of Multiple Facial Attributes by Learning
Global Edit Directions and Local Adjustments
- Authors: Najmeh Mohammadbagheri, Fardin Ayar, Ahmad Nickabadi, Reza Safabakhsh
- Abstract summary: ID-Style is a new architecture capable of addressing the problem of identity loss during attribute manipulation.
We introduce two losses during training to enforce the LGD to find semi-sparse semantic directions, which along with the IAIP, preserve the identity of the input instance.
- Score: 4.082799056366928
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic facial attribute editing using pre-trained Generative Adversarial
Networks (GANs) has attracted a great deal of attention and effort from
researchers in recent years. Due to the high quality of face images generated
by StyleGANs, much work has focused on the StyleGANs' latent space and the
proposed methods for facial image editing. Although these methods have achieved
satisfying results for manipulating user-intended attributes, they have not
fulfilled the goal of preserving the identity, which is an important challenge.
We present ID-Style, a new architecture capable of addressing the problem of
identity loss during attribute manipulation. The key components of ID-Style
include Learnable Global Direction (LGD), which finds a shared and semi-sparse
direction for each attribute, and an Instance-Aware Intensity Predictor (IAIP)
network, which finetunes the global direction according to the input instance.
Furthermore, we introduce two losses during training to enforce the LGD to find
semi-sparse semantic directions, which along with the IAIP, preserve the
identity of the input instance. Despite being roughly 95% smaller than similar
state-of-the-art networks, ID-Style outperforms baselines by 10% and 7% in the
identity-preserving metric (FRS) and average manipulation accuracy (mACC),
respectively.
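The editing scheme described in the abstract — a shared, semi-sparse direction per attribute, rescaled per instance by a predicted intensity — can be sketched roughly as follows. This is an illustrative sketch only: the function names, the top-k sparsification, and the scalar intensity standing in for the IAIP network are assumptions, not the authors' implementation.

```python
import numpy as np

def semi_sparse_direction(raw_direction, keep_ratio=0.1):
    """Zero out all but the largest-magnitude entries, yielding a
    semi-sparse edit direction that touches only a few latent channels."""
    d = np.asarray(raw_direction, dtype=float)
    k = max(1, int(keep_ratio * d.size))
    cutoff = np.sort(np.abs(d))[-k]          # k-th largest magnitude
    sparse = np.where(np.abs(d) >= cutoff, d, 0.0)
    norm = np.linalg.norm(sparse)
    return sparse / norm if norm > 0 else sparse

def edit_latent(w, direction, intensity):
    """Move the latent code w along the shared attribute direction,
    scaled by an instance-specific intensity (a plain scalar here,
    standing in for the IAIP network's prediction)."""
    return w + intensity * direction

rng = np.random.default_rng(0)
w = rng.normal(size=512)                     # a StyleGAN-like latent code
d = semi_sparse_direction(rng.normal(size=512), keep_ratio=0.1)
w_edited = edit_latent(w, d, intensity=3.0)

# Only the channels selected by the sparse direction change, which is
# the intuition behind leaving identity-relevant channels intact.
changed = np.count_nonzero(w_edited != w)
```

The semi-sparsity is the key design point: restricting the edit to a small subset of latent channels limits how far the edited code can drift from the input identity.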
Related papers
- Latent Diffusion Models for Attribute-Preserving Image Anonymization [4.080920304681247]
This paper presents the first approach to image anonymization based on Latent Diffusion Models (LDMs)
We propose two LDMs for this purpose: CAFLaGE-Base exploits a combination of pre-trained ControlNets and a new controlling mechanism designed to increase the distance between the real and anonymized images.
arXiv Detail & Related papers (2024-03-21T19:09:21Z)
- Adaptive Face Recognition Using Adversarial Information Network [57.29464116557734]
Face recognition models often degenerate when training data are different from testing data.
We propose a novel adversarial information network (AIN) to address it.
arXiv Detail & Related papers (2023-05-23T02:14:11Z)
- StyleID: Identity Disentanglement for Anonymizing Faces [4.048444203617942]
The main contribution of the paper is the design of a feature-preserving anonymization framework, StyleID.
As part of the contribution, we present a novel disentanglement metric, three complementing disentanglement methods, and new insights into identity disentanglement.
StyleID provides tunable privacy, has low computational complexity, and is shown to outperform current state-of-the-art solutions.
arXiv Detail & Related papers (2022-12-28T12:04:24Z)
- TransFA: Transformer-based Representation for Face Attribute Evaluation [87.09529826340304]
We propose a novel transformer-based representation for attribute evaluation (TransFA).
The proposed TransFA achieves superior performances compared with state-of-the-art methods.
arXiv Detail & Related papers (2022-07-12T10:58:06Z)
- A Unified Architecture of Semantic Segmentation and Hierarchical Generative Adversarial Networks for Expression Manipulation [52.911307452212256]
We develop a unified architecture of semantic segmentation and hierarchical GANs.
A unique advantage of our framework is that, on the forward pass, the semantic segmentation network conditions the generative model.
We evaluate our method on two challenging facial expression translation benchmarks, AffectNet and RaFD, and a semantic segmentation benchmark, CelebAMask-HQ.
arXiv Detail & Related papers (2021-12-08T22:06:31Z)
- FacialGAN: Style Transfer and Attribute Manipulation on Synthetic Faces [9.664892091493586]
FacialGAN is a novel framework enabling simultaneous rich style transfers and interactive facial attributes manipulation.
We show our model's capacity to produce visually compelling results in style transfer, attribute manipulation, diversity, and face verification.
arXiv Detail & Related papers (2021-10-18T15:53:38Z)
- FaceController: Controllable Attribute Editing for Face in the Wild [74.56117807309576]
We propose a simple feed-forward network to generate high-fidelity manipulated faces.
By simply employing some existing and easy-obtainable prior information, our method can control, transfer, and edit diverse attributes of faces in the wild.
In our method, we decouple identity, expression, pose, and illumination using 3D priors; separate texture and colors by using region-wise style codes.
arXiv Detail & Related papers (2021-02-23T02:47:28Z)
- CAFE-GAN: Arbitrary Face Attribute Editing with Complementary Attention Feature [31.425326840578098]
We propose a novel GAN model which is designed to edit only the parts of a face pertinent to the target attributes.
CAFE identifies the facial regions to be transformed by considering both target attributes as well as complementary attributes.
arXiv Detail & Related papers (2020-11-24T05:21:03Z)
- PA-GAN: Progressive Attention Generative Adversarial Network for Facial Attribute Editing [67.94255549416548]
We propose a progressive attention GAN (PA-GAN) for facial attribute editing.
Our approach achieves correct attribute editing with irrelevant details much better preserved compared with the state-of-the-arts.
arXiv Detail & Related papers (2020-07-12T03:04:12Z)
- Deep Multi-task Multi-label CNN for Effective Facial Attribute Classification [53.58763562421771]
We propose a novel deep multi-task multi-label CNN, termed DMM-CNN, for effective Facial Attribute Classification (FAC).
Specifically, DMM-CNN jointly optimizes two closely related tasks (i.e., facial landmark detection and FAC) to improve FAC performance by taking advantage of multi-task learning.
Two different network architectures are respectively designed to extract features for two groups of attributes, and a novel dynamic weighting scheme is proposed to automatically assign the loss weight to each facial attribute during training.
arXiv Detail & Related papers (2020-02-10T12:34:16Z)
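The dynamic weighting idea in the DMM-CNN summary — automatically assigning each attribute's loss weight during training — can be sketched as below. The inverse-difficulty weighting (harder attributes get more weight) is an illustrative choice, not necessarily the scheme used in the paper.

```python
def dynamic_loss_weights(per_attribute_losses):
    """Give harder attributes (larger current loss) proportionally more
    weight; weights are normalized to sum to the number of attributes."""
    total = sum(per_attribute_losses)
    n = len(per_attribute_losses)
    if total == 0:
        return [1.0] * n
    return [n * loss / total for loss in per_attribute_losses]

def weighted_total_loss(per_attribute_losses):
    """Combine per-attribute losses using the dynamic weights."""
    weights = dynamic_loss_weights(per_attribute_losses)
    return sum(w * loss for w, loss in zip(weights, per_attribute_losses))

# e.g. current losses of three hypothetical attribute heads
losses = [0.2, 0.8, 1.0]      # smile, glasses, hair color
weights = dynamic_loss_weights(losses)
total = weighted_total_loss(losses)
```

In practice such weights would be recomputed every few training steps so that attributes the network already classifies well stop dominating the gradient.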
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.