Face Attribute Invertion
- URL: http://arxiv.org/abs/2001.04665v1
- Date: Tue, 14 Jan 2020 08:41:52 GMT
- Title: Face Attribute Invertion
- Authors: X G Tu, Y Luo, H S Zhang, W J Ai, Z Ma, and M Xie
- Abstract summary: We propose a novel self-perception method based on GANs for automatic face attribute inversion.
Our model is quite stable in training and capable of preserving finer details of the original face images.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Manipulating human facial images between two domains is an important and
interesting problem. Most of the existing methods address this issue by
applying two generators or one generator with extra conditional inputs. In this
paper, we propose a novel self-perception method based on GANs for automatic
face attribute inversion. The proposed method takes face images as inputs and
employs a single generator without being conditioned on other inputs.
Profiting from the multi-loss strategy and modified U-net structure, our model
is quite stable in training and capable of preserving finer details of the
original face images.
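The abstract credits detail preservation to a modified U-net. A minimal numpy sketch of the core U-net idea, a skip connection that carries full-resolution detail past the bottleneck, is below; the `down`, `up`, and `unet_like` functions are toy stand-ins, not the authors' actual architecture:

```python
import numpy as np

def down(x):
    """2x average-pool over both spatial axes (toy encoder stage)."""
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def up(x):
    """2x nearest-neighbour upsample (toy decoder stage)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_like(x):
    """One encode/decode level with a skip connection.

    The skip carries the full-resolution input straight to the decoder,
    which is what lets U-net-style generators keep fine detail that the
    low-resolution bottleneck would otherwise discard."""
    skip = x
    h = up(down(x))                      # through the bottleneck and back
    return np.stack([h, skip], axis=-1)  # channel-wise concatenation

img = np.arange(16.0).reshape(4, 4)
out = unet_like(img)
print(out.shape)  # (4, 4, 2): blurred bottleneck features plus the exact skip
```

The second channel of the output is the untouched input, which is why fine detail survives the round trip through the bottleneck.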
Related papers
- Effective Adapter for Face Recognition in the Wild [72.75516495170199]
We tackle the challenge of face recognition in the wild, where images often suffer from low quality and real-world distortions.
Traditional approaches, whether training models directly on degraded images or on their enhanced counterparts produced by face restoration techniques, have proven ineffective.
We propose an effective adapter for augmenting existing face recognition models trained on high-quality facial datasets.
arXiv Detail & Related papers (2023-12-04T08:55:46Z) - Face0: Instantaneously Conditioning a Text-to-Image Model on a Face [3.5150821092068383]
We present Face0, a novel way to instantaneously condition a text-to-image generation model on a face.
We augment a dataset of annotated images with embeddings of the included faces and train an image generation model on the augmented dataset.
Our method achieves pleasing results, is remarkably simple, extremely fast, and equips the underlying model with new capabilities.
arXiv Detail & Related papers (2023-06-11T09:52:03Z) - SARGAN: Spatial Attention-based Residuals for Facial Expression Manipulation [1.7056768055368383]
We present a novel method named SARGAN that addresses the limitations from three perspectives.
We exploit a symmetric encoder-decoder network to attend to facial features at multiple scales.
Our proposed model performs significantly better than state-of-the-art methods.
arXiv Detail & Related papers (2023-03-30T08:15:18Z) - Attribute-preserving Face Dataset Anonymization via Latent Code Optimization [64.4569739006591]
We present a task-agnostic anonymization procedure that directly optimizes the images' latent representations in the latent space of a pre-trained GAN.
We demonstrate through a series of experiments that our method is capable of anonymizing the identity of the images whilst, crucially, better preserving the facial attributes.
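The latent-code optimization idea can be sketched with linear stand-ins: descend on a loss that lowers identity similarity while penalizing attribute drift. Everything here (the linear `G`, `ID`, and `ATTR` maps, the loss weights, the step count) is a hypothetical illustration, not the paper's actual networks or objective:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((8, 4))     # toy "generator": image = G @ z
ID = rng.standard_normal((3, 8))    # toy identity embedding of an image
ATTR = rng.standard_normal((2, 8))  # toy attribute embedding of an image

z0 = rng.standard_normal(4)         # latent code of the original image
img0 = G @ z0

def loss(z):
    """Push identity away from the original while keeping attributes close."""
    img = G @ z
    u, v = ID @ img, ID @ img0
    id_sim = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)
    attr_drift = np.sum((ATTR @ (img - img0)) ** 2)
    return id_sim ** 2 + 0.1 * attr_drift

def num_grad(f, z, eps=1e-5):
    """Central-difference gradient, to keep the toy free of autograd."""
    g = np.zeros_like(z)
    for i in range(z.size):
        d = np.zeros_like(z); d[i] = eps
        g[i] = (f(z + d) - f(z - d)) / (2 * eps)
    return g

z = z0 + 0.1 * rng.standard_normal(4)  # perturb: the cosine term is stationary at z0
start = loss(z)
for _ in range(300):
    z -= 0.01 * num_grad(loss, z)
print(f"loss: {start:.3f} -> {loss(z):.3f}")
```

A real pipeline would use a pre-trained GAN generator, learned identity/attribute networks, and autograd, but the structure of the objective is the same: one term to break identity, one to hold attributes in place.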
arXiv Detail & Related papers (2023-03-20T17:34:05Z) - Semantics-Guided Object Removal for Facial Images: with Broad
Applicability and Robust Style Preservation [29.162655333387452]
Object removal and image inpainting in facial images is a task in which objects that occlude a face are specifically targeted, removed, and replaced by a properly reconstructed facial region.
Two approaches, one using a U-net and the other a modulated generator, have been widely endorsed for this task for their unique advantages, though each also carries innate disadvantages.
Here, we propose Semantics-Guided Inpainting Network (SGIN) which itself is a modification of the modulated generator, aiming to take advantage of its advanced generative capability and preserve the high-fidelity details of the original image.
arXiv Detail & Related papers (2022-09-29T00:09:12Z) - Dynamic Prototype Mask for Occluded Person Re-Identification [88.7782299372656]
Existing methods mainly address this issue by employing body clues provided by an extra network to distinguish the visible part.
We propose a novel Dynamic Prototype Mask (DPM) based on two pieces of self-evident prior knowledge.
Under this condition, the occluded representation could be well aligned in a selected subspace spontaneously.
arXiv Detail & Related papers (2022-07-19T03:31:13Z) - Towards Diverse and Faithful One-shot Adaption of Generative Adversarial Networks [54.80435295622583]
One-shot generative domain adaption aims to transfer a pre-trained generator on one domain to a new domain using one reference image only.
We present a novel one-shot generative domain adaption method, i.e., DiFa, for diverse generation and faithful adaptation.
arXiv Detail & Related papers (2022-07-18T16:29:41Z) - FaceController: Controllable Attribute Editing for Face in the Wild [74.56117807309576]
We propose a simple feed-forward network to generate high-fidelity manipulated faces.
By simply employing some existing and easy-obtainable prior information, our method can control, transfer, and edit diverse attributes of faces in the wild.
In our method, we decouple identity, expression, pose, and illumination using 3D priors; separate texture and colors by using region-wise style codes.
arXiv Detail & Related papers (2021-02-23T02:47:28Z) - Attributes Aware Face Generation with Generative Adversarial Networks [133.44359317633686]
We propose a novel attributes aware face image generator method with generative adversarial networks called AFGAN.
Three stacked generators generate $64 \times 64$, $128 \times 128$ and $256 \times 256$ resolution face images, respectively.
In addition, an image-attribute matching loss is proposed to enhance the correlation between the generated images and input attributes.
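The stacked three-stage design and the matching loss can be sketched as a chain of resolution-doubling stages plus a toy similarity objective; the `toy_stage` upsampler and cosine-based `matching_loss` are illustrative assumptions, not AFGAN's learned generators or exact formulation:

```python
import numpy as np

def toy_stage(img):
    """Stand-in for one stacked generator: doubles spatial resolution.

    Nearest-neighbour upsampling here; the real stages are learned
    networks that add detail at each scale."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

img = np.zeros((64, 64))
resolutions = [img.shape]
for _ in range(2):  # 64x64 -> 128x128 -> 256x256
    img = toy_stage(img)
    resolutions.append(img.shape)
print(resolutions)  # [(64, 64), (128, 128), (256, 256)]

def matching_loss(img_emb, attr_vec):
    """Toy image-attribute matching loss: negative cosine similarity,
    so well-matched image/attribute pairs score lower."""
    num = img_emb @ attr_vec
    den = np.linalg.norm(img_emb) * np.linalg.norm(attr_vec) + 1e-8
    return -num / den

a = np.array([1.0, 0.0, 1.0])
assert matching_loss(a, a) < matching_loss(a, np.array([0.0, 1.0, 0.0]))
```

Minimizing such a loss during training pulls the embedding of each generated image toward its input attribute vector, which is the intent of the paper's matching term.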
arXiv Detail & Related papers (2020-12-03T09:25:50Z) - S2FGAN: Semantically Aware Interactive Sketch-to-Face Translation [11.724779328025589]
This paper proposes a sketch-to-image generation framework called S2FGAN.
We employ two latent spaces to control the face appearance and adjust the desired attributes of the generated face.
Our method outperforms state-of-the-art methods on attribute manipulation by exploiting greater control over attribute intensity.
arXiv Detail & Related papers (2020-11-30T13:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.