Age-Oriented Face Synthesis with Conditional Discriminator Pool and
Adversarial Triplet Loss
- URL: http://arxiv.org/abs/2007.00792v2
- Date: Fri, 3 Jul 2020 23:31:58 GMT
- Title: Age-Oriented Face Synthesis with Conditional Discriminator Pool and
Adversarial Triplet Loss
- Authors: Haoyi Wang, Victor Sanchez, Chang-Tsun Li
- Abstract summary: We propose a method for the age-oriented face synthesis task that achieves a high synthesis accuracy with strong identity permanence capabilities.
Our method tackles the mode collapse issue with a novel Conditional Discriminator Pool (CDP), which consists of multiple discriminators.
To achieve strong identity permanence capabilities, our method uses a novel Adversarial Triplet loss.
- Score: 39.94126642748073
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The vanilla Generative Adversarial Networks (GAN) are commonly used to
generate realistic images depicting aged and rejuvenated faces. However, the
performance of such vanilla GANs in the age-oriented face synthesis task is
often compromised by the mode collapse issue, which may result in faces with
minimal variation and poor synthesis accuracy. In
addition, recent age-oriented face synthesis methods use the L1 or L2
constraint to preserve the identity information on synthesized faces, which
implicitly limits the identity permanence capabilities when these constraints
are associated with a trivial weighting factor. In this paper, we propose a
method for the age-oriented face synthesis task that achieves a high synthesis
accuracy with strong identity permanence capabilities. Specifically, to achieve
a high synthesis accuracy, our method tackles the mode collapse issue with a
novel Conditional Discriminator Pool (CDP), which consists of multiple
discriminators, each targeting one particular age category. To achieve strong
identity permanence capabilities, our method uses a novel Adversarial Triplet
loss. This loss, which is based on the Triplet loss, adds a ranking operation
to further pull the positive embedding towards the anchor embedding, resulting
in significantly reduced intra-class variance in the feature space. Through
extensive experiments, we show that our proposed method outperforms
state-of-the-art methods in terms of synthesis accuracy and identity permanence
capabilities, qualitatively and quantitatively.
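The abstract names two mechanisms: a pool of discriminators indexed by age category, and a triplet-style loss that keeps pulling the positive embedding towards the anchor to shrink intra-class variance. The paper's exact formulation is not given here, so the following is a minimal, purely illustrative Python sketch: the extra `pull_weight` term, the age categories, and the stub discriminators are all assumptions, not the authors' method.

```python
# Illustrative sketch only (not the paper's exact formulation).

def squared_distance(u, v):
    """Squared Euclidean distance between two embedding vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: zero once the negative sits at least
    `margin` further from the anchor than the positive does."""
    return max(0.0, squared_distance(anchor, positive)
                    - squared_distance(anchor, negative) + margin)

def adversarial_triplet_loss(anchor, positive, negative,
                             margin=0.2, pull_weight=0.5):
    """Hypothetical variant: an extra term keeps pulling the positive
    towards the anchor even after the margin is satisfied, reducing
    intra-class variance. The paper's ranking operation may differ."""
    pull = pull_weight * squared_distance(anchor, positive)
    return triplet_loss(anchor, positive, negative, margin) + pull

# A minimal discriminator pool: one discriminator (here a stub) per
# age category, selected by each sample's age label.
AGE_CATEGORIES = ["0-18", "19-39", "40-59", "60+"]
discriminator_pool = {cat: (lambda x, c=cat: f"D[{c}] scores {x}")
                      for cat in AGE_CATEGORIES}

def discriminate(face_embedding, age_category):
    """Route the sample to the discriminator for its age category."""
    return discriminator_pool[age_category](face_embedding)
```

With an anchor at the origin, a nearby positive, and a distant negative, the standard triplet loss is already zero, while the variant still returns a small positive value driven by the anchor-positive distance, illustrating why such a term would continue to compress each identity cluster.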
Related papers
- ID$^3$: Identity-Preserving-yet-Diversified Diffusion Models for Synthetic Face Recognition [60.15830516741776]
Synthetic face recognition (SFR) aims to generate datasets that mimic the distribution of real face data.
We introduce a diffusion-fueled SFR model termed ID$^3$.
ID$^3$ employs an ID-preserving loss to generate diverse yet identity-consistent facial appearances.
arXiv Detail & Related papers (2024-09-26T06:46:40Z)
- Cross-Age Contrastive Learning for Age-Invariant Face Recognition [29.243096587091575]
Cross-age facial images are typically challenging and expensive to collect.
Images of the same subject at different ages are usually hard or even impossible to obtain.
We propose a novel semi-supervised learning approach named Cross-Age Contrastive Learning (CACon)
arXiv Detail & Related papers (2023-12-18T13:41:21Z)
- When Age-Invariant Face Recognition Meets Face Age Synthesis: A Multi-Task Learning Framework and A New Benchmark [45.31997043789471]
MTLFace can learn the age-invariant identity-related representation for face recognition while achieving pleasing face synthesis for model interpretation.
We release a large cross-age face dataset with age and gender annotations, and a new benchmark specifically designed for tracing long-missing children.
arXiv Detail & Related papers (2022-10-17T07:04:19Z)
- Delving into High-Quality Synthetic Face Occlusion Segmentation Datasets [83.749895930242]
We propose two techniques for producing high-quality naturalistic synthetic occluded faces.
We empirically show the effectiveness and robustness of both methods, even for unseen occlusions.
We present two high-resolution real-world occluded face datasets with fine-grained annotations, RealOcc and RealOcc-Wild.
arXiv Detail & Related papers (2022-05-12T17:03:57Z)
- A Unified Framework for Biphasic Facial Age Translation with Noisy-Semantic Guided Generative Adversarial Networks [54.57520952117123]
Biphasic facial age translation aims at predicting the appearance of the input face at any age.
We propose a unified framework for biphasic facial age translation with noisy-semantic guided generative adversarial networks.
arXiv Detail & Related papers (2021-09-15T15:30:35Z)
- When Age-Invariant Face Recognition Meets Face Age Synthesis: A Multi-Task Learning Framework [20.579282497730944]
MTLFace can learn age-invariant identity-related representation while achieving pleasing face synthesis.
In contrast to the conventional one-hot encoding that achieves group-level FAS, we propose a novel identity conditional module to achieve identity-level FAS.
Extensive experiments on five benchmark cross-age datasets demonstrate the superior performance of our proposed MTLFace.
arXiv Detail & Related papers (2021-03-02T07:03:27Z)
- Continuous Face Aging Generative Adversarial Networks [11.75204350455584]
Face aging is the task aiming to translate the faces in input images to designated ages.
Previous methods have been limited to producing discrete age groups, each spanning ten years.
We propose the continuous face aging generative adversarial networks (CFA-GAN)
arXiv Detail & Related papers (2021-02-26T06:22:25Z)
- PFA-GAN: Progressive Face Aging with Generative Adversarial Network [19.45760984401544]
This paper proposes a novel progressive face aging framework based on generative adversarial network (PFA-GAN)
The framework can be trained in an end-to-end manner to eliminate accumulative artifacts and blurriness.
Extensive experimental results demonstrate superior performance over existing (c)GAN-based methods.
arXiv Detail & Related papers (2020-12-07T05:45:13Z)
- 3D Dense Geometry-Guided Facial Expression Synthesis by Adversarial Learning [54.24887282693925]
We propose a novel framework to exploit 3D dense (depth and surface normals) information for expression manipulation.
We use an off-the-shelf state-of-the-art 3D reconstruction model to estimate the depth and create a large-scale RGB-Depth dataset.
Our experiments demonstrate that the proposed method outperforms the competitive baseline and existing arts by a large margin.
arXiv Detail & Related papers (2020-09-30T17:12:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.