Data Synthesis with Diverse Styles for Face Recognition via 3DMM-Guided Diffusion
- URL: http://arxiv.org/abs/2504.00430v1
- Date: Tue, 01 Apr 2025 05:22:53 GMT
- Title: Data Synthesis with Diverse Styles for Face Recognition via 3DMM-Guided Diffusion
- Authors: Yuxi Mi, Zhizhou Zhong, Yuge Huang, Qiuyang Yuan, Xuan Zhao, Jianqing Xu, Shouhong Ding, ShaoMing Wang, Rizen Guo, Shuigeng Zhou
- Abstract summary: Identity-preserving face synthesis aims to generate synthetic face images of virtual subjects that can substitute real-world data for training face recognition models. Prior arts strive to create images with consistent identities and diverse styles, but they face a trade-off between them. This paper introduces MorphFace, a diffusion-based face generator.
- Score: 37.847141686823264
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Identity-preserving face synthesis aims to generate synthetic face images of virtual subjects that can substitute real-world data for training face recognition models. While prior arts strive to create images with consistent identities and diverse styles, they face a trade-off between them. Identifying their limitation of treating style variation as subject-agnostic and observing that real-world persons actually have distinct, subject-specific styles, this paper introduces MorphFace, a diffusion-based face generator. The generator learns fine-grained facial styles, e.g., shape, pose and expression, from the renderings of a 3D morphable model (3DMM). It also learns identities from an off-the-shelf recognition model. To create virtual faces, the generator is conditioned on novel identities of unlabeled synthetic faces, and novel styles that are statistically sampled from a real-world prior distribution. The sampling especially accounts for both intra-subject variation and subject distinctiveness. A context blending strategy is employed to enhance the generator's responsiveness to identity and style conditions. Extensive experiments show that MorphFace outperforms the best prior arts in face recognition efficacy.
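To make the style-sampling idea in the abstract concrete, the sketch below illustrates a two-level sampling scheme: each virtual subject draws its own style center from a population-level prior (subject distinctiveness), and per-image styles are drawn around that center (intra-subject variation). This is a minimal illustrative sketch, not the paper's actual implementation; the style dimension and all distribution parameters are placeholder assumptions.

```python
import numpy as np

# Hypothetical sketch of subject-aware style sampling: styles (e.g., flattened
# 3DMM pose/expression parameters) come from a two-level Gaussian prior.
# All sizes and spreads below are illustrative placeholders, not paper values.

rng = np.random.default_rng(0)

STYLE_DIM = 16                           # assumed size of a flattened style vector
POP_MEAN = np.zeros(STYLE_DIM)           # population-level style mean (placeholder)
POP_STD = np.ones(STYLE_DIM)             # between-subject spread (placeholder)
INTRA_STD = 0.3 * np.ones(STYLE_DIM)     # within-subject spread (placeholder)

def sample_subject_styles(num_images: int) -> np.ndarray:
    """Draw styles for one virtual subject: a subject-specific style center
    plus per-image deviations around it."""
    subject_center = rng.normal(POP_MEAN, POP_STD)                # subject distinctiveness
    deviations = rng.normal(0.0, INTRA_STD, (num_images, STYLE_DIM))
    return subject_center + deviations                            # intra-subject variation

# Example: 4 style vectors for each of 2 virtual subjects.
for subject_id in range(2):
    styles = sample_subject_styles(num_images=4)
    print(subject_id, styles.shape)  # (4, 16)
```

Each sampled style vector would then condition the generator alongside a novel identity embedding, as described in the abstract.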
Related papers
- InstaFace: Identity-Preserving Facial Editing with Single Image Inference [13.067402877443902]
We introduce a novel diffusion-based framework, InstaFace, to generate realistic images while preserving identity using only a single image. InstaFace harnesses 3D perspectives by integrating multiple 3DMM-based conditionals without introducing additional trainable parameters. Our method outperforms several state-of-the-art approaches in terms of identity preservation, photorealism, and effective control of pose, expression, and lighting.
arXiv Detail & Related papers (2025-02-27T22:37:09Z)
- OSDFace: One-Step Diffusion Model for Face Restoration [72.5045389847792]
Diffusion models have demonstrated impressive performance in face restoration.
We propose OSDFace, a novel one-step diffusion model for face restoration.
Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z)
- ID$^3$: Identity-Preserving-yet-Diversified Diffusion Models for Synthetic Face Recognition [60.15830516741776]
Synthetic face recognition (SFR) aims to generate datasets that mimic the distribution of real face data.
We introduce a diffusion-fueled SFR model termed ID$^3$.
ID$^3$ employs an ID-preserving loss to generate diverse yet identity-consistent facial appearances.
arXiv Detail & Related papers (2024-09-26T06:46:40Z)
- TCDiff: Triple Condition Diffusion Model with 3D Constraints for Stylizing Synthetic Faces [1.7535229154829601]
Face recognition models trained on 1k, 2k, and 5k classes of our new dataset outperform those trained on state-of-the-art synthetic datasets on real face benchmarks.
arXiv Detail & Related papers (2024-09-05T14:59:41Z)
- Adversarial Identity Injection for Semantic Face Image Synthesis [6.763801424109435]
We present a Semantic Image Synthesis (SIS) architecture that exploits a cross-attention mechanism to merge identity, style, and semantic features to generate faces.
Experimental results reveal that the proposed method not only preserves identity but is also effective for adversarial attacks on face recognition.
arXiv Detail & Related papers (2024-04-16T09:19:23Z)
- VIGFace: Virtual Identity Generation for Privacy-Free Face Recognition [13.81887339529775]
VIGFace is a novel framework capable of generating synthetic facial images. It shows clear separability between existing individuals and virtual face images, allowing one to create synthetic images with confidence. It ensures improved performance through data augmentation by incorporating real existing images.
arXiv Detail & Related papers (2024-03-13T06:11:41Z) - 3DMM-RF: Convolutional Radiance Fields for 3D Face Modeling [111.98096975078158]
We introduce a style-based generative network that synthesizes in one pass all and only the required rendering samples of a neural radiance field.
We show that this model can be accurately fitted to "in-the-wild" facial images of arbitrary pose and illumination, extract the facial characteristics, and be used to re-render the face under controllable conditions.
arXiv Detail & Related papers (2022-09-15T15:28:45Z) - ImFace: A Nonlinear 3D Morphable Face Model with Implicit Neural
Representations [21.389170615787368]
This paper presents a novel 3D morphable face model, namely ImFace, to learn a nonlinear and continuous space with implicit neural representations.
It builds two explicitly disentangled deformation fields to model complex shapes associated with identities and expressions, respectively, and designs an improved learning strategy to extend embeddings of expressions.
In addition to ImFace, an effective preprocessing pipeline is proposed to address the watertight input requirement of implicit representations.
arXiv Detail & Related papers (2022-03-28T05:37:59Z) - SynFace: Face Recognition with Synthetic Data [83.15838126703719]
We devise SynFace with identity mixup (IM) and domain mixup (DM) to mitigate the performance gap; see the illustrative sketch after this list for the general identity-mixup idea.
We also perform a systematic empirical analysis of synthetic face images to provide insights on how to effectively utilize synthetic data for face recognition.
arXiv Detail & Related papers (2021-08-18T03:41:54Z) - Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo
Collection [65.92058628082322]
Non-parametric face modeling aims to reconstruct 3D faces from images alone, without shape assumptions.
This paper presents a novel Learning to Aggregate and Personalize framework for unsupervised robust 3D face modeling.
arXiv Detail & Related papers (2021-06-15T03:10:17Z)
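As a note on the identity mixup (IM) technique mentioned in the SynFace entry above, the following is a minimal, hypothetical sketch of the general idea: interpolating between two identity embeddings to form a new virtual identity. It is not SynFace's actual implementation; the 512-dimensional embedding size and the mixing weight are assumptions.

```python
import numpy as np

# Generic illustration of identity mixup: blend two L2-normalized identity
# embeddings with a mixing weight lam to obtain a new identity condition.
# Embedding size (512) and lam are illustrative assumptions.

def identity_mixup(e1: np.ndarray, e2: np.ndarray, lam: float) -> np.ndarray:
    """Linearly blend two identity embeddings and re-normalize to unit length."""
    mixed = lam * e1 + (1.0 - lam) * e2
    return mixed / np.linalg.norm(mixed)

rng = np.random.default_rng(0)
e1 = rng.normal(size=512); e1 /= np.linalg.norm(e1)   # stand-in for a real embedding
e2 = rng.normal(size=512); e2 /= np.linalg.norm(e2)   # stand-in for a real embedding
virtual_id = identity_mixup(e1, e2, lam=0.5)
print(virtual_id.shape)  # (512,)
```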