IDperturb: Enhancing Variation in Synthetic Face Generation via Angular Perturbation
- URL: http://arxiv.org/abs/2602.18831v1
- Date: Sat, 21 Feb 2026 13:23:26 GMT
- Title: IDperturb: Enhancing Variation in Synthetic Face Generation via Angular Perturbation
- Authors: Fadi Boutros, Eduarda Caldeira, Tahar Chettaoui, Naser Damer,
- Abstract summary: Synthetic data has emerged as a practical alternative to authentic face datasets for training face recognition (FR) systems. Recent advances in identity-conditional diffusion models have enabled the generation of photorealistic and identity-consistent face images. We propose IDPERTURB, a simple yet effective geometry-driven sampling strategy to enhance diversity in synthetic face generation.
- Score: 17.433921935288577
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Synthetic data has emerged as a practical alternative to authentic face datasets for training face recognition (FR) systems, especially as privacy and legal concerns increasingly restrict the use of real biometric data. Recent advances in identity-conditional diffusion models have enabled the generation of photorealistic and identity-consistent face images. However, many of these models suffer from limited intra-class variation, an essential property for training robust and generalizable FR models. In this work, we propose IDPERTURB, a simple yet effective geometry-driven sampling strategy to enhance diversity in synthetic face generation. IDPERTURB perturbs identity embeddings within a constrained angular region of the unit hyper-sphere, producing a diverse set of embeddings without modifying the underlying generative model. Each perturbed embedding serves as a conditioning vector for a pre-trained diffusion model, enabling the synthesis of visually varied yet identity-coherent face images suitable for training generalizable FR systems. Empirical results demonstrate that training FR models on datasets generated using IDPERTURB yields improved performance across multiple FR benchmarks, compared to existing synthetic data generation approaches.
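As described, IDPERTURB samples identity embeddings from a bounded angular neighborhood on the unit hypersphere. Below is a minimal sketch of that sampling idea; the paper's exact scheme (e.g., how the angle is distributed within the cone) is not specified here, and `perturb_embedding` / `max_angle` are illustrative names rather than the authors' API.

```python
# Minimal sketch: sample a unit vector within `max_angle` radians of a given
# identity embedding. An assumption-laden illustration, not the authors' code.
import numpy as np

def perturb_embedding(e: np.ndarray, max_angle: float, rng=None) -> np.ndarray:
    """Return a unit vector at most `max_angle` radians away from unit vector `e`."""
    rng = np.random.default_rng() if rng is None else rng
    e = e / np.linalg.norm(e)                       # project onto the unit hypersphere
    r = rng.standard_normal(e.shape)                # random direction ...
    u = r - np.dot(r, e) * e                        # ... made orthogonal to e
    u = u / np.linalg.norm(u)
    theta = rng.uniform(0.0, max_angle)             # angle inside the allowed cone
    return np.cos(theta) * e + np.sin(theta) * u    # rotate e by theta toward u

# Each perturbed embedding then conditions the (frozen) diffusion model,
# yielding varied images of the same identity.
rng = np.random.default_rng(0)
identity = rng.standard_normal(512)
variants = [perturb_embedding(identity, max_angle=np.deg2rad(15.0), rng=rng)
            for _ in range(4)]
```

Because cos(theta)*e + sin(theta)*u has unit norm whenever u is orthogonal to e, the perturbed embedding stays on the hypersphere and its angle to e is exactly theta, which matches the constrained angular region the abstract describes.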
Related papers
- SCHIGAND: A Synthetic Facial Generation Mode Pipeline [0.0]
This paper presents SCHIGAND, a novel synthetic face generation pipeline for producing highly realistic and controllable facial datasets. SCHIGAND enhances identity preservation while generating realistic intra-class variations and maintaining inter-class distinctiveness. The generated datasets were evaluated using ArcFace, a leading face verification model, to assess their effectiveness in comparison to real-world facial datasets.
arXiv Detail & Related papers (2026-01-23T10:30:58Z)
- NegFaceDiff: The Power of Negative Context in Identity-Conditioned Diffusion for Synthetic Face Generation [8.045296450065019]
NegFaceDiff is a novel sampling method that incorporates negative conditions into the identity-conditioned diffusion process. We show that NegFaceDiff significantly improves the identity consistency and separability of data generated by identity-conditioned diffusion models.
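One common way to inject a negative condition into diffusion sampling is a classifier-free-guidance-style extrapolation away from the negative prediction. The sketch below illustrates that general pattern only; the summary does not give NegFaceDiff's exact formulation, and `denoiser` and all names are placeholders.

```python
# Hedged sketch of negative-condition guidance in one diffusion sampling step,
# in the style of classifier-free guidance. Not NegFaceDiff's exact rule.
import torch

def guided_noise(denoiser, x_t, t, pos_cond, neg_cond, scale: float = 4.0):
    """Steer the noise prediction toward pos_cond and away from neg_cond."""
    eps_pos = denoiser(x_t, t, pos_cond)  # prediction under the target identity
    eps_neg = denoiser(x_t, t, neg_cond)  # prediction under the negative condition
    return eps_neg + scale * (eps_pos - eps_neg)  # extrapolate away from negative
```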
arXiv Detail & Related papers (2025-08-13T09:45:09Z)
- High-Fidelity Scientific Simulation Surrogates via Adaptive Implicit Neural Representations [51.90920900332569]
Implicit neural representations (INRs) offer a compact and continuous framework for modeling spatially structured data. Recent approaches introduce additional features along rigid geometric structures. We propose a simple yet effective alternative: Feature-Adaptive INR (FA-INR).
arXiv Detail & Related papers (2025-06-07T16:45:17Z)
- ID-Booth: Identity-consistent Face Generation with Diffusion Models [27.46650231581887]
We present a novel generative diffusion-based framework called ID-Booth. The framework enables identity-consistent image generation while retaining the synthesis capabilities of pretrained diffusion models. Our method facilitates better intra-identity consistency and inter-identity separability than competing methods, while achieving higher image diversity.
arXiv Detail & Related papers (2025-04-10T02:20:18Z)
- Multi-focal Conditioned Latent Diffusion for Person Image Synthesis [59.113899155476005]
The Latent Diffusion Model (LDM) has demonstrated strong capabilities in high-resolution image generation. We propose a Multi-focal Conditioned Latent Diffusion (MCLD) method. Our approach utilizes a multi-focal condition aggregation module, which effectively integrates facial identity and texture-specific information.
arXiv Detail & Related papers (2025-03-19T20:50:10Z)
- UIFace: Unleashing Inherent Model Capabilities to Enhance Intra-Class Diversity in Synthetic Face Recognition [42.86969216015855]
Face recognition (FR) stands as one of the most crucial applications in computer vision. We propose a framework to enhance intra-class diversity for synthetic face recognition, shortened as UIFace. Experiments show that our method significantly surpasses previous approaches while using less training data and half the synthetic dataset size.
arXiv Detail & Related papers (2025-02-27T06:22:18Z)
- Exploring Representation-Aligned Latent Space for Better Generation [86.45670422239317]
We introduce ReaLS, which integrates semantic priors to improve generation performance. We show that DiT and SiT models trained on ReaLS achieve a 15% improvement in the FID metric. The enhanced semantic latent space enables more perceptual downstream tasks, such as segmentation and depth estimation.
arXiv Detail & Related papers (2025-02-01T07:42:12Z)
- ID$^3$: Identity-Preserving-yet-Diversified Diffusion Models for Synthetic Face Recognition [60.15830516741776]
Synthetic face recognition (SFR) aims to generate datasets that mimic the distribution of real face data.
We introduce a diffusion-fueled SFR model termed ID$^3$.
ID$^3$ employs an ID-preserving loss to generate diverse yet identity-consistent facial appearances.
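An ID-preserving loss of this kind typically penalizes the angular gap between the FR embedding of a generated face and its target identity embedding. The sketch below shows one plausible form; it is not ID$^3$'s actual objective, and the function name is an illustrative assumption.

```python
# Hedged sketch of an ID-preserving loss: one minus the cosine similarity
# between generated and target identity embeddings. Illustrative only.
import torch
import torch.nn.functional as F

def id_preserving_loss(gen_embed: torch.Tensor, tgt_embed: torch.Tensor) -> torch.Tensor:
    """Small when the generated face keeps the target identity."""
    return 1.0 - F.cosine_similarity(gen_embed, tgt_embed, dim=-1).mean()
```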
arXiv Detail & Related papers (2024-09-26T06:46:40Z)
- Conditional Generation from Unconditional Diffusion Models using Denoiser Representations [94.04631421741986]
We propose adapting pre-trained unconditional diffusion models to new conditions using the learned internal representations of the denoiser network.
We show that augmenting the Tiny ImageNet training set with synthetic images generated by our approach improves the classification accuracy of ResNet baselines by up to 8%.
arXiv Detail & Related papers (2023-06-02T20:09:57Z)
- GANDiffFace: Controllable Generation of Synthetic Datasets for Face Recognition with Realistic Variations [2.7467281625529134]
This study introduces GANDiffFace, a novel framework for the generation of synthetic datasets for face recognition.
GANDiffFace combines the power of Generative Adversarial Networks (GANs) and Diffusion models to overcome the limitations of existing synthetic datasets.
arXiv Detail & Related papers (2023-05-31T15:49:12Z)
- Joint Deep Learning of Facial Expression Synthesis and Recognition [97.19528464266824]
We propose a novel method that jointly learns facial expression synthesis and recognition for effective FER.
The proposed method involves a two-stage learning procedure. First, a facial expression synthesis generative adversarial network (FESGAN) is pre-trained to generate facial images with different facial expressions.
To alleviate the data bias between real and synthetic images, we propose an intra-class loss with a novel real data-guided back-propagation (RDBP) algorithm.
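The RDBP algorithm itself is not detailed in this summary. As a rough illustration, an intra-class loss guided by real data might pull synthetic features toward a detached real-class centroid, so that gradients flow only through the synthetic branch; all names below are assumptions, not the paper's implementation.

```python
# Rough, assumption-labeled sketch of a real-data-guided intra-class loss.
# The paper's RDBP algorithm is not reproduced here.
import torch

def intra_class_loss(fake_feats: torch.Tensor, real_feats: torch.Tensor) -> torch.Tensor:
    """Pull synthetic features toward the real-class centroid (real side detached)."""
    center = real_feats.detach().mean(dim=0)   # real features guide but get no gradient
    return (fake_feats - center).pow(2).sum(dim=-1).mean()
```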
arXiv Detail & Related papers (2020-02-06T10:56:00Z)