GaitCrafter: Diffusion Model for Biometric Preserving Gait Synthesis
- URL: http://arxiv.org/abs/2508.13300v1
- Date: Mon, 18 Aug 2025 18:32:42 GMT
- Title: GaitCrafter: Diffusion Model for Biometric Preserving Gait Synthesis
- Authors: Sirshapan Mitra, Yogesh S. Rawat
- Abstract summary: GaitCrafter is a diffusion-based framework for synthesizing realistic gait sequences in the silhouette domain. Our approach enables the generation of temporally consistent and identity-preserving gait sequences. We introduce a mechanism to generate novel identities: synthetic individuals not present in the original dataset.
- Score: 14.174192604480599
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Gait recognition is a valuable biometric task that enables the identification of individuals from a distance based on their walking patterns. However, it remains limited by the lack of large-scale labeled datasets and the difficulty of collecting diverse gait samples for each individual while preserving privacy. To address these challenges, we propose GaitCrafter, a diffusion-based framework for synthesizing realistic gait sequences in the silhouette domain. Unlike prior works that rely on simulated environments or alternative generative models, GaitCrafter trains a video diffusion model from scratch, exclusively on gait silhouette data. Our approach enables the generation of temporally consistent and identity-preserving gait sequences. Moreover, the generation process is controllable, allowing conditioning on various covariates such as clothing, carried objects, and view angle. We show that incorporating synthetic samples generated by GaitCrafter into the gait recognition pipeline leads to improved performance, especially under challenging conditions. Additionally, we introduce a mechanism to generate novel identities (synthetic individuals not present in the original dataset) by interpolating identity embeddings. These novel identities exhibit unique, consistent gait patterns and are useful for training models while maintaining privacy of real subjects. Overall, our work takes an important step toward leveraging diffusion models for high-quality, controllable, and privacy-aware gait data generation.
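The identity-interpolation mechanism can be sketched in a few lines. The abstract does not specify the exact scheme, so the helper below is hypothetical: it assumes unit-norm identity embeddings blended with a convex weight and re-normalized, which is a common way to mix embedding vectors.

```python
import numpy as np

def interpolate_identity(emb_a, emb_b, alpha=0.5):
    """Blend two identity embeddings into a novel one.

    Hypothetical sketch: assumes unit-norm embeddings mixed with a
    convex weight alpha in [0, 1], then re-normalized so the result
    lives on the same unit hypersphere as the inputs.
    """
    mixed = (1.0 - alpha) * emb_a + alpha * emb_b
    norm = np.linalg.norm(mixed)
    return mixed / norm if norm > 0 else mixed

# Two placeholder identity embeddings (128-d, unit-normalized).
rng = np.random.default_rng(0)
a = rng.standard_normal(128)
a /= np.linalg.norm(a)
b = rng.standard_normal(128)
b /= np.linalg.norm(b)

# A "novel identity" embedding, closer to identity a than to b.
novel = interpolate_identity(a, b, alpha=0.3)
```

Such an interpolated embedding would then condition the diffusion model in place of a real subject's embedding, so the generated silhouettes belong to no individual in the training set.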
Related papers
- IDperturb: Enhancing Variation in Synthetic Face Generation via Angular Perturbation [17.433921935288577]
Synthetic data has emerged as a practical alternative to authentic face datasets for training face recognition (FR) systems. Recent advances in identity-conditional diffusion models have enabled the generation of photorealistic and identity-consistent face images. We propose IDPERTURB, a simple yet effective geometry-driven sampling strategy to enhance diversity in synthetic face generation.
arXiv Detail & Related papers (2026-02-21T13:23:26Z) - WithAnyone: Towards Controllable and ID Consistent Image Generation [83.55786496542062]
Identity-consistent generation has become an important focus in text-to-image research. We develop a large-scale paired dataset tailored for multi-person scenarios. We propose a novel training paradigm with a contrastive identity loss that leverages paired data to balance fidelity with diversity.
arXiv Detail & Related papers (2025-10-16T17:59:54Z) - Hybrid Generative Fusion for Efficient and Privacy-Preserving Face Recognition Dataset Generation [87.48785461212556]
We present our approach to the DataCV ICCV Challenge, which centers on building a high-quality face dataset to train a face recognition model. The constructed dataset must not contain identities overlapping with any existing public face datasets. Our method achieves 1st place in the competition, and experimental results show that our dataset improves model performance across 10K, 20K, and 100K identity scales.
arXiv Detail & Related papers (2025-08-14T14:14:18Z) - ID-Booth: Identity-consistent Face Generation with Diffusion Models [10.042492056152232]
We present a novel generative diffusion-based framework called ID-Booth. The framework enables identity-consistent image generation while retaining the synthesis capabilities of pretrained diffusion models. Our method facilitates better intra-identity consistency and inter-identity separability than competing methods, while achieving higher image diversity.
arXiv Detail & Related papers (2025-04-10T02:20:18Z) - UIFace: Unleashing Inherent Model Capabilities to Enhance Intra-Class Diversity in Synthetic Face Recognition [42.86969216015855]
Face recognition (FR) stands as one of the most crucial applications in computer vision. We propose a framework to enhance intra-class diversity for synthetic face recognition, shortened as UIFace. Experiments show that our method significantly surpasses previous approaches with even less training data and half the synthetic dataset size.
arXiv Detail & Related papers (2025-02-27T06:22:18Z) - Unpaired Deblurring via Decoupled Diffusion Model [55.21345354747609]
We propose UID-Diff, a generative-diffusion-based model designed to enhance deblurring performance on unknown domains. We employ two Q-Formers as separate structural-feature and blur-pattern extractors. The extracted features are used for the supervised deblurring task on synthetic data and the unsupervised blur-transfer task. Experiments on real-world datasets demonstrate that UID-Diff outperforms existing state-of-the-art methods in blur removal and structural preservation.
arXiv Detail & Related papers (2025-02-03T17:00:40Z) - ID$^3$: Identity-Preserving-yet-Diversified Diffusion Models for Synthetic Face Recognition [60.15830516741776]
Synthetic face recognition (SFR) aims to generate datasets that mimic the distribution of real face data.
We introduce a diffusion-fueled SFR model termed ID$^3$.
ID$^3$ employs an ID-preserving loss to generate diverse yet identity-consistent facial appearances.
arXiv Detail & Related papers (2024-09-26T06:46:40Z) - Synthetic Face Datasets Generation via Latent Space Exploration from Brownian Identity Diffusion [20.352548473293993]
We introduce three complementary algorithms, called Langevin, Dispersion, and DisCo, aimed at generating large synthetic face datasets. With these in hand, we generate several face datasets and benchmark them by training face recognition models, showing that data generated with our method exceeds the performance of previous GAN-based datasets. While diffusion models are shown to memorize training data, we prevent leakage in our new synthetic datasets, paving the way for more responsible synthetic datasets.
arXiv Detail & Related papers (2024-04-30T22:32:02Z) - Segue: Side-information Guided Generative Unlearnable Examples for Facial Privacy Protection in Real World [64.4289385463226]
We propose Segue: Side-information guided generative unlearnable examples.
To improve transferability, we introduce side information such as true labels and pseudo labels.
It can resist JPEG compression, adversarial training, and some standard data augmentations.
arXiv Detail & Related papers (2023-10-24T06:22:37Z) - RealGait: Gait Recognition for Person Re-Identification [79.67088297584762]
We construct a new gait dataset by extracting silhouettes from an existing video person re-identification challenge which consists of 1,404 persons walking in an unconstrained manner.
Our results suggest that recognizing people by their gait in real surveillance scenarios is feasible, and that the underlying gait pattern is probably the true reason why video person re-identification works in practice.
arXiv Detail & Related papers (2022-01-13T06:30:56Z) - Differentially Private Synthetic Medical Data Generation using Convolutional GANs [7.2372051099165065]
We develop a differentially private framework for synthetic data generation using Rényi differential privacy.
Our approach builds on convolutional autoencoders and convolutional generative adversarial networks to preserve some of the critical characteristics of the generated synthetic data.
We demonstrate that our model outperforms existing state-of-the-art models under the same privacy budget.
arXiv Detail & Related papers (2020-12-22T01:03:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.