A Review on Generative Adversarial Networks for Data Augmentation in
Person Re-Identification Systems
- URL: http://arxiv.org/abs/2302.09119v3
- Date: Fri, 9 Jun 2023 20:27:16 GMT
- Title: A Review on Generative Adversarial Networks for Data Augmentation in
Person Re-Identification Systems
- Authors: Victor Uc-Cetina, Laura Alvarez-Gonzalez, Anabel Martin-Gonzalez
- Abstract summary: In machine learning-based computer vision applications with reduced data sets, one possibility to improve the performance of a re-identification system is to augment the set of images or videos available for training the neural models.
This article reviews the most relevant recent approaches to improve the performance of person re-identification models through data augmentation, using generative adversarial networks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Interest in automatic people re-identification systems has grown
significantly in recent years, mainly for developing surveillance and smart
shop software. Due to the variability in person posture, different lighting
conditions, and occluded scenarios, together with the poor quality of the
images obtained by different cameras, re-identification remains an unsolved
problem. In machine learning-based computer vision applications with reduced
data sets, one possibility to improve the performance of a re-identification
system is to augment the set of images or videos available for training the
neural models. Currently, one of the most robust ways to generate synthetic
data for augmentation, whether video, images, or text, is the generative
adversarial network (GAN). This article reviews the most relevant recent
approaches to improve the performance of person re-identification models
through data augmentation, using generative adversarial networks. We focus on
three categories of data augmentation approaches: style transfer, pose
transfer, and random generation.
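As a concrete illustration of the simplest of these categories, random generation, the following is a minimal sketch of how a small GAN can be trained on person crops and then sampled to enlarge the training pool. The DCGAN-style architecture, the 32x32 crop size, and the helper name gan_augment are illustrative assumptions for this sketch, not a method taken from the reviewed papers.

```python
# Minimal GAN-based "random generation" augmentation sketch (illustrative only).
# Assumes a DataLoader of (image, label) pairs with 3x32x32 person crops scaled to [-1, 1].
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, channels, 4, 2, 1), nn.Tanh(),  # -> 3x32x32
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

class Discriminator(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),
            nn.Conv2d(256, 1, 4, 1, 0),  # real/fake logit
        )

    def forward(self, x):
        return self.net(x).view(-1)

def gan_augment(real_loader, steps=2000, z_dim=100, n_synth=512, device="cpu"):
    """Train the GAN on real person crops, then sample synthetic crops for augmentation."""
    G, D = Generator(z_dim).to(device), Discriminator().to(device)
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
    bce = nn.BCEWithLogitsLoss()
    it = iter(real_loader)
    for _ in range(steps):
        try:
            real, _ = next(it)
        except StopIteration:
            it = iter(real_loader)
            real, _ = next(it)
        real = real.to(device)
        z = torch.randn(real.size(0), z_dim, device=device)
        fake = G(z)
        ones = torch.ones(real.size(0), device=device)
        zeros = torch.zeros(real.size(0), device=device)
        # Discriminator step: push real crops towards 1 and generated crops towards 0.
        d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator step: make the discriminator label generated crops as real.
        g_loss = bce(D(fake), ones)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    with torch.no_grad():
        synth = G(torch.randn(n_synth, z_dim, device=device)).cpu()
    return synth  # mixed into the re-ID training set by the caller
```

How the synthetic images are labelled is a design choice in the reviewed literature; one common option is to treat them as outliers with a uniform or smoothed label distribution rather than assigning them to an existing identity.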
Related papers
- A Review of Image Retrieval Techniques: Data Augmentation and Adversarial Learning Approaches [0.0]
This review focuses on the roles of data augmentation and adversarial learning techniques in enhancing retrieval performance.
Data augmentation enhances the model's generalization ability and robustness by generating more diverse training samples, simulating real-world variations, and reducing overfitting.
Adversarial attacks and defenses introduce perturbations during training to improve the model's robustness against potential attacks.
arXiv Detail & Related papers (2024-09-02T12:55:17Z) - A Simple Background Augmentation Method for Object Detection with Diffusion Model [53.32935683257045]
In computer vision, it is well-known that a lack of data diversity will impair model performance.
We propose a simple yet effective data augmentation approach by leveraging advancements in generative models.
Background augmentation, in particular, significantly improves the models' robustness and generalization capabilities.
arXiv Detail & Related papers (2024-08-01T07:40:00Z) - Synthesizing Efficient Data with Diffusion Models for Person Re-Identification Pre-Training [51.87027943520492]
We present a novel paradigm, Diffusion-ReID, to efficiently augment and generate diverse images based on known identities.
Benefiting from our proposed paradigm, we first create a new large-scale person Re-ID dataset Diff-Person, which consists of over 777K images from 5,183 identities.
arXiv Detail & Related papers (2024-06-10T06:26:03Z) - Training on Thin Air: Improve Image Classification with Generated Data [28.96941414724037]
Diffusion Inversion is a simple yet effective method to generate diverse, high-quality training data for image classification.
Our approach captures the original data distribution and ensures data coverage by inverting images to the latent space of Stable Diffusion.
We identify three key components that allow our generated images to successfully supplant the original dataset.
arXiv Detail & Related papers (2023-05-24T16:33:02Z) - Effective Data Augmentation With Diffusion Models [65.09758931804478]
We address the lack of diversity in data augmentation with image-to-image transformations parameterized by pre-trained text-to-image diffusion models.
Our method edits images to change their semantics using an off-the-shelf diffusion model, and generalizes to novel visual concepts from a few labelled examples.
We evaluate our approach on few-shot image classification tasks, and on a real-world weed recognition task, and observe an improvement in accuracy in tested domains.
arXiv Detail & Related papers (2023-02-07T20:42:28Z) - Robust Semi-supervised Federated Learning for Images Automatic
Recognition in Internet of Drones [57.468730437381076]
We present a Semi-supervised Federated Learning (SSFL) framework for privacy-preserving UAV image recognition.
There are significant differences in the number, features, and distribution of local data collected by UAVs using different camera modules.
We propose an aggregation rule based on the frequency of the client's participation in training, namely the FedFreq aggregation rule.
arXiv Detail & Related papers (2022-01-03T16:49:33Z) - Learning Representational Invariances for Data-Efficient Action
Recognition [52.23716087656834]
We show that our data augmentation strategy leads to promising performance on the Kinetics-100, UCF-101, and HMDB-51 datasets.
We also validate our data augmentation strategy in the fully supervised setting and demonstrate improved performance.
arXiv Detail & Related papers (2021-03-30T17:59:49Z) - A 3D GAN for Improved Large-pose Facial Recognition [3.791440300377753]
Facial recognition using deep convolutional neural networks relies on the availability of large datasets of face images.
Recent studies have shown that current methods of disentangling pose from identity are inadequate.
In this work we incorporate a 3D morphable model into the generator of a GAN in order to learn a nonlinear texture model from in-the-wild images.
This allows generation of new, synthetic identities, and manipulation of pose, illumination and expression without compromising the identity.
arXiv Detail & Related papers (2020-12-18T22:41:15Z) - Encoding Robustness to Image Style via Adversarial Feature Perturbations [72.81911076841408]
We adapt adversarial training by directly perturbing feature statistics, rather than image pixels, to produce robust models.
Our proposed method, Adversarial Batch Normalization (AdvBN), is a single network layer that generates worst-case feature perturbations during training (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-09-18T17:52:34Z) - Camera On-boarding for Person Re-identification using Hypothesis
Transfer Learning [41.115022307850424]
We develop an efficient model adaptation approach using hypothesis transfer learning for person re-identification.
Our approach minimizes the effect of negative transfer by finding an optimal weighted combination of multiple source models for transferring the knowledge.
arXiv Detail & Related papers (2020-07-22T00:43:29Z) - CIAGAN: Conditional Identity Anonymization Generative Adversarial
Networks [12.20367903755194]
CIAGAN is a model for image and video anonymization based on conditional generative adversarial networks.
Our model is able to remove the identifying characteristics of faces and bodies while producing high-quality images and videos.
arXiv Detail & Related papers (2020-05-19T15:56:08Z)
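As a complement to the GAN sketch above, the Adversarial Batch Normalization entry describes perturbing feature statistics instead of pixels. Below is a minimal sketch of that general idea; the function name, the projected-gradient inner loop, the step sizes, and the toy pooling head are illustrative assumptions, not the published AdvBN layer.

```python
# Sketch of adversarially perturbing channel-wise feature statistics (AdvBN-like idea).
# Everything here is illustrative: the exact layer placement and schedules differ per paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

def perturb_feature_stats(feats, head, labels, epsilon=0.2, steps=3, step_size=0.1):
    """Find multiplicative shifts of per-channel mean/std that maximize the task loss."""
    feats = feats.detach()                                   # treat backbone output as fixed
    mu = feats.mean(dim=(2, 3), keepdim=True)                # (N, C, 1, 1)
    sigma = feats.std(dim=(2, 3), keepdim=True) + 1e-5
    normalized = (feats - mu) / sigma
    d_mu = torch.zeros_like(mu, requires_grad=True)
    d_sigma = torch.zeros_like(sigma, requires_grad=True)
    for _ in range(steps):
        perturbed = normalized * (sigma * (1 + d_sigma)) + mu * (1 + d_mu)
        loss = F.cross_entropy(head(perturbed), labels)
        grad_mu, grad_sigma = torch.autograd.grad(loss, [d_mu, d_sigma])
        with torch.no_grad():                                # projected gradient ascent
            d_mu += step_size * grad_mu.sign()
            d_sigma += step_size * grad_sigma.sign()
            d_mu.clamp_(-epsilon, epsilon)
            d_sigma.clamp_(-epsilon, epsilon)
    return (normalized * (sigma * (1 + d_sigma)) + mu * (1 + d_mu)).detach()

# Toy usage: features from a backbone stage and a pooling classifier head (751 identities,
# as in the Market-1501 training split, used here only as an example output size).
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(256, 751))
feats = torch.randn(8, 256, 16, 8)
labels = torch.randint(0, 751, (8,))
hard_feats = perturb_feature_stats(feats, head, labels)
# Training then uses both the clean and the statistics-perturbed features.
```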