PartMix: Regularization Strategy to Learn Part Discovery for
Visible-Infrared Person Re-identification
- URL: http://arxiv.org/abs/2304.01537v1
- Date: Tue, 4 Apr 2023 05:21:23 GMT
- Title: PartMix: Regularization Strategy to Learn Part Discovery for
Visible-Infrared Person Re-identification
- Authors: Minsu Kim, Seungryong Kim, JungIn Park, Seongheon Park, Kwanghoon Sohn
- Abstract summary: We present a novel data augmentation technique, dubbed PartMix, for part-based Visible-Infrared person Re-IDentification (VI-ReID) models.
We synthesize the augmented samples by mixing the part descriptors across the modalities to improve the performance of part-based VI-ReID models.
- Score: 76.40417061480564
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern data augmentation using mixture-based techniques can regularize
models against overfitting to the training data in various computer vision
applications, but a data augmentation technique tailored to part-based
Visible-Infrared person Re-IDentification (VI-ReID) models remains
unexplored. In this paper, we present a novel data augmentation technique,
dubbed PartMix, that synthesizes augmented samples by mixing part
descriptors across the modalities to improve the performance of part-based
VI-ReID models. In particular, we synthesize positive and negative samples
within the same and across different identities and regularize the backbone
model through contrastive learning. In addition, we present an
entropy-based mining strategy to weaken the adverse impact of unreliable
positive and negative samples. When incorporated into an existing part-based
VI-ReID model, PartMix consistently boosts performance. We conduct
experiments to demonstrate the effectiveness of PartMix over existing
VI-ReID methods and provide ablation studies.
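To make the mixing and contrastive-regularization step more concrete, below is a minimal PyTorch-style sketch of a PartMix-like augmentation step. It is not the authors' implementation: the tensor shapes, the part-swap mixing rule, the pooled-embedding contrastive loss, the way a "different identity" negative is approximated, and the entropy-based reliability weight (`part_mix`, `entropy_weight`, `partmix_contrastive_loss`) are all illustrative assumptions.

```python
# Minimal, illustrative sketch of a PartMix-style augmentation step
# (not the authors' code). Shapes, the part-swap mixing rule, the negative
# construction, and the entropy weighting are assumptions for illustration.
import torch
import torch.nn.functional as F


def part_mix(desc_a: torch.Tensor, desc_b: torch.Tensor, ratio: float = 0.5) -> torch.Tensor:
    """Mix two sets of part descriptors [P, D] by swapping a random subset of parts."""
    num_parts = desc_a.size(0)
    num_swap = max(1, int(ratio * num_parts))
    idx = torch.randperm(num_parts)[:num_swap]
    mixed = desc_a.clone()
    mixed[idx] = desc_b[idx]  # replace the selected parts with the other modality's parts
    return mixed


def entropy_weight(part_logits: torch.Tensor) -> torch.Tensor:
    """Down-weight unreliable mixed samples using the entropy of a (hypothetical)
    per-part identity classifier's prediction: low entropy -> weight close to 1."""
    prob = part_logits.softmax(dim=-1)
    ent = -(prob * prob.clamp_min(1e-8).log()).sum(dim=-1)
    max_ent = torch.log(torch.tensor(float(part_logits.size(-1))))
    return 1.0 - ent / max_ent


def partmix_contrastive_loss(vis_desc, ir_desc, vis_logits, ir_logits, temperature=0.1):
    """Contrastive regularization with mixed positive/negative samples.

    vis_desc, ir_desc: [P, D] part descriptors of a visible / infrared image.
    Positive: cross-modal mix within the same identity.
    Negative: approximated here by mixing with part-shuffled descriptors,
    standing in for a mix across different identities.
    """
    anchor = F.normalize(vis_desc.mean(dim=0), dim=0)  # pooled anchor embedding

    pos = part_mix(vis_desc, ir_desc)                                       # same identity, cross-modality
    neg = part_mix(vis_desc, ir_desc[torch.randperm(ir_desc.size(0))])      # stand-in "other identity"

    pos_emb = F.normalize(pos.mean(dim=0), dim=0)
    neg_emb = F.normalize(neg.mean(dim=0), dim=0)

    logits = torch.stack([anchor @ pos_emb, anchor @ neg_emb]) / temperature
    target = torch.tensor(0)  # index 0 is the positive

    # Entropy-based mining: scale the loss by the reliability of the mixed sample.
    w = entropy_weight(0.5 * (vis_logits + ir_logits)).mean()
    return w * F.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))


if __name__ == "__main__":
    P, D, C = 6, 256, 395                  # parts, descriptor dim, identities (assumed)
    vis, ir = torch.randn(P, D), torch.randn(P, D)
    vis_logits = torch.randn(P, C)         # per-part identity predictions (hypothetical)
    ir_logits = torch.randn(P, C)
    print(partmix_contrastive_loss(vis, ir, vis_logits, ir_logits))
```

In the full method, positives and negatives are mixed across identities and modalities within a batch and the loss regularizes the part-based backbone jointly with the usual re-identification objectives; the snippet above only illustrates the single-pair case.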
Related papers
- Exploring Stronger Transformer Representation Learning for Occluded Person Re-Identification [2.552131151698595]
We propose SSSC-TransReID, a novel transformer-based person re-identification framework that combines self-supervision and supervision.
We design a self-supervised contrastive learning branch that enhances feature representations for person re-identification without negative samples or additional pre-training.
Our model consistently obtains superior Re-ID performance and outperforms state-of-the-art ReID methods by large margins in mean average precision (mAP) and Rank-1 accuracy.
arXiv Detail & Related papers (2024-10-21T03:17:25Z)
- DetDiffusion: Synergizing Generative and Perceptive Models for Enhanced Data Generation and Perception [78.26734070960886]
Current perceptive models heavily depend on resource-intensive datasets.
We introduce a perception-aware loss (P.A. loss) via segmentation, improving both quality and controllability.
Our method customizes data augmentation by extracting and utilizing a perception-aware attribute (P.A. Attr) during generation.
arXiv Detail & Related papers (2024-03-20T04:58:03Z)
- Retrosynthesis prediction enhanced by in-silico reaction data augmentation [66.5643280109899]
We present RetroWISE, a framework that employs a base model inferred from real paired data to perform in-silico reaction generation and augmentation.
On three benchmark datasets, RetroWISE achieves the best overall performance against state-of-the-art models.
arXiv Detail & Related papers (2024-01-31T07:40:37Z)
- Augment on Manifold: Mixup Regularization with UMAP [5.18337967156149]
This paper proposes a Mixup regularization scheme, referred to as UMAP Mixup, for automated data augmentation in deep learning predictive models.
The proposed approach ensures that the Mixup operations result in synthesized samples that lie on the data manifold of the features and labels.
arXiv Detail & Related papers (2023-12-20T16:02:25Z)
- Inverse Reinforcement Learning for Text Summarization [52.765898203824975]
We introduce inverse reinforcement learning (IRL) as an effective paradigm for training abstractive summarization models.
Experimental results across datasets in different domains demonstrate the superiority of our proposed IRL model for summarization over MLE and RL baselines.
arXiv Detail & Related papers (2022-12-19T23:45:05Z)
- Data-Driven Joint Inversions for PDE Models [24.162935839841317]
We propose an integrated data-driven and model-based iterative reconstruction framework for such joint inversion problems.
Our method couples the supplementary data with the PDE model to make the data-driven modeling process consistent with the model-based reconstruction procedure.
arXiv Detail & Related papers (2022-10-17T16:21:45Z)
- Reconstructing Training Data from Diverse ML Models by Ensemble Inversion [8.414622657659168]
Model Inversion (MI), in which an adversary abuses access to a trained Machine Learning (ML) model, has attracted increasing research attention.
We propose an ensemble inversion technique that estimates the distribution of original training data by training a generator constrained by an ensemble of trained models.
We achieve high-quality results without any dataset and show how utilizing an auxiliary dataset that is similar to the presumed training data improves the results.
arXiv Detail & Related papers (2021-11-05T18:59:01Z)
- Contrastive Model Inversion for Data-Free Knowledge Distillation [60.08025054715192]
We propose Contrastive Model Inversion (CMI), where data diversity is explicitly modeled as an optimizable objective.
Our main observation is that, under the constraint of the same amount of data, higher data diversity usually indicates stronger instance discrimination.
Experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CMI achieves significantly superior performance when the generated data are used for knowledge distillation.
arXiv Detail & Related papers (2021-05-18T15:13:00Z)
- Robust Finite Mixture Regression for Heterogeneous Targets [70.19798470463378]
We propose a finite mixture regression (FMR) model that finds sample clusters and jointly models multiple incomplete mixed-type targets.
We provide non-asymptotic oracle performance bounds for our model under a high-dimensional learning framework.
The results show that our model can achieve state-of-the-art performance.
arXiv Detail & Related papers (2020-10-12T03:27:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.