Multi-Domain Joint Training for Person Re-Identification
- URL: http://arxiv.org/abs/2201.01983v1
- Date: Thu, 6 Jan 2022 09:20:59 GMT
- Title: Multi-Domain Joint Training for Person Re-Identification
- Authors: Lu Yang, Lingqiao Liu, Yunlong Wang, Peng Wang, and Yanning Zhang
- Abstract summary: Deep learning-based person Re-IDentification (ReID) often requires a large amount of training data to achieve good performance.
It appears that collecting more training data from diverse environments tends to improve the ReID performance.
We propose an approach called Domain-Camera-Sample Dynamic network (DCSD) whose parameters can adapt to various factors.
- Score: 51.73921349603597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based person Re-IDentification (ReID) often requires a large
amount of training data to achieve good performance. Thus it appears that
collecting more training data from diverse environments tends to improve the
ReID performance. This paper re-examines this common belief and makes a somewhat surprising observation: using more samples, i.e., training with samples from multiple datasets, does not necessarily lead to better performance with the popular ReID models. In some cases, training with more samples may even hurt the performance when the evaluation is carried out on one of those datasets. We
postulate that this phenomenon is due to the incapability of the standard
network in adapting to diverse environments. To overcome this issue, we propose
an approach called Domain-Camera-Sample Dynamic network (DCSD) whose parameters can adapt to various factors. Specifically, we consider the internal
domain-related factor that can be identified from the input features, and
external domain-related factors, such as domain information or camera
information. Our discovery is that training with such an adaptive model can
better benefit from more training samples. Experimental results show that our
DCSD can greatly boost the performance (by up to 12.3%) when jointly training on multiple datasets.
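The abstract does not spell out DCSD's architecture, so as a rough, hypothetical illustration of the general pattern (layer parameters conditioned on external domain/camera identifiers plus an internal factor read off the input features), here is a minimal PyTorch sketch. Every name and shape below is an assumption for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DynamicModulation(nn.Module):
    """Hypothetical sketch: modulate features using external factors
    (domain id, camera id) plus an internal factor inferred from the
    input itself. Not the authors' DCSD code, just the conditioning idea."""

    def __init__(self, channels: int, num_domains: int, num_cameras: int):
        super().__init__()
        self.domain_emb = nn.Embedding(num_domains, channels)
        self.camera_emb = nn.Embedding(num_cameras, channels)
        self.sample_fc = nn.Linear(channels, channels)  # internal, sample-level factor
        self.gamma = nn.Linear(channels, channels)      # scale generator
        self.beta = nn.Linear(channels, channels)       # shift generator

    def forward(self, x, domain_id, camera_id):
        # x: (B, C, H, W); domain_id, camera_id: (B,) long tensors
        pooled = x.mean(dim=(2, 3))                     # per-sample statistics, (B, C)
        cond = (self.domain_emb(domain_id)
                + self.camera_emb(camera_id)
                + self.sample_fc(pooled))               # fuse the three factors
        gamma = self.gamma(cond)[:, :, None, None]
        beta = self.beta(cond)[:, :, None, None]
        return x * (1 + gamma) + beta                   # FiLM-style scale and shift

# Usage on dummy data:
layer = DynamicModulation(channels=64, num_domains=4, num_cameras=10)
feats = torch.randn(8, 64, 16, 8)
out = layer(feats, torch.randint(0, 4, (8,)), torch.randint(0, 10, (8,)))
print(out.shape)  # torch.Size([8, 64, 16, 8])
```

The FiLM-style scale-and-shift is just one common way to realize parameters conditioned jointly on metadata and on the input; the paper's actual mechanism may differ.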
Related papers
- Self-Evolved Diverse Data Sampling for Efficient Instruction Tuning [47.02160072880698]
We introduce a self-evolving mechanism that allows the model itself to actively sample subsets that are equally or even more effective.
The key to our data sampling technique lies in the enhancement of diversity in the chosen subsets.
Extensive experiments across three datasets and benchmarks demonstrate the effectiveness of DiverseEvol.
arXiv Detail & Related papers (2023-11-14T14:10:40Z)
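The snippet above does not describe DiverseEvol's sampling rule, so as one generic, hedged way to "enhance diversity of the chosen subsets," here is a k-center greedy selection sketch over an assumed embedding space; it illustrates the idea, not the paper's algorithm.

```python
import numpy as np

def k_center_greedy(embeddings: np.ndarray, budget: int) -> list[int]:
    """Greedy farthest-point selection: repeatedly pick the sample farthest
    from the already-chosen set, maximizing coverage/diversity.
    A generic heuristic, not DiverseEvol's actual procedure."""
    chosen = [0]  # seed with an arbitrary first point
    # Distance of every point to its nearest chosen point.
    dists = np.linalg.norm(embeddings - embeddings[0], axis=1)
    for _ in range(budget - 1):
        nxt = int(np.argmax(dists))  # farthest from current subset
        chosen.append(nxt)
        new_d = np.linalg.norm(embeddings - embeddings[nxt], axis=1)
        dists = np.minimum(dists, new_d)
    return chosen

# Usage: pick 10 diverse examples out of 1000 (random embeddings here).
emb = np.random.randn(1000, 128)
print(k_center_greedy(emb, budget=10))
```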
- On the Efficacy of Multi-scale Data Samplers for Vision Applications [32.13488876863029]
We show that multi-scale samplers behave as implicit data regularizers and accelerate training speed.
We extend a multi-scale variable batch sampler with a simple curriculum that progressively grows resolutions throughout training.
arXiv Detail & Related papers (2023-09-08T04:29:50Z)
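As a hedged illustration of a multi-scale batch sampler whose curriculum "progressively grows resolutions throughout training," here is a small self-contained sketch; the resolution pool and schedule are assumptions, not the paper's sampler.

```python
import random

def curriculum_resolutions(epoch: int, max_epochs: int,
                           scales=(160, 192, 224, 256, 288)):
    """Return the resolutions allowed at this epoch: start with the smallest
    scales and progressively unlock larger ones (a simple curriculum)."""
    frac = (epoch + 1) / max_epochs
    k = max(1, round(frac * len(scales)))
    return scales[:k]

def multi_scale_batches(num_samples: int, batch_size: int,
                        epoch: int, max_epochs: int):
    """Yield (indices, resolution) pairs; each batch gets one random
    resolution from the currently allowed pool."""
    indices = list(range(num_samples))
    random.shuffle(indices)
    pool = curriculum_resolutions(epoch, max_epochs)
    for i in range(0, num_samples, batch_size):
        yield indices[i:i + batch_size], random.choice(pool)

# Usage: early epochs only see small resolutions, later epochs see them all.
for batch, res in multi_scale_batches(12, 4, epoch=0, max_epochs=10):
    print(res, batch)
```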
- Explaining Cross-Domain Recognition with Interpretable Deep Classifier [100.63114424262234]
The Interpretable Deep Classifier (IDC) learns the nearest source samples of a target sample as evidence upon which the classifier makes its decision.
Our IDC leads to a more explainable model with almost no accuracy degradation and effectively calibrates classification for optimum reject options.
arXiv Detail & Related papers (2022-11-15T15:58:56Z)
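One bare-bones way to surface "nearest source samples as evidence" for a target prediction is a k-nearest-neighbour lookup in feature space, sketched below; this illustrates the evidence-retrieval idea only, not IDC's training objective (the features and labels here are toy assumptions).

```python
import numpy as np

def nearest_source_evidence(target_feat: np.ndarray,
                            source_feats: np.ndarray,
                            source_labels: np.ndarray,
                            k: int = 5):
    """Return the k nearest source samples (indices + labels) for one
    target feature vector: the 'evidence' a decision could point to."""
    d = np.linalg.norm(source_feats - target_feat, axis=1)
    idx = np.argsort(d)[:k]
    return idx, source_labels[idx]

# Usage with toy features: 200 source samples, 64-dim embeddings.
src = np.random.randn(200, 64)
labels = np.random.randint(0, 10, size=200)
evid_idx, evid_labels = nearest_source_evidence(np.random.randn(64), src, labels)
print(evid_idx, evid_labels)
```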
- CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps for training neural networks can improve regularization of models and thus increase performance.
In particular, we show that our generic domain-independent approach yields state-of-the-art results in vision, natural language processing and on time series tasks.
arXiv Detail & Related papers (2022-05-30T13:34:46Z)
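The entry above does not say how attribution maps enter training; one generic reading, sketched here in PyTorch, regularizes the input gradient (a simple attribution/saliency proxy) alongside the task loss. The penalty form and weight are assumptions, not the CHALLENGER loss.

```python
import torch
import torch.nn as nn

def attribution_regularized_loss(model, x, y, lam=0.01):
    """Cross-entropy plus a penalty on the input-gradient (saliency) norm,
    one simple way attribution signals can act as a regularizer."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = nn.functional.cross_entropy(logits, y)
    # Attribution map = gradient of the loss w.r.t. the input pixels.
    grads = torch.autograd.grad(ce, x, create_graph=True)[0]
    penalty = grads.pow(2).mean()
    return ce + lam * penalty

# Usage with a toy classifier on random data.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
loss = attribution_regularized_loss(model, x, y)
loss.backward()
print(float(loss))
```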
- Self-Supervised Pre-Training for Transformer-Based Person Re-Identification [54.55281692768765]
Transformer-based supervised pre-training achieves great performance in person re-identification (ReID).
Due to the domain gap between ImageNet and ReID datasets, it usually needs a larger pre-training dataset to boost the performance.
This work aims to mitigate the gap between the pre-training and ReID datasets from the perspective of data and model structure.
arXiv Detail & Related papers (2021-11-23T18:59:08Z)
- Unsupervised Domain Adaptive Learning via Synthetic Data for Person Re-identification [101.1886788396803]
Person re-identification (re-ID) has gained more and more attention due to its widespread applications in video surveillance.
Unfortunately, the mainstream deep learning methods still need a large quantity of labeled data to train models.
In this paper, we develop a data collector to automatically generate synthetic re-ID samples in a computer game, and construct a data labeler to simultaneously annotate them.
arXiv Detail & Related papers (2021-09-12T15:51:41Z)
- A Systematic Evaluation of Domain Adaptation in Facial Expression Recognition [0.0]
This paper provides a systematic evaluation of domain adaptation in facial expression recognition.
We use state-of-the-art transfer learning techniques and six commonly-used facial expression datasets.
We find the sobering result that transfer-learning accuracy is not high and varies idiosyncratically with the target dataset.
arXiv Detail & Related papers (2021-06-29T14:41:19Z)
- Reinforced Data Sampling for Model Diversification [15.547681142342846]
This paper proposes a new Reinforced Data Sampling (RDS) method to learn how to sample data adequately.
We formulate the optimisation problem of model diversification $\delta$-div in data sampling to maximise learning potential and achieve optimum allocation by injecting model diversity.
Our results suggest that trainable sampling for model diversification is useful for competition organisers, researchers, and even beginners to pursue the full potential of various machine learning tasks.
arXiv Detail & Related papers (2020-06-12T11:46:13Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
To reduce the cost of training on the enlarged dataset, we propose to apply a dataset distillation strategy to compress the created dataset into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
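As a heavily simplified, hypothetical sketch of dataset distillation into "several informative class-wise images," the snippet below learns a few synthetic images per class by matching class-mean features under a frozen random encoder (a distribution-matching variant, not the paper's omni-supervised pipeline; all sizes are toy assumptions).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
num_classes, per_class, feat_dim = 3, 2, 32

# Frozen random encoder standing in for a feature extractor (an assumption).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 16 * 16, feat_dim))
for p in encoder.parameters():
    p.requires_grad_(False)

real_x = torch.randn(300, 3, 16, 16)                 # toy "real" dataset
real_y = torch.randint(0, num_classes, (300,))
syn = torch.randn(num_classes, per_class, 3, 16, 16, requires_grad=True)
opt = torch.optim.Adam([syn], lr=0.1)

for step in range(200):
    loss = torch.zeros(())
    for c in range(num_classes):
        real_mean = encoder(real_x[real_y == c]).mean(dim=0)
        syn_mean = encoder(syn[c]).mean(dim=0)
        loss = loss + (real_mean - syn_mean).pow(2).sum()  # match class means
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final matching loss: {loss.item():.4f}")
```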
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.