Combining human parsing with analytical feature extraction and ranking
schemes for high-generalization person reidentification
- URL: http://arxiv.org/abs/2207.14243v1
- Date: Thu, 28 Jul 2022 17:22:48 GMT
- Authors: Nikita Gabdullin
- Abstract summary: Person reidentification (re-ID) has been receiving increasing attention in recent years due to its importance for both science and society.
Machine learning, and particularly Deep Learning (DL), has become the main re-ID tool, allowing researchers to achieve unprecedented accuracy levels on benchmark datasets.
We present a model without trainable parameters which shows great potential for high generalization.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Person reidentification (re-ID) has been receiving increasing attention in
recent years due to its importance for both science and society. Machine
learning, and particularly Deep Learning (DL), has become the main re-ID tool,
allowing researchers to achieve unprecedented accuracy levels on benchmark
datasets. However, DL models are known to generalize poorly: models trained to
achieve high accuracy on one dataset perform poorly on others and require
re-training. To address this issue, we present
a model without trainable parameters which shows great potential for high
generalization. It combines a fully analytical feature extraction and
similarity ranking scheme with DL-based human parsing used to obtain the
initial subregion classification. We show that such a combination largely
eliminates the drawbacks of existing analytical methods. We use
interpretable color and texture features which have human-readable similarity
measures associated with them. To verify the proposed method, we conduct
experiments on the Market1501 and CUHK03 datasets, achieving rank-1 accuracy
comparable with that of DL models. Most importantly, we show that our method
achieves 63.9% and 93.5% rank-1 cross-domain accuracy when applied to transfer
learning tasks, significantly higher than the previously reported 30-50%
transfer accuracy. We discuss potential ways of adding new features
to further improve the model. We also show the advantage of interpretable
features for constructing human-generated queries from a verbal description to
conduct searches without a query image.
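As a rough illustration of the kind of pipeline the abstract describes, the sketch below builds an interpretable color histogram per parsed body subregion and ranks a gallery by a human-readable distance. The hue-histogram feature, the Bhattacharyya distance, and all function names are assumptions for illustration, not the paper's exact choices.

```python
import numpy as np

# Hypothetical sketch of an analytical re-ID step: a human-parsing mask labels
# each pixel with a body subregion; per-region color histograms serve as
# interpretable features, and gallery images are ranked by histogram distance.

def region_color_histogram(image_hsv, parsing_mask, region_label, bins=8):
    """Normalized hue histogram over the pixels of one parsed subregion."""
    pixels = image_hsv[parsing_mask == region_label]  # (N, 3) HSV pixels
    hist, _ = np.histogram(pixels[:, 0], bins=bins, range=(0, 180))
    return hist / max(hist.sum(), 1)

def bhattacharyya_distance(h1, h2):
    """Human-readable similarity: 0 for identical color distributions."""
    bc = np.sum(np.sqrt(h1 * h2))
    return -np.log(max(bc, 1e-12))

def rank_gallery(query_feats, gallery_feats):
    """Order gallery entries by summed per-region distance to the query."""
    scores = [
        sum(bhattacharyya_distance(query_feats[r], g[r]) for r in query_feats)
        for g in gallery_feats
    ]
    return np.argsort(scores)  # best match first
```

Because each feature is a plain color distribution, a verbal description ("red upper body, dark trousers") could in principle be converted directly into query histograms, which is one way a query-free search like the one the abstract mentions could work.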
Related papers
- What Do Learning Dynamics Reveal About Generalization in LLM Reasoning? [83.83230167222852]
We find that a model's generalization behavior can be effectively characterized by a training metric we call pre-memorization train accuracy.
By connecting a model's learning behavior to its generalization, pre-memorization train accuracy can guide targeted improvements to training strategies.
arXiv Detail & Related papers (2024-11-12T09:52:40Z)
- Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models [115.501751261878]
Fine-tuning language models (LMs) on human-generated data remains a prevalent practice.
We investigate whether we can go beyond human data on tasks where we have access to scalar feedback.
We find that ReST$^{EM}$ scales favorably with model size and significantly surpasses fine-tuning only on human data.
arXiv Detail & Related papers (2023-12-11T18:17:43Z)
- Quantifying Human Bias and Knowledge to guide ML models during Training [0.0]
We introduce an experimental approach to dealing with skewed datasets by including humans in the training process.
We ask humans to rank the importance of features of the dataset, and through rank aggregation, determine the initial weight bias for the model.
We show that collective human bias can allow ML models to learn insights about the true population instead of the biased sample.
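The rank-aggregation step described in this snippet could look roughly like the following sketch. The Borda count and the feature names are illustrative assumptions, since the snippet does not specify the aggregation scheme.

```python
from collections import defaultdict

# Illustrative sketch: combine several human rankings of feature importance
# with a Borda count, then normalize the totals into initial feature weights.

def borda_weights(rankings):
    """Each ranking lists feature names from most to least important."""
    scores = defaultdict(float)
    for ranking in rankings:
        n = len(ranking)
        for position, feature in enumerate(ranking):
            scores[feature] += n - 1 - position  # top rank earns most points
    total = sum(scores.values())
    return {feature: s / total for feature, s in scores.items()}

# Three annotators rank three hypothetical features by importance.
weights = borda_weights([
    ["age", "income", "zip_code"],
    ["income", "age", "zip_code"],
    ["age", "zip_code", "income"],
])
```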
arXiv Detail & Related papers (2022-11-19T20:49:07Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- X-model: Improving Data Efficiency in Deep Learning with A Minimax Model [78.55482897452417]
We aim at improving data efficiency for both classification and regression setups in deep learning.
To combine the best of both worlds, we propose a novel X-model.
X-model plays a minimax game between the feature extractor and task-specific heads.
arXiv Detail & Related papers (2021-10-09T13:56:48Z)
- Categorical EHR Imputation with Generative Adversarial Nets [11.171712535005357]
We propose a simple and yet effective approach that is based on previous work on GANs for data imputation.
We show that our imputation approach largely improves the prediction accuracy, compared to more traditional data imputation approaches.
arXiv Detail & Related papers (2021-08-03T18:50:26Z)
- Contrastive Model Inversion for Data-Free Knowledge Distillation [60.08025054715192]
We propose Contrastive Model Inversion, where the data diversity is explicitly modeled as an optimizable objective.
Our main observation is that, under the constraint of the same amount of data, higher data diversity usually indicates stronger instance discrimination.
Experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CMI achieves significantly superior performance when the generated data are used for knowledge distillation.
arXiv Detail & Related papers (2021-05-18T15:13:00Z)
- Diverse Knowledge Distillation for End-to-End Person Search [81.4926655119318]
Person search aims to localize and identify a specific person from a gallery of images.
Recent methods can be categorized into two groups, i.e., two-step and end-to-end approaches.
We propose a simple yet strong end-to-end network with diverse knowledge distillation to break the bottleneck.
arXiv Detail & Related papers (2020-12-21T09:04:27Z)
- Monotonic Cardinality Estimation of Similarity Selection: A Deep Learning Approach [22.958342743597044]
We investigate the possibilities of utilizing deep learning for cardinality estimation of similarity selection.
We propose a novel and generic method that can be applied to any data type and distance function.
arXiv Detail & Related papers (2020-02-15T20:22:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.