AttriMeter: An Attribute-guided Metric Interpreter for Person
Re-Identification
- URL: http://arxiv.org/abs/2103.01451v1
- Date: Tue, 2 Mar 2021 03:37:48 GMT
- Title: AttriMeter: An Attribute-guided Metric Interpreter for Person
Re-Identification
- Authors: Xiaodong Chen, Xinchen Liu, Wu Liu, Xiao-Ping Zhang, Yongdong Zhang,
and Tao Mei
- Abstract summary: Person ReID systems only provide a distance or similarity when matching two persons.
We propose an Attribute-guided Metric Interpreter, named AttriMeter, to semantically and quantitatively explain the results of CNN-based ReID models.
- Score: 100.3112429685558
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Person Re-identification (ReID) has achieved significant improvement due to
the adoption of Convolutional Neural Networks (CNNs). However, person ReID
systems only provide a distance or similarity score when matching two persons,
which makes it difficult for users to understand why the two are judged similar
or not. Therefore, we
propose an Attribute-guided Metric Interpreter, named AttriMeter, to
semantically and quantitatively explain the results of CNN-based ReID models.
The AttriMeter has a pluggable structure that can be grafted onto arbitrary
target models, i.e., the ReID models that need to be interpreted. With an
attribute decomposition head, it can learn to generate a group of
attribute-guided attention maps (AAMs) from the target model. By applying AAMs
to features of two persons from the target model, their distance will be
decomposed into a set of attribute-guided components that can measure the
contributions of individual attributes. Moreover, we design a distance
distillation loss to guarantee the consistency between the results from the
target model and the decomposed components from AttriMeter, and an attribute
prior loss to eliminate the biases caused by the unbalanced distribution of
attributes. Finally, extensive experiments and analysis on a variety of ReID
models and datasets show the effectiveness of AttriMeter.
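To make the decomposition concrete, below is a minimal NumPy sketch of the idea as described in the abstract. The names (`pool_with_aam`, `attrimeter_decompose`), the tensor shapes, and the use of squared Euclidean distance are illustrative assumptions, not taken from the paper's implementation; the attribute prior loss is omitted.

```python
import numpy as np

def pool_with_aam(feat_map, aam):
    """Weight a CNN feature map (C, H, W) by one attention map (H, W)
    and pool it into a C-dimensional embedding."""
    w = aam / (aam.sum() + 1e-8)               # normalize attention weights
    return (feat_map * w[None, :, :]).sum(axis=(1, 2))

def attrimeter_decompose(feat_a, feat_b, aams):
    """Decompose the distance between two persons into per-attribute
    components using K attribute-guided attention maps (K, H, W)."""
    comps = []
    for k in range(aams.shape[0]):
        ea = pool_with_aam(feat_a, aams[k])
        eb = pool_with_aam(feat_b, aams[k])
        comps.append(np.sum((ea - eb) ** 2))   # squared L2 per attribute
    return np.array(comps)

def distance_distillation_loss(comps, target_dist):
    """Penalize disagreement between the summed components and the distance
    produced by the target ReID model (a hedged approximation of the
    paper's distillation objective)."""
    return (comps.sum() - target_dist) ** 2

# Toy usage with random tensors standing in for target-model outputs.
rng = np.random.default_rng(0)
C, H, W, K = 256, 16, 8, 6                     # e.g. 6 attributes
feat_a, feat_b = rng.normal(size=(2, C, H, W))
aams = rng.random(size=(K, H, W))              # attribute-guided attention maps
target_dist = np.sum((feat_a.mean(axis=(1, 2)) - feat_b.mean(axis=(1, 2))) ** 2)

comps = attrimeter_decompose(feat_a, feat_b, aams)
print("per-attribute contributions:", comps)
print("distillation loss:", distance_distillation_loss(comps, target_dist))
```

In the paper, the AAMs come from a learned attribute decomposition head grafted onto the target model; the random tensors above merely stand in for its outputs.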
Related papers
- Analyzing Generative Models by Manifold Entropic Metrics [8.477943884416023]
We introduce a novel set of tractable information-theoretic evaluation metrics.
We compare various normalizing flow architectures and $\beta$-VAEs on the EMNIST dataset.
The most interesting finding of our experiments is a ranking of model architectures and training procedures in terms of their inductive bias to converge to aligned and disentangled representations during training.
arXiv Detail & Related papers (2024-10-25T09:35:00Z)
- Entity-Aware Biaffine Attention Model for Improved Constituent Parsing with Reduced Entity Violations [0.0]
We propose an entity-aware biaffine attention model for constituent parsing.
This model incorporates entity information into the biaffine attention mechanism by using additional entity role vectors for potential phrases.
We introduce a new metric, the Entity Violating Rate (EVR), to quantify the extent of entity violations in parsing results.
arXiv Detail & Related papers (2024-09-01T05:59:54Z)
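The summary above names the Entity Violating Rate (EVR) but does not define it, so the sketch below only illustrates one plausible reading, assuming a "violation" means a gold entity span that does not appear as a constituent span in the predicted parse; the function `entity_violating_rate` and the span representation are hypothetical.

```python
from typing import List, Tuple

Span = Tuple[int, int]  # half-open token span (start, end)

def entity_violating_rate(entity_spans: List[Span],
                          constituent_spans: List[Span]) -> float:
    """Fraction of gold entity spans not covered by any single constituent
    in the predicted parse (one plausible reading of EVR)."""
    if not entity_spans:
        return 0.0
    constituents = set(constituent_spans)
    violations = sum(1 for span in entity_spans if span not in constituents)
    return violations / len(entity_spans)

# Toy example: the entity (3, 6) is split across constituents, so EVR = 0.5.
entities = [(0, 2), (3, 6)]
constituents = [(0, 2), (0, 7), (3, 5), (5, 7)]
print(entity_violating_rate(entities, constituents))
```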
- Measuring Feature Dependency of Neural Networks by Collapsing Feature Dimensions in the Data Manifold [18.64569268049846]
We introduce a new technique to measure the feature dependency of neural network models.
The motivation is to better understand a model by querying whether it is using information from human-understandable features.
We test our method on deep neural network models trained on synthetic image data with known ground truth.
arXiv Detail & Related papers (2024-04-18T17:10:18Z)
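As a rough illustration of the dependency-measurement idea summarized above, the sketch below collapses one raw input feature to its mean and measures the change in the model's output. The actual method collapses dimensions in a learned data manifold, so this is a deliberately simplified stand-in with made-up names.

```python
import numpy as np

def feature_dependency(model_fn, X, feature_idx):
    """Crude dependency score: how much the model's outputs change when one
    feature column is collapsed to its mean. The paper collapses dimensions
    in a learned data manifold; collapsing a raw input column is a
    simplified stand-in."""
    baseline = model_fn(X)
    X_collapsed = X.copy()
    X_collapsed[:, feature_idx] = X[:, feature_idx].mean()
    perturbed = model_fn(X_collapsed)
    return float(np.mean(np.abs(baseline - perturbed)))

# Toy model that depends only on feature 0.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
model_fn = lambda data: 2.0 * data[:, 0]

print(feature_dependency(model_fn, X, 0))  # large: the model uses feature 0
print(feature_dependency(model_fn, X, 2))  # ~0: feature 2 is ignored
```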
- SSPNet: Scale and Spatial Priors Guided Generalizable and Interpretable Pedestrian Attribute Recognition [23.55622798950833]
A novel Scale and Spatial Priors Guided Network (SSPNet) is proposed for Pedestrian Attribute Recognition (PAR) models.
SSPNet learns to provide reasonable scale prior information for different attribute groups, allowing the model to focus on different levels of feature maps.
A novel IoU-based attribute localization metric is proposed for Weakly-supervised Pedestrian Attribute Localization (WPAL), based on an improved Grad-CAM attribute response mask.
arXiv Detail & Related papers (2023-12-11T00:41:40Z)
- Attribute Based Interpretable Evaluation Metrics for Generative Models [14.407813583528968]
We propose a new evaluation protocol that measures the divergence of a set of generated images from the training set regarding the distribution of attribute strengths.
Our metrics lay a foundation for explainable evaluations of generative models.
arXiv Detail & Related papers (2023-10-26T09:25:09Z)
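A minimal sketch of the evaluation idea summarized above, assuming per-image attribute strengths have already been produced by some external attribute predictor (not shown); the histogram-based Jensen-Shannon divergence is one simple choice, not necessarily the divergence used in the paper.

```python
import numpy as np

def attribute_strength_divergence(train_strengths, gen_strengths, bins=20):
    """Divergence between the distributions of one attribute's strengths in
    the training set and in the generated set (histogram-based JSD)."""
    lo = min(train_strengths.min(), gen_strengths.min())
    hi = max(train_strengths.max(), gen_strengths.max())
    p, _ = np.histogram(train_strengths, bins=bins, range=(lo, hi))
    q, _ = np.histogram(gen_strengths, bins=bins, range=(lo, hi))
    p = p / p.sum() + 1e-12
    q = q / q.sum() + 1e-12
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy example: generated images over-express one attribute's strength.
rng = np.random.default_rng(2)
train = rng.beta(2, 5, size=5000)   # attribute strengths in [0, 1]
gen = rng.beta(5, 2, size=5000)
print(attribute_strength_divergence(train, gen))
```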
- Revisiting the Evaluation of Image Synthesis with GANs [55.72247435112475]
This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models.
In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set.
arXiv Detail & Related papers (2023-04-04T17:54:32Z)
- Discover, Explanation, Improvement: An Automatic Slice Detection Framework for Natural Language Processing [72.14557106085284]
Slice detection models (SDMs) automatically identify underperforming groups of data points.
This paper proposes a benchmark named "Discover, Explain, Improve (DEIM)" for classification NLP tasks.
Our evaluation shows that Edisa can accurately select error-prone datapoints with informative semantic features.
arXiv Detail & Related papers (2022-11-08T19:00:00Z)
- Label-Free Model Evaluation with Semi-Structured Dataset Representations [78.54590197704088]
Label-free model evaluation, or AutoEval, estimates model accuracy on unlabeled test sets.
In the absence of image labels, we estimate model performance for AutoEval with regression based on dataset representations.
We propose a new semi-structured dataset representation that is manageable for regression learning while containing rich information for AutoEval.
arXiv Detail & Related papers (2021-12-01T18:15:58Z)
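A minimal sketch of the label-free estimation idea summarized above; the confidence-statistics "dataset representation" and the linear regressor are simplifying assumptions, since the paper's semi-structured representation is richer than this.

```python
import numpy as np

def dataset_representation(confidences):
    """Toy stand-in for a dataset representation: summary statistics of the
    model's prediction confidences on one (unlabeled) dataset."""
    return np.array([confidences.mean(), confidences.std(),
                     np.quantile(confidences, 0.25), np.quantile(confidences, 0.75)])

# Meta-training: sample datasets whose accuracy IS known, paired with their
# representations; fit a linear regressor representation -> accuracy.
rng = np.random.default_rng(3)
reps, accs = [], []
for _ in range(50):
    shift = rng.uniform(0.0, 0.4)                    # simulated distribution shift
    conf = np.clip(rng.normal(0.9 - shift, 0.05, size=1000), 0, 1)
    reps.append(dataset_representation(conf))
    accs.append(0.9 - shift + rng.normal(0, 0.01))   # simulated "true" accuracy
X = np.column_stack([np.ones(len(reps)), np.array(reps)])
w, *_ = np.linalg.lstsq(X, np.array(accs), rcond=None)

# Label-free estimation on a new, unlabeled test set.
new_conf = np.clip(rng.normal(0.75, 0.05, size=1000), 0, 1)
x_new = np.concatenate([[1.0], dataset_representation(new_conf)])
print("estimated accuracy:", float(x_new @ w))
```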
- How Faithful is your Synthetic Data? Sample-level Metrics for Evaluating and Auditing Generative Models [95.8037674226622]
We introduce a 3-dimensional evaluation metric that characterizes the fidelity, diversity and generalization performance of any generative model in a domain-agnostic fashion.
Our metric unifies statistical divergence measures with precision-recall analysis, enabling sample- and distribution-level diagnoses of model fidelity and diversity.
arXiv Detail & Related papers (2021-02-17T18:25:30Z)
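The summary above mentions unifying divergence measures with precision-recall analysis; the sketch below shows one common k-NN instantiation of precision/recall for generative models (in the spirit of Kynkäänniemi et al., 2019), which is not necessarily the exact metric proposed in this paper.

```python
import numpy as np

def knn_radius(points, k):
    """Distance from each point to its k-th nearest neighbour within the set."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, k]          # column 0 is the zero distance to itself

def precision_recall(real, fake, k=3):
    """k-NN precision/recall: precision = fraction of fake samples inside the
    real support; recall = fraction of real samples inside the fake support."""
    r_real, r_fake = knn_radius(real, k), knn_radius(fake, k)
    d = np.linalg.norm(fake[:, None, :] - real[None, :, :], axis=-1)  # (fake, real)
    precision = np.mean((d <= r_real[None, :]).any(axis=1))
    recall = np.mean((d <= r_fake[:, None]).any(axis=0))
    return precision, recall

rng = np.random.default_rng(4)
real = rng.normal(size=(500, 16))            # stand-ins for real feature embeddings
fake = rng.normal(loc=0.5, size=(500, 16))   # shifted "generated" embeddings
print(precision_recall(real, fake))
```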
- Robust Finite Mixture Regression for Heterogeneous Targets [70.19798470463378]
We propose an FMR model that finds sample clusters and jointly models multiple incomplete mixed-type targets.
We provide non-asymptotic oracle performance bounds for our model under a high-dimensional learning framework.
The results show that our model can achieve state-of-the-art performance.
arXiv Detail & Related papers (2020-10-12T03:27:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.