Generalizable Metric Network for Cross-domain Person Re-identification
- URL: http://arxiv.org/abs/2306.11991v2
- Date: Mon, 29 Apr 2024 06:45:43 GMT
- Title: Generalizable Metric Network for Cross-domain Person Re-identification
- Authors: Lei Qi, Ziang Liu, Yinghuan Shi, Xin Geng
- Abstract summary: Cross-domain (i.e., domain generalization) scene presents a challenge in Re-ID tasks.
Most existing methods aim to learn domain-invariant or robust features for all domains.
We propose a Generalizable Metric Network (GMN) to explore sample similarity in the sample-pair space.
- Score: 55.71632958027289
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Person Re-identification (Re-ID) is a crucial technique for public security and has made significant progress in supervised settings. However, the cross-domain (i.e., domain generalization) scenario presents a challenge in Re-ID tasks due to unseen test domains and domain-shift between the training and test sets. To tackle this challenge, most existing methods aim to learn domain-invariant or robust features for all domains. In this paper, we observe that the data-distribution gap between the training and test sets is smaller in the sample-pair space than in the sample-instance space. Based on this observation, we propose a Generalizable Metric Network (GMN) to further explore sample similarity in the sample-pair space. Specifically, we add a Metric Network (M-Net) after the main network and train it on positive and negative sample-pair features, which is then employed during the test stage. Additionally, we introduce the Dropout-based Perturbation (DP) module to enhance the generalization capability of the metric network by enriching the sample-pair diversity. Moreover, we develop a Pair-Identity Center (PIC) loss to enhance the model's discrimination by ensuring that sample-pair features with the same pair-identity are consistent. We validate the effectiveness of our proposed method through extensive experiments on multiple benchmark datasets and confirm the value of each module in our GMN.
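The abstract names three concrete components: a Metric Network (M-Net) on sample-pair features, a Dropout-based Perturbation (DP) module, and a Pair-Identity Center (PIC) loss. Below is a minimal PyTorch sketch of how these could compose; the feature dimension, the absolute-difference pair encoding, and the head sizes are our assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class MetricNetwork(nn.Module):
    """M-Net sketch: classifies a sample-pair feature as a positive
    (same identity) or negative (different identity) pair."""
    def __init__(self, feat_dim=2048, p_drop=0.5):
        super().__init__()
        # Dropout-based Perturbation (DP): perturbs pair features to
        # enrich sample-pair diversity during training.
        self.dp = nn.Dropout(p_drop)
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(inplace=True),
            nn.Linear(512, 2),
        )

    def forward(self, feat_a, feat_b):
        pair = self.dp((feat_a - feat_b).abs())  # assumed pair encoding
        return self.head(pair)

def pair_identity_center_loss(pair_feats, pair_ids):
    """PIC loss sketch: pull sample-pair features that share a
    pair-identity toward their mean so they stay consistent."""
    loss, groups = pair_feats.new_zeros(()), 0
    for pid in pair_ids.unique():
        group = pair_feats[pair_ids == pid]
        if len(group) > 1:
            loss = loss + ((group - group.mean(0, keepdim=True)) ** 2).sum(1).mean()
            groups += 1
    return loss / max(groups, 1)
```

At test time the M-Net score, rather than a fixed distance on instance features, ranks gallery candidates, which is the point of moving retrieval into the sample-pair space.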
Related papers
- Diverse Deep Feature Ensemble Learning for Omni-Domain Generalized Person Re-identification [30.208890289394994]
Person ReID methods experience a significant drop in performance when trained and tested across different datasets.
Our research reveals that domain generalization methods significantly underperform single-domain supervised methods on single dataset benchmarks.
We propose a way to achieve ODG-ReID by creating deep feature diversity with self-ensembles.
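The blurb does not spell out the self-ensemble construction; one generic way to get feature diversity from a single model is to average embeddings over several stochastic forward passes, sketched here purely as an illustration (not necessarily the paper's method):

```python
import torch

@torch.no_grad()
def self_ensemble_embedding(model, images, passes=4):
    """Average embeddings over several stochastic forward passes.
    model.train() keeps dropout active so the passes differ; note this
    also switches BatchNorm to batch statistics."""
    model.train()
    feats = torch.stack([model(images) for _ in range(passes)])
    model.eval()
    return feats.mean(dim=0)
```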
arXiv Detail & Related papers (2024-10-11T02:27:11Z)
- Deep Domain Isolation and Sample Clustered Federated Learning for Semantic Segmentation [2.515027627030043]
In this paper, we explore for the first time the effect of covariate shifts between participants' data in 2D segmentation tasks.
We develop Deep Domain Isolation (DDI) to isolate image domains directly in the gradient space of the model.
We leverage this clustering algorithm through a Sample Clustered Federated Learning (SCFL) framework.
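Taken together, the two ideas suggest a simple pipeline: represent each participant by a flattened gradient vector, cluster those vectors, and train one model per cluster. The sketch below is our reading of that pipeline; k-means and the Euclidean metric are assumptions, not the paper's stated choices.

```python
import torch
from sklearn.cluster import KMeans

def client_gradient_vector(model, loss):
    """Flatten one participant's gradients into a single vector."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.flatten() for g in grads]).detach().cpu().numpy()

def isolate_domains(grad_vectors, n_domains):
    """Cluster participants in gradient space; each cluster then runs
    its own federated averaging round (the SCFL step)."""
    return KMeans(n_clusters=n_domains, n_init=10).fit_predict(grad_vectors)
```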
arXiv Detail & Related papers (2024-10-04T12:43:07Z)
- You Only Train Once: Learning a General Anomaly Enhancement Network with Random Masks for Hyperspectral Anomaly Detection [31.984085248224574]
We introduce a new approach to address the challenge of generalization in hyperspectral anomaly detection (AD).
Our method eliminates the need for adjusting parameters or retraining on new test scenes as required by most existing methods.
Our method achieves competitive performance when the training and test set are captured by different sensor devices.
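A hedged sketch of the mask-and-reconstruct training this summary implies: randomly mask pixels of a hyperspectral cube, train the network to restore them, and later score anomalies by reconstruction error. Shapes and the masking scheme are our assumptions.

```python
import torch

def masked_training_step(net, cube, optimizer, mask_ratio=0.3):
    """One training step: cube is an H x W x bands tensor; masked pixels
    must be reconstructed, so the network learns normal background
    statistics and anomalies later reconstruct poorly."""
    mask = (torch.rand(cube.shape[0], cube.shape[1]) < mask_ratio).unsqueeze(-1)
    corrupted = cube.masked_fill(mask, 0.0)
    recon = net(corrupted)
    loss = ((recon - cube) ** 2 * mask).mean()  # error on masked pixels only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```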
arXiv Detail & Related papers (2023-03-31T12:23:56Z)
- Learning to Generalize across Domains on Single Test Samples [126.9447368941314]
We learn to generalize across domains on single test samples.
We formulate the adaptation to the single test sample as a variational Bayesian inference problem.
Our model achieves at least comparable -- and often better -- performance than state-of-the-art methods on multiple benchmarks for domain generalization.
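Reproducing the variational Bayesian formulation would need the paper's full model, so the sketch below shows only the simpler generic idea it builds on: adapt a copy of the model to the single test sample before predicting. Entropy minimization here is a stand-in objective, explicitly not the paper's inference procedure.

```python
import copy
import torch
import torch.nn.functional as F

def adapt_then_predict(model, x, steps=1, lr=1e-4):
    """Single-sample test-time adaptation: a few gradient steps on a
    throwaway copy of the model, then predict with the adapted copy."""
    m = copy.deepcopy(model)
    opt = torch.optim.SGD(m.parameters(), lr=lr)
    for _ in range(steps):
        probs = F.softmax(m(x), dim=1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
        opt.zero_grad()
        entropy.backward()
        opt.step()
    with torch.no_grad():
        return m(x)
```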
arXiv Detail & Related papers (2022-02-16T13:21:04Z)
- META: Mimicking Embedding via oThers' Aggregation for Generalizable Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to test across unseen domains without access to the target domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z)
- Multi-Source Domain Adaptation for Object Detection [52.87890831055648]
We propose a unified Faster R-CNN based framework, termed Divide-and-Merge Spindle Network (DMSN).
DMSN can simultaneously enhance domain invariance and preserve discriminative power.
We develop a novel pseudo learning algorithm to approximate the optimal parameters of the pseudo target subset.
arXiv Detail & Related papers (2021-06-30T03:17:20Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
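The domain-adversarial half of AFAN is conventionally built on a gradient reversal layer: a discriminator learns to tell domains apart while the reversed gradient pushes the feature extractor to fool it. Below is the standard DANN-style construction as a sketch; AFAN's actual discriminator and intermediate-domain generator are not reproduced here.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates and scales the gradient
    in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainDiscriminator(nn.Module):
    def __init__(self, feat_dim=1024, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 2),  # source vs. target domain
        )

    def forward(self, feats):
        # Training this classifier with reversed gradients aligns
        # source and target feature distributions.
        return self.net(GradReverse.apply(feats, self.lambd))
```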
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Multi-Domain Adversarial Feature Generalization for Person Re-Identification [52.835955258959785]
We propose a multi-dataset feature generalization network (MMFA-AAE).
It is capable of learning a universal domain-invariant feature representation from multiple labeled datasets and generalizing it to 'unseen' camera systems.
It also surpasses many state-of-the-art supervised methods and unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2020-11-25T08:03:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.