Multi-Domain Adversarial Feature Generalization for Person
Re-Identification
- URL: http://arxiv.org/abs/2011.12563v1
- Date: Wed, 25 Nov 2020 08:03:15 GMT
- Title: Multi-Domain Adversarial Feature Generalization for Person
Re-Identification
- Authors: Shan Lin, Chang-Tsun Li, Alex C. Kot
- Abstract summary: We propose a multi-dataset feature generalization network (MMFA-AAE)
It is capable of learning a universal domain-invariant feature representation from multiple labeled datasets and generalizing it to unseen' camera systems.
It also surpasses many state-of-the-art supervised methods and unsupervised domain adaptation methods by a large margin.
- Score: 52.835955258959785
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the assistance of sophisticated training methods applied to single
labeled datasets, the performance of fully-supervised person re-identification
(Person Re-ID) has been improved significantly in recent years. However, these
models trained on a single dataset usually suffer from considerable performance
degradation when applied to videos of a different camera network. To make
Person Re-ID systems more practical and scalable, several cross-dataset domain
adaptation methods have been proposed, which achieve high performance without
the labeled data from the target domain. However, these approaches still
require the unlabeled data of the target domain during the training process,
making them impractical. A practical Person Re-ID system pre-trained on other
datasets should start running immediately after deployment on a new site
without having to wait until sufficient images or videos are collected and the
pre-trained model is tuned. To serve this purpose, in this paper, we
reformulate person re-identification as a multi-dataset domain generalization
problem. We propose a multi-dataset feature generalization network (MMFA-AAE),
which is capable of learning a universal domain-invariant feature
representation from multiple labeled datasets and generalizing it to `unseen'
camera systems. The network is based on an adversarial auto-encoder to learn a
generalized domain-invariant latent feature representation with the Maximum
Mean Discrepancy (MMD) measure to align the distributions across multiple
domains. Extensive experiments demonstrate the effectiveness of the proposed
method. Our MMFA-AAE approach not only outperforms most of the domain
generalization Person Re-ID methods, but also surpasses many state-of-the-art
supervised methods and unsupervised domain adaptation methods by a large
margin.
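To make the described architecture concrete, below is a minimal, hedged sketch (not the authors' released code) of how an adversarial auto-encoder can combine a reconstruction loss, a domain-confusion term, and a pairwise RBF-kernel MMD penalty to align latent features drawn from several labeled source datasets. All module sizes, loss weights, and function names here are illustrative assumptions, not the actual MMFA-AAE implementation.

```python
# Hedged sketch of multi-domain latent alignment with an adversarial
# auto-encoder plus MMD; every name and hyper-parameter is illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mmd_rbf(x, y, sigma=1.0):
    """Squared Maximum Mean Discrepancy with a single RBF kernel."""
    def kernel(a, b):
        d = torch.cdist(a, b).pow(2)  # pairwise squared distances
        return torch.exp(-d / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

class AdversarialAE(nn.Module):
    def __init__(self, feat_dim=2048, latent_dim=256, num_domains=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU(),
                                     nn.Linear(512, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                     nn.Linear(512, feat_dim))
        # Domain discriminator tries to tell which dataset a latent code came from.
        self.discriminator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                           nn.Linear(128, num_domains))

    def forward(self, feats):
        z = self.encoder(feats)
        return z, self.decoder(z), self.discriminator(z)

def generalization_loss(model, feats_per_domain, lambda_mmd=1.0, lambda_adv=0.1):
    """Reconstruction + domain confusion + pairwise MMD across source domains."""
    latents, recon_loss, adv_loss = [], 0.0, 0.0
    for feats in feats_per_domain:
        z, recon, dom_logits = model(feats)
        latents.append(z)
        recon_loss += F.mse_loss(recon, feats)
        # Encoder is rewarded when the domain classifier is maximally confused,
        # i.e. its prediction approaches the uniform distribution over domains.
        uniform = torch.full_like(dom_logits, 1.0 / dom_logits.size(1))
        adv_loss += F.kl_div(F.log_softmax(dom_logits, dim=1), uniform,
                             reduction="batchmean")
    # Align every pair of source-domain latent distributions with MMD.
    mmd_loss = sum(mmd_rbf(latents[i], latents[j])
                   for i in range(len(latents))
                   for j in range(i + 1, len(latents)))
    return recon_loss + lambda_adv * adv_loss + lambda_mmd * mmd_loss

# Usage: three mini-batches of backbone features, one per source dataset.
feats = [torch.randn(32, 2048) for _ in range(3)]
model = AdversarialAE()
loss = generalization_loss(model, feats)
loss.backward()
```

The pairwise MMD term mirrors the paper's idea of matching latent distributions across the source domains, while the discriminator-confusion term plays the adversarial role; the actual MMFA-AAE losses, identity supervision, and network details may differ.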
Related papers
- Diverse Deep Feature Ensemble Learning for Omni-Domain Generalized Person Re-identification [30.208890289394994]
Person ReID methods experience a significant drop in performance when trained and tested across different datasets.
Our research reveals that domain generalization methods significantly underperform single-domain supervised methods on single dataset benchmarks.
We propose a way to achieve ODG-ReID by creating deep feature diversity with self-ensembles.
arXiv Detail & Related papers (2024-10-11T02:27:11Z)
- Deep Multimodal Fusion for Generalizable Person Re-identification [15.250738959921872]
DMF is a Deep Multimodal Fusion network designed for general scenarios of the person re-identification task.
Rich semantic knowledge is introduced to assist in feature representation learning during the pre-training stage.
A realistic dataset is adopted to fine-tune the pre-trained model for distribution alignment with the real world.
arXiv Detail & Related papers (2022-11-02T07:42:48Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- A Novel Mix-normalization Method for Generalizable Multi-source Person Re-identification [49.548815417844786]
Person re-identification (Re-ID) has achieved great success in the supervised scenario.
It is difficult to directly transfer the supervised model to arbitrary unseen domains due to the model overfitting to the seen source domains.
We propose MixNorm, which consists of domain-aware mix-normalization (DMN) and domain-aware center regularization (DCR).
arXiv Detail & Related papers (2022-01-24T18:09:38Z)
- META: Mimicking Embedding via oThers' Aggregation for Generalizable Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to test across unseen domains without access to the target domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z)
- Semi-Supervised Domain Generalizable Person Re-Identification [74.75528879336576]
Existing person re-identification (re-id) methods struggle when deployed to a new, unseen scenario.
Recent efforts have been devoted to domain adaptive person re-id where extensive unlabeled data in the new scenario are utilized in a transductive learning manner.
We aim to explore multiple labeled datasets to learn generalized domain-invariant representations for person re-id.
arXiv Detail & Related papers (2021-08-11T06:08:25Z)
- Unsupervised Multi-Source Domain Adaptation for Person Re-Identification [39.817734080890695]
Unsupervised domain adaptation (UDA) methods for person re-identification (re-ID) aim at transferring re-ID knowledge from labeled source data to unlabeled target data.
We introduce the multi-source concept into UDA person re-ID field, where multiple source datasets are used during training.
The proposed method outperforms state-of-the-art UDA person re-ID methods by a large margin, and even achieves comparable performance to the supervised approaches without any post-processing techniques.
arXiv Detail & Related papers (2021-04-27T03:33:35Z)
- Unsupervised and self-adaptative techniques for cross-domain person re-identification [82.54691433502335]
Person Re-Identification (ReID) across non-overlapping cameras is a challenging task.
Unsupervised Domain Adaptation (UDA) is a promising alternative, as it performs feature-learning adaptation from a model trained on a source to a target domain without identity-label annotation.
In this paper, we propose a novel UDA-based ReID method that takes advantage of triplets of samples created by a new offline strategy.
arXiv Detail & Related papers (2021-03-21T23:58:39Z)