Unsupervised Multi-Source Domain Adaptation for Person Re-Identification
- URL: http://arxiv.org/abs/2104.12961v1
- Date: Tue, 27 Apr 2021 03:33:35 GMT
- Title: Unsupervised Multi-Source Domain Adaptation for Person Re-Identification
- Authors: Zechen Bai, Zhigang Wang, Jian Wang, Di Hu, Errui Ding
- Abstract summary: Unsupervised domain adaptation (UDA) methods for person re-identification (re-ID) aim at transferring re-ID knowledge from labeled source data to unlabeled target data.
We introduce the multi-source concept into the UDA person re-ID field, where multiple source datasets are used during training.
The proposed method outperforms state-of-the-art UDA person re-ID methods by a large margin, and even achieves comparable performance to the supervised approaches without any post-processing techniques.
- Score: 39.817734080890695
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Unsupervised domain adaptation (UDA) methods for person re-identification
(re-ID) aim at transferring re-ID knowledge from labeled source data to
unlabeled target data. Although these methods achieve great success, most of
them use only limited data from a single source domain for model pre-training,
leaving the rich labeled data insufficiently exploited. To make full use of the
valuable labeled data, we introduce the multi-source concept into the UDA
person re-ID field, where multiple source datasets are used during training.
However, because of domain gaps, simply combining different datasets brings
only limited improvement. In this paper, we address this problem from two
perspectives, i.e., a domain-specific view and a domain-fusion view, and
propose two mutually compatible modules.
First, a rectification domain-specific batch normalization (RDSBN) module is
explored to simultaneously reduce domain-specific characteristics and increase
the distinctiveness of person features. Second, a graph convolutional network
(GCN) based multi-domain information fusion (MDIF) module is developed, which
minimizes domain distances by fusing features of different domains. The
proposed method outperforms state-of-the-art UDA person re-ID methods by a
large margin, and even achieves comparable performance to the supervised
approaches without any post-processing techniques.
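As a rough illustration of the two modules named in the abstract, the sketch below pairs one batch-normalization branch per source domain with a single graph-convolution layer that fuses per-domain features. This is a minimal sketch under assumptions: the class names, tensor shapes, and the simple channel-rectification heuristic are invented for the example, since the abstract does not specify those details.

```python
# Illustrative sketch (PyTorch) of domain-specific BN plus GCN-based fusion.
# Names, shapes, and the rectification heuristic are assumptions, not the
# authors' released implementation.
import torch
import torch.nn as nn


class RectifiedDomainSpecificBN(nn.Module):
    """One BN branch per source domain, so each domain is normalized with
    its own statistics. The per-instance channel re-weighting below stands
    in for the paper's rectification step, whose exact form is not given
    in the abstract."""

    def __init__(self, num_features: int, num_domains: int):
        super().__init__()
        self.bns = nn.ModuleList(
            nn.BatchNorm2d(num_features) for _ in range(num_domains)
        )

    def forward(self, x: torch.Tensor, domain_id: int) -> torch.Tensor:
        # x: (batch, channels, height, width), all from one source domain.
        out = self.bns[domain_id](x)
        # Assumed rectification: emphasize channels with strong per-instance
        # activations to sharpen identity-discriminative features.
        w = out.abs().mean(dim=(2, 3), keepdim=True)       # (B, C, 1, 1)
        w = w / (w.mean(dim=1, keepdim=True) + 1e-6)
        return out * w


class GCNDomainFusion(nn.Module):
    """Multi-domain information fusion as one graph-convolution layer:
    nodes are per-domain features, edges mix information across domains,
    pulling all domains toward a shared representation."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim, bias=False)

    def forward(self, domain_feats: torch.Tensor) -> torch.Tensor:
        # domain_feats: (num_domains, dim), e.g. per-domain mean embeddings.
        sim = domain_feats @ domain_feats.t()              # pairwise affinities
        adj = torch.softmax(sim, dim=1)                    # row-normalized graph
        return torch.relu(self.proj(adj @ domain_feats))   # fused node features


# Toy usage: three source domains, 64-channel maps, 256-d domain embeddings.
dsbn = RectifiedDomainSpecificBN(num_features=64, num_domains=3)
feat_map = dsbn(torch.randn(8, 64, 32, 16), domain_id=1)
fusion = GCNDomainFusion(dim=256)
fused = fusion(torch.randn(3, 256))                        # (3, 256)
```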
Related papers
- Subject-Based Domain Adaptation for Facial Expression Recognition [51.10374151948157]
Adapting a deep learning model to a specific target individual is a challenging task in facial expression recognition (FER).
This paper introduces a new multi-source domain adaptation (MSDA) method for subject-based domain adaptation in FER.
It efficiently leverages information from multiple source subjects to adapt a deep FER model to a single target individual.
arXiv Detail & Related papers (2023-12-09T18:40:37Z)
- META: Mimicking Embedding via oThers' Aggregation for Generalizable Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to perform well on unseen domains without access to the target domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z)
- Unsupervised Domain Adaptive Learning via Synthetic Data for Person Re-identification [101.1886788396803]
Person re-identification (re-ID) has gained more and more attention due to its widespread applications in video surveillance.
Unfortunately, the mainstream deep learning methods still need a large quantity of labeled data to train models.
In this paper, we develop a data collector to automatically generate synthetic re-ID samples in a computer game, and construct a data labeler to simultaneously annotate them.
arXiv Detail & Related papers (2021-09-12T15:51:41Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Multi-Domain Adversarial Feature Generalization for Person Re-Identification [52.835955258959785]
We propose a multi-dataset feature generalization network (MMFA-AAE).
It is capable of learning a universal domain-invariant feature representation from multiple labeled datasets and generalizing it to 'unseen' camera systems.
It also surpasses many state-of-the-art supervised methods and unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2020-11-25T08:03:15Z)
- Unified Multi-Domain Learning and Data Imputation using Adversarial Autoencoder [5.933303832684138]
We present a novel framework that combines multi-domain learning (MDL), data imputation (DI), and multi-task learning (MTL).
The core of our method is an adversarial autoencoder that can: (1) learn to produce domain-invariant embeddings to reduce the difference between domains; (2) learn the data distribution for each domain and correctly perform data imputation on missing data (see the sketch after this list).
arXiv Detail & Related papers (2020-03-15T19:55:07Z)
- Multi-source Domain Adaptation for Visual Sentiment Classification [92.53780541232773]
We propose a novel multi-source domain adaptation (MDA) method, termed Multi-source Sentiment Generative Adversarial Network (MSGAN).
To handle data from multiple source domains, MSGAN learns to find a unified sentiment latent space where data from both the source and target domains share a similar distribution.
Extensive experiments conducted on four benchmark datasets demonstrate that MSGAN significantly outperforms the state-of-the-art MDA approaches for visual sentiment classification.
arXiv Detail & Related papers (2020-01-12T08:37:42Z)
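The 'Unified Multi-Domain Learning and Data Imputation' entry above describes an adversarial autoencoder with two roles: domain-invariant embedding and imputation. The sketch below shows that generic pattern under assumptions; the architecture, dimensions, and masking scheme are illustrative and not taken from that paper.

```python
# Generic adversarial-autoencoder sketch (PyTorch); all names and sizes
# are assumptions made for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdversarialAutoencoder(nn.Module):
    """Encoder/decoder pair plus a domain discriminator: the discriminator
    is trained to recover the domain from the embedding, while the encoder
    is trained (adversarially) to hide it, yielding domain-invariant codes;
    the decoder's reconstruction doubles as imputation for missing entries."""

    def __init__(self, in_dim: int, latent_dim: int, num_domains: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, in_dim)
        self.discriminator = nn.Linear(latent_dim, num_domains)

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)
        return z, self.decoder(z), self.discriminator(z)


# Toy usage: 16 samples, 128 features, 4 domains, ~20% of entries missing.
model = AdversarialAutoencoder(in_dim=128, latent_dim=32, num_domains=4)
x = torch.randn(16, 128)
domains = torch.randint(0, 4, (16,))           # toy domain labels
mask = (torch.rand_like(x) > 0.2).float()      # 1 = observed, 0 = missing
z, x_hat, dom_logits = model(x * mask)
recon_loss = ((x_hat - x) ** 2 * mask).mean()  # fit observed entries only
disc_loss = F.cross_entropy(dom_logits, domains)
# Adversarial step (not shown): update the discriminator to minimize
# disc_loss and the encoder to maximize it, e.g. with alternating updates
# or a gradient-reversal layer.
x_imputed = x * mask + x_hat * (1 - mask)      # fill in the missing entries
```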