Unsupervised Domain Adaptation on Person Re-Identification via
Dual-level Asymmetric Mutual Learning
- URL: http://arxiv.org/abs/2301.12439v1
- Date: Sun, 29 Jan 2023 12:36:17 GMT
- Title: Unsupervised Domain Adaptation on Person Re-Identification via
Dual-level Asymmetric Mutual Learning
- Authors: Qiong Wu, Jiahan Li, Pingyang Dai, Qixiang Ye, Liujuan Cao, Yongjian
Wu, Rongrong Ji
- Abstract summary: This paper proposes a Dual-level Asymmetric Mutual Learning method (DAML) to learn discriminative representations from a broader knowledge scope with diverse embedding spaces.
The knowledge transfer between two networks is based on an asymmetric mutual learning manner.
Experiments on the Market-1501, CUHK-SYSU, and MSMT17 public datasets verify the superiority of DAML over state-of-the-art methods.
- Score: 108.86940401125649
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation person re-identification (Re-ID) aims to
identify pedestrian images within an unlabeled target domain with an auxiliary
labeled source-domain dataset. Many existing works attempt to recover reliable
identity information by considering multiple homogeneous networks and use the
generated pseudo labels to train the model in the target domain. However, these
homogeneous networks identify people in approximate subspaces and equally
exchange their knowledge with others or their mean net to improve their
ability, which inevitably limits the scope of available knowledge and leads
them into the same mistakes. This paper proposes a Dual-level Asymmetric Mutual
Learning method (DAML) to learn discriminative representations from a broader
knowledge scope with diverse embedding spaces. Specifically, two heterogeneous
networks mutually learn knowledge from asymmetric subspaces through the pseudo
label generation in a hard distillation manner. The knowledge transfer between
two networks is based on an asymmetric mutual learning manner. The teacher
network learns to identify both the target and source domain while adapting to
the target domain distribution based on the knowledge of the student.
Meanwhile, the student network is trained on the target dataset, taking the
teacher's knowledge as its ground-truth labels. Extensive experiments on the
Market-1501, CUHK-SYSU, and MSMT17 public datasets verify the superiority
of DAML over state-of-the-art methods.
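The hard-distillation pseudo-label step described above can be sketched as assigning each target sample the single identity of its nearest centroid in the teacher's embedding space. This is a minimal pure-Python illustration; the names `cosine` and `hard_pseudo_labels`, the toy embeddings, and the nearest-centroid rule are assumptions for exposition, not the paper's actual implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def hard_pseudo_labels(embeddings, centroids):
    """Hard distillation: each target sample receives the single identity
    of its most similar centroid (no soft probability targets)."""
    labels = []
    for e in embeddings:
        sims = [cosine(e, c) for c in centroids]
        labels.append(max(range(len(sims)), key=sims.__getitem__))
    return labels

# Teacher's view of the target domain: sample embeddings and identity centroids.
teacher_embeddings = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]]
centroids = [[1.0, 0.0], [0.0, 1.0]]

pseudo = hard_pseudo_labels(teacher_embeddings, centroids)
print(pseudo)  # [0, 1, 0]
```

In an actual mutual-learning loop these labels would supervise the student network on the target domain, while the student's knowledge flows back to adapt the teacher.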
Related papers
- Direct Distillation between Different Domains [97.39470334253163]
We propose a new one-stage method dubbed "Direct Distillation between Different Domains" (4Ds).
We first design a learnable adapter based on the Fourier transform to separate the domain-invariant knowledge from the domain-specific knowledge.
We then build a fusion-activation mechanism to transfer the valuable domain-invariant knowledge to the student network.
arXiv Detail & Related papers (2024-01-12T02:48:51Z) - Pulling Target to Source: A New Perspective on Domain Adaptive Semantic Segmentation [80.1412989006262]
Domain adaptive semantic segmentation aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
We propose T2S-DA, which we interpret as a form of pulling Target to Source for Domain Adaptation.
arXiv Detail & Related papers (2023-05-23T07:09:09Z) - Joint Semantic Transfer Network for IoT Intrusion Detection [25.937401774982614]
We propose a Joint Semantic Transfer Network (JSTN) towards effective intrusion detection for the large-scale, scarcely labelled IoT domain.
As a multi-source heterogeneous domain adaptation (MS-HDA) method, the JSTN integrates a knowledge rich network intrusion (NI) domain and another small-scale IoT intrusion (II) domain as source domains.
The JSTN jointly transfers the following three semantics to learn a domain-invariant and discriminative feature representation.
arXiv Detail & Related papers (2022-10-28T05:34:28Z) - Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation [51.21190751266442]
Domain adaptation (DA) tries to tackle the scenarios when the test data does not fully follow the same distribution of the training data.
By learning from large-scale unlabeled samples, self-supervised learning has now become a new trend in deep learning.
We propose a novel Self-Supervised Graph Neural Network (SSG) to enable more effective inter-task information exchange and knowledge sharing.
arXiv Detail & Related papers (2022-04-08T03:37:56Z) - Contrastive Learning and Self-Training for Unsupervised Domain
Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
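A memory-efficient temporal ensemble of this kind can be sketched as an exponential moving average over each sample's predicted class probabilities, so only a single running vector per sample is stored. This is a minimal sketch under that assumption; the function name `ema_update` and the momentum `alpha=0.6` are illustrative choices, not the paper's exact scheme.

```python
def ema_update(ensemble, new_probs, alpha=0.6):
    """Exponential moving average of a sample's predicted class probabilities.
    Keeping only the running average gives O(1) memory per sample, which is
    what makes the temporal ensemble memory-efficient."""
    return [alpha * e + (1 - alpha) * p for e, p in zip(ensemble, new_probs)]

probs_t1 = [0.6, 0.4]  # prediction at step t
probs_t2 = [0.8, 0.2]  # prediction at step t+1
ens = ema_update(probs_t1, probs_t2)
print(ens)  # ≈ [0.68, 0.32]
```

The smoothed vector changes slowly across training steps, which is why pseudo-labels drawn from it are more consistent than raw per-step predictions.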
arXiv Detail & Related papers (2021-05-05T11:55:53Z) - TraND: Transferable Neighborhood Discovery for Unsupervised Cross-domain
Gait Recognition [77.77786072373942]
This paper proposes a Transferable Neighborhood Discovery (TraND) framework to bridge the domain gap for unsupervised cross-domain gait recognition.
We design an end-to-end trainable approach to automatically discover the confident neighborhoods of unlabeled samples in the latent space.
Our method achieves state-of-the-art results on two public datasets, i.e., CASIA-B and OU-LP.
arXiv Detail & Related papers (2021-02-09T03:07:07Z) - Teacher-Student Consistency For Multi-Source Domain Adaptation [28.576613317253035]
In Multi-Source Domain Adaptation (MSDA), models are trained on samples from multiple source domains and used for inference on a different, target, domain.
We propose Multi-source Student Teacher (MUST), a novel procedure designed to alleviate these issues.
arXiv Detail & Related papers (2020-10-20T06:17:40Z) - Dual Adversarial Domain Adaptation [6.69797982848003]
Unsupervised domain adaptation aims at transferring knowledge from the labeled source domain to the unlabeled target domain.
Recent experiments have shown that when the discriminator is provided with domain information in both domains, it is able to preserve the complex multimodal information.
We adopt a discriminator with $2K$-dimensional output to perform both domain-level and class-level alignments simultaneously in a single discriminator.
arXiv Detail & Related papers (2020-01-01T07:10:09Z)
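One way to realize the $2K$-dimensional discriminator output mentioned in the entry above is to reserve one slot per (domain, class) pair. The sketch below assumes the first K slots encode source-domain classes and the last K target-domain classes; this split is an illustrative convention, not necessarily the paper's exact layout.

```python
K = 4  # number of classes shared by the two domains

def discriminator_index(class_id, domain):
    """Map a (class, domain) pair to a slot in the 2K-dim discriminator
    output: slots 0..K-1 for source-domain classes, K..2K-1 for target-domain
    classes, so a single discriminator performs domain-level and class-level
    alignment simultaneously."""
    assert 0 <= class_id < K
    return class_id if domain == "source" else K + class_id

print(discriminator_index(2, "source"))  # 2
print(discriminator_index(2, "target"))  # 6
```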
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.