Domain Adaptive Attention Learning for Unsupervised Person Re-Identification
- URL: http://arxiv.org/abs/1905.10529v2
- Date: Sat, 15 Jun 2024 03:06:16 GMT
- Title: Domain Adaptive Attention Learning for Unsupervised Person Re-Identification
- Authors: Yangru Huang, Peixi Peng, Yi Jin, Yidong Li, Junliang Xing, Shiming Ge
- Abstract summary: This paper proposes a domain adaptive attention learning approach to reliably transfer discriminative representation from the labeled source domain to the unlabeled target domain.
Experiments on the Market-1501, DukeMTMC-reID and MSMT17 benchmarks demonstrate that the proposed approach outperforms state-of-the-art methods.
- Score: 43.958522138551935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Person re-identification (Re-ID) across multiple datasets is a challenging task for two main reasons: large cross-dataset distinctions and the absence of annotated target instances. To address these two issues, this paper proposes a domain adaptive attention learning approach that reliably transfers discriminative representations from the labeled source domain to the unlabeled target domain. In this approach, a domain adaptive attention model is learned to separate the feature map into a domain-shared part and a domain-specific part. The domain-shared part captures transferable cues that compensate for cross-dataset distinctions and contribute positively to the target task, while the domain-specific part models noisy information to avoid the negative transfer caused by domain diversity. A soft label loss is further employed to make full use of unlabeled target data by estimating pseudo labels. Extensive experiments on the Market-1501, DukeMTMC-reID and MSMT17 benchmarks demonstrate that the proposed approach outperforms state-of-the-art methods.
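The abstract describes an attention model that splits a feature map into a domain-shared part and a domain-specific part. A minimal NumPy sketch of that idea, assuming a sigmoid attention mask (the function and variable names are illustrative, not the paper's code):

```python
import numpy as np

def split_feature_map(features, attn_logits):
    """Split features into domain-shared and domain-specific parts
    with a sigmoid attention mask (hypothetical sketch)."""
    mask = 1.0 / (1.0 + np.exp(-attn_logits))  # attention weights in (0, 1)
    shared = mask * features          # transferable, domain-shared cues
    specific = (1.0 - mask) * features  # domain-specific (noisy) residue
    return shared, specific

# toy example: one 4-channel feature vector
feats = np.array([1.0, -2.0, 0.5, 3.0])
attn = np.array([5.0, -5.0, 0.0, 2.0])
shared, specific = split_feature_map(feats, attn)
```

Because the two parts are complementary weightings of the same map, `shared + specific` reconstructs the original features exactly; channels with high attention logits end up mostly in the shared part.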
Related papers
- Joint Attention-Driven Domain Fusion and Noise-Tolerant Learning for
Multi-Source Domain Adaptation [2.734665397040629]
Multi-source Unsupervised Domain Adaptation transfers knowledge from multiple source domains with labeled data to an unlabeled target domain.
The distribution discrepancy between different domains and the noisy pseudo-labels in the target domain both lead to performance bottlenecks.
We propose an approach that integrates Attention-driven Domain fusion and Noise-Tolerant learning (ADNT) to address the two issues mentioned above.
arXiv Detail & Related papers (2022-08-05T01:08:41Z) - Labeling Where Adapting Fails: Cross-Domain Semantic Segmentation with
Point Supervision via Active Selection [81.703478548177]
Training models for semantic segmentation requires a large amount of pixel-wise annotated data.
Unsupervised domain adaptation approaches aim at aligning the feature distributions between the labeled source and the unlabeled target data.
Previous works attempted to include human interaction in this process in the form of sparse single-pixel annotations in the target data.
We propose a new domain adaptation framework for semantic segmentation with annotated points via active selection.
arXiv Detail & Related papers (2022-06-01T01:52:28Z) - Joint Distribution Alignment via Adversarial Learning for Domain
Adaptive Object Detection [11.262560426527818]
Unsupervised domain adaptive object detection aims to adapt a well-trained detector from its original source domain with rich labeled data to a new target domain with unlabeled data.
Recently, mainstream approaches perform this task through adversarial learning, yet still suffer from two limitations.
We propose a joint adaptive detection framework (JADF) to address the above challenges.
arXiv Detail & Related papers (2021-09-19T00:27:08Z) - Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training
for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z) - Disentanglement-based Cross-Domain Feature Augmentation for Effective
Unsupervised Domain Adaptive Person Re-identification [87.72851934197936]
Unsupervised domain adaptive (UDA) person re-identification (ReID) aims to transfer the knowledge from the labeled source domain to the unlabeled target domain for person matching.
One challenge is how to generate target domain samples with reliable labels for training.
We propose a Disentanglement-based Cross-Domain Feature Augmentation strategy.
arXiv Detail & Related papers (2021-03-25T15:28:41Z) - Select, Label, and Mix: Learning Discriminative Invariant Feature
Representations for Partial Domain Adaptation [55.73722120043086]
We develop a "Select, Label, and Mix" (SLM) framework to learn discriminative invariant feature representations for partial domain adaptation.
First, we present a simple yet efficient "select" module that automatically filters out outlier source samples to avoid negative transfer.
Second, the "label" module iteratively trains the classifier using both the labeled source domain data and the generated pseudo-labels for the target domain to enhance the discriminability of the latent space.
arXiv Detail & Related papers (2020-12-06T19:29:32Z) - Discriminative Cross-Domain Feature Learning for Partial Domain
Adaptation [70.45936509510528]
Partial domain adaptation aims to transfer knowledge from a larger, more diverse source domain to a smaller target domain with fewer classes.
Recent domain adaptation practice extracts effective features by incorporating pseudo labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z)
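Several entries above rely on estimating pseudo labels for unlabeled target data. One common scheme (a sketch under assumed details, not any specific paper's method) assigns each target feature the label of its nearest source-class centroid and discards low-confidence assignments:

```python
import numpy as np

def assign_pseudo_labels(target_feats, class_centroids, threshold=0.0):
    """Nearest-centroid pseudo-labeling (hypothetical sketch).
    Returns a label per target sample plus a keep-mask for
    assignments whose distance margin exceeds `threshold`."""
    # pairwise distances: (num_targets, num_classes)
    dists = np.linalg.norm(
        target_feats[:, None, :] - class_centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # confidence margin: second-best distance minus best distance
    sorted_d = np.sort(dists, axis=1)
    margin = sorted_d[:, 1] - sorted_d[:, 0]
    keep = margin > threshold  # drop ambiguous samples
    return labels, keep

centroids = np.array([[0.0, 0.0], [10.0, 10.0]])  # two source classes
targets = np.array([[0.5, -0.2], [9.0, 11.0], [5.0, 5.0]])
labels, keep = assign_pseudo_labels(targets, centroids, threshold=1.0)
```

In this toy case the third target sits equidistant from both centroids, so its margin is zero and it is filtered out; noise-tolerant methods like the ADNT entry above motivate exactly this kind of filtering of unreliable pseudo labels.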
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.