1st Place Solution to VisDA-2020: Bias Elimination for Domain Adaptive
Pedestrian Re-identification
- URL: http://arxiv.org/abs/2012.13498v1
- Date: Fri, 25 Dec 2020 03:02:46 GMT
- Authors: Jianyang Gu, Hao Luo, Weihua Chen, Yiqi Jiang, Yuqi Zhang, Shuting He,
Fan Wang, Hao Li, Wei Jiang
- Abstract summary: This paper presents our proposed methods for the domain adaptive pedestrian re-identification (Re-ID) task in the Visual Domain Adaptation Challenge (VisDA-2020).
Considering the large gap between the source and target domains, we focused on solving two biases that influenced performance on domain adaptive pedestrian Re-ID.
Our methods achieve 76.56% mAP and 84.25% rank-1 on the test set.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents our proposed methods for the domain adaptive
pedestrian re-identification (Re-ID) task in the Visual Domain Adaptation
Challenge (VisDA-2020). Considering the large gap between the source and
target domains, we focused on solving two biases that influenced performance
on domain adaptive pedestrian Re-ID and proposed a two-stage training
procedure. In the first stage, a baseline model is trained on images
transferred from the source domain to the target domain and from a single
camera style to multiple camera styles. We then introduce a domain adaptation
framework to train the model on source and target data simultaneously.
Different pseudo label generation strategies are adopted to continuously
improve the discriminative ability of the model. Finally, with multiple models
ensembled and additional post-processing approaches adopted, our methods
achieve 76.56% mAP and 84.25% rank-1 on the test set. Code is available at
https://github.com/vimar-gu/Bias-Eliminate-DA-ReID
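The second training stage relies on clustering-based pseudo label generation for the unlabeled target data. The sketch below illustrates the general idea with a minimal density-based grouping in pure Python; it is not the authors' implementation (which operates on learned CNN features with more sophisticated clustering), and the function name `pseudo_labels` and the `eps`/`min_pts` thresholds are illustrative assumptions.

```python
import math

def pseudo_labels(features, eps=0.5, min_pts=2):
    """Assign pseudo labels by simple density-based grouping.

    A sample with at least `min_pts` neighbors (itself included) within
    distance `eps` seeds a cluster, which is then grown over its neighbors.
    Isolated samples keep label -1 and would be discarded as outliers, as
    is common in clustering-based Re-ID pipelines.
    """
    n = len(features)
    dist = [[math.dist(a, b) for b in features] for a in features]
    labels = [-1] * n
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        neighbors = [j for j in range(n) if dist[i][j] <= eps]
        if len(neighbors) < min_pts:
            continue  # outlier: excluded from the next training round
        # Grow the cluster by expanding over dense neighbors.
        labels[i] = cluster
        queue = list(neighbors)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster
                more = [k for k in range(n) if dist[j][k] <= eps]
                if len(more) >= min_pts:
                    queue.extend(more)
        cluster += 1
    return labels
```

In an actual pipeline the 2-D points would be replaced by embedding vectors extracted by the baseline model, and the resulting labels would supervise the next training round, with clustering repeated periodically as the features improve.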
Related papers
- BlenDA: Domain Adaptive Object Detection through diffusion-based
blending [10.457759140533168]
Unsupervised domain adaptation (UDA) aims to transfer a model learned using labeled data from the source domain to unlabeled data in the target domain.
We propose a novel regularization method for domain adaptive object detection, BlenDA, by generating the pseudo samples of the intermediate domains.
We achieve an impressive 53.4% mAP on the Foggy Cityscapes dataset, surpassing the previous state-of-the-art by 1.5%.
arXiv Detail & Related papers (2024-01-18T12:07:39Z)
- Spectral Adversarial MixUp for Few-Shot Unsupervised Domain Adaptation [72.70876977882882]
Domain shift is a common problem in clinical applications, where the training images (source domain) and the test images (target domain) are under different distributions.
We propose a novel method for Few-Shot Unsupervised Domain Adaptation (FSUDA), where only a limited number of unlabeled target domain samples are available for training.
arXiv Detail & Related papers (2023-09-03T16:02:01Z)
- Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution to identify novel target classes during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z)
- 1st Place Solution to NeurIPS 2022 Challenge on Visual Domain Adaptation [4.06040510836545]
We introduce the SIA_Adapt method, which incorporates several techniques for building domain adaptive models.
Our method achieves 1st place in the VisDA2022 challenge.
arXiv Detail & Related papers (2022-11-26T15:45:31Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention; it tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Target and Task specific Source-Free Domain Adaptive Image Segmentation [73.78898054277538]
We propose a two-stage approach for source-free domain adaptive image segmentation.
In the first stage, we focus on generating target-specific pseudo labels while suppressing high-entropy regions.
In the second stage, we focus on adapting the network for task-specific representation.
arXiv Detail & Related papers (2022-03-29T17:50:22Z)
- 2nd Place Solution for VisDA 2021 Challenge -- Universally Domain Adaptive Image Recognition [38.54810374543916]
We introduce a universal domain adaptation (UniDA) method by aggregating several popular feature extraction and domain adaptation schemes.
As shown in the leaderboard, our proposed UniDA method ranks 2nd with 48.56% ACC and 70.72% AUROC in the VisDA 2021 Challenge.
arXiv Detail & Related papers (2021-10-27T07:48:29Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Effective Label Propagation for Discriminative Semi-Supervised Domain Adaptation [76.41664929948607]
Semi-supervised domain adaptation (SSDA) methods have demonstrated great potential in large-scale image classification tasks.
We present a novel and effective method to tackle this problem by using effective inter-domain and intra-domain semantic information propagation.
Our source code and pre-trained models will be released soon.
arXiv Detail & Related papers (2020-12-04T14:28:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.