2nd Place Solution for VisDA 2021 Challenge -- Universally Domain
Adaptive Image Recognition
- URL: http://arxiv.org/abs/2110.14240v1
- Date: Wed, 27 Oct 2021 07:48:29 GMT
- Title: 2nd Place Solution for VisDA 2021 Challenge -- Universally Domain
Adaptive Image Recognition
- Authors: Haojin Liao, Xiaolin Song, Sicheng Zhao, Shanghang Zhang, Xiangyu Yue,
Xingxu Yao, Yueming Zhang, Tengfei Xing, Pengfei Xu, Qiang Wang
- Abstract summary: We introduce a universal domain adaptation (UniDA) method by aggregating several popular feature extraction and domain adaptation schemes.
As shown in the leaderboard, our proposed UniDA method ranks 2nd with 48.56% ACC and 70.72% AUROC in the VisDA 2021 Challenge.
- Score: 38.54810374543916
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Visual Domain Adaptation (VisDA) 2021 Challenge calls for unsupervised
domain adaptation (UDA) methods that can deal with both input distribution
shift and label set variance between the source and target domains. In this
report, we introduce a universal domain adaptation (UniDA) method by
aggregating several popular feature extraction and domain adaptation schemes.
First, we utilize VOLO, a Transformer-based architecture with state-of-the-art
performance in several visual tasks, as the backbone to extract effective
feature representations. Second, we modify the open-set classifier of OVANet to
recognize the unknown class with competitive accuracy and robustness. As shown
in the leaderboard, our proposed UniDA method ranks 2nd with 48.56% ACC and 70.72% AUROC in the VisDA 2021 Challenge.
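The report does not include code, but the pipeline described above can be sketched roughly as follows: a feature backbone (for example, a VOLO model loaded from a library such as timm with its classification head removed) feeding a closed-set classifier plus an OVANet-style one-vs-all (OVA) open-set head, with a score threshold deciding when to output "unknown". The module layout, threshold value, and toy backbone below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): backbone features, a closed-set
# classifier, and an OVANet-style one-vs-all (OVA) open-set head.
import torch
import torch.nn as nn
import torch.nn.functional as F


class UniDAClassifier(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_known: int):
        super().__init__()
        self.backbone = backbone                              # feature extractor
        self.cls_head = nn.Linear(feat_dim, num_known)        # closed-set classes
        self.ova_head = nn.Linear(feat_dim, 2 * num_known)    # (neg, pos) logits per class

    def forward(self, x):
        feat = self.backbone(x)                               # (B, feat_dim)
        closed_logits = self.cls_head(feat)                   # (B, K)
        ova_logits = self.ova_head(feat).view(-1, 2, closed_logits.size(1))
        return closed_logits, ova_logits

    @torch.no_grad()
    def predict(self, x, unknown_threshold: float = 0.5):
        # The threshold is an illustrative assumption, not a tuned setting.
        closed_logits, ova_logits = self.forward(x)
        pred = closed_logits.argmax(dim=1)                    # tentative known class
        inlier_prob = F.softmax(ova_logits, dim=1)[:, 1, :]   # P(known) per class
        score = inlier_prob.gather(1, pred.unsqueeze(1)).squeeze(1)
        return pred, score < unknown_threshold                # True => "unknown"


# Toy backbone standing in for VOLO, just to make the sketch runnable.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
model = UniDAClassifier(backbone, feat_dim=256, num_known=10)
labels, is_unknown = model.predict(torch.randn(4, 3, 32, 32))
```

Only the rejection rule at inference is shown; during training the OVA head would be fit with one-vs-all losses on labelled source data plus an entropy term on unlabelled target data, roughly following the OVANet recipe the report builds on.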
Related papers
- Improving Domain Adaptation Through Class Aware Frequency Transformation [15.70058524548143]
Most unsupervised domain adaptation (UDA) algorithms focus on reducing the global domain shift between the labelled source and unlabelled target domains.
We propose a novel approach based on the traditional image processing technique of Class Aware Frequency Transformation (CAFT).
CAFT uses pseudo-label-based, class-consistent low-frequency swapping to improve the overall performance of existing UDA algorithms (a sketch follows this entry).
arXiv Detail & Related papers (2024-07-28T18:16:41Z)
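As a reading aid, here is a rough sketch of the kind of class-consistent low-frequency swap the CAFT summary describes: a pseudo-labelled target image has the low-frequency amplitude of its Fourier spectrum replaced by that of a source image of the same (pseudo) class. The cutoff ratio `beta` and the per-channel FFT handling are illustrative choices, not the paper's exact recipe.

```python
import numpy as np


def low_freq_swap(target_img: np.ndarray, source_img: np.ndarray, beta: float = 0.1) -> np.ndarray:
    """Swap in the low-frequency amplitude of a same-class source image;
    the target image's phase is kept. beta sets the swapped square's size."""
    fft_t = np.fft.fft2(target_img, axes=(0, 1))
    fft_s = np.fft.fft2(source_img, axes=(0, 1))
    amp_t, pha_t = np.abs(fft_t), np.angle(fft_t)
    amp_s = np.abs(fft_s)

    # Centre the spectra so low frequencies sit in the middle.
    amp_t = np.fft.fftshift(amp_t, axes=(0, 1))
    amp_s = np.fft.fftshift(amp_s, axes=(0, 1))
    h, w = target_img.shape[:2]
    b = int(min(h, w) * beta)
    ch, cw = h // 2, w // 2
    amp_t[ch - b:ch + b, cw - b:cw + b] = amp_s[ch - b:ch + b, cw - b:cw + b]
    amp_t = np.fft.ifftshift(amp_t, axes=(0, 1))

    mixed = np.fft.ifft2(amp_t * np.exp(1j * pha_t), axes=(0, 1))
    return np.real(mixed)


# Pair each pseudo-labelled target image with a source image of the same class.
tgt = np.random.rand(64, 64, 3)
src = np.random.rand(64, 64, 3)
augmented = low_freq_swap(tgt, src, beta=0.1)
```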
- MLNet: Mutual Learning Network with Neighborhood Invariance for Universal Domain Adaptation [70.62860473259444]
Universal domain adaptation (UniDA) is a practical but challenging problem.
Existing UniDA methods may overlook intra-domain variations in the target domain.
We propose a novel Mutual Learning Network (MLNet) with neighborhood invariance for UniDA.
arXiv Detail & Related papers (2023-12-13T03:17:34Z)
- Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution for identifying, during the training phase, target classes that are absent from the source domain.
arXiv Detail & Related papers (2023-07-30T11:38:46Z)
- Divide and Adapt: Active Domain Adaptation via Customized Learning [56.79144758380419]
We present Divide-and-Adapt (DiaNA), a new ADA framework that partitions the target instances into four categories with stratified transferable properties.
With a novel data subdivision protocol based on uncertainty and domainness, DiaNA can accurately recognize the most gainful samples.
Thanks to the "divide-and-adapt" spirit, DiaNA can handle data with large variations of domain gap; the uncertainty/domainness split is sketched below.
arXiv Detail & Related papers (2023-07-21T14:37:17Z)
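A toy illustration of the four-way split by uncertainty and domainness mentioned in the DiaNA summary above; the thresholds, category names, and suggested uses are placeholders rather than the paper's actual protocol.

```python
import numpy as np


def partition_target(uncertainty: np.ndarray, domainness: np.ndarray,
                     u_thr: float = 0.5, d_thr: float = 0.5) -> np.ndarray:
    """Illustrative 2x2 split of target samples by (uncertainty, domainness).
    Returns an integer category id per sample:
      0: confident & source-like   (e.g. usable with pseudo labels)
      1: confident & target-like   (e.g. self-training candidates)
      2: uncertain & source-like   (e.g. candidates for active labelling)
      3: uncertain & target-like   (e.g. hardest samples)"""
    cats = np.zeros(len(uncertainty), dtype=int)
    cats[(uncertainty < u_thr) & (domainness >= d_thr)] = 1
    cats[(uncertainty >= u_thr) & (domainness < d_thr)] = 2
    cats[(uncertainty >= u_thr) & (domainness >= d_thr)] = 3
    return cats


# Uncertainty could be predictive entropy, domainness a domain-discriminator score.
u = np.random.rand(8)
d = np.random.rand(8)
print(partition_target(u, d))
```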
- MEnsA: Mix-up Ensemble Average for Unsupervised Multi Target Domain Adaptation on 3D Point Clouds [9.568577396815602]
Unsupervised domain adaptation (UDA) addresses the problem of distribution shift between the unlabelled target domain and the labelled source domain.
We propose to mix the feature representations from all domains by an ensemble average to achieve better domain adaptation performance.
With the mixed representation, we use a domain classifier to better distinguish the feature representations of the source domain from those of the target domains in a shared latent space (a sketch follows this entry).
arXiv Detail & Related papers (2023-04-04T06:13:33Z)
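A minimal sketch of mixing feature representations from several domains through an ensemble average, in the spirit of the MEnsA summary above; the mixing weight `lam` and the linear domain classifier are illustrative stand-ins, not the paper's architecture.

```python
import torch
import torch.nn as nn


def ensemble_mix(features_per_domain, lam: float = 0.5):
    """Mix each domain's features with the average over all domains.
    features_per_domain: list of (B, D) tensors, one per domain (illustrative)."""
    avg = torch.stack(features_per_domain, dim=0).mean(dim=0)   # ensemble average
    return [lam * f + (1.0 - lam) * avg for f in features_per_domain]


# A domain classifier on the shared latent space (source vs. each target domain).
domain_clf = nn.Linear(128, 3)     # e.g. 1 source + 2 target domains

src = torch.randn(16, 128)
tgt_a = torch.randn(16, 128)
tgt_b = torch.randn(16, 128)
mixed = ensemble_mix([src, tgt_a, tgt_b], lam=0.7)
domain_logits = domain_clf(torch.cat(mixed, dim=0))             # (48, 3)
```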
- 1st Place Solution to NeurIPS 2022 Challenge on Visual Domain Adaptation [4.06040510836545]
We introduce the SIA_Adapt method, which incorporates several methods for domain adaptive models.
Our method achieves 1st place in the VisDA2022 challenge.
arXiv Detail & Related papers (2022-11-26T15:45:31Z)
- Pre-Training Transformers for Domain Adaptation [0.0]
In this paper, we demonstrate the capability of capturing key attributes from source datasets and applying them to target datasets in a semi-supervised manner.
Our method outperformed current state-of-the-art (SoTA) techniques and achieved 1st place on the VisDA Domain Adaptation Challenge with an ACC of 56.29% and an AUROC of 69.79%.
arXiv Detail & Related papers (2021-12-18T16:52:48Z)
- CDTrans: Cross-domain Transformer for Unsupervised Domain Adaptation [44.06904757181245]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a labeled source domain to a different unlabeled target domain.
One fundamental problem for category-level UDA is producing pseudo labels for samples in the target domain.
We design a two-way center-aware labeling algorithm to produce pseudo labels for target samples (one direction of this idea is sketched below).
Along with the pseudo labels, a weight-sharing triple-branch transformer framework is proposed to apply self-attention and cross-attention for source/target feature learning and source-target domain alignment.
arXiv Detail & Related papers (2021-09-13T17:59:07Z)
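One direction of the two-way center-aware labelling idea from the CDTrans summary can be sketched as follows: build class centres from source features and assign each target feature to the nearest centre, keeping only confident assignments. The cosine-similarity metric and the `min_sim` filter are illustrative choices, not the paper's exact algorithm.

```python
import torch
import torch.nn.functional as F


def center_aware_pseudo_labels(src_feats, src_labels, tgt_feats, num_classes, min_sim=0.3):
    """Assign each target feature to the nearest source class centre (cosine
    similarity) and keep only sufficiently similar assignments. Illustrative;
    assumes every class appears at least once in the source features."""
    centers = torch.stack([
        src_feats[src_labels == c].mean(dim=0) for c in range(num_classes)
    ])                                                                      # (C, D)
    sim = F.normalize(tgt_feats, dim=1) @ F.normalize(centers, dim=1).t()   # (N, C)
    conf, pseudo = sim.max(dim=1)
    keep = conf >= min_sim                       # drop unreliable pseudo labels
    return pseudo, keep


src_feats, src_labels = torch.randn(100, 64), torch.randint(0, 5, (100,))
tgt_feats = torch.randn(40, 64)
pseudo, keep = center_aware_pseudo_labels(src_feats, src_labels, tgt_feats, num_classes=5)
```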
- VisDA-2021 Competition Universal Domain Adaptation to Improve Performance on Out-of-Distribution Data [64.91713686654805]
The Visual Domain Adaptation (VisDA) 2021 competition tests models' ability to adapt to novel test distributions.
We will evaluate adaptation to novel viewpoints, backgrounds, modalities and degradation in quality.
Performance will be measured using a rigorous protocol, comparing to state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-07-23T03:21:51Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
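The ILA-DA summary above mentions mining similar and dissimilar cross-domain samples and driving alignment with a multi-sample contrastive loss; below is a generic sketch of such a loss, with the affinity-mining step replaced by a given boolean mask since the paper's exact criterion is not reproduced here.

```python
import torch
import torch.nn.functional as F


def multi_sample_contrastive_loss(src_feats, tgt_feats, pos_mask, temperature=0.1):
    """Generic contrastive loss between target and source features.
    pos_mask[i, j] is True if target i and source j were mined as 'similar'
    (e.g. same pseudo class); all other source samples act as negatives."""
    sim = F.normalize(tgt_feats, dim=1) @ F.normalize(src_feats, dim=1).t()   # (Nt, Ns)
    logits = sim / temperature
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos = pos_mask.float()
    pos_count = pos.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos).sum(dim=1) / pos_count
    return loss[pos_mask.any(dim=1)].mean()      # anchors with at least one positive


tgt = torch.randn(8, 32, requires_grad=True)
src = torch.randn(16, 32)
mask = torch.rand(8, 16) < 0.2                   # stand-in for the mined affinities
loss = multi_sample_contrastive_loss(src, tgt, mask)
loss.backward()
```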
- 1st Place Solution to VisDA-2020: Bias Elimination for Domain Adaptive Pedestrian Re-identification [17.065458476210175]
This paper presents our proposed methods for the domain adaptive pedestrian re-identification (Re-ID) task in the Visual Domain Adaptation Challenge (VisDA-2020).
Considering the large gap between the source domain and target domain, we focused on solving two biases that influenced the performance on domain adaptive pedestrian Re-ID.
Our methods achieve 76.56% mAP and 84.25% rank-1 on the test set.
arXiv Detail & Related papers (2020-12-25T03:02:46Z)
- Effective Label Propagation for Discriminative Semi-Supervised Domain Adaptation [76.41664929948607]
Semi-supervised domain adaptation (SSDA) methods have demonstrated great potential in large-scale image classification tasks.
We present a novel and effective method to tackle this problem by using effective inter-domain and intra-domain semantic information propagation.
Our source code and pre-trained models will be released soon.
arXiv Detail & Related papers (2020-12-04T14:28:19Z)
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$^2$KT) to align the relevant categories across two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)