Boosting Novel Category Discovery Over Domains with Soft Contrastive
Learning and All-in-One Classifier
- URL: http://arxiv.org/abs/2211.11262v3
- Date: Sun, 23 Jul 2023 07:18:08 GMT
- Title: Boosting Novel Category Discovery Over Domains with Soft Contrastive
Learning and All-in-One Classifier
- Authors: Zelin Zang, Lei Shang, Senqiao Yang, Fei Wang, Baigui Sun, Xuansong
Xie, Stan Z. Li
- Abstract summary: Unsupervised domain adaptation (UDA) has proven to be highly effective in transferring knowledge from a label-rich source domain to a label-scarce target domain.
The presence of additional novel categories in the target domain has led to the development of open-set domain adaptation (ODA) and universal domain adaptation (UNDA).
A framework named Soft-contrastive All-in-one Network (SAN) is proposed for ODA and UNDA tasks.
- Score: 36.821743383552864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation (UDA) has proven to be highly effective in
transferring knowledge from a label-rich source domain to a label-scarce target
domain. However, the presence of additional novel categories in the target
domain has led to the development of open-set domain adaptation (ODA) and
universal domain adaptation (UNDA). Existing ODA and UNDA methods treat all
novel categories as a single, unified unknown class and attempt to detect it
during training. However, we found that domain variance can lead to more
significant view-noise in unsupervised data augmentation, which affects the
effectiveness of contrastive learning (CL) and causes the model to be
overconfident in novel category discovery. To address these issues, a framework
named Soft-contrastive All-in-one Network (SAN) is proposed for ODA and UNDA
tasks. SAN includes a novel data-augmentation-based soft contrastive learning
(SCL) loss to fine-tune the backbone for feature transfer and a more
human-intuitive classifier to improve new class discovery capability. The SCL
loss weakens the adverse effects of the data augmentation view-noise problem
which is amplified in domain transfer tasks. The All-in-One (AIO) classifier
overcomes the overconfidence problem of current mainstream closed-set and
open-set classifiers. Visualization and ablation experiments demonstrate the
effectiveness of the proposed innovations. Furthermore, extensive experiment
results on ODA and UNDA show that SAN outperforms existing state-of-the-art
methods.
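The abstract does not give the exact form of the SCL loss, but the general idea it describes, softening the hard one-hot target of a contrastive (InfoNCE-style) loss so that a possibly noisy augmentation view is not treated as a certain positive, can be sketched as follows. The function name, the uniform smoothing scheme, and the parameter `alpha` are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def soft_contrastive_loss(sim, pos_idx, tau=0.1, alpha=0.8):
    """Cross-entropy over scaled similarities with a softened positive target.

    sim     : (N, M) similarity matrix between N anchors and M candidates.
    pos_idx : index of the nominal augmented "positive" view for each anchor.
    tau     : temperature for the softmax over similarities.
    alpha   : probability mass kept on the nominal positive; the remaining
              (1 - alpha) is spread uniformly over the other candidates,
              hedging against view-noise (mislabeled positives).
    """
    logits = sim / tau
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    n, m = sim.shape
    target = np.full((n, m), (1.0 - alpha) / (m - 1))  # soft mass on negatives
    target[np.arange(n), pos_idx] = alpha              # bulk of mass on positive
    return float(-(target * log_p).sum(axis=1).mean())
```

Setting `alpha=1.0` recovers the usual hard-target contrastive cross-entropy; lowering it reduces the penalty when an augmentation view turns out not to match its anchor, which is the failure mode the abstract attributes to domain variance.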
Related papers
- EIANet: A Novel Domain Adaptation Approach to Maximize Class Distinction with Neural Collapse Principles [15.19374752514876]
Source-free domain adaptation (SFDA) aims to transfer knowledge from a labelled source domain to an unlabelled target domain.
A major challenge in SFDA is deriving accurate categorical information for the target domain.
We introduce a novel ETF-Informed Attention Network (EIANet) to separate class prototypes.
arXiv Detail & Related papers (2024-07-23T05:31:05Z) - Disentangling Masked Autoencoders for Unsupervised Domain Generalization [57.56744870106124]
Unsupervised domain generalization is fast gaining attention but is still far from well-studied.
Disentangled Masked Autoencoder (DisMAE) aims to discover disentangled representations that faithfully reveal intrinsic features.
DisMAE co-trains the asymmetric dual-branch architecture with semantic and lightweight variation encoders.
arXiv Detail & Related papers (2024-07-10T11:11:36Z) - Uncertainty-guided Open-Set Source-Free Unsupervised Domain Adaptation with Target-private Class Segregation [22.474866164542302]
UDA approaches commonly assume that source and target domains share the same label space.
This paper considers the more challenging Source-Free Open-set Domain Adaptation (SF-OSDA) setting.
We propose a novel approach for SF-OSDA that exploits the granularity of target-private categories by segregating their samples into multiple unknown classes.
arXiv Detail & Related papers (2024-04-16T13:52:00Z) - MLNet: Mutual Learning Network with Neighborhood Invariance for
Universal Domain Adaptation [70.62860473259444]
Universal domain adaptation (UniDA) is a practical but challenging problem.
Existing UniDA methods may suffer from the problems of overlooking intra-domain variations in the target domain.
We propose a novel Mutual Learning Network (MLNet) with neighborhood invariance for UniDA.
arXiv Detail & Related papers (2023-12-13T03:17:34Z) - Unsupervised Adaptation of Polyp Segmentation Models via Coarse-to-Fine
Self-Supervision [16.027843524655516]
We study a practical problem of Source-Free Domain Adaptation (SFDA), which eliminates the reliance on annotated source data.
Current SFDA methods focus on extracting domain knowledge from the source-trained model but neglect the intrinsic structure of the target domain.
We propose a new SFDA framework, called Region-to-Pixel Adaptation Network (RPANet), which learns region-level and pixel-level discriminative representations through coarse-to-fine self-supervision.
arXiv Detail & Related papers (2023-08-13T02:37:08Z) - Upcycling Models under Domain and Category Shift [95.22147885947732]
We introduce an innovative global and local clustering learning technique (GLC).
We design a novel, adaptive one-vs-all global clustering algorithm to achieve the distinction across different target classes.
Remarkably, in the most challenging open-partial-set DA scenario, GLC outperforms UMAD by 14.8% on the VisDA benchmark.
arXiv Detail & Related papers (2023-03-13T13:44:04Z) - Imbalanced Open Set Domain Adaptation via Moving-threshold Estimation
and Gradual Alignment [58.56087979262192]
Open Set Domain Adaptation (OSDA) aims to transfer knowledge from a well-labeled source domain to an unlabeled target domain.
The performance of OSDA methods degrades drastically under intra-domain class imbalance and inter-domain label shift.
We propose Open-set Moving-threshold Estimation and Gradual Alignment (OMEGA) to alleviate the negative effects raised by label shift.
arXiv Detail & Related papers (2023-03-08T05:55:02Z) - Crucial Semantic Classifier-based Adversarial Learning for Unsupervised
Domain Adaptation [4.6899218408452885]
Unsupervised Domain Adaptation (UDA) aims to transfer knowledge from a well-labeled source domain to a related unlabeled target domain.
We propose Crucial Semantic Classifier-based Adversarial Learning (CSCAL) to pay more attention to transferring crucial semantic knowledge.
CSCAL can be effortlessly merged into different UDA methods as a regularizer and dramatically promote their performance.
arXiv Detail & Related papers (2023-02-03T13:06:14Z) - Unsupervised Domain Adaptation via Style-Aware Self-intermediate Domain [52.783709712318405]
Unsupervised domain adaptation (UDA) has attracted considerable attention, which transfers knowledge from a label-rich source domain to a related but unlabeled target domain.
We propose a novel style-aware feature fusion method (SAFF) to bridge the large domain gap and transfer knowledge while alleviating the loss of class-discriminative information.
arXiv Detail & Related papers (2022-09-05T10:06:03Z) - Shuffle Augmentation of Features from Unlabeled Data for Unsupervised
Domain Adaptation [21.497019000131917]
Unsupervised Domain Adaptation (UDA) is a branch of transfer learning where labels for target samples are unavailable.
In this paper, we propose Shuffle Augmentation of Features (SAF) as a novel UDA framework.
SAF learns from the target samples, adaptively distills class-aware target features, and implicitly guides the classifier to find comprehensive class borders.
arXiv Detail & Related papers (2022-01-28T07:11:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.