Contrastive Bi-Projector for Unsupervised Domain Adaption
- URL: http://arxiv.org/abs/2308.07017v2
- Date: Sat, 30 Dec 2023 01:31:51 GMT
- Title: Contrastive Bi-Projector for Unsupervised Domain Adaption
- Authors: Lin-Chieh Huang, Hung-Hsu Tsai
- Abstract summary: This paper proposes a novel unsupervised domain adaption (UDA) method based on a contrastive bi-projector (CBP)
The method, called CBPUDA here, effectively encourages the feature extractors (FEs) to generate fewer ambiguous features for classification and domain adaptation.
Experimental results show that the CBPUDA is superior to the conventional UDA methods considered in this paper on UDA and fine-grained UDA tasks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a novel unsupervised domain adaptation (UDA) method
based on a contrastive bi-projector (CBP), which can improve existing UDA methods.
The method, called CBPUDA here, effectively encourages the feature extractors
(FEs) to generate fewer ambiguous features for classification and domain
adaptation. The CBP differs from traditional bi-classifier-based methods in that
the two classifiers are replaced with two projectors, each mapping the input
feature to a distinct feature. These two projectors and the FEs in the CBPUDA
can be trained adversarially to obtain more refined decision boundaries and
thus powerful classification performance.
Two properties of the proposed loss function are analyzed here. The first is an
upper bound on the joint prediction entropy, which is used to form the proposed
loss function, the contrastive discrepancy (CD) loss. The CD loss takes
advantage of both contrastive learning and the bi-classifier design. The second
is an analysis of the gradient of the CD loss, used to overcome its drawback.
The result of this second analysis underlies the gradient scaling (GS) scheme
developed in this paper. The GS scheme can be exploited to tackle the
instability of the CD loss, which arises because training the CBPUDA requires
contrastive learning and adversarial learning at the same time. Using the CD
loss with the GS scheme therefore overcomes this problem, making features more
compact within classes and more distinguishable between classes. Experimental
results show that the CBPUDA is superior to the conventional UDA methods
considered in this paper on UDA and fine-grained UDA tasks.
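The abstract combines a contrastive loss over two projector outputs with a gradient scaling scheme for stability. The sketch below illustrates those two general ingredients in plain Python: an InfoNCE-style contrastive term over pairs of projector outputs, and a norm-based gradient rescaling helper. Both are illustrative stand-ins (names and the temperature/norm parameters are mine), not the paper's exact CD loss or GS scheme, which are derived from the joint-entropy bound.

```python
import math

def cosine_sim(u, v):
    """Cosine similarity between two feature vectors (as Python lists)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cd_loss(pairs, tau=0.5):
    """Toy InfoNCE-style discrepancy over (z1, z2) pairs, where z1 and z2
    are the two projectors' outputs for the same input. Lower when the two
    projections of each input agree and differ from other inputs."""
    loss = 0.0
    for i, (z1, _) in enumerate(pairs):
        pos = math.exp(cosine_sim(z1, pairs[i][1]) / tau)
        denom = sum(math.exp(cosine_sim(z1, z2) / tau) for _, z2 in pairs)
        loss += -math.log(pos / denom)
    return loss / len(pairs)

def scale_gradient(grad, max_norm=1.0):
    """Toy gradient scaling: rescale the gradient when its norm exceeds a
    cap, damping the unstable updates that combined contrastive and
    adversarial training can produce."""
    norm = math.sqrt(sum(g * g for g in grad))
    if norm > max_norm:
        grad = [g * max_norm / norm for g in grad]
    return grad
```

In a training loop the scaled gradient of the contrastive term would be applied to the projectors and, with reversed sign, to the feature extractor for the adversarial step.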
Related papers
- A Study on Adversarial Robustness of Discriminative Prototypical Learning [0.24999074238880484]
We propose a novel adversarial training framework named Adversarial Deep Positive-Negative Prototypes (Adv-DPNP)
Adv-DPNP integrates discriminative prototype-based learning with adversarial training.
Our approach utilizes a composite loss function combining positive prototype alignment, negative prototype repulsion, and consistency regularization.
arXiv Detail & Related papers (2025-04-03T15:42:58Z) - One Stone, Two Birds: Enhancing Adversarial Defense Through the Lens of Distributional Discrepancy [30.502354813427523]
Statistical adversarial data detection (SADD) detects whether an upcoming batch contains adversarial examples (AEs).
We propose a two-pronged adversarial defense method, named Distributional-discrepancy-based Adversarial Defense (DAD).
arXiv Detail & Related papers (2025-03-04T01:16:21Z) - Self-degraded contrastive domain adaptation for industrial fault diagnosis with bi-imbalanced data [7.6544734853901035]
We propose a self-degraded contrastive domain adaptation framework to handle the domain discrepancy under the bi-imbalanced data.
It first pre-trains the feature extractor via imbalance-aware contrastive learning based on model pruning.
Then it forces the samples away from the domain boundary based on supervised contrastive domain adversarial learning.
arXiv Detail & Related papers (2024-05-31T08:51:57Z) - How Useful is Continued Pre-Training for Generative Unsupervised Domain Adaptation? [23.454153602068786]
We evaluate the utility of Continued Pre-Training (CPT) for generative UDA.
Our findings suggest that CPT implicitly learns the downstream task while predicting masked words informative to that task.
arXiv Detail & Related papers (2024-01-31T00:15:34Z) - Deep Metric Learning for Unsupervised Remote Sensing Change Detection [60.89777029184023]
Remote Sensing Change Detection (RS-CD) aims to detect relevant changes from Multi-Temporal Remote Sensing Images (MT-RSIs)
The performance of existing RS-CD methods is attributed to training on large annotated datasets.
This paper proposes an unsupervised CD method based on deep metric learning that can deal with both of these issues.
arXiv Detail & Related papers (2023-03-16T17:52:45Z) - Discriminator-free Unsupervised Domain Adaptation for Multi-label Image
Classification [11.825795835537324]
A discriminator-free adversarial-based Unsupervised Domain Adaptation (UDA) method for Multi-Label Image Classification (MLIC) is proposed.
The proposed method is evaluated on several multi-label image datasets covering three different types of domain shift.
arXiv Detail & Related papers (2023-01-25T14:45:13Z) - The KFIoU Loss for Rotated Object Detection [115.334070064346]
In this paper, we argue that one effective alternative is to devise an approximate loss that can achieve trend-level alignment with the SkewIoU loss.
Specifically, we model the objects as Gaussian distribution and adopt Kalman filter to inherently mimic the mechanism of SkewIoU.
The resulting new loss called KFIoU is easier to implement and works better compared with exact SkewIoU.
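The Gaussian modeling step mentioned above has a standard closed form in this line of work: a rotated box (cx, cy, w, h, θ) maps to a 2-D Gaussian with mean at the box center and covariance R·diag(w²/4, h²/4)·Rᵀ. A minimal sketch, assuming that convention (the 1/4 scaling and variable names are mine):

```python
import math

def box_to_gaussian(cx, cy, w, h, theta):
    """Convert a rotated box (center cx, cy; size w, h; angle theta in
    radians) to a 2-D Gaussian: mean = box center, covariance =
    R diag(w^2/4, h^2/4) R^T where R is the rotation matrix."""
    a, b = (w / 2.0) ** 2, (h / 2.0) ** 2
    c, s = math.cos(theta), math.sin(theta)
    mean = (cx, cy)
    cov = [[c * c * a + s * s * b, c * s * (a - b)],
           [c * s * (a - b), s * s * a + c * c * b]]
    return mean, cov
```

Losses such as KFIoU then compare two boxes by operating on these Gaussians instead of on the polygons directly, which sidesteps the non-differentiable corner cases of exact SkewIoU.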
arXiv Detail & Related papers (2022-01-29T10:54:57Z) - Cross-Site Severity Assessment of COVID-19 from CT Images via Domain
Adaptation [64.59521853145368]
Early and accurate severity assessment of Coronavirus disease 2019 (COVID-19) based on computed tomography (CT) images greatly helps estimate intensive care unit events.
To augment the labeled data and improve the generalization ability of the classification model, it is necessary to aggregate data from multiple sites.
This task faces several challenges including class imbalance between mild and severe infections, domain distribution discrepancy between sites, and presence of heterogeneous features.
arXiv Detail & Related papers (2021-09-08T07:56:51Z) - Channel DropBlock: An Improved Regularization Method for Fine-Grained
Visual Classification [58.07257910065007]
Existing approaches mainly tackle this problem by introducing attention mechanisms to locate the discriminative parts or feature encoding approaches to extract the highly parameterized features in a weakly-supervised fashion.
In this work, we propose a lightweight yet effective regularization method named Channel DropBlock (CDB) in combination with two alternative correlation metrics, to address this problem.
arXiv Detail & Related papers (2021-06-07T09:03:02Z) - Margin Preserving Self-paced Contrastive Learning Towards Domain
Adaptation for Medical Image Segmentation [51.93711960601973]
We propose a novel margin preserving self-paced contrastive Learning model for cross-modal medical image segmentation.
With the guidance of progressively refined semantic prototypes, a novel margin preserving contrastive loss is proposed to boost the discriminability of embedded representation space.
Experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance.
arXiv Detail & Related papers (2021-03-15T15:23:10Z) - A Generalized Kernel Risk Sensitive Loss for Robust Two-Dimensional
Singular Value Decomposition [11.234115388848283]
Two-dimensional singular value decomposition (2DSVD) has been widely used for image processing tasks, such as image reconstruction, classification, and clustering.
Traditional 2DSVD is based on the mean square error (MSE) loss, which is sensitive to outliers.
We propose a robust 2DSVD based on a generalized kernel risk sensitive loss, which is robust to noise and outliers.
arXiv Detail & Related papers (2020-05-10T14:02:40Z) - Simple and Effective Prevention of Mode Collapse in Deep One-Class
Classification [93.2334223970488]
We propose two regularizers to prevent hypersphere collapse in deep SVDD.
The first regularizer is based on injecting random noise via the standard cross-entropy loss.
The second regularizer penalizes the minibatch variance when it becomes too small.
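The second regularizer above can be read as a hinge on the minibatch feature variance; the sketch below is one illustrative formulation of that idea, not the paper's exact loss (the function name and `margin` parameter are mine):

```python
def variance_penalty(batch, margin=1.0):
    """Hinge penalty that fires when the total per-dimension variance of a
    minibatch of feature vectors drops below `margin`. Guards against
    hypersphere collapse, where the network maps every input to the same
    point and the variance goes to zero."""
    n, d = len(batch), len(batch[0])
    total_var = 0.0
    for j in range(d):
        col = [x[j] for x in batch]
        mean = sum(col) / n
        total_var += sum((v - mean) ** 2 for v in col) / n
    return max(0.0, margin - total_var)
```

Added to the one-class objective, this term is zero for a healthy, spread-out batch and grows as the features collapse toward a single point.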
arXiv Detail & Related papers (2020-01-24T03:44:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.