centroIDA: Cross-Domain Class Discrepancy Minimization Based on
Accumulative Class-Centroids for Imbalanced Domain Adaptation
- URL: http://arxiv.org/abs/2308.10619v1
- Date: Mon, 21 Aug 2023 10:35:32 GMT
- Title: centroIDA: Cross-Domain Class Discrepancy Minimization Based on
Accumulative Class-Centroids for Imbalanced Domain Adaptation
- Authors: Xiaona Sun and Zhenyu Wu and Yichen Liu and Saier Hu and Zhiqiang Zhan
and Yang Ji
- Abstract summary: We propose a cross-domain class discrepancy minimization method based on accumulative class-centroids for IDA (centroIDA).
A series of experiments shows that our method outperforms other SOTA methods on the IDA problem, especially as the degree of label shift increases.
- Score: 17.97306640457707
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised Domain Adaptation (UDA) approaches address the covariate shift
problem by minimizing the distribution discrepancy between the source and
target domains, assuming that the label distribution is invariant across
domains. However, in the imbalanced domain adaptation (IDA) scenario, covariate
and long-tailed label shifts both exist across domains. To tackle the IDA
problem, some current research focuses on minimizing the distribution
discrepancy of each corresponding class between the source and target
domains. Such methods rely heavily on selecting reliable pseudo labels and
estimating the feature distributions of the target domain, and minority
classes with only a few samples make these estimates more uncertain, which
degrades the model's performance. In this paper, we propose a cross-domain
class discrepancy minimization method based on accumulative class-centroids
for IDA (centroIDA). Firstly, a class-based re-sampling strategy is used to
obtain an unbiased classifier on the source domain. Secondly, an
accumulative class-centroids alignment loss is proposed for iterative
class-centroid alignment across domains. Finally, a class-wise feature
alignment loss is used to optimize the feature representation for a robust
classification boundary. A series of experiments shows that our method
outperforms other SOTA methods on the IDA problem, especially as the degree
of label shift increases.
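Since the abstract only sketches the method, the snippet below is a minimal, hypothetical PyTorch-style illustration of the accumulative class-centroids alignment idea: per-class centroids for the source (from ground-truth labels) and the target (from pseudo labels) are accumulated across iterations with an exponential moving average, and corresponding centroids are pulled together with a squared L2 loss. The class name, the EMA momentum, and the handling of classes absent from a batch are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of accumulative class-centroid alignment (assumed
# interface, not the authors' code). Centroids are accumulated over batches
# with an EMA so that minority classes, which appear rarely in any single
# batch, still get a stable estimate.
import torch


class AccumulativeCentroidAligner:
    def __init__(self, num_classes: int, feat_dim: int, momentum: float = 0.9):
        self.momentum = momentum                         # assumed EMA momentum
        self.src_centroids = torch.zeros(num_classes, feat_dim)
        self.tgt_centroids = torch.zeros(num_classes, feat_dim)

    def _accumulate(self, running: torch.Tensor, feats: torch.Tensor,
                    labels: torch.Tensor) -> torch.Tensor:
        running = running.to(feats.device)
        rows = []
        for c in range(running.size(0)):
            mask = labels == c
            if mask.any():
                batch_mean = feats[mask].mean(dim=0)
                # Mix the (detached) running centroid with this batch's mean so
                # the alignment loss back-propagates through current features.
                rows.append(self.momentum * running[c]
                            + (1.0 - self.momentum) * batch_mean)
            else:
                rows.append(running[c])      # class absent: keep old centroid
        return torch.stack(rows)

    def loss(self, src_feats, src_labels, tgt_feats, tgt_pseudo_labels):
        src_c = self._accumulate(self.src_centroids, src_feats, src_labels)
        tgt_c = self._accumulate(self.tgt_centroids, tgt_feats,
                                 tgt_pseudo_labels)
        # Persist detached centroids for the next iteration (accumulation).
        self.src_centroids = src_c.detach()
        self.tgt_centroids = tgt_c.detach()
        # Squared L2 distance between corresponding class centroids.
        return ((src_c - tgt_c) ** 2).sum(dim=1).mean()
```

For the class-based re-sampling step, a natural choice would be torch.utils.data.WeightedRandomSampler with weights inversely proportional to per-class counts on the source domain; whether the paper uses exactly this sampler is an assumption.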
Related papers
- Polycentric Clustering and Structural Regularization for Source-free
Unsupervised Domain Adaptation [20.952542421577487]
Source-Free Domain Adaptation (SFDA) aims to solve the domain adaptation problem by transferring the knowledge learned from a pre-trained source model to an unseen target domain.
Most existing methods assign pseudo-labels to the target data by generating feature prototypes.
In this paper, a novel framework named PCSR is proposed to tackle SFDA via an intra-class Polycentric Clustering and Structural Regularization strategy.
arXiv Detail & Related papers (2022-10-14T02:20:48Z)
- Cycle Label-Consistent Networks for Unsupervised Domain Adaptation [57.29464116557734]
Domain adaptation aims to leverage a labeled source domain to learn a classifier for the unlabeled target domain with a different distribution.
We propose a simple yet efficient domain adaptation method, i.e. Cycle Label-Consistent Network (CLCN), by exploiting the cycle consistency of classification labels.
We demonstrate the effectiveness of our approach on the MNIST-USPS-SVHN, Office-31, Office-Home and ImageCLEF-DA benchmarks.
arXiv Detail & Related papers (2022-05-27T13:09:08Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Unsupervised domain adaptation via double classifiers based on high confidence pseudo label [8.132250810529873]
Unsupervised domain adaptation (UDA) aims to solve the problem of knowledge transfer from labeled source domain to unlabeled target domain.
Many domain adaptation (DA) methods use centroids to align the local distributions of different domains, that is, to align corresponding classes.
This work rethinks what alignment between different domains means and studies how to achieve true alignment between them.
arXiv Detail & Related papers (2021-05-11T00:51:31Z)
- Discriminative Cross-Domain Feature Learning for Partial Domain Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent practice on domain adaptation manages to extract effective features by incorporating the pseudo labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z)
- Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031]
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
arXiv Detail & Related papers (2020-08-25T02:28:24Z)
- Implicit Class-Conditioned Domain Alignment for Unsupervised Domain Adaptation [18.90240379173491]
Current methods for class-conditioned domain alignment aim to explicitly minimize a loss function based on pseudo-label estimations of the target domain.
We propose a method that removes the need for explicit optimization of model parameters from pseudo-labels directly.
We present a sampling-based implicit alignment approach, where the sample selection procedure is implicitly guided by the pseudo-labels.
arXiv Detail & Related papers (2020-06-09T00:20:21Z)
- A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation [142.31610972922067]
This work addresses the unsupervised domain adaptation problem, especially in the case of class labels in the target domain being only a subset of those in the source domain.
We build on domain adversarial learning and propose a novel domain adaptation method BA$^3$US with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS).
Experimental results on multiple benchmarks demonstrate that our BA$^3$US surpasses the state of the art for partial domain adaptation tasks.
arXiv Detail & Related papers (2020-03-05T11:37:06Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims at adapting a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)