Towards Fair Knowledge Transfer for Imbalanced Domain Adaptation
- URL: http://arxiv.org/abs/2010.12184v3
- Date: Sun, 10 Oct 2021 22:28:08 GMT
- Title: Towards Fair Knowledge Transfer for Imbalanced Domain Adaptation
- Authors: Taotao Jing, Bingrong Xu, Jingjing Li, Zhengming Ding
- Abstract summary: We propose a Towards Fair Knowledge Transfer (TFKT) framework to handle the fairness challenge in imbalanced cross-domain learning.
Specifically, a novel cross-domain mixup generation augments the minority source set with target information to enhance fairness.
Our model improves overall accuracy by more than 20% on two benchmarks.
- Score: 61.317911756566126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain adaptation (DA) has emerged as a promising technique for
addressing the problem of insufficient or missing annotation by exploiting
external source knowledge. Existing DA algorithms mainly focus on practical
knowledge transfer through domain alignment. Unfortunately, they ignore the
fairness issue that arises when the auxiliary source is extremely imbalanced
across categories, which results in severely under-represented knowledge
adaptation for the minority source set. To this end, we propose a Towards Fair
Knowledge Transfer (TFKT) framework to handle the fairness challenge in
imbalanced cross-domain learning. Specifically, a novel cross-domain mixup
generation is exploited to augment the minority source set with target
information to enhance fairness. Moreover, dual distinct classifiers and
cross-domain prototype alignment are developed to seek a more robust classifier
boundary and mitigate the domain shift. These three strategies are formulated
into a unified framework to address the fairness and domain-shift challenges.
Extensive experiments on two popular benchmarks verify the effectiveness of the
proposed model against existing state-of-the-art DA models; notably, our model
improves overall accuracy by more than 20% on both benchmarks.
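The two mechanisms named in the abstract, cross-domain mixup generation and cross-domain prototype alignment, follow well-known general recipes that can be sketched in a few lines. The snippet below is a minimal NumPy illustration of those general ideas, not the paper's implementation; the function names, the Beta-distributed mixing weight, and the squared-distance prototype loss are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_domain_mixup(minority_src, target, alpha=0.2, n_new=4, rng=rng):
    """Augment a minority source class by convexly mixing its samples with
    target-domain samples, mixup-style:
        x_new = lam * x_src + (1 - lam) * x_tgt,  lam ~ Beta(alpha, alpha).
    Inputs are (n_samples, n_features) arrays; returns (n_new, n_features)."""
    src_idx = rng.integers(0, len(minority_src), size=n_new)
    tgt_idx = rng.integers(0, len(target), size=n_new)
    lam = rng.beta(alpha, alpha, size=(n_new, 1))  # one weight per new sample
    return lam * minority_src[src_idx] + (1 - lam) * target[tgt_idx]

def prototype_alignment_loss(src_feats, src_labels, tgt_feats, tgt_labels):
    """Mean squared distance between per-class feature means (prototypes)
    of the two domains, averaged over classes present in both domains.
    Target labels would typically be pseudo-labels in practice."""
    classes = np.intersect1d(np.unique(src_labels), np.unique(tgt_labels))
    dists = [np.sum((src_feats[src_labels == c].mean(axis=0)
                     - tgt_feats[tgt_labels == c].mean(axis=0)) ** 2)
             for c in classes]
    return float(np.mean(dists))
```

Minimizing the prototype loss pulls each class's source and target feature centroids together, while the mixup samples give minority source classes additional, target-flavored training data.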
Related papers
- Regularized Conditional Alignment for Multi-Domain Text Classification [6.629561563470492]
We propose a method called Regularized Conditional Alignment (RCA) to align the joint distributions of domains and classes.
We employ entropy minimization and virtual adversarial training to constrain the uncertainty of predictions pertaining to unlabeled data.
Empirical results on two benchmark datasets demonstrate that our RCA approach outperforms state-of-the-art MDTC techniques.
arXiv Detail & Related papers (2023-12-18T05:52:05Z)
- CAusal and collaborative proxy-tasKs lEarning for Semi-Supervised Domain Adaptation [20.589323508870592]
Semi-supervised domain adaptation (SSDA) adapts a learner to a new domain by effectively utilizing source domain data and a few labeled target samples.
We show that the proposed model significantly outperforms SOTA methods in terms of effectiveness and generalisability on SSDA datasets.
arXiv Detail & Related papers (2023-03-30T16:48:28Z)
- Joint Attention-Driven Domain Fusion and Noise-Tolerant Learning for Multi-Source Domain Adaptation [2.734665397040629]
Multi-source Unsupervised Domain Adaptation transfers knowledge from multiple source domains with labeled data to an unlabeled target domain.
The distribution discrepancy between different domains and the noisy pseudo-labels in the target domain both lead to performance bottlenecks.
We propose an approach that integrates Attention-driven Domain fusion and Noise-Tolerant learning (ADNT) to address the two issues mentioned above.
arXiv Detail & Related papers (2022-08-05T01:08:41Z)
- Balancing Discriminability and Transferability for Source-Free Domain Adaptation [55.143687986324935]
Conventional domain adaptation (DA) techniques aim to improve domain transferability by learning domain-invariant representations.
The requirement of simultaneous access to labeled source and unlabeled target renders them unsuitable for the challenging source-free DA setting.
We derive novel insights to show that a mixup between original and corresponding translated generic samples enhances the discriminability-transferability trade-off.
arXiv Detail & Related papers (2022-06-16T09:06:22Z)
- Coarse to Fine: Domain Adaptive Crowd Counting via Adversarial Scoring Network [58.05473757538834]
This paper proposes a novel adversarial scoring network (ASNet) to bridge the gap across domains from coarse to fine granularity.
Three sets of migration experiments show that the proposed methods achieve state-of-the-art counting performance.
arXiv Detail & Related papers (2021-07-27T14:47:24Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that simultaneously learns invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$^2$KT) to align the relevant categories across the two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) targets at adapting a model trained over the well-labeled source domain to the unlabeled target domain lying in different distributions.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.