Domain-Aware Contrastive Knowledge Transfer for Multi-domain Imbalanced
Data
- URL: http://arxiv.org/abs/2204.01916v1
- Date: Tue, 5 Apr 2022 01:02:53 GMT
- Title: Domain-Aware Contrastive Knowledge Transfer for Multi-domain Imbalanced
Data
- Authors: Zixuan Ke, Mohammad Kachuee, Sungjin Lee
- Abstract summary: We study multi-domain imbalanced learning (MIL), the scenario in which there is imbalance not only in classes but also in domains.
In the MIL setting, different domains exhibit different patterns, and there is a varying degree of similarity and divergence among domains, posing opportunities and challenges for transfer learning.
We propose a novel domain-aware contrastive knowledge transfer method called DCMI to encourage positive transfer among similar domains.
- Score: 23.22953767588902
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In many real-world machine learning applications, samples belong to a
set of domains; e.g., for product reviews, each review belongs to a product
category. In this paper, we study multi-domain imbalanced learning (MIL), the
scenario in which there is imbalance not only in classes but also in domains.
In the MIL setting, different domains exhibit different patterns, and there is
a varying degree of similarity and divergence among domains, posing
opportunities and challenges for transfer learning, especially when faced with
limited or insufficient training data. We propose a novel domain-aware
contrastive knowledge transfer method called DCMI to (1) identify the shared
domain knowledge to encourage positive transfer among similar domains (in
particular from head domains to tail domains); and (2) isolate the
domain-specific knowledge to minimize negative transfer from dissimilar
domains. We evaluated DCMI on three different datasets, showing significant
improvements in different MIL scenarios.
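The core idea of the abstract, encouraging transfer between same-class samples while weighting pairs by how similar their domains are, can be sketched as a domain-weighted contrastive loss. The function below is a hypothetical illustration only, not the paper's exact formulation; the `dom_sim` matrix and the per-pair weighting scheme are assumptions.

```python
import numpy as np

def domain_aware_contrastive_loss(emb, labels, domains, dom_sim, temp=0.1):
    """Sketch of a domain-weighted InfoNCE-style loss (illustrative only).

    Same-class pairs act as positives; each positive pair is weighted by
    dom_sim[d_i, d_j], so pairs from similar domains (e.g. head -> tail)
    contribute more and pairs from dissimilar domains contribute less.
    """
    z = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # L2-normalize embeddings
    sim = z @ z.T / temp                                   # temperature-scaled cosine sims
    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        mask = np.arange(n) != i                           # all other samples as candidates
        logsumexp = np.log(np.exp(sim[i, mask]).sum())
        for j in range(n):
            if j != i and labels[j] == labels[i]:
                w = dom_sim[domains[i], domains[j]]        # down-weight dissimilar domains
                loss += -w * (sim[i, j] - logsumexp)       # weighted InfoNCE term (>= 0)
                count += 1
    return loss / max(count, 1)
```

With a uniform `dom_sim`, the loss reduces to a plain supervised contrastive objective: it is small when same-class samples cluster tightly and large when they are scattered across the embedding space.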
Related papers
- A Collaborative Transfer Learning Framework for Cross-domain
Recommendation [12.880177078884927]
In recommendation systems, multiple business domains exist to meet the diverse interests and needs of users.
We propose the Collaborative Cross-Domain Transfer Learning Framework (CCTL) to overcome these challenges.
CCTL evaluates the information gain of the source domain on the target domain using a symmetric companion network.
arXiv Detail & Related papers (2023-06-26T09:43:58Z)
- Meta-causal Learning for Single Domain Generalization [102.53303707563612]
Single domain generalization aims to learn a model from a single training domain (source domain) and apply it to multiple unseen test domains (target domains).
Existing methods focus on expanding the distribution of the training domain to cover the target domains, but without estimating the domain shift between the source and target domains.
We propose a new learning paradigm, namely simulate-analyze-reduce, which first simulates the domain shift by building an auxiliary domain as the target domain, then learns to analyze the causes of domain shift, and finally learns to reduce the domain shift for model adaptation.
arXiv Detail & Related papers (2023-04-07T15:46:38Z)
- Aggregation of Disentanglement: Reconsidering Domain Variations in Domain Generalization [9.577254317971933]
We argue that the domain variations also contain useful information, i.e., classification-aware information, for downstream tasks.
We propose a novel paradigm called Domain Disentanglement Network (DDN) to disentangle the domain expert features from the source domain images.
We also propound a new contrastive learning method to guide the domain expert features to form a more balanced and separable feature space.
arXiv Detail & Related papers (2023-02-05T09:48:57Z)
- Improving Fake News Detection of Influential Domain via Domain- and Instance-Level Transfer [16.886024206337257]
We propose a Domain- and Instance-level Transfer Framework for Fake News Detection (DITFEND).
DITFEND improves performance on specific target domains.
Online experiments show that it brings additional improvements over the base models in a real-world scenario.
arXiv Detail & Related papers (2022-09-19T10:21:13Z)
- Multi-Scale Multi-Target Domain Adaptation for Angle Closure Classification [50.658613573816254]
We propose a novel Multi-scale Multi-target Domain Adversarial Network (M2DAN) for angle closure classification.
Based on these domain-invariant features at different scales, the deep model trained on the source domain is able to classify angle closure on multiple target domains.
arXiv Detail & Related papers (2022-08-25T15:27:55Z)
- Domain Generalization via Selective Consistency Regularization for Time Series Classification [16.338176636365752]
Domain generalization methods aim to learn models robust to domain shift with data from a limited number of source domains.
We propose a novel representation learning methodology that selectively enforces prediction consistency between source domains.
arXiv Detail & Related papers (2022-06-16T01:57:35Z)
- Domain Invariant Masked Autoencoders for Self-supervised Learning from Multi-domains [73.54897096088149]
We propose a Domain-invariant Masked AutoEncoder (DiMAE) for self-supervised learning from multi-domains.
The core idea is to augment the input image with style noise from different domains and then reconstruct the image from the embedding of the augmented image.
Experiments on PACS and DomainNet illustrate that DiMAE achieves considerable gains compared with recent state-of-the-art methods.
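The "style noise" augmentation described above can be approximated with an AdaIN-style statistics swap: replace the channel-wise mean and standard deviation of a content image with those of an image from another domain. This is a minimal sketch under that assumption; DiMAE's actual noise-injection mechanism may differ.

```python
import numpy as np

def style_noise(content, style, eps=1e-6):
    """Swap channel statistics of `content` (C, H, W) with those of `style`.

    AdaIN-style augmentation sketch: the spatial structure of `content` is
    kept, while its per-channel mean/std are replaced by the style image's,
    simulating a domain/style shift for the reconstruction objective.
    """
    mu_c = content.mean(axis=(1, 2), keepdims=True)
    sd_c = content.std(axis=(1, 2), keepdims=True) + eps
    mu_s = style.mean(axis=(1, 2), keepdims=True)
    sd_s = style.std(axis=(1, 2), keepdims=True)
    return (content - mu_c) / sd_c * sd_s + mu_s
```

In a DiMAE-like setup, the autoencoder would then be trained to reconstruct the original `content` image from the embedding of this style-perturbed input, encouraging domain-invariant features.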
arXiv Detail & Related papers (2022-05-10T09:49:40Z)
- Label Distribution Learning for Generalizable Multi-source Person Re-identification [48.77206888171507]
Person re-identification (Re-ID) is a critical technique in the video surveillance system.
It is difficult to directly apply a supervised model to arbitrary unseen domains.
We propose a novel label distribution learning (LDL) method to address the generalizable multi-source person Re-ID task.
arXiv Detail & Related papers (2022-04-12T15:59:10Z)
- Open Domain Generalization with Domain-Augmented Meta-Learning [83.59952915761141]
We study a novel and practical problem of Open Domain Generalization (OpenDG).
We propose a Domain-Augmented Meta-Learning framework to learn open-domain generalizable representations.
Experiment results on various multi-domain datasets demonstrate that the proposed Domain-Augmented Meta-Learning (DAML) outperforms prior methods for unseen domain recognition.
arXiv Detail & Related papers (2021-04-08T09:12:24Z)
- Domain2Vec: Domain Embedding for Unsupervised Domain Adaptation [56.94873619509414]
Conventional unsupervised domain adaptation studies the knowledge transfer between a limited number of domains.
We propose a novel Domain2Vec model to provide vectorial representations of visual domains based on joint learning of feature disentanglement and Gram matrix.
We demonstrate that our embedding is capable of predicting domain similarities that match our intuition about visual relations between different domains.
arXiv Detail & Related papers (2020-07-17T22:05:09Z)
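The Domain2Vec summary above mentions Gram-matrix-based domain embeddings. A minimal sketch of that idea, omitting the jointly learned feature disentanglement and using raw feature activations as an assumption, is:

```python
import numpy as np

def domain_embedding(feats):
    """Represent a domain by the flattened Gram matrix of its features.

    `feats` is an (n_samples, n_features) activation matrix; the Gram matrix
    captures second-order feature statistics of the domain. Domain2Vec's
    actual model additionally learns disentangled features, omitted here.
    """
    gram = feats.T @ feats / len(feats)
    return gram.flatten()

def domain_similarity(feats_a, feats_b):
    """Cosine similarity between two domains' Gram-matrix embeddings."""
    ea, eb = domain_embedding(feats_a), domain_embedding(feats_b)
    return float(ea @ eb / (np.linalg.norm(ea) * np.linalg.norm(eb)))
```

Under this sketch, two samples drawn from the same feature distribution yield near-identical Gram matrices and hence high similarity, while a domain with a different feature covariance scores lower, matching the intuition about visual relations between domains.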
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.