Target Domain Data induces Negative Transfer in Mixed Domain Training
with Disjoint Classes
- URL: http://arxiv.org/abs/2303.01003v1
- Date: Thu, 2 Mar 2023 06:44:21 GMT
- Title: Target Domain Data induces Negative Transfer in Mixed Domain Training
with Disjoint Classes
- Authors: Eryk Banatt, Vickram Rajendran, Liam Packer
- Abstract summary: In practical scenarios, it is often the case that the available training data within the target domain only exist for a limited number of classes.
We show that including the target domain in training when there exist disjoint classes between the target and surrogate domains creates significant negative transfer.
- Score: 1.933681537640272
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In practical scenarios, it is often the case that the available training data
within the target domain only exist for a limited number of classes, with the
remaining classes only available within surrogate domains. We show that
including the target domain in training when there exist disjoint classes
between the target and surrogate domains creates significant negative transfer,
and causes performance to significantly decrease compared to training without
the target domain at all. We hypothesize that this negative transfer is due to
an intermediate shortcut that only occurs when multiple source domains are
present, and provide experimental evidence that this may be the case. We show
that this phenomenon occurs on over 25 distinct domain shifts, both synthetic
and real, and in many cases degrades performance to well below random, even
when using state-of-the-art domain adaptation methods.
Related papers
- Multi-modal Instance Refinement for Cross-domain Action Recognition [25.734898762987083]
Unsupervised cross-domain action recognition aims at adapting the model trained on an existing labeled source domain to a new unlabeled target domain.
We propose a Multi-modal Instance Refinement (MMIR) method to alleviate the negative transfer based on reinforcement learning.
Our method finally outperforms several other state-of-the-art baselines in cross-domain action recognition on the benchmark EPIC-Kitchens dataset.
arXiv Detail & Related papers (2023-11-24T05:06:28Z) - Cross-domain Transfer of defect features in technical domains based on
partial target data [0.0]
In many technical domains, it is often only the defect or worn-part reject classes that are insufficiently represented.
The proposed classification approach addresses such conditions and is based on a CNN encoder.
It is benchmarked in a technical and a non-technical domain and shows convincing classification results.
arXiv Detail & Related papers (2022-11-24T15:23:58Z) - MemSAC: Memory Augmented Sample Consistency for Large Scale Unsupervised
Domain Adaptation [71.4942277262067]
We propose MemSAC, which exploits sample level similarity across source and target domains to achieve discriminative transfer.
We provide in-depth analysis and insights into the effectiveness of MemSAC.
arXiv Detail & Related papers (2022-07-25T17:55:28Z) - Domain Generalization via Selective Consistency Regularization for Time
Series Classification [16.338176636365752]
Domain generalization methods aim to learn models robust to domain shift with data from a limited number of source domains.
We propose a novel representation learning methodology that selectively enforces prediction consistency between source domains.
arXiv Detail & Related papers (2022-06-16T01:57:35Z) - From Big to Small: Adaptive Learning to Partial-Set Domains [94.92635970450578]
Domain adaptation aims at acquiring knowledge from a labeled source domain and transferring it to an unlabeled target domain under distribution shift.
Recent advances show that deep pre-trained models of large scale endow rich knowledge to tackle diverse downstream tasks of small scale.
This paper introduces Partial Domain Adaptation (PDA), a learning paradigm that relaxes the identical class space assumption, requiring only that the source class space subsumes the target class space.
arXiv Detail & Related papers (2022-03-14T07:02:45Z) - Cost-effective Framework for Gradual Domain Adaptation with
Multifidelity [3.6042575355093907]
In domain adaptation, when there is a large distance between the source and target domains, the prediction performance will degrade.
We propose a framework that combines multifidelity and active domain adaptation.
The effectiveness of the proposed method is evaluated by experiments with both artificial and real-world datasets.
arXiv Detail & Related papers (2022-02-09T09:44:39Z) - Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
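The cross-domain contrastive idea can be illustrated with a generic InfoNCE-style sketch (not the paper's implementation; pairing each target feature with the same-index source feature stands in for same-class matching via pseudo-labels, and the `temperature` value is an assumed hyperparameter):

```python
import numpy as np

def l2_normalize(x):
    """Project features onto the unit sphere, as is standard
    for contrastive objectives."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def cross_domain_contrastive_loss(src_feats, tgt_feats, temperature=0.1):
    """Toy cross-domain contrastive objective: each target feature is
    pulled toward the source feature at the same index (treated as its
    positive) and pushed away from all other source features."""
    s = l2_normalize(src_feats)
    t = l2_normalize(tgt_feats)
    logits = t @ s.T / temperature                 # (n_tgt, n_src) similarities
    # Softmax cross-entropy with the diagonal as the positive pair.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Driving this loss down pulls same-class features from the two domains together, which is one way to reduce the train/test domain discrepancy the summary mentions.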
arXiv Detail & Related papers (2021-06-10T06:32:30Z) - Physically-Constrained Transfer Learning through Shared Abundance Space
for Hyperspectral Image Classification [14.840925517957258]
We propose a new transfer learning scheme to bridge the gap between the source and target domains.
The proposed method is referred to as physically-constrained transfer learning through shared abundance space.
arXiv Detail & Related papers (2020-08-19T17:41:37Z) - Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much fewer annotated data in the target domain compared to the source domain.
Our parser benefits from a two-stage coarse-to-fine framework, and can thus provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z) - Differential Treatment for Stuff and Things: A Simple Unsupervised
Domain Adaptation Method for Semantic Segmentation [105.96860932833759]
State-of-the-art approaches show that performing semantic-level alignment is helpful in tackling the domain shift issue.
We propose to improve the semantic-level alignment with different strategies for stuff regions and for things.
We further show that our method helps ease this issue by minimizing the distance between the most similar stuff and instance features across the source and the target domains.
arXiv Detail & Related papers (2020-03-18T04:43:25Z) - A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation [142.31610972922067]
This work addresses the unsupervised domain adaptation problem, especially in the case of class labels in the target domain being only a subset of those in the source domain.
We build on domain adversarial learning and propose a novel domain adaptation method, BA$3$US, with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS).
Experimental results on multiple benchmarks demonstrate that BA$3$US surpasses state-of-the-art methods on partial domain adaptation tasks.
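BA$3$US builds on domain adversarial learning, whose core mechanism is the gradient reversal trick: the feature extractor is trained to fool a domain discriminator by flipping the discriminator's gradient. A minimal sketch of that trick (a generic illustration, not the authors' code; `lam` is an assumed trade-off weight):

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; flips (and scales) the gradient in
    the backward pass. Placed between the feature extractor and the
    domain discriminator, it makes the extractor maximize the
    discriminator's loss, encouraging domain-invariant features."""

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off weight for the reversed gradient

    def forward(self, x):
        # Features pass through unchanged.
        return x

    def backward(self, grad_output):
        # Gradient flowing back to the feature extractor is negated.
        return -self.lam * grad_output
```

In the partial setting addressed here, techniques like BAA additionally rebalance mini-batches so that source-only classes do not dominate the adversarial alignment.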
arXiv Detail & Related papers (2020-03-05T11:37:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.