Gradual Source Domain Expansion for Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2311.09599v1
- Date: Thu, 16 Nov 2023 06:18:35 GMT
- Title: Gradual Source Domain Expansion for Unsupervised Domain Adaptation
- Authors: Thomas Westfechtel, Hao-Wei Yeh, Dexuan Zhang, Tatsuya Harada
- Abstract summary: Unsupervised domain adaptation (UDA) tries to overcome the need for a large labeled dataset by transferring knowledge from a source dataset to a target dataset.
We propose a gradual source domain expansion (GSDE) algorithm to overcome this problem.
GSDE trains the UDA task several times from scratch, reinitializing the network weights each time but expanding the source dataset with target data.
- Score: 45.207132297204424
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised domain adaptation (UDA) tries to overcome the need for
a large labeled dataset by transferring knowledge from a source dataset, rich
in labeled data, to a target dataset that has no labels. Since there are no
labels in the target domain, early misalignment might propagate into the later
stages and lead to an error build-up. To overcome this problem, we propose a
gradual source domain expansion (GSDE) algorithm. GSDE trains the UDA task
several times from scratch, reinitializing the network weights each time but
expanding the source dataset with target data. In particular, the
highest-scoring target data of the previous run are employed as pseudo-source
samples with their respective pseudo-label. Using this strategy, the
pseudo-source samples induce knowledge extracted from the previous run directly
from the start of the new training. This helps align the two domains better,
especially in the early training epochs. In this study, we first introduce a
strong baseline network and apply our GSDE strategy to it. We conduct
experiments and ablation studies on three benchmarks (Office-31, Office-Home,
and DomainNet) and outperform state-of-the-art methods. We further show that
the proposed GSDE strategy can improve the accuracy of a variety of different
state-of-the-art UDA approaches.
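The retraining loop described in the abstract can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: a nearest-centroid classifier stands in for the full UDA network, and all names (`gsde`, `expand_frac`, `nearest_centroid_fit`) are hypothetical placeholders.

```python
def nearest_centroid_fit(samples):
    """'Train from scratch': fit per-class centroids on (point, label) pairs.
    Stands in for reinitializing and training the full UDA network."""
    sums, counts = {}, {}
    for (x, y), label in samples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {c: (sx / counts[c], sy / counts[c]) for c, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Return (label, confidence); confidence is the negative squared distance."""
    dists = {c: (point[0] - cx) ** 2 + (point[1] - cy) ** 2
             for c, (cx, cy) in centroids.items()}
    label = min(dists, key=dists.get)
    return label, -dists[label]

def gsde(source, target, num_runs=3, expand_frac=0.5):
    """Gradual source domain expansion: retrain from scratch each run,
    expanding the source set with the highest-scoring target samples and
    their pseudo-labels from the previous run."""
    pseudo_source = []
    for _ in range(num_runs):
        # Reinitialize and retrain on the expanded source set.
        centroids = nearest_centroid_fit(source + pseudo_source)
        # Score all target samples and keep the most confident fraction.
        scored = [(pt, *predict(centroids, pt)) for pt in target]
        scored.sort(key=lambda t: t[2], reverse=True)  # highest confidence first
        k = int(expand_frac * len(target))
        pseudo_source = [(pt, lbl) for pt, lbl, _ in scored[:k]]
    return centroids

# Toy data: labeled source clusters and a slightly shifted, unlabeled target.
source = [((0.0, 0.0), 0), ((0.2, 0.1), 0), ((2.0, 2.0), 1), ((2.2, 1.9), 1)]
target = [(0.5, 0.6), (0.4, 0.5), (2.5, 2.6), (2.6, 2.4)]
model = gsde(source, target)
print(predict(model, (0.5, 0.5))[0])  # class of a point near the origin → 0
```

Each run refits the model from scratch on the expanded source set, so knowledge from the previous run enters only through the pseudo-labeled target samples, which is what lets the two domains align from the very first epochs of the new training.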
Related papers
- IT-RUDA: Information Theory Assisted Robust Unsupervised Domain Adaptation [7.225445443960775]
Distribution shift between train (source) and test (target) datasets is a common problem encountered in machine learning applications.
The UDA technique carries out knowledge transfer from a label-rich source domain to an unlabeled target domain.
Outliers that exist in either source or target datasets can introduce additional challenges when using UDA in practice.
arXiv Detail & Related papers (2022-10-24T04:33:52Z)
- Deep Unsupervised Domain Adaptation: A Review of Recent Advances and Perspectives [16.68091981866261]
Unsupervised domain adaptation (UDA) is proposed to counter the performance drop on data in a target domain.
UDA has yielded promising results on natural image processing, video analysis, natural language processing, time-series data analysis, medical image analysis, etc.
arXiv Detail & Related papers (2022-08-15T20:05:07Z)
- Domain Alignment Meets Fully Test-Time Adaptation [24.546705919244936]
A foundational requirement of a deployed ML model is to generalize to data drawn from a testing distribution that is different from training.
In this paper, we focus on a challenging variant of this problem, where access to the original source data is restricted.
We propose a new approach, CATTAn, that bridges UDA and FTTA, by relaxing the need to access entire source data.
arXiv Detail & Related papers (2022-07-09T03:17:19Z)
- ProCST: Boosting Semantic Segmentation using Progressive Cyclic Style-Transfer [38.03127458140549]
We propose a novel two-stage framework for improving domain adaptation techniques.
In the first step, we progressively train a multi-scale neural network to perform an initial transfer from the source data to the target data.
This new data has a reduced domain gap from the desired target domain, and the applied UDA approach further closes the gap.
arXiv Detail & Related papers (2022-04-25T18:01:05Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Target and Task specific Source-Free Domain Adaptive Image Segmentation [73.78898054277538]
We propose a two-stage approach for source-free domain adaptive image segmentation.
We focus on generating target-specific pseudo labels while suppressing high entropy regions.
In the second stage, we focus on adapting the network for task-specific representation.
arXiv Detail & Related papers (2022-03-29T17:50:22Z)
- UMAD: Universal Model Adaptation under Domain and Category Shift [138.12678159620248]
Universal Model ADaptation (UMAD) framework handles both UDA scenarios without access to source data.
We develop an informative consistency score to help distinguish unknown samples from known samples.
Experiments on open-set and open-partial-set UDA scenarios demonstrate that UMAD exhibits comparable, if not superior, performance to state-of-the-art data-dependent methods.
arXiv Detail & Related papers (2021-12-16T01:22:59Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Multi-Source Deep Domain Adaptation with Weak Supervision for Time-Series Sensor Data [31.43183992755392]
We propose a novel Convolutional deep Domain Adaptation model for Time Series data (CoDATS).
Second, we propose a novel Domain Adaptation with Weak Supervision (DA-WS) method by utilizing weak supervision in the form of target-domain label distributions.
Third, we perform comprehensive experiments on diverse real-world datasets to evaluate the effectiveness of our domain adaptation and weak supervision methods.
arXiv Detail & Related papers (2020-05-22T04:16:58Z)
- Deep Domain-Adversarial Image Generation for Domain Generalisation [115.21519842245752]
Machine learning models typically suffer from the domain shift problem when trained on a source dataset and evaluated on a target dataset of different distribution.
To overcome this problem, domain generalisation (DG) methods aim to leverage data from multiple source domains so that a trained model can generalise to unseen domains.
We propose a novel DG approach based on Deep Domain-Adversarial Image Generation (DDAIG).
arXiv Detail & Related papers (2020-03-12T23:17:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.