Semi-Supervised Domain Adaptation with Auto-Encoder via Simultaneous
Learning
- URL: http://arxiv.org/abs/2210.09486v1
- Date: Tue, 18 Oct 2022 00:10:11 GMT
- Title: Semi-Supervised Domain Adaptation with Auto-Encoder via Simultaneous
Learning
- Authors: Md Mahmudur Rahman, Rameswar Panda, Mohammad Arif Ul Alam
- Abstract summary: We present a new semi-supervised domain adaptation framework that combines a novel auto-encoder-based domain adaptation model with a simultaneous learning scheme.
Our framework achieves strong distribution matching by training both source and target auto-encoders.
- Score: 18.601226898819476
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We present a new semi-supervised domain adaptation framework that combines a
novel auto-encoder-based domain adaptation model with a simultaneous learning
scheme, providing stable improvements over state-of-the-art domain adaptation
models. Our framework achieves strong distribution matching by training both
source and target auto-encoders using a novel simultaneous learning scheme on a
single graph with an optimally modified MMD loss objective. Additionally, we
design a semi-supervised classification approach that transfers the aligned,
domain-invariant feature space from the source domain to the target domain. We
evaluate on three datasets and show that our framework effectively addresses
both the fragile convergence of adversarial methods and the weak source-target
distribution matching of discrepancy-based methods, while adapting quickly and
requiring very few iterations.
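To make the recipe above concrete, the following is a minimal PyTorch-style sketch of the general idea, not the authors' released implementation: two auto-encoders (source and target) trained simultaneously in a single graph, a Gaussian-kernel MMD penalty aligning their latent codes, and a shared classifier trained on labeled source samples plus the few labeled target samples. All layer sizes, loss weights, kernel bandwidths, and names (AutoEncoder, gaussian_mmd, train_step) are illustrative assumptions, and the plain multi-kernel MMD here stands in for the paper's "optimally modified" MMD objective.

# Sketch only: dual auto-encoders + MMD alignment + semi-supervised classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_mmd(x, y, sigmas=(1.0, 2.0, 4.0)):
    # Biased squared-MMD estimate using a sum of Gaussian kernels (bandwidths assumed).
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2
        return sum(torch.exp(-d2 / (2 * s ** 2)) for s in sigmas)
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

class AutoEncoder(nn.Module):
    def __init__(self, in_dim=256, latent_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, in_dim))
    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

src_ae, tgt_ae = AutoEncoder(), AutoEncoder()
clf = nn.Linear(64, 10)  # latent_dim -> number of classes (both assumed)
opt = torch.optim.Adam(list(src_ae.parameters()) + list(tgt_ae.parameters())
                       + list(clf.parameters()), lr=1e-3)

def train_step(x_s, y_s, x_t_unlab, x_t_lab, y_t_lab, lam_mmd=1.0, lam_cls=1.0):
    x_t = torch.cat([x_t_unlab, x_t_lab])                    # all target samples, labeled last
    z_s, rec_s = src_ae(x_s)
    z_t, rec_t = tgt_ae(x_t)
    loss = F.mse_loss(rec_s, x_s) + F.mse_loss(rec_t, x_t)   # reconstruction in both domains
    loss = loss + lam_mmd * gaussian_mmd(z_s, z_t)           # align latent distributions
    z_t_lab = z_t[-x_t_lab.size(0):]                         # codes of the few labeled target samples
    logits = clf(torch.cat([z_s, z_t_lab]))                  # semi-supervised classification head
    loss = loss + lam_cls * F.cross_entropy(logits, torch.cat([y_s, y_t_lab]))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

Because both encoders and the classifier sit in one computation graph, a single optimizer step jointly updates reconstruction, alignment, and classification, which is the "simultaneous learning" aspect the abstract emphasizes.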
Related papers
- Contrastive Adversarial Training for Unsupervised Domain Adaptation [2.432037584128226]
Domain adversarial training has been successfully adopted for various domain adaptation tasks.
Large models make adversarial training easily biased towards the source domain and hard to adapt to the target domain.
We propose a contrastive adversarial training (CAT) approach that leverages the labeled source domain samples to reinforce and regulate feature generation for the target domain.
arXiv Detail & Related papers (2024-07-17T17:59:21Z)
- Increasing Model Generalizability for Unsupervised Domain Adaptation [12.013345715187285]
We show that increasing the interclass margins in the embedding space can help to develop a UDA algorithm with improved performance.
We demonstrate that using our approach leads to improved model generalizability on four standard benchmark UDA image classification datasets.
arXiv Detail & Related papers (2022-09-29T09:08:04Z)
- Feed-Forward Latent Domain Adaptation [17.71179872529747]
We study a new, highly practical problem setting that enables resource-constrained edge devices to adapt a pre-trained model to their local data distributions.
Considering the limitations of edge devices, we aim to use only a pre-trained model and adapt it in a feed-forward way, without back-propagation and without access to the source data.
Our solution is to meta-learn a network capable of embedding the mixed-relevance target dataset and dynamically adapting inference for target examples using cross-attention.
arXiv Detail & Related papers (2022-07-15T17:37:42Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- Gradual Domain Adaptation via Self-Training of Auxiliary Models [50.63206102072175]
Domain adaptation becomes more challenging with increasing gaps between source and target domains.
We propose self-training of auxiliary models (AuxSelfTrain) that learns models for intermediate domains.
Experiments on benchmark datasets of unsupervised and semi-supervised domain adaptation verify its efficacy.
arXiv Detail & Related papers (2021-06-18T03:15:25Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Contradistinguisher: A Vapnik's Imperative to Unsupervised Domain Adaptation [7.538482310185133]
We propose a model, referred to as Contradistinguisher, that learns contrastive features and whose objective is to jointly learn to contradistinguish the unlabeled target domain in an unsupervised way.
We achieve the state-of-the-art on Office-31 and VisDA-2017 datasets in both single-source and multi-source settings.
arXiv Detail & Related papers (2020-05-25T19:54:38Z)
- Domain Adaptation by Class Centroid Matching and Local Manifold Self-Learning [8.316259570013813]
We propose a novel domain adaptation approach, which can thoroughly explore the data distribution structure of the target domain.
We regard the samples within the same cluster in the target domain as a whole rather than as individuals, and assign pseudo-labels to each target cluster by class centroid matching (a minimal sketch of this step appears after this list).
An efficient iterative optimization algorithm is designed to solve the objective function of our proposal with theoretical convergence guarantee.
arXiv Detail & Related papers (2020-03-20T16:59:27Z)
- Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that distributions between the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z)
- Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation facilitates learning on the unlabeled target domain by relying on well-established source domain information.
Conventional methods that forcefully reduce the domain discrepancy in the latent space can destroy the intrinsic structure of the data.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)
- Contradictory Structure Learning for Semi-supervised Domain Adaptation [67.89665267469053]
Current adversarial adaptation methods attempt to align the cross-domain features.
Two challenges remain unsolved: 1) the conditional distribution mismatch and 2) the bias of the decision boundary towards the source domain.
We propose a novel framework for semi-supervised domain adaptation by unifying the learning of opposite structures.
arXiv Detail & Related papers (2020-02-06T22:58:20Z)
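For the class centroid matching entry above, here is a minimal illustrative sketch of the pseudo-labeling step, assuming scikit-learn k-means, pre-extracted feature arrays, and a simple nearest-centroid assignment; the paper's actual iterative optimization with its convergence guarantee is not reproduced here.

# Sketch only: label target clusters with the nearest source class centroid.
import numpy as np
from sklearn.cluster import KMeans

def centroid_match_pseudo_labels(src_feats, src_labels, tgt_feats, n_classes):
    # Per-class centroids of the (labeled) source features.
    src_centroids = np.stack([src_feats[src_labels == c].mean(axis=0)
                              for c in range(n_classes)])
    # Cluster the target domain and treat each cluster as a whole.
    km = KMeans(n_clusters=n_classes, n_init=10).fit(tgt_feats)
    # Match each target cluster centroid to its nearest source class centroid.
    dist = np.linalg.norm(km.cluster_centers_[:, None] - src_centroids[None], axis=-1)
    cluster_to_class = dist.argmin(axis=1)
    # Every target sample inherits the class matched to its cluster.
    return cluster_to_class[km.labels_]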
This list is automatically generated from the titles and abstracts of the papers in this site.