Discriminative Noise Robust Sparse Orthogonal Label Regression-based
Domain Adaptation
- URL: http://arxiv.org/abs/2101.04563v1
- Date: Sat, 9 Jan 2021 07:10:13 GMT
- Title: Discriminative Noise Robust Sparse Orthogonal Label Regression-based
Domain Adaptation
- Authors: Lingkun Luo, Liming Chen, Shiqiang Hu
- Abstract summary: Domain adaptation (DA) aims to enable a learning model trained from a source domain to generalize well on a target domain.
We propose a novel unsupervised DA method, namely Discriminative Noise Robust Sparse Orthogonal Label Regression-based Domain Adaptation.
- Score: 6.61544170496402
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Domain adaptation (DA) aims to enable a learning model trained from a source
domain to generalize well on a target domain, despite the mismatch of data
distributions between the two domains. State-of-the-art DA methods have so far
focused on the search of a latent shared feature space where source and target
domain data can be aligned statistically and/or geometrically. In this
paper, we propose a novel unsupervised DA method, namely Discriminative Noise
Robust Sparse Orthogonal Label Regression-based Domain Adaptation (DOLL-DA).
The proposed DOLL-DA derives from a novel integrated model which searches a
shared feature subspace where source and target domain data are
discriminatively aligned statistically through the optimization of repulsive
force terms, while at the same time orthogonally regressing the data labels
using a label embedding trick. Furthermore, in minimizing a novel Noise Robust
Sparse Orthogonal Label Regression (NRS_OLR) term, the proposed model explicitly
accounts for data outliers to avoid negative transfer and introduces the
property of sparsity when regressing data labels.
Due to the character restriction, please read the detailed abstract in our
paper.
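As a rough illustrative sketch only (not the paper's actual formulation), orthogonal label regression of the kind named above can be read as fitting a semi-orthogonal projection W that regresses one-hot label embeddings from features, which has a closed-form orthogonal Procrustes solution; the one-hot embedding and NumPy solver here are assumptions for illustration:

```python
import numpy as np

def orthogonal_label_regression(X, y, n_classes):
    # One-hot "label embedding" of the class labels (the simplest
    # choice; the paper's embedding trick may differ in detail).
    Y = np.eye(n_classes)[y]
    # Semi-orthogonal W minimizing ||Y - X W||_F subject to
    # W^T W = I: the orthogonal Procrustes solution via SVD.
    U, _, Vt = np.linalg.svd(X.T @ Y, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))          # 100 samples, 8 features
y = rng.integers(0, 3, size=100)       # 3 classes
W = orthogonal_label_regression(X, y, 3)
# W^T W is the 3x3 identity, satisfying the orthogonality constraint
```

The orthogonality constraint keeps the regressed label directions mutually uncorrelated, which is one way such a term can promote discriminativeness.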
Related papers
- Online Continual Domain Adaptation for Semantic Image Segmentation Using
Internal Representations [28.549418215123936]
We develop an online UDA algorithm for semantic segmentation of images that improves model generalization on unannotated domains.
We evaluate our approach on well established semantic segmentation datasets and demonstrate it compares favorably against state-of-the-art (SOTA) semantic segmentation methods.
arXiv Detail & Related papers (2024-01-02T04:48:49Z) - Divide and Contrast: Source-free Domain Adaptation via Adaptive
Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
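For readers unfamiliar with the MMD loss mentioned above, a minimal sketch of a (biased) squared-MMD estimate with an RBF kernel follows; the memory bank and training loop of DaC are omitted, and the kernel choice and bandwidth here are assumptions:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    sq = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-gamma * sq)

def mmd2(x, y, gamma=1.0):
    # Biased estimate of the squared Maximum Mean Discrepancy
    # between the samples x and y in the kernel feature space.
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

rng = np.random.default_rng(0)
same = mmd2(rng.normal(0, 1, (200, 4)), rng.normal(0, 1, (200, 4)))
shifted = mmd2(rng.normal(0, 1, (200, 4)), rng.normal(2, 1, (200, 4)))
# samples from shifted distributions yield a larger MMD than matched ones
```

Minimizing such a term over learned features pulls the two sample distributions together, which is the sense in which DaC reduces the source-like/target-specific mismatch.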
arXiv Detail & Related papers (2022-11-12T09:21:49Z) - Robust Target Training for Multi-Source Domain Adaptation [110.77704026569499]
We propose a novel Bi-level Optimization based Robust Target Training (BORT$2$) method for MSDA.
Our proposed method achieves the state of the art performance on three MSDA benchmarks, including the large-scale DomainNet dataset.
arXiv Detail & Related papers (2022-10-04T15:20:01Z) - Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z) - Instance Relation Graph Guided Source-Free Domain Adaptive Object
Detection [79.89082006155135]
Unsupervised Domain Adaptation (UDA) is an effective approach to tackle the issue of domain shift.
UDA methods try to align the source and target representations to improve the generalization on the target domain.
The Source-Free Domain Adaptation (SFDA) setting aims to alleviate these concerns by adapting a source-trained model for the target domain without requiring access to the source data.
arXiv Detail & Related papers (2022-03-29T17:50:43Z) - MetaCorrection: Domain-aware Meta Loss Correction for Unsupervised
Domain Adaptation in Semantic Segmentation [14.8840510432657]
Unsupervised domain adaptation (UDA) aims to transfer the knowledge from the labeled source domain to the unlabeled target domain.
Existing self-training based UDA approaches assign pseudo labels to target data and treat them as ground truth labels.
However, pseudo labels generated by the model optimized on the source domain inevitably contain noise due to the domain gap.
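A common baseline for coping with such noisy pseudo labels (simpler than MetaCorrection's meta loss correction, and shown here only as an illustrative sketch) is confidence thresholding, where the threshold value is an assumed hyperparameter:

```python
import numpy as np

def select_confident_pseudo_labels(probs, threshold=0.9):
    # probs: (n, c) target-domain class probabilities from the
    # source-trained model. Keep only predictions whose maximum
    # probability exceeds the threshold, reducing label noise.
    conf = probs.max(axis=1)
    keep = conf >= threshold
    return np.flatnonzero(keep), probs.argmax(axis=1)[keep]

probs = np.array([[0.95, 0.03, 0.02],   # confident -> kept
                  [0.40, 0.35, 0.25],   # ambiguous -> dropped
                  [0.05, 0.92, 0.03]])  # confident -> kept
idx, labels = select_confident_pseudo_labels(probs)
# idx == [0, 2], labels == [0, 1]
```

Thresholding trades coverage for precision; loss-correction methods like the one above instead try to use all pseudo labels while modeling their noise.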
arXiv Detail & Related papers (2021-03-09T06:57:03Z) - Open-Set Hypothesis Transfer with Semantic Consistency [99.83813484934177]
We introduce a method that focuses on the semantic consistency under transformation of target data.
Our model first discovers confident predictions and performs classification with pseudo-labels.
As a result, unlabeled data can be classified into discriminative classes that coincide with either source classes or unknown classes.
arXiv Detail & Related papers (2020-10-01T10:44:31Z) - Do We Really Need to Access the Source Data? Source Hypothesis Transfer
for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available and how we can effectively utilize such a model without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z) - Enlarging Discriminative Power by Adding an Extra Class in Unsupervised
Domain Adaptation [5.377369521932011]
We propose enhancing discriminative power by adding a new, artificial class and training the model on the data together with GAN-generated samples of the new class.
Our idea is highly generic so that it is compatible with many existing methods such as DANN, VADA, and DIRT-T.
arXiv Detail & Related papers (2020-02-19T07:58:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.