DACS: Domain Adaptation via Cross-domain Mixed Sampling
- URL: http://arxiv.org/abs/2007.08702v2
- Date: Sun, 29 Nov 2020 11:13:40 GMT
- Title: DACS: Domain Adaptation via Cross-domain Mixed Sampling
- Authors: Wilhelm Tranheden, Viktor Olsson, Juliano Pinto, Lennart Svensson
- Abstract summary: Unsupervised domain adaptation attempts to train on labelled data from one domain, and simultaneously learn from unlabelled data in the domain of interest.
We propose DACS: Domain Adaptation via Cross-domain mixed Sampling, which mixes images from the two domains along with the corresponding labels and pseudo-labels.
We demonstrate the effectiveness of our solution by achieving state-of-the-art results for GTA5 to Cityscapes.
- Score: 4.205692673448206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic segmentation models based on convolutional neural networks have
recently displayed remarkable performance for a multitude of applications.
However, these models typically do not generalize well when applied on new
domains, especially when going from synthetic to real data. In this paper we
address the problem of unsupervised domain adaptation (UDA), which attempts to
train on labelled data from one domain (source domain), and simultaneously
learn from unlabelled data in the domain of interest (target domain). Existing
methods have seen success by training on pseudo-labels for these unlabelled
images. Multiple techniques have been proposed to mitigate low-quality
pseudo-labels arising from the domain shift, with varying degrees of success.
We propose DACS: Domain Adaptation via Cross-domain mixed Sampling, which mixes
images from the two domains along with the corresponding labels and
pseudo-labels. These mixed samples are then trained on, in addition to the
labelled data itself. We demonstrate the effectiveness of our solution by
achieving state-of-the-art results for GTA5 to Cityscapes, a common
synthetic-to-real semantic segmentation benchmark for UDA.
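The core operation described in the abstract, mixing images from the two domains along with labels and pseudo-labels, can be sketched as a ClassMix-style paste: pixels belonging to half the classes of a source image are copied onto a target image, and the label map is combined the same way. The sketch below is a minimal NumPy illustration under stated assumptions, not the authors' implementation; the function name `dacs_mix` and its exact signature are hypothetical.

```python
import numpy as np

def dacs_mix(src_img, src_lbl, tgt_img, tgt_pseudo, rng=None):
    """Cross-domain ClassMix-style mixing (simplified sketch).

    Pastes the pixels of half the classes present in the source image
    onto the target image; the source ground-truth labels and the
    target pseudo-labels are combined with the same binary mask.
    """
    rng = np.random.default_rng() if rng is None else rng
    classes = np.unique(src_lbl)
    # Randomly select half of the classes appearing in the source label map.
    chosen = rng.choice(classes, size=max(1, len(classes) // 2), replace=False)
    mask = np.isin(src_lbl, chosen)                 # H x W boolean paste mask
    mixed_img = np.where(mask[..., None], src_img, tgt_img)
    mixed_lbl = np.where(mask, src_lbl, tgt_pseudo)
    return mixed_img, mixed_lbl
```

In DACS, the pseudo-labels for the target image come from the network's own predictions, and the mixed pairs are trained on in addition to the labelled source data.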
Related papers
- AdaptDiff: Cross-Modality Domain Adaptation via Weak Conditional Semantic Diffusion for Retinal Vessel Segmentation [10.958821619282748]
We present an unsupervised domain adaptation (UDA) method named AdaptDiff.
It enables a retinal vessel segmentation network trained on fundus photography (FP) to produce satisfactory results on unseen modalities.
Our results demonstrate a significant improvement in segmentation performance across all unseen datasets.
arXiv Detail & Related papers (2024-10-06T23:04:29Z)
- Constructing and Exploring Intermediate Domains in Mixed Domain Semi-supervised Medical Image Segmentation [36.45117307751509]
Both limited annotation and domain shift are prevalent challenges in medical image segmentation.
We introduce Mixed Domain Semi-supervised medical image Segmentation (MiDSS).
Our method achieves a notable 13.57% improvement in Dice score on the Prostate dataset, as demonstrated across three public datasets.
arXiv Detail & Related papers (2024-04-13T10:15:51Z)
- Inter-Domain Mixup for Semi-Supervised Domain Adaptation [108.40945109477886]
Semi-supervised domain adaptation (SSDA) aims to bridge source and target domain distributions, with a small number of target labels available.
Existing SSDA work fails to make full use of label information from both source and target domains for feature alignment across domains.
This paper presents a novel SSDA approach, Inter-domain Mixup with Neighborhood Expansion (IDMNE), to tackle this issue.
arXiv Detail & Related papers (2024-01-21T10:20:46Z)
- Compositional Semantic Mix for Domain Adaptation in Point Cloud Segmentation [65.78246406460305]
Compositional semantic mixing represents the first unsupervised domain adaptation technique for point cloud segmentation.
We present a two-branch symmetric network architecture capable of concurrently processing point clouds from a source domain (e.g. synthetic) and point clouds from a target domain (e.g. real-world).
arXiv Detail & Related papers (2023-08-28T14:43:36Z)
- Cross-Domain Few-Shot Classification via Inter-Source Stylization [11.008292768447614]
Cross-Domain Few-Shot Classification (CDFSC) aims to accurately classify a target dataset with limited labelled data.
This paper proposes a solution that makes use of multiple source domains without the need for additional labeling costs.
arXiv Detail & Related papers (2022-08-17T01:44:32Z)
- Dynamic Instance Domain Adaptation [109.53575039217094]
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels to generate instance-adaptive residuals to adapt domain-agnostic deep features to each individual instance.
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
arXiv Detail & Related papers (2022-03-09T20:05:54Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Semi-Supervised Domain Adaptation with Prototypical Alignment and Consistency Learning [86.6929930921905]
This paper studies how much having a few labeled target samples can further help address domain shift.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
arXiv Detail & Related papers (2021-04-19T08:46:08Z)
- Domain Generalization via Semi-supervised Meta Learning [7.722498348924133]
We propose the first method of domain generalization to leverage unlabeled samples.
It is trained by a meta learning approach to mimic the distribution shift between the input source domains and unseen target domains.
Experimental results on benchmark datasets indicate that the proposed method outperforms state-of-the-art domain generalization and semi-supervised learning methods.
arXiv Detail & Related papers (2020-09-26T18:05:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.