Consistency Regularization for Domain Adaptation
- URL: http://arxiv.org/abs/2208.11084v1
- Date: Tue, 23 Aug 2022 17:07:06 GMT
- Title: Consistency Regularization for Domain Adaptation
- Authors: Kian Boon Koh and Basura Fernando
- Abstract summary: Unsupervised domain adaptation (UDA) addresses the cost of annotating real-world data by studying how more accessible data can be used to train and adapt models to real-world images.
Recent UDA methods apply self-learning by training on a pixel-wise classification loss using a student and teacher network.
We propose the addition of a consistency regularization term to semi-supervised UDA by modelling the inter-pixel relationship between elements in the networks' output.
- Score: 17.067348505083327
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Collection of real world annotations for training semantic segmentation
models is an expensive process. Unsupervised domain adaptation (UDA) tries to
solve this problem by studying how more accessible data such as synthetic data
can be used to train and adapt models to real world images without requiring
their annotations. Recent UDA methods apply self-learning by training on a
pixel-wise classification loss using a student and teacher network. In this
paper, we propose the addition of a consistency regularization term to
semi-supervised UDA by modelling the inter-pixel relationship between elements
in networks' output. We demonstrate the effectiveness of the proposed
consistency regularization term by applying it to the state-of-the-art DAFormer
framework and improving mIoU19 performance on the GTA5 to Cityscapes benchmark
by 0.8 and mIoU16 performance on the SYNTHIA to Cityscapes benchmark by 1.2.
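
The abstract does not spell out the exact form of the consistency term, so the following is only a minimal sketch of one plausible instantiation: the inter-pixel relationship is modelled here as a pairwise similarity matrix over sampled pixels of the softmaxed student and teacher outputs, and the regularizer penalises the gap between the two matrices. The function name, the pixel sampling, and the loss weight are illustrative assumptions, not the paper's definitions.

    # Hypothetical sketch (not the paper's exact formulation): a consistency term that
    # compares inter-pixel relationships in the student's and teacher's soft outputs.
    import torch
    import torch.nn.functional as F

    def inter_pixel_consistency(student_logits, teacher_logits, num_samples=256):
        # student_logits, teacher_logits: (B, C, H, W) segmentation logits.
        _, _, h, w = student_logits.shape
        s = F.softmax(student_logits, dim=1).flatten(2)   # (B, C, H*W)
        t = F.softmax(teacher_logits, dim=1).flatten(2)   # (B, C, H*W)

        # Sample a subset of pixels so the pairwise matrix stays small.
        idx = torch.randint(0, h * w, (num_samples,), device=student_logits.device)
        s, t = s[:, :, idx], t[:, :, idx]                  # (B, C, N)

        # Pairwise similarity ("relationship") between sampled pixels, one matrix per network.
        sim_s = torch.einsum('bcn,bcm->bnm', s, s)         # (B, N, N)
        sim_t = torch.einsum('bcn,bcm->bnm', t, t)         # (B, N, N)

        # The teacher provides the target and receives no gradient.
        return F.mse_loss(sim_s, sim_t.detach())

    # Illustrative use inside a student-teacher self-training step
    # (the 0.1 weight is a placeholder, not the paper's value):
    # loss = pixel_wise_ce(student_logits, pseudo_labels) \
    #        + 0.1 * inter_pixel_consistency(student_logits, teacher_logits)

In a DAFormer-style self-training loop such a term would simply be added, with some weight, to the pixel-wise cross-entropy on pseudo-labels.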
Related papers
- Physically Feasible Semantic Segmentation [58.17907376475596]
State-of-the-art semantic segmentation models are typically optimized in a data-driven fashion.
Our method, Physically Feasible Semantic Segmentation (PhyFea), extracts explicit physical constraints that govern spatial class relations.
PhyFea yields significant performance improvements in mIoU over each state-of-the-art network we use.
arXiv Detail & Related papers (2024-08-26T22:39:08Z)
- ReCoRe: Regularized Contrastive Representation Learning of World Model [21.29132219042405]
We present a world model that learns invariant features using contrastive unsupervised learning and an intervention-invariant regularizer.
Our method outperforms current state-of-the-art model-based and model-free RL methods and significantly improves on out-of-distribution point navigation tasks evaluated on the iGibson benchmark.
arXiv Detail & Related papers (2023-12-14T15:53:07Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- IDA: Informed Domain Adaptive Semantic Segmentation [51.12107564372869]
We propose an Informed Domain Adaptation (IDA) model, a self-training framework that mixes the data based on class-level segmentation performance.
In our IDA model, the class-level performance is tracked by an expected confidence score (ECS) and we then use a dynamic schedule to determine the mixing ratio for data in different domains.
Our proposed method is able to outperform the state-of-the-art UDA-SS method by a margin of 1.1 mIoU in the adaptation of GTA-V to Cityscapes and of 0.9 mIoU in the adaptation of SYNTHIA to Cityscapes.
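The summary only names the ingredients, so the sketch below is purely hypothetical: it assumes the expected confidence score (ECS) is an exponential moving average of per-class pseudo-label confidence and that lower-confidence classes receive a larger mixing ratio; the class and method names are invented for illustration and may not match the paper.

    # Hypothetical sketch of ECS tracking and a confidence-driven mixing ratio.
    import torch

    class ExpectedConfidenceScore:
        def __init__(self, num_classes, momentum=0.99):
            self.ecs = torch.ones(num_classes)   # start optimistic for every class
            self.momentum = momentum

        def update(self, probs, pseudo_labels):
            # probs: (B, C, H, W) softmax outputs; pseudo_labels: (B, H, W) argmax labels.
            conf = probs.max(dim=1).values       # (B, H, W) per-pixel confidence
            for c in range(self.ecs.numel()):
                mask = pseudo_labels == c
                if mask.any():
                    class_conf = conf[mask].mean()
                    self.ecs[c] = self.momentum * self.ecs[c] + (1 - self.momentum) * class_conf

        def mixing_ratio(self):
            # Assumed schedule: low-confidence classes are mixed into batches more aggressively.
            return 1.0 - self.ecs                # (C,) ratio per class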
arXiv Detail & Related papers (2023-03-05T18:16:34Z)
- One-Shot Domain Adaptive and Generalizable Semantic Segmentation with Class-Aware Cross-Domain Transformers [96.51828911883456]
Unsupervised sim-to-real domain adaptation (UDA) for semantic segmentation aims to improve the real-world test performance of a model trained on simulated data.
Traditional UDA often assumes that there are abundant unlabeled real-world data samples available during training for the adaptation.
We explore the one-shot unsupervised sim-to-real domain adaptation (OSUDA) and generalization problem, where only one real-world data sample is available.
arXiv Detail & Related papers (2022-12-14T15:54:15Z)
- Imposing Consistency for Optical Flow Estimation [73.53204596544472]
Imposing consistency through proxy tasks has been shown to enhance data-driven learning.
This paper introduces novel and effective consistency strategies for optical flow estimation.
arXiv Detail & Related papers (2022-04-14T22:58:30Z)
- DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation [99.88539409432916]
We study the unsupervised domain adaptation (UDA) process.
We propose a novel UDA method, DAFormer, based on the benchmark results.
DAFormer significantly improves the state-of-the-art performance by 10.8 mIoU for GTA->Cityscapes and 5.4 mIoU for Synthia->Cityscapes.
arXiv Detail & Related papers (2021-11-29T19:00:46Z)
- Exploiting Image Translations via Ensemble Self-Supervised Learning for Unsupervised Domain Adaptation [0.0]
We introduce an unsupervised domain adaptation (UDA) strategy that combines multiple image translations, ensemble learning and self-supervised learning in one coherent approach.
We focus on one of the standard tasks of UDA in which a semantic segmentation model is trained on labeled synthetic data together with unlabeled real-world data.
arXiv Detail & Related papers (2021-07-13T16:43:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.