Unsupervised Domain Adaptation in Semantic Segmentation: a Review
- URL: http://arxiv.org/abs/2005.10876v1
- Date: Thu, 21 May 2020 20:10:38 GMT
- Title: Unsupervised Domain Adaptation in Semantic Segmentation: a Review
- Authors: Marco Toldo, Andrea Maracani, Umberto Michieli and Pietro Zanuttigh
- Abstract summary: The aim of this paper is to give an overview of the recent advancements in the Unsupervised Domain Adaptation (UDA) of deep networks for semantic segmentation.
- Score: 22.366638308792734
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The aim of this paper is to give an overview of the recent advancements in
the Unsupervised Domain Adaptation (UDA) of deep networks for semantic
segmentation. This task is attracting wide interest, since semantic
segmentation models require a huge amount of labeled data, and the lack of data
fitting specific requirements is the main limitation in the deployment of these
techniques. This problem has recently been explored, and a large number of
ad-hoc approaches have rapidly appeared. This motivates us to build a
comprehensive overview of the proposed methodologies and to provide a clear
categorization.
In this paper, we start by introducing the problem, its formulation and the
various scenarios that can be considered. Then, we introduce the different
levels at which adaptation strategies may be applied: namely, at the input
(image) level, at the internal features representation and at the output level.
Furthermore, we present a detailed overview of the literature in the field,
dividing previous methods into the following (non-mutually-exclusive)
categories: adversarial learning, generative-based approaches, analysis of
classifier discrepancies, self-teaching, entropy minimization, curriculum learning and
multi-task learning. Novel research directions are also briefly introduced to
give a hint of interesting open problems in the field. Finally, a comparison of
the performance of the various methods in the widely used autonomous driving
scenario is presented.
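As a concrete illustration of one of the categories named above, the snippet below sketches output-level entropy minimization on unlabeled target images, combined with a standard supervised loss on source images. This is a minimal PyTorch sketch written for this overview, not code from the paper; the stand-in model, image sizes, the number of classes and the weight `lambda_ent` are illustrative assumptions.

```python
# Minimal sketch of entropy-minimization UDA for segmentation (illustrative only).
import torch
import torch.nn.functional as F

def entropy_minimization_loss(logits: torch.Tensor) -> torch.Tensor:
    """Mean per-pixel Shannon entropy of the predicted class distribution.

    logits: (B, C, H, W) raw segmentation scores on *unlabeled target* images.
    """
    probs = F.softmax(logits, dim=1)
    log_probs = F.log_softmax(logits, dim=1)
    entropy = -(probs * log_probs).sum(dim=1)      # (B, H, W)
    return entropy.mean()

# Toy training step: supervised source loss + entropy loss on the target domain.
model = torch.nn.Conv2d(3, 19, kernel_size=1)      # stand-in for a segmentation network (19 classes, as in driving benchmarks)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
lambda_ent = 0.01                                  # assumed trade-off weight

source_images = torch.randn(2, 3, 64, 64)
source_labels = torch.randint(0, 19, (2, 64, 64))
target_images = torch.randn(2, 3, 64, 64)          # no labels available

source_logits = model(source_images)
target_logits = model(target_images)
loss = F.cross_entropy(source_logits, source_labels) \
       + lambda_ent * entropy_minimization_loss(target_logits)
loss.backward()
optimizer.step()
```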
Related papers
- A Bottom-Up Approach to Class-Agnostic Image Segmentation [4.086366531569003]
We present a novel bottom-up formulation for addressing the class-agnostic segmentation problem.
We supervise our network directly on the projective sphere of its feature space.
Our bottom-up formulation exhibits exceptional generalization capability, even when trained on datasets designed for class-based segmentation.
arXiv Detail & Related papers (2024-09-20T17:56:02Z) - Visual Prompt Selection for In-Context Learning Segmentation [77.15684360470152]
In this paper, we focus on rethinking and improving the example selection strategy.
We first demonstrate that ICL-based segmentation models are sensitive to different contexts.
Furthermore, empirical evidence indicates that the diversity of contextual prompts plays a crucial role in guiding segmentation.
arXiv Detail & Related papers (2024-07-14T15:02:54Z) - Detecting Statements in Text: A Domain-Agnostic Few-Shot Solution [1.3654846342364308]
State-of-the-art approaches usually involve fine-tuning models on large annotated datasets, which are costly to produce.
We propose and release a qualitative and versatile few-shot learning methodology as a common paradigm for any claim-based textual classification task.
We illustrate this methodology in the context of three tasks: climate change contrarianism detection, topic/stance classification and depression-related symptom detection.
arXiv Detail & Related papers (2024-05-09T12:03:38Z) - NormAUG: Normalization-guided Augmentation for Domain Generalization [60.159546669021346]
We propose a simple yet effective method called NormAUG (Normalization-guided Augmentation) for deep learning.
Our method introduces diverse information at the feature level and improves the generalization of the main path.
In the test stage, we leverage an ensemble strategy to combine the predictions from the auxiliary path of our model, further boosting performance.
arXiv Detail & Related papers (2023-07-25T13:35:45Z) - Semantic Image Segmentation: Two Decades of Research [22.533249554532322]
This book is an effort to summarize two decades of research in the field of semantic image segmentation (SiS).
We propose a review of solutions starting from early historical methods followed by an overview of more recent deep learning methods including the latest trend of using transformers.
We unveil newer trends such as multi-domain learning, domain generalization, domain incremental learning, test-time adaptation and source-free domain adaptation.
arXiv Detail & Related papers (2023-02-13T14:11:05Z) - A Survey on Label-efficient Deep Segmentation: Bridging the Gap between Weak Supervision and Dense Prediction [115.9169213834476]
This paper offers a comprehensive review on label-efficient segmentation methods.
We first develop a taxonomy to organize these methods according to the supervision provided by different types of weak labels.
Next, we summarize the existing label-efficient segmentation methods from a unified perspective.
arXiv Detail & Related papers (2022-07-04T06:21:01Z) - Unsupervised Domain Adaptation for Semantic Image Segmentation: a Comprehensive Survey [24.622211579286127]
This survey is an effort to summarize five years of this incredibly fast-growing field.
We present the most important semantic segmentation methods.
We unveil newer trends such as multi-domain learning, domain generalization, test-time adaptation or source-free domain adaptation.
arXiv Detail & Related papers (2021-12-06T18:47:41Z) - MCDAL: Maximum Classifier Discrepancy for Active Learning [74.73133545019877]
Recent state-of-the-art active learning methods have mostly leveraged Generative Adversarial Networks (GAN) for sample acquisition.
We propose in this paper a novel active learning framework that we call Maximum Classifier Discrepancy for Active Learning (MCDAL); a rough illustrative sketch appears after this list.
In particular, we utilize two auxiliary classification layers that learn tighter decision boundaries by maximizing the discrepancies among them.
arXiv Detail & Related papers (2021-07-23T06:57:08Z) - Multitask Learning for Class-Imbalanced Discourse Classification [74.41900374452472]
We show that a multitask approach can improve the Micro F1-score by 7% over current state-of-the-art benchmarks.
We also offer a comparative review of additional techniques proposed to address resource-poor problems in NLP.
arXiv Detail & Related papers (2021-01-02T07:13:41Z) - A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
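As referenced in the MCDAL entry above, the following rough sketch shows the classifier-discrepancy idea: two auxiliary classification heads are compared through the absolute difference of their predicted distributions, and the samples on which they disagree most are queried for labeling. All names, shapes and the acquisition size are illustrative assumptions, not the authors' implementation.

```python
# Rough sketch of classifier discrepancy for active learning (illustrative only).
import torch

feature_dim, num_classes = 128, 10
backbone = torch.nn.Linear(784, feature_dim)           # stand-in feature extractor
head_a = torch.nn.Linear(feature_dim, num_classes)     # auxiliary classifier 1
head_b = torch.nn.Linear(feature_dim, num_classes)     # auxiliary classifier 2

def discrepancy(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    """Mean absolute difference between the two predicted class distributions."""
    return (logits_a.softmax(dim=1) - logits_b.softmax(dim=1)).abs().mean()

x = torch.randn(32, 784)                               # batch from the unlabeled pool
feats = backbone(x)
disc = discrepancy(head_a(feats), head_b(feats))       # could be maximized w.r.t. the heads during training

# At acquisition time, samples where the two heads disagree most are assumed
# to lie near ambiguous decision boundaries and are selected for labeling.
per_sample = (head_a(feats).softmax(1) - head_b(feats).softmax(1)).abs().sum(1)
query_idx = per_sample.topk(k=8).indices               # pick the 8 most ambiguous samples
```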
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.