Survey on Unsupervised Domain Adaptation for Semantic Segmentation for
Visual Perception in Automated Driving
- URL: http://arxiv.org/abs/2304.11928v1
- Date: Mon, 24 Apr 2023 09:13:23 GMT
- Title: Survey on Unsupervised Domain Adaptation for Semantic Segmentation for
Visual Perception in Automated Driving
- Authors: Manuel Schwonberg, Joshua Niemeijer, Jan-Aike Termöhlen, Jörg P.
Schäfer, Nico M. Schmidt, Hanno Gottschalk, Tim Fingscheidt
- Abstract summary: Deep neural networks (DNNs) have proven their capabilities in many areas in the past years, such as robotics or automated driving.
Despite this progress and tremendous research efforts, several issues still need to be addressed that limit the applicability of DNNs in automated driving.
The poor generalization of DNNs to new, unseen domains is a major problem on the way to a safe, large-scale application.
- Score: 23.4168567262989
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep neural networks (DNNs) have proven their capabilities in many areas in
the past years, such as robotics or automated driving, enabling technological
breakthroughs. DNNs play a significant role in environment perception for the
challenging application of automated driving and are employed for tasks such as
detection, semantic segmentation, and sensor fusion. Despite this progress and
tremendous research efforts, several issues still need to be addressed that
limit the applicability of DNNs in automated driving. The poor generalization of
DNNs to new, unseen domains is a major problem on the way to a safe,
large-scale application, because manual annotation of new domains is costly,
particularly for semantic segmentation. For this reason, methods are required
to adapt DNNs to new domains without labeling effort. The task these
methods aim to solve is termed unsupervised domain adaptation (UDA). While
several different domain shifts can challenge DNNs, the shift between synthetic
and real data is of particular importance for automated driving, as it allows
the use of simulation environments for DNN training. In this work, we present
an overview of the current state of the art in this field of research. We
categorize and explain the different approaches for UDA. The number of
considered publications is larger than in any other survey on this topic. The
scope of this survey goes far beyond the description of the UDA
state-of-the-art. Based on our large data and knowledge base, we present a
quantitative comparison of the approaches and use the observations to point out
the latest trends in this field. In the following, we conduct a critical
analysis of the state-of-the-art and highlight promising future research
directions. With this survey, we aim to facilitate UDA research further and
encourage scientists to exploit novel research directions to generalize DNNs
better.
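
To make the UDA task concrete: one widely used family of UDA methods for semantic segmentation is self-training, where the network is supervised with ground-truth labels on the (synthetic) source domain and with its own confident predictions as pseudo-labels on the unlabeled (real) target domain. The following is a minimal, illustrative PyTorch sketch of such a training step; the model, data loaders, confidence threshold, and loss weighting are assumptions made for illustration and are not taken from the survey.

```python
# Minimal sketch of pseudo-label self-training for UDA semantic segmentation.
# Assumes a segmentation model returning per-pixel class logits (B, C, H, W),
# a labeled synthetic source batch, and an unlabeled real target batch.
import torch
import torch.nn.functional as F

def uda_self_training_step(model, optimizer, source_batch, target_images,
                           confidence_threshold=0.9, target_weight=0.5):
    model.train()
    src_images, src_labels = source_batch            # labels: (B, H, W) class ids

    # 1) Supervised loss on the labeled source (synthetic) domain.
    src_logits = model(src_images)
    loss_src = F.cross_entropy(src_logits, src_labels, ignore_index=255)

    # 2) Pseudo-labels on the unlabeled target (real) domain,
    #    keeping only pixels predicted with high confidence.
    with torch.no_grad():
        tgt_probs = F.softmax(model(target_images), dim=1)
        conf, pseudo = tgt_probs.max(dim=1)          # both (B, H, W)
        pseudo[conf < confidence_threshold] = 255    # ignore low-confidence pixels

    # 3) Self-training loss on the target domain using the pseudo-labels.
    tgt_logits = model(target_images)
    loss_tgt = F.cross_entropy(tgt_logits, pseudo, ignore_index=255)

    loss = loss_src + target_weight * loss_tgt
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice such methods typically add further components (e.g., class-balanced thresholds, teacher-student averaging, or input-level style transfer); the sketch only shows the basic source-supervised plus pseudo-labeled target objective.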
Related papers
- Verifying the Generalization of Deep Learning to Out-of-Distribution Domains [1.5774380628229037]
Deep neural networks (DNNs) play a crucial role in the field of machine learning.
DNNs may occasionally exhibit challenges with generalization, i.e., may fail to handle inputs that were not encountered during training.
This limitation is a significant challenge when it comes to deploying deep learning for safety-critical tasks.
arXiv Detail & Related papers (2024-06-04T07:02:59Z) - Survey: Exploiting Data Redundancy for Optimization of Deep Learning [42.1585031880029]
Data redundancy is ubiquitous in the inputs and intermediate results of Deep Neural Networks (DNNs).
This article surveys hundreds of recent papers on the topic.
It introduces a novel taxonomy to put the various techniques into a single categorization framework.
arXiv Detail & Related papers (2022-08-29T04:31:18Z) - Domain Generalization: A Survey [146.68420112164577]
Domain generalization (DG) aims to achieve out-of-distribution (OOD) generalization by using only source domain data for model learning.
For the first time, a comprehensive literature review is provided to summarize the ten-year development in DG.
arXiv Detail & Related papers (2021-03-03T16:12:22Z) - Neuron Coverage-Guided Domain Generalization [37.77033512313927]
This paper focuses on the domain generalization task where domain knowledge is unavailable, and even worse, only samples from a single domain can be utilized during training.
Our motivation originates from the recent progresses in deep neural network (DNN) testing, which has shown that maximizing neuron coverage of DNN can help to explore possible defects of DNN.
arXiv Detail & Related papers (2021-02-27T14:26:53Z) - A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z) - A Survey on Assessing the Generalization Envelope of Deep Neural
Networks: Predictive Uncertainty, Out-of-distribution and Adversarial Samples [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art performance on numerous applications.
It is difficult to tell beforehand if a DNN receiving an input will deliver the correct output, since its decision criteria are usually nontransparent.
This survey connects the three fields within the larger framework of investigating the generalization performance of machine learning methods and in particular DNNs.
arXiv Detail & Related papers (2020-08-21T09:12:52Z) - Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z) - Neuroevolution in Deep Neural Networks: Current Trends and Future
Challenges [0.0]
A variety of methods have been applied to the architectural configuration and learning or training of artificial deep neural networks (DNNs).
Evolutionary Algorithms (EAs) are gaining momentum as a computationally feasible method for the automated optimisation and training of DNNs.
This paper presents a comprehensive survey, discussion and evaluation of the state-of-the-art works on using EAs for architectural configuration and training of DNNs.
arXiv Detail & Related papers (2020-06-09T17:28:25Z) - Adversarial Attacks and Defenses on Graphs: A Review, A Tool and
Empirical Studies [73.39668293190019]
Deep neural networks can be easily fooled by small perturbations on the input, known as adversarial attacks.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
arXiv Detail & Related papers (2020-03-02T04:32:38Z) - Multi-source Domain Adaptation in the Deep Learning Era: A Systematic
Survey [53.656086832255944]
Multi-source domain adaptation (MDA) is a powerful extension in which the labeled data may be collected from multiple sources.
MDA has attracted increasing attention in both academia and industry.
arXiv Detail & Related papers (2020-02-26T08:07:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.