Principles of Forgetting in Domain-Incremental Semantic Segmentation in
Adverse Weather Conditions
- URL: http://arxiv.org/abs/2303.14115v2
- Date: Mon, 12 Jun 2023 11:54:25 GMT
- Title: Principles of Forgetting in Domain-Incremental Semantic Segmentation in
Adverse Weather Conditions
- Authors: Tobias Kalb, Jürgen Beyerer
- Abstract summary: Adverse weather conditions can significantly decrease model performance when such data are not available during training.
We study how the representations of semantic segmentation models are affected during domain-incremental learning in adverse weather conditions.
Our experiments and representational analyses indicate that catastrophic forgetting is primarily caused by changes to low-level features in domain-incremental learning.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks for scene perception in automated vehicles achieve
excellent results for the domains they were trained on. However, in real-world
conditions, the domain of operation and its underlying data distribution are
subject to change. Adverse weather conditions, in particular, can significantly
decrease model performance when such data are not available during
training. Additionally, when a model is incrementally adapted to a new domain,
it suffers from catastrophic forgetting, causing a significant drop in
performance on previously observed domains. Despite recent progress in reducing
catastrophic forgetting, its causes and effects remain obscure. Therefore, we
study how the representations of semantic segmentation models are affected
during domain-incremental learning in adverse weather conditions. Our
experiments and representational analyses indicate that catastrophic forgetting
is primarily caused by changes to low-level features in domain-incremental
learning and that learning more general features on the source domain using
pre-training and image augmentations leads to efficient feature reuse in
subsequent tasks, which drastically reduces catastrophic forgetting. These
findings highlight the importance of methods that facilitate generalized
features for effective continual learning algorithms.
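The following is a minimal PyTorch sketch of the kind of representational analysis described above: measuring layer-wise feature drift with linear CKA between model snapshots taken before and after adapting to a new domain. The model, layer names, and probe batch are illustrative placeholders, not the authors' exact setup.
```python
# Hedged sketch: quantify layer-wise representation drift with linear CKA
# between a model snapshot before adaptation and the adapted model.
# Layer names and the probe batch are illustrative assumptions.
import torch

def gram_linear(x: torch.Tensor) -> torch.Tensor:
    """Gram matrix of flattened activations, shape (n, n)."""
    x = x.flatten(start_dim=1)  # (n, features)
    return x @ x.T

def center(g: torch.Tensor) -> torch.Tensor:
    """Double-center a Gram matrix (HSIC preprocessing)."""
    n = g.size(0)
    h = torch.eye(n) - torch.ones(n, n) / n
    return h @ g @ h

def linear_cka(x: torch.Tensor, y: torch.Tensor) -> float:
    """Linear CKA similarity between two activation batches."""
    gx, gy = center(gram_linear(x)), center(gram_linear(y))
    return ((gx * gy).sum() / (gx.norm() * gy.norm())).item()

@torch.no_grad()
def layer_drift(model_old, model_new, layers, probe_batch):
    """Per-layer CKA; values near 1 mean the layer barely changed."""
    acts_old, acts_new, hooks = {}, {}, []

    def make_hook(store, name):
        return lambda mod, inp, out: store.__setitem__(name, out.detach().cpu())

    mods_old = dict(model_old.named_modules())
    mods_new = dict(model_new.named_modules())
    for name in layers:
        hooks.append(mods_old[name].register_forward_hook(make_hook(acts_old, name)))
        hooks.append(mods_new[name].register_forward_hook(make_hook(acts_new, name)))
    model_old(probe_batch)
    model_new(probe_batch)
    for h in hooks:
        h.remove()
    return {name: linear_cka(acts_old[name], acts_new[name]) for name in layers}
```
Under the paper's finding, early (low-level) layers would show the lowest CKA after adapting to adverse weather, while deeper layers would remain comparatively stable.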
Related papers
- Domain-incremental Cardiac Image Segmentation with Style-oriented Replay
and Domain-sensitive Feature Whitening [67.6394526631557]
An ideal model should incrementally learn from each incoming dataset and progressively update with improved functionality over time.
In medical scenarios, this is particularly challenging as accessing or storing past data is commonly not allowed due to data privacy.
We propose a novel domain-incremental learning framework that first recovers past domain inputs and then regularly replays them during model optimization (a minimal replay sketch follows this entry).
arXiv Detail & Related papers (2022-11-09T13:07:36Z)
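As a hedged illustration of the replay idea above, the sketch below mixes stored or recovered past-domain samples into each current-domain batch; buffer size, mix ratio, and loss are illustrative assumptions, not the authors' exact method.
```python
# Hedged sketch of replay during domain-incremental training: mix a few
# past-domain samples into each current-domain batch. Buffer size, mix
# ratio, and loss are illustrative assumptions.
import random
import torch
import torch.nn.functional as F

class ReplayBuffer:
    def __init__(self, capacity: int = 256):
        self.capacity, self.data = capacity, []

    def add(self, image: torch.Tensor, mask: torch.Tensor) -> None:
        if len(self.data) >= self.capacity:  # evict a random old sample
            self.data.pop(random.randrange(len(self.data)))
        self.data.append((image, mask))

    def sample(self, k: int):
        return random.sample(self.data, min(k, len(self.data)))

def train_step(model, optimizer, batch, buffer: ReplayBuffer, replay_k: int = 4):
    images, masks = batch                      # current-domain batch
    replayed = buffer.sample(replay_k)         # past-domain samples
    if replayed:
        images = torch.cat([images, torch.stack([x for x, _ in replayed])])
        masks = torch.cat([masks, torch.stack([y for _, y in replayed])])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(images), masks)  # joint loss on both domains
    loss.backward()
    optimizer.step()
    return loss.item()
```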
- Forget Less, Count Better: A Domain-Incremental Self-Distillation Learning Benchmark for Lifelong Crowd Counting [51.44987756859706]
Off-the-shelf methods have drawbacks when handling multiple domains.
Lifelong crowd counting aims to alleviate catastrophic forgetting and improve generalization ability (a self-distillation sketch follows this entry).
arXiv Detail & Related papers (2022-05-06T15:37:56Z)
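The self-distillation idea can be illustrated as follows: a frozen snapshot of the previously trained model regularizes the new model so that its density predictions stay close on new-domain images. A minimal sketch, with the loss weighting as an illustrative assumption:
```python
# Hedged sketch of self-distillation for lifelong crowd counting: a frozen
# snapshot of the old model regularizes the new model's density maps.
# The distillation weight `alpha` is an illustrative assumption.
import copy
import torch
import torch.nn.functional as F

def snapshot(model):
    """Freeze a copy of the current model before training the next domain."""
    old = copy.deepcopy(model)
    for p in old.parameters():
        p.requires_grad_(False)
    return old.eval()

def distill_step(model, old_model, optimizer, images, gt_density, alpha=0.5):
    with torch.no_grad():
        teacher_density = old_model(images)       # previous model's predictions
    pred_density = model(images)
    task_loss = F.mse_loss(pred_density, gt_density)          # fit the new domain
    distill_loss = F.mse_loss(pred_density, teacher_density)  # stay close to old model
    loss = task_loss + alpha * distill_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```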
- On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat to the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve an attacker's ability to induce pixel misclassifications (a patch-attack sketch follows this entry).
arXiv Detail & Related papers (2022-01-05T22:33:43Z)
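As a hedged illustration of a patch-based attack on a segmentation model (not the paper's proposed loss), the sketch below optimizes only the patch pixels by gradient ascent on the per-pixel cross-entropy:
```python
# Hedged sketch of an adversarial patch attack on a segmentation model:
# optimize only the patch pixels to maximize per-pixel cross-entropy.
# Patch size, placement, and step size are illustrative assumptions.
import torch
import torch.nn.functional as F

def patch_attack(model, image, target_mask, patch_size=64, steps=100, lr=0.01):
    """image: (1, 3, H, W); target_mask: (1, H, W) ground-truth labels."""
    model.eval()
    for p in model.parameters():          # attack the input, not the weights
        p.requires_grad_(False)
    patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    top, left = 0, 0                      # fixed placement for simplicity
    for _ in range(steps):
        attacked = image.clone()
        attacked[:, :, top:top + patch_size, left:left + patch_size] = \
            patch.clamp(0, 1)             # paste the patch into the scene
        logits = model(attacked)
        # Maximize segmentation error: gradient *ascent* on the loss.
        loss = -F.cross_entropy(logits, target_mask)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```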
- Gradient Regularized Contrastive Learning for Continual Domain Adaptation [86.02012896014095]
We study the problem of continual domain adaptation, where the model is presented with a labelled source domain and a sequence of unlabelled target domains.
We propose Gradient Regularized Contrastive Learning (GRCL) to overcome these obstacles (a gradient-projection sketch follows this entry).
Experiments on Digits, DomainNet and Office-Caltech benchmarks demonstrate the strong performance of our approach.
arXiv Detail & Related papers (2021-03-23T04:10:42Z)
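One common reading of "gradient regularized" is GEM-style projection: if the new-domain update conflicts with the gradient computed on a reference source-domain batch, project it onto the non-conflicting half-space. The sketch below illustrates that generic idea; it is not necessarily GRCL's exact constraint.
```python
# Hedged sketch of gradient regularization via projection (GEM-style):
# keep the new-domain update from increasing the loss on a reference
# source-domain batch. Illustrates the generic idea, not GRCL's exact
# formulation.
import torch

def flat_grad(model):
    return torch.cat([p.grad.flatten() for p in model.parameters()
                      if p.grad is not None])

def set_flat_grad(model, flat):
    offset = 0
    for p in model.parameters():
        if p.grad is not None:
            n = p.grad.numel()
            p.grad.copy_(flat[offset:offset + n].view_as(p.grad))
            offset += n

def regularized_step(model, optimizer, new_loss_fn, src_loss_fn):
    # Gradient on a reference source-domain batch.
    optimizer.zero_grad()
    src_loss_fn(model).backward()
    g_src = flat_grad(model).clone()
    # Gradient on the current new-domain batch.
    optimizer.zero_grad()
    new_loss_fn(model).backward()
    g_new = flat_grad(model)
    # If the two conflict, project g_new onto the half-space where the
    # source loss does not increase: g <- g - (g . g_src / ||g_src||^2) g_src.
    dot = torch.dot(g_new, g_src)
    if dot < 0:
        set_flat_grad(model, g_new - dot / g_src.dot(g_src) * g_src)
    optimizer.step()
```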
- Learning a Domain-Agnostic Visual Representation for Autonomous Driving via Contrastive Loss [25.798361683744684]
Domain-Agnostic Contrastive Learning (DACL) is a two-stage unsupervised domain adaptation framework with cyclic adversarial training and contrastive loss.
Our proposed approach achieves better performance in the monocular depth estimation task compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-10T07:06:03Z)
- Studying Catastrophic Forgetting in Neural Ranking Models [3.8596788671326947]
We study to what extent neural ranking models catastrophically forget old knowledge acquired from previously observed domains after acquiring new knowledge.
Our experiments show that the effectiveness of neural IR ranking models is achieved at the cost of catastrophic forgetting.
We believe that the obtained results can be useful for both theoretical and practical future work in neural IR.
arXiv Detail & Related papers (2021-01-18T10:42:57Z)
- Domain Adaptation on Semantic Segmentation for Aerial Images [3.946367634483361]
We propose a novel unsupervised domain adaptation framework to address domain shift in semantic image segmentation.
We also apply entropy minimization on the target domain to produce high-confidence predictions (an entropy-loss sketch follows this entry).
We show improvement over state-of-the-art methods in terms of various metrics.
arXiv Detail & Related papers (2020-12-03T20:58:27Z)
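Entropy minimization is a standard unsupervised objective: it pushes the model's per-pixel class distributions on unlabeled target images toward low entropy. A minimal sketch, with the loss weight `lam` as an illustrative assumption:
```python
# Minimal sketch of entropy minimization on unlabeled target-domain images:
# penalize high-entropy (uncertain) per-pixel predictions. The weight
# `lam` is an illustrative assumption.
import torch
import torch.nn.functional as F

def entropy_loss(logits: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """logits: (N, C, H, W) -> mean per-pixel prediction entropy."""
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + eps)).sum(dim=1)  # (N, H, W)
    return entropy.mean()

def adaptation_step(model, optimizer, src_images, src_masks, tgt_images, lam=0.1):
    optimizer.zero_grad()
    sup = F.cross_entropy(model(src_images), src_masks)  # supervised source loss
    ent = entropy_loss(model(tgt_images))                # unsupervised target loss
    (sup + lam * ent).backward()
    optimizer.step()
```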
- Domain Adaptation of Learned Features for Visual Localization [60.6817896667435]
We tackle the problem of visual localization under changing conditions, such as time of day, weather, and seasons.
Recent learned local features based on deep neural networks have shown superior performance over classical hand-crafted local features.
We present a novel and practical approach, where only a few examples are needed to reduce the domain gap.
arXiv Detail & Related papers (2020-08-21T05:17:32Z)
- Gradient Regularized Contrastive Learning for Continual Domain Adaptation [26.21464286134764]
We study the problem of continual domain adaptation, where the model is presented with a labeled source domain and a sequence of unlabeled target domains.
In this work, we propose Gradient Regularized Contrastive Learning to address these obstacles.
Our method can jointly learn both semantically discriminative and domain-invariant features with labeled source domain and unlabeled target domains.
arXiv Detail & Related papers (2020-07-25T14:30:03Z)
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much less annotated data in the target domain than in the source domain.
Our parser benefits from a two-stage coarse-to-fine framework and can thus provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.