Improving Replay-Based Continual Semantic Segmentation with Smart Data Selection
- URL: http://arxiv.org/abs/2209.09839v1
- Date: Tue, 20 Sep 2022 16:32:06 GMT
- Title: Improving Replay-Based Continual Semantic Segmentation with Smart Data Selection
- Authors: Tobias Kalb, Björn Mauthe, Jürgen Beyerer
- Abstract summary: We investigate the influence of various replay strategies for semantic segmentation and evaluate them in class- and domain-incremental settings.
Our findings suggest that in a class-incremental setting, it is critical to achieve a uniform distribution for the different classes in the buffer.
In the domain-incremental setting, it is most effective to select buffer samples by uniformly sampling from the distribution of learned feature representations or by choosing samples with median entropy.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continual learning for Semantic Segmentation (CSS) is a rapidly emerging
field, in which the capabilities of the segmentation model are incrementally
improved by learning new classes or new domains. A central challenge in
Continual Learning is overcoming the effects of catastrophic forgetting, which
refers to the sudden drop in accuracy on previously learned tasks after the
model is trained on new classes or domains. In continual classification, this
challenge is often overcome by replaying a small selection of samples from
previous tasks; however, replay is rarely considered in CSS. Therefore, we
investigate the influence of various replay strategies for semantic
segmentation and evaluate them in class- and domain-incremental settings. Our
findings suggest that in a class-incremental setting, it is critical to achieve
a uniform distribution for the different classes in the buffer to avoid a bias
towards newly learned classes. In the domain-incremental setting, it is most
effective to select buffer samples by uniformly sampling from the distribution
of learned feature representations or by choosing samples with median entropy.
Finally, we observe that effective sampling methods significantly decrease the
representation shift in early layers, which is a major cause of
forgetting in domain-incremental learning.
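To make the two selection rules above concrete, here is a minimal sketch in Python of (a) class-balanced buffer filling for the class-incremental case and (b) median-entropy selection for the domain-incremental case. The function names and the greedy balancing heuristic are illustrative assumptions, not code from the paper.

```python
import numpy as np

def select_class_balanced(labels_per_image, buffer_size, num_classes):
    """Greedily pick images so that class counts in the buffer stay uniform.

    labels_per_image: list of 1-D integer arrays, the set of class ids
    present in each candidate image's ground-truth mask.
    """
    counts = np.zeros(num_classes, dtype=int)
    remaining = list(range(len(labels_per_image)))
    chosen = []
    while remaining and len(chosen) < buffer_size:
        rarest = counts.min()
        # Prefer the image that contributes most to the currently rarest classes.
        scores = [np.sum(counts[labels_per_image[i]] == rarest) for i in remaining]
        best = remaining.pop(int(np.argmax(scores)))
        chosen.append(best)
        counts[labels_per_image[best]] += 1
    return chosen

def select_median_entropy(probs_per_image, buffer_size):
    """Pick the images whose mean per-pixel prediction entropy is closest to
    the median entropy over all candidates (domain-incremental case).

    probs_per_image: array of shape (N, C, H, W) holding softmax outputs.
    """
    eps = 1e-8
    pixel_entropy = -(probs_per_image * np.log(probs_per_image + eps)).sum(axis=1)
    mean_entropy = pixel_entropy.mean(axis=(1, 2))  # one value per image
    order = np.argsort(np.abs(mean_entropy - np.median(mean_entropy)))
    return order[:buffer_size].tolist()
```

Both routines only rank candidate images for the buffer; how the buffer is replayed during training is orthogonal to this choice.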
Related papers
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes to replay the data of experienced tasks when learning new tasks.
However, storing raw data is often impractical due to memory constraints or data-privacy issues.
As a replacement, data-free replay methods synthesize samples by inverting the classification model (a minimal inversion sketch follows this list).
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
- Multivariate Prototype Representation for Domain-Generalized Incremental Learning
We design a DGCIL approach that remembers old classes, adapts to new classes, and can reliably classify objects from unseen domains.
Our loss formulation maintains classification boundaries and suppresses the domain-specific information of each class.
arXiv Detail & Related papers (2023-09-24T06:42:04Z)
- Mitigating Catastrophic Forgetting in Task-Incremental Continual Learning with Adaptive Classification Criterion
We propose a Supervised Contrastive learning framework with an adaptive classification criterion for Continual Learning.
Experiments show that CFL achieves state-of-the-art performance and a stronger ability to overcome catastrophic forgetting than the classification baselines.
arXiv Detail & Related papers (2023-05-20T19:22:40Z)
- Intra-class Adaptive Augmentation with Neighbor Correction for Deep Metric Learning
We propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning.
We estimate intra-class variations for every class and generate adaptive synthetic samples to support hard-sample mining.
Our method significantly improves retrieval performance, outperforming state-of-the-art methods by 3%-6%.
arXiv Detail & Related papers (2022-11-29T14:52:38Z)
- Forget Less, Count Better: A Domain-Incremental Self-Distillation Learning Benchmark for Lifelong Crowd Counting
Off-the-shelf methods have drawbacks when handling multiple domains.
Lifelong Crowd Counting aims at alleviating catastrophic forgetting and improving generalization ability.
arXiv Detail & Related papers (2022-05-06T15:37:56Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- Reducing Representation Drift in Online Continual Learning
We study the online continual learning paradigm, where agents must learn from a changing distribution with constrained memory and compute.
In this work we instead focus on the change in representations of previously observed data due to the introduction of previously unobserved class samples in the incoming data stream.
arXiv Detail & Related papers (2021-04-11T15:19:30Z)
- Analyzing Overfitting under Class Imbalance in Neural Networks for Image Segmentation
In image segmentation, neural networks may overfit to the foreground samples of small structures.
In this study, we provide new insights on the problem of overfitting under class imbalance by inspecting the network behavior.
arXiv Detail & Related papers (2021-02-20T14:57:58Z)
- On the Exploration of Incremental Learning for Fine-grained Image Retrieval
We consider the problem of fine-grained image retrieval in an incremental setting, when new categories are added over time.
We propose an incremental learning method to mitigate retrieval performance degradation caused by the forgetting issue.
Our method effectively mitigates the catastrophic forgetting on the original classes while achieving high performance on the new classes.
arXiv Detail & Related papers (2020-10-15T21:07:44Z)
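As a concrete illustration of the data-free replay idea in the "Enhancing Consistency and Mitigating Bias" entry above, the following sketch inverts samples from a frozen classifier using a generic optimization loop: cross-entropy toward a target class plus a total-variation prior. The loop and its regularizer are common model-inversion ingredients and an assumption here; that paper's exact losses may differ.

```python
import torch
import torch.nn.functional as F

def invert_samples(model, target_class, num_samples=8, image_shape=(3, 32, 32),
                   steps=200, lr=0.1, tv_weight=1e-4):
    """Synthesize replay samples for `target_class` from a frozen classifier.

    Generic model-inversion sketch; the cited paper's procedure may differ.
    """
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)  # only the inputs are optimized
    x = torch.randn(num_samples, *image_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    target = torch.full((num_samples,), target_class, dtype=torch.long)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), target)
        # Total-variation prior keeps the synthesized images smooth.
        tv = (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
             (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
        (loss + tv_weight * tv).backward()
        opt.step()
    return x.detach()
```

The returned tensor can then be mixed into the training batches for new tasks in place of stored raw data.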
This list is automatically generated from the titles and abstracts of the papers on this site.