Beyond Background Shift: Rethinking Instance Replay in Continual Semantic Segmentation
- URL: http://arxiv.org/abs/2503.22136v1
- Date: Fri, 28 Mar 2025 04:22:34 GMT
- Title: Beyond Background Shift: Rethinking Instance Replay in Continual Semantic Segmentation
- Authors: Hongmei Yin, Tingliang Feng, Fan Lyu, Fanhua Shang, Hongying Liu, Wei Feng, Liang Wan
- Abstract summary: Continual semantic segmentation (CSS) networks are required to continuously learn new classes without erasing knowledge of previously learned ones. Stored and new images with partial category annotations lead to confusion between unannotated categories and the background. This paper proposes a novel Enhanced Instance Replay (EIR) method, which preserves knowledge of old classes while simultaneously eliminating background confusion.
- Score: 27.952611012675543
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we focus on continual semantic segmentation (CSS), where segmentation networks are required to continuously learn new classes without erasing knowledge of previously learned ones. Although storing images of old classes and directly incorporating them into the training of new models has proven effective in mitigating catastrophic forgetting in classification tasks, this strategy presents notable limitations in CSS. Specifically, the stored and new images carry only partial category annotations, which leads to confusion between unannotated categories and the background, complicating model fitting. To tackle this issue, this paper proposes a novel Enhanced Instance Replay (EIR) method, which not only preserves knowledge of old classes and eliminates background confusion by storing instances of old classes, but also mitigates background shift in the new images by integrating the stored instances with them. By resolving background shift in both stored and new images, EIR alleviates catastrophic forgetting in the CSS task, thereby enhancing the model's capacity for CSS. Experimental results validate the efficacy of our approach, which significantly outperforms state-of-the-art CSS methods.
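The abstract does not spell out EIR's exact storage and compositing procedure, but the instance-replay idea it builds on can be sketched briefly: keep cropped (patch, mask, class) triples for old-class objects and paste them into new training images, so replayed pixels carry explicit old-class labels rather than being folded into the background. The helper below is a hypothetical illustration of that copy-paste step; names and data layout are our assumptions, not the paper's implementation.

```python
import random

def paste_instance(image, label_map, instance):
    """Paste one stored old-class instance into a new training sample.

    image:     H x W x 3 uint8 array (the new image).
    label_map: H x W int array of annotations for the new image (0 = background).
    instance:  dict with 'patch' (h x w x 3), 'mask' (h x w bool), and
               'class_id' (int) -- a cropped old-class object (hypothetical layout).
    """
    h, w = instance['mask'].shape
    H, W = label_map.shape
    if h > H or w > W:
        return image, label_map  # stored instance too large for this image
    # Random placement that keeps the instance fully inside the image.
    top, left = random.randint(0, H - h), random.randint(0, W - w)
    region = (slice(top, top + h), slice(left, left + w))
    m = instance['mask']
    image[region][m] = instance['patch'][m]      # overwrite pixels under the mask
    label_map[region][m] = instance['class_id']  # give them their true old-class label
    return image, label_map
```

In a training loop, each new sample would pass through paste_instance with a randomly drawn stored instance before the usual augmentation and forward pass, so pasted pixels supervise the old class directly instead of shifting into the background.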
Related papers
- Learning from the Web: Language Drives Weakly-Supervised Incremental Learning for Semantic Segmentation [33.955384040748946]
We argue that widely available web images can also be considered for the learning of new classes.
To our knowledge, this is the first work to rely solely on web images for both the learning of new concepts and the preservation of the already learned ones.
arXiv Detail & Related papers (2024-07-18T10:14:49Z)
- DiffusePast: Diffusion-based Generative Replay for Class Incremental Semantic Segmentation [73.54038780856554]
Class Incremental Semantic Segmentation (CISS) extends the traditional segmentation task by incrementally learning newly added classes.
Previous work has introduced generative replay, which involves replaying old class samples generated from a pre-trained GAN.
We propose DiffusePast, a novel framework featuring a diffusion-based generative replay module that generates semantically accurate images with more reliable masks guided by different instructions.
arXiv Detail & Related papers (2023-08-02T13:13:18Z)
- Mining Unseen Classes via Regional Objectness: A Simple Baseline for Incremental Segmentation [57.80416375466496]
Incremental or continual learning has been extensively studied for image classification tasks to alleviate catastrophic forgetting.
We propose a simple yet effective method in this paper, named Mining unseen Classes via Regional Objectness (MicroSeg).
Our MicroSeg is based on the assumption that background regions with strong objectness possibly belong to concepts from historical or future stages.
In this way, the distribution characteristics of old concepts in the feature space can be better perceived, relieving the catastrophic forgetting caused by the background shift accordingly.
arXiv Detail & Related papers (2022-11-13T10:06:17Z)
- RBC: Rectifying the Biased Context in Continual Semantic Segmentation [10.935529209436929]
We propose a biased-context-rectified CSS framework with a context-rectified image-duplet learning scheme and a biased-context-insensitive consistency loss.
Our approach outperforms state-of-the-art methods by a large margin in existing CSS scenarios.
arXiv Detail & Related papers (2022-03-16T05:39:32Z)
- Rectifying the Shortcut Learning of Background: Shared Object Concentration for Few-Shot Image Recognition [101.59989523028264]
Few-Shot image classification aims to utilize pretrained knowledge learned from a large-scale dataset to tackle a series of downstream classification tasks.
We propose COSOC, a novel Few-Shot Learning framework, to automatically figure out foreground objects at both the pretraining and evaluation stages.
arXiv Detail & Related papers (2021-07-16T07:46:41Z)
- Tackling Catastrophic Forgetting and Background Shift in Continual Semantic Segmentation [35.2461834832935]
Continual learning for semantic segmentation (CSS) is an emerging trend that consists in updating an old model by sequentially adding new classes.
In this paper, we propose Local POD, a multi-scale pooling distillation scheme that preserves long- and short-range spatial relationships.
We also introduce a novel rehearsal method that is particularly suited for segmentation.
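Local POD's exact formulation is given in the paper; the snippet below is only a simplified sketch of the underlying pooled-output distillation idea, assuming PyTorch feature maps from a frozen old model and the current model (the scale schedule and normalization are illustrative choices, not the authors' settings).

```python
import torch
import torch.nn.functional as F

def pod_embedding(feat):
    """Concatenate width- and height-pooled slices of a (B, C, H, W) map.

    Returns a (B, C, H + W) embedding capturing statistics along each axis.
    """
    return torch.cat([feat.mean(dim=3), feat.mean(dim=2)], dim=2)

def local_pod_loss(feat_old, feat_new, scales=(1, 2, 4)):
    """Simplified multi-scale pooling distillation in the spirit of Local POD.

    At scale s the map is split into an s x s grid and pooled embeddings of
    old- and new-model features are matched region by region: coarse scales
    preserve long-range layout, fine scales short-range detail.
    """
    B, C, H, W = feat_old.shape
    loss = feat_old.new_zeros(())
    for s in scales:
        hs, ws = H // s, W // s
        for i in range(s):
            for j in range(s):
                sl = (slice(None), slice(None),
                      slice(i * hs, (i + 1) * hs),
                      slice(j * ws, (j + 1) * ws))
                loss = loss + F.mse_loss(pod_embedding(feat_new[sl]),
                                         pod_embedding(feat_old[sl]))
    return loss / sum(s * s for s in scales)
```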
arXiv Detail & Related papers (2021-06-29T11:57:21Z)
- Half-Real Half-Fake Distillation for Class-Incremental Semantic Segmentation [84.1985497426083]
Convolutional neural networks are ill-equipped for incremental learning.
New classes are available but the initial training data is not retained.
We try to address this issue by "inverting" the trained segmentation network to synthesize input images starting from random noise.
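As a rough illustration of such network inversion, one can freeze the trained segmenter and optimize an input tensor, starting from random noise, until the model predicts a chosen label map. The sketch below uses a cross-entropy objective with a total-variation prior; both are our assumptions in the DeepInversion style, not necessarily the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def invert_segmenter(model, target_mask, steps=200, lr=0.1, tv_weight=1e-4):
    """Synthesize an input image by "inverting" a frozen segmentation network.

    model:       segmentation network returning logits of shape (1, K, H, W).
    target_mask: (1, H, W) long tensor of desired per-pixel classes.
    """
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)  # only the input pixels are optimized
    x = torch.randn(1, 3, *target_mask.shape[1:], requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), target_mask)
        # Total-variation prior keeps the synthesized image piecewise smooth.
        tv = (x[..., 1:, :] - x[..., :-1, :]).abs().mean() \
           + (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
        (loss + tv_weight * tv).backward()
        opt.step()
    return x.detach()
```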
arXiv Detail & Related papers (2021-04-02T03:47:16Z)
- PLOP: Learning without Forgetting for Continual Semantic Segmentation [44.49799311137856]
Continual learning for semantic segmentation (CSS) is an emerging trend that consists in updating an old model by sequentially adding new classes.
In this paper, we propose Local POD, a multi-scale pooling distillation scheme that preserves long- and short-range spatial relationships at feature level.
We also design an entropy-based pseudo-labelling of the background w.r.t. classes predicted by the old model to deal with background shift and avoid catastrophic forgetting of the old classes.
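That pseudo-labelling step can be sketched as follows: run the frozen old model on each new image and, for pixels annotated as background, adopt its prediction wherever the prediction entropy is low. The fixed threshold below is a simplified stand-in for PLOP's adaptive, class-wise scheme.

```python
import math
import torch

def pseudo_label_background(old_logits, labels, bg_id=0, tau=0.5):
    """Relabel confident background pixels with the old model's predictions.

    old_logits: (B, K_old, H, W) logits from the frozen old model.
    labels:     (B, H, W) long tensor where old classes appear as bg_id.
    tau:        entropy threshold as a fraction of the maximum entropy log K.
    """
    probs = old_logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)  # (B, H, W)
    confident = entropy < tau * math.log(old_logits.shape[1])
    old_pred = probs.argmax(dim=1)
    # Only background pixels where the old model is confident are relabelled.
    return torch.where((labels == bg_id) & confident, old_pred, labels)
```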
arXiv Detail & Related papers (2020-11-23T13:35:03Z)
- Distilling Localization for Self-Supervised Representation Learning [82.79808902674282]
Contrastive learning has revolutionized unsupervised representation learning.
Current contrastive models are ineffective at localizing the foreground object.
We propose a data-driven approach for learning invariance to backgrounds.
arXiv Detail & Related papers (2020-04-14T16:29:42Z)
- Memory-Efficient Incremental Learning Through Feature Adaptation [71.1449769528535]
We introduce an approach for incremental learning that preserves feature descriptors of training images from previously learned classes.
Keeping the much lower-dimensional feature embeddings of images reduces the memory footprint significantly.
Experimental results show that our method achieves state-of-the-art classification accuracy in incremental learning benchmarks.
arXiv Detail & Related papers (2020-04-01T21:16:05Z)
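The entry above replays stored feature descriptors rather than raw images; a minimal sketch of that general feature-replay idea, without the paper's feature-adaptation step and with hypothetical buffer and classifier names, might look like this:

```python
import torch
import torch.nn.functional as F

class FeatureReplayBuffer:
    """Store low-dimensional embeddings of past-class images, not the images."""

    def __init__(self):
        self.feats, self.labels = [], []

    @torch.no_grad()
    def add(self, backbone, images, labels):
        # A pooled embedding is far smaller than a stored image.
        self.feats.append(backbone(images).flatten(1).cpu())
        self.labels.append(labels.cpu())

    def replay_batch(self, n):
        feats, labels = torch.cat(self.feats), torch.cat(self.labels)
        idx = torch.randperm(len(labels))[:n]
        return feats[idx], labels[idx]

# During a new task, replayed features supplement the classifier's training:
#   feats, labels = buffer.replay_batch(64)
#   loss = F.cross_entropy(classifier(feats), labels)
```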