RBC: Rectifying the Biased Context in Continual Semantic Segmentation
- URL: http://arxiv.org/abs/2203.08404v1
- Date: Wed, 16 Mar 2022 05:39:32 GMT
- Title: RBC: Rectifying the Biased Context in Continual Semantic Segmentation
- Authors: Hanbin Zhao, Fengyu Yang, Xinghe Fu, Xi Li
- Abstract summary: We propose a biased-context-rectified CSS framework with a context-rectified image-duplet learning scheme and a biased-context-insensitive consistency loss.
Our approach outperforms state-of-the-art methods by a large margin in existing CSS scenarios.
- Score: 10.935529209436929
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have witnessed a great development of Convolutional Neural
Networks in semantic segmentation, where all classes of training images are
simultaneously available. In practice, new images are usually made available in
a consecutive manner, leading to a problem called Continual Semantic
Segmentation (CSS). Typically, CSS faces two problems: catastrophic forgetting,
since previous training images are unavailable, and the semantic shift of the
background class. Viewing semantic segmentation as a context-dependent
pixel-level classification task, we explore CSS from a new perspective of
context analysis in this paper. We observe that the context of old-class pixels
in the new images is much more biased on new classes than that in the old
images, which can sharply aggravate the old-class forgetting and new-class
overfitting. To tackle this obstacle, we propose a biased-context-rectified CSS
framework with a context-rectified image-duplet learning scheme and a
biased-context-insensitive consistency loss. Furthermore, we propose an
adaptive re-weighting class-balanced learning strategy for the biased class
distribution. Our approach outperforms state-of-the-art methods by a large
margin in existing CSS scenarios.
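The abstract names a biased-context-insensitive consistency loss but gives no formula. As a rough illustrative sketch only (NumPy; the function name, the squared-L2 form, and the masking rule are assumptions, not the paper's definition), such a loss could penalize disagreement between predictions for old-class pixels seen in their original context and in a context-rectified duplet image:

```python
import numpy as np

def softmax(logits, axis=-1):
    # numerically stable softmax over the class axis
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def context_consistency_loss(logits_a, logits_b, old_class_mask):
    """Mean squared difference between the two softmax maps,
    restricted to pixels flagged as old-class by the mask.
    logits_*: (H, W, C) arrays; old_class_mask: (H, W) in {0, 1}."""
    p_a = softmax(logits_a)
    p_b = softmax(logits_b)
    diff = ((p_a - p_b) ** 2).sum(axis=-1)  # per-pixel squared L2
    return float((diff * old_class_mask).sum() / max(old_class_mask.sum(), 1))
```

Identical logits give zero loss; any disagreement on masked pixels increases it.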
Related papers
- BACS: Background Aware Continual Semantic Segmentation [15.821935479975343]
In autonomous driving, there's a need to incorporate new classes as the operating environment of the deployed agent becomes more complex.
For enhanced annotation efficiency, ideally, only pixels belonging to new classes would be annotated.
This paper proposes a Backward Background Shift Detector (BACS) to detect previously observed classes.
arXiv Detail & Related papers (2024-04-19T19:25:26Z) - Tendency-driven Mutual Exclusivity for Weakly Supervised Incremental Semantic Segmentation [56.1776710527814]
Weakly-supervised Incremental Learning for Semantic Segmentation (WILSS) leverages a pre-trained segmentation model to segment new classes using cost-effective and readily available image-level labels.
A prevailing way to solve WILSS is the generation of seed areas for each new class, serving as a form of pixel-level supervision.
We propose an innovative, tendency-driven relationship of mutual exclusivity, meticulously tailored to govern the behavior of the seed areas.
arXiv Detail & Related papers (2024-04-18T08:23:24Z) - DiffusePast: Diffusion-based Generative Replay for Class Incremental Semantic Segmentation [73.54038780856554]
Class Incremental Semantic Segmentation (CISS) extends the traditional segmentation task by incrementally learning newly added classes.
Previous work has introduced generative replay, which involves replaying old class samples generated from a pre-trained GAN.
We propose DiffusePast, a novel framework featuring a diffusion-based generative replay module that generates semantically accurate images with more reliable masks guided by different instructions.
arXiv Detail & Related papers (2023-08-02T13:13:18Z) - Learning Contrastive Representation for Semantic Correspondence [150.29135856909477]
We propose a multi-level contrastive learning approach for semantic matching.
We show that image-level contrastive learning is a key component to encourage the convolutional features to find correspondence between similar objects.
arXiv Detail & Related papers (2021-09-22T18:34:14Z) - Tackling Catastrophic Forgetting and Background Shift in Continual Semantic Segmentation [35.2461834832935]
Continual learning for semantic segmentation (CSS) is an emerging trend that consists in updating an old model by sequentially adding new classes.
In this paper, we propose Local POD, a multi-scale pooling distillation scheme that preserves long- and short-range spatial relationships.
We also introduce a novel rehearsal method that is particularly suited for segmentation.
arXiv Detail & Related papers (2021-06-29T11:57:21Z) - Exploring Cross-Image Pixel Contrast for Semantic Segmentation [130.22216825377618]
We propose a pixel-wise contrastive framework for semantic segmentation in the fully supervised setting.
The core idea is to enforce pixel embeddings belonging to the same semantic class to be more similar than embeddings from different classes.
Our method can be effortlessly incorporated into existing segmentation frameworks without extra overhead during testing.
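The idea above — pulling same-class pixel embeddings together and pushing different-class embeddings apart — is commonly realized as an InfoNCE loss over pixels. A minimal NumPy sketch for a single anchor pixel (the function name, temperature value, and averaging scheme are illustrative assumptions, not this paper's exact implementation):

```python
import numpy as np

def pixel_infonce(embeddings, labels, anchor_idx, temperature=0.1):
    """InfoNCE loss for one anchor pixel: same-class pixels are
    positives, all other pixels serve as negatives.
    embeddings: (N, D) pixel features; labels: (N,) class ids."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = e @ e[anchor_idx] / temperature    # cosine similarities
    sims = np.delete(sims, anchor_idx)        # drop self-similarity
    lbls = np.delete(labels, anchor_idx)
    pos = lbls == labels[anchor_idx]
    log_denom = np.log(np.exp(sims).sum())
    # average over positives of -log(exp(sim_pos) / sum_j exp(sim_j))
    return float(np.mean(log_denom - sims[pos]))
```

The loss shrinks when positives are close to the anchor in embedding space and grows when they are far from it.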
arXiv Detail & Related papers (2021-01-28T11:35:32Z) - A Few Guidelines for Incremental Few-Shot Segmentation [57.34237650765928]
Given a pretrained segmentation model and few images containing novel classes, our goal is to learn to segment novel classes while retaining the ability to segment previously seen ones.
We show that the main problems of end-to-end training in this scenario are:
i) the drift of the batch-normalization statistics toward novel classes, which can be fixed with batch renormalization, and
ii) the forgetting of old classes, which can be fixed with regularization strategies.
arXiv Detail & Related papers (2020-11-30T20:45:56Z) - PLOP: Learning without Forgetting for Continual Semantic Segmentation [44.49799311137856]
Continual learning for semantic segmentation (CSS) is an emerging trend that consists in updating an old model by sequentially adding new classes.
In this paper, we propose Local POD, a multi-scale pooling distillation scheme that preserves long- and short-range spatial relationships at feature level.
We also design an entropy-based pseudo-labelling of the background w.r.t. classes predicted by the old model to deal with background shift and avoid catastrophic forgetting of the old classes.
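Entropy-based pseudo-labelling of the background can be sketched as follows: background pixels where the old model predicts with low entropy inherit its argmax class; the rest are ignored. This NumPy sketch is illustrative only — the threshold value and the ignore label 255 are assumptions, not PLOP's exact rule:

```python
import numpy as np

def pseudo_label_background(old_probs, bg_mask, entropy_thresh=0.5):
    """Relabel background pixels with the old model's argmax class
    when its prediction entropy is below a threshold; otherwise keep
    an 'ignore' label (255).
    old_probs: (H, W, C) softmax output; bg_mask: (H, W) bool."""
    ent = -(old_probs * np.log(old_probs + 1e-8)).sum(axis=-1)
    pseudo = old_probs.argmax(axis=-1)
    out = np.full(bg_mask.shape, 255, dtype=np.int64)
    confident = bg_mask & (ent < entropy_thresh)
    out[confident] = pseudo[confident]
    return out
```

A confident old-class prediction (e.g. probabilities near one-hot) thus supervises the new model, while near-uniform predictions are masked out of the loss.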
arXiv Detail & Related papers (2020-11-23T13:35:03Z) - One-Shot Image Classification by Learning to Restore Prototypes [11.448423413463916]
One-shot image classification aims to train image classifiers over the dataset with only one image per category.
For one-shot learning, the existing metric learning approaches would suffer poor performance because the single training image may not be representative of the class.
We propose a simple yet effective regression model, denoted by RestoreNet, which learns a class transformation on the image feature to move the image closer to the class center in the feature space.
arXiv Detail & Related papers (2020-05-04T02:11:30Z) - Distilling Localization for Self-Supervised Representation Learning [82.79808902674282]
Contrastive learning has revolutionized unsupervised representation learning.
Current contrastive models are ineffective at localizing the foreground object.
We propose a data-driven approach for learning invariance to backgrounds.
arXiv Detail & Related papers (2020-04-14T16:29:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.