CoMFormer: Continual Learning in Semantic and Panoptic Segmentation
- URL: http://arxiv.org/abs/2211.13999v1
- Date: Fri, 25 Nov 2022 10:15:06 GMT
- Title: CoMFormer: Continual Learning in Semantic and Panoptic Segmentation
- Authors: Fabio Cermelli, Matthieu Cord, Arthur Douillard
- Abstract summary: We present the first continual learning model capable of operating on both semantic and panoptic segmentation.
Our method carefully exploits the properties of transformer architectures to learn new classes over time.
Our CoMFormer outperforms all existing baselines by both forgetting old classes less and learning new classes more effectively.
- Score: 45.66711231393775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continual learning for segmentation has recently seen increasing interest.
However, all previous works focus on narrow semantic segmentation and disregard
panoptic segmentation, an important task with real-world impacts. In this
paper, we present the first continual learning model capable of operating on
both semantic and panoptic segmentation. Inspired by recent transformer
approaches that consider segmentation as a mask-classification problem, we
design CoMFormer. Our method carefully exploits the properties of transformer
architectures to learn new classes over time. Specifically, we propose a novel
adaptive distillation loss along with a mask-based pseudo-labeling technique to
effectively prevent forgetting. To evaluate our approach, we introduce a novel
continual panoptic segmentation benchmark on the challenging ADE20K dataset.
Our CoMFormer outperforms all existing baselines by both forgetting old
classes less and learning new classes more effectively. In addition, we
report an extensive evaluation in the large-scale continual semantic
segmentation scenario showing that CoMFormer also significantly outperforms
state-of-the-art methods.
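The mask-based pseudo-labeling idea can be illustrated with a minimal sketch: background pixels of the new-step ground truth are filled with confident predictions from the frozen old model, so old classes keep receiving supervision. This is an illustration under assumed conventions (function name, `thresh=0.7`, and the `(C, H, W)` probability layout are not from the paper):

```python
import numpy as np

def mask_pseudo_labels(old_probs, new_labels, bg_id=0, thresh=0.7):
    """Fill background pixels of the new-step ground truth with
    confident predictions from the frozen old model.

    old_probs:  (C, H, W) softmax probabilities of the old model
    new_labels: (H, W) ground truth, annotated only for new classes
    """
    old_pred = old_probs.argmax(axis=0)   # (H, W) most likely old class
    old_conf = old_probs.max(axis=0)      # (H, W) its probability
    merged = new_labels.copy()
    # Only relabel background pixels where the old model is confident.
    keep = (new_labels == bg_id) & (old_conf >= thresh) & (old_pred != bg_id)
    merged[keep] = old_pred[keep]
    return merged
```

Pixels annotated with a new class are never overwritten; only unlabeled background is reconsidered, which is what mitigates the background shift between steps.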
Related papers
- ECLIPSE: Efficient Continual Learning in Panoptic Segmentation with Visual Prompt Tuning [54.68180752416519]
Panoptic segmentation is a challenging computer vision task that unifies semantic and instance segmentation.
We introduce a novel and efficient method for continual panoptic segmentation based on Visual Prompt Tuning, dubbed ECLIPSE.
Our approach involves freezing the base model parameters and fine-tuning only a small set of prompt embeddings, addressing both catastrophic forgetting and plasticity.
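The prompt-tuning recipe described above can be sketched in a few lines: the backbone's parameters stay frozen, and only a small set of prompt embeddings prepended to the token sequence would receive gradient updates. This is a shape-level sketch, not ECLIPSE's implementation; the names and sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_tokens, n_prompts = 64, 196, 8

# Frozen backbone output for one image; the backbone weights are never updated.
backbone_tokens = rng.normal(size=(n_tokens, dim))

# The only trainable parameters: a small set of prompt embeddings,
# typically a new set per continual-learning step.
prompt_embeddings = np.zeros((n_prompts, dim))

def forward_with_prompts(tokens, prompts):
    """Prepend tunable prompts to the frozen token sequence before decoding."""
    return np.concatenate([prompts, tokens], axis=0)

seq = forward_with_prompts(backbone_tokens, prompt_embeddings)
```

Because only `n_prompts * dim` values are trained per step, old steps' behavior is largely preserved, which is how this style of method sidesteps catastrophic forgetting.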
arXiv Detail & Related papers (2024-03-29T11:31:12Z)
- Continual Segmentation with Disentangled Objectness Learning and Class Recognition
We propose CoMasTRe to disentangle continual segmentation into two stages: forgetting-resistant continual objectness learning and well-studied continual classification.
CoMasTRe uses a two-stage segmenter that learns class-agnostic mask proposals in the first stage and leaves recognition to the second.
To further mitigate the forgetting of old classes, we design a multi-label class distillation strategy suited for segmentation.
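A multi-label distillation objective of this kind can be sketched as a binary cross-entropy between the new model's per-class sigmoid scores and the frozen old model's soft targets, restricted to the old classes. This is a simplified sketch (the function name and exact formulation are assumptions, not CoMasTRe's loss):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multilabel_distill_loss(new_logits, old_logits, n_old, eps=1e-7):
    """Binary cross-entropy between the new model's class scores and the
    frozen old model's soft targets, computed only over the old classes."""
    target = sigmoid(old_logits[:n_old])               # soft targets
    pred = np.clip(sigmoid(new_logits[:n_old]), eps, 1 - eps)
    return float(-(target * np.log(pred)
                   + (1 - target) * np.log(1 - pred)).mean())
```

Treating classes as independent binary labels (rather than one softmax) fits segmentation, where a mask proposal may overlap several old classes.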
arXiv Detail & Related papers (2024-03-06T05:33:50Z)
- Harmonizing Base and Novel Classes: A Class-Contrastive Approach for Generalized Few-Shot Segmentation [78.74340676536441]
We propose a class contrastive loss and a class relationship loss to regulate prototype updates and encourage a large distance between prototypes.
Our proposed approach achieves new state-of-the-art performance for the generalized few-shot segmentation task on PASCAL VOC and MS COCO datasets.
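A loss that encourages large distances between class prototypes can be sketched as a hinge-style repulsion term: pairs of prototypes closer than a margin are penalized, and the loss vanishes once all pairs are separated. This is an illustrative sketch, not the paper's exact class contrastive loss:

```python
import numpy as np

def prototype_repulsion_loss(prototypes, margin=1.0):
    """Hinge-style penalty whenever two class prototypes lie closer than
    `margin` in feature space; zero once all pairs are separated."""
    protos = np.asarray(prototypes, dtype=float)
    n = len(protos)
    loss, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.linalg.norm(protos[i] - protos[j])
            loss += max(0.0, margin - dist) ** 2
            pairs += 1
    return loss / pairs
```

Adding such a term during prototype updates keeps base- and novel-class prototypes from collapsing onto each other, which is the intuition behind the harmonization.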
arXiv Detail & Related papers (2023-03-24T00:30:25Z)
- Mining Unseen Classes via Regional Objectness: A Simple Baseline for Incremental Segmentation [57.80416375466496]
Incremental or continual learning has been extensively studied for image classification tasks to alleviate catastrophic forgetting.
In this paper, we propose a simple yet effective method named Mining unseen Classes via Regional Objectness (MicroSeg).
MicroSeg is based on the assumption that background regions with strong objectness likely belong to concepts seen in historical or future stages.
In this way, the distribution characteristics of old concepts in the feature space can be better preserved, relieving the catastrophic forgetting caused by background shift.
arXiv Detail & Related papers (2022-11-13T10:06:17Z)
- Modeling the Background for Incremental and Weakly-Supervised Semantic Segmentation [39.025848280224785]
We introduce a novel incremental class learning approach for semantic segmentation.
Since each training step provides annotation only for a subset of all possible classes, pixels of the background class exhibit a semantic shift.
We demonstrate the effectiveness of our approach with an extensive evaluation on the Pascal-VOC, ADE20K, and Cityscapes datasets.
arXiv Detail & Related papers (2022-01-31T16:33:21Z)
- Leveraging Auxiliary Tasks with Affinity Learning for Weakly Supervised Semantic Segmentation [88.49669148290306]
We propose a novel weakly supervised multi-task framework called AuxSegNet to leverage saliency detection and multi-label image classification as auxiliary tasks.
Inspired by their similar structured semantics, we also propose to learn a cross-task global pixel-level affinity map from the saliency and segmentation representations.
The learned cross-task affinity can be used to refine saliency predictions and propagate CAM maps to provide improved pseudo labels for both tasks.
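Refining class activation maps with a pixel-affinity matrix amounts to a random-walk-style propagation: scores are repeatedly averaged over similar pixels using a row-normalized affinity matrix. A minimal sketch under assumed shapes (flattened `(HW, C)` scores and a dense `(HW, HW)` affinity; not AuxSegNet's implementation):

```python
import numpy as np

def refine_with_affinity(cam, affinity, n_iters=2):
    """Propagate per-pixel class scores through a row-normalized
    pixel-affinity matrix, smoothing scores across similar pixels.

    cam:      (HW, C) class scores per flattened pixel
    affinity: (HW, HW) non-negative pairwise pixel similarities
    """
    aff = affinity / affinity.sum(axis=1, keepdims=True)  # row-stochastic
    refined = np.asarray(cam, dtype=float)
    for _ in range(n_iters):
        refined = aff @ refined
    return refined
```

With an identity affinity the maps are unchanged; with a uniform affinity every pixel converges to the global class mean, so learned affinities sit in between and sharpen CAMs along object boundaries.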
arXiv Detail & Related papers (2021-07-25T11:39:58Z)
- A Simple Baseline for Semi-supervised Semantic Segmentation with Strong Data Augmentation [74.8791451327354]
We propose a simple yet effective semi-supervised learning framework for semantic segmentation.
A set of simple design and training techniques can collectively improve the performance of semi-supervised semantic segmentation significantly.
Our method achieves state-of-the-art results in the semi-supervised settings on the Cityscapes and Pascal VOC datasets.
arXiv Detail & Related papers (2021-04-15T06:01:39Z)
- Continual Semantic Segmentation via Repulsion-Attraction of Sparse and Disentangled Latent Representations [18.655840060559168]
This paper focuses on class incremental continual learning in semantic segmentation.
New categories are made available over time while previous training data is not retained.
The proposed continual learning scheme shapes the latent space to reduce forgetting whilst improving the recognition of novel classes.
arXiv Detail & Related papers (2021-03-10T21:02:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.