Continual Semantic Segmentation via Repulsion-Attraction of Sparse and
Disentangled Latent Representations
- URL: http://arxiv.org/abs/2103.06342v1
- Date: Wed, 10 Mar 2021 21:02:05 GMT
- Title: Continual Semantic Segmentation via Repulsion-Attraction of Sparse and
Disentangled Latent Representations
- Authors: Umberto Michieli and Pietro Zanuttigh
- Abstract summary: This paper focuses on class incremental continual learning in semantic segmentation.
New categories are made available over time while previous training data is not retained.
The proposed continual learning scheme shapes the latent space to reduce forgetting whilst improving the recognition of novel classes.
- Score: 18.655840060559168
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep neural networks suffer from the major limitation of catastrophic
forgetting old tasks when learning new ones. In this paper we focus on class
incremental continual learning in semantic segmentation, where new categories
are made available over time while previous training data is not retained. The
proposed continual learning scheme shapes the latent space to reduce forgetting
whilst improving the recognition of novel classes. Our framework is driven by
three novel components, which can also be combined effortlessly with existing
techniques. First, prototype matching enforces latent-space consistency on old
classes, constraining the encoder to produce similar latent representations for
previously seen classes in subsequent steps. Second, feature sparsification
makes room in the latent space to accommodate novel classes. Finally,
contrastive learning clusters features according to their semantics while
tearing apart those of different classes.
Extensive evaluation on the Pascal VOC2012 and ADE20K datasets demonstrates the
effectiveness of our approach, significantly outperforming state-of-the-art
methods.
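The three components lend themselves to a compact illustration. Below is a minimal PyTorch sketch of the three objectives, assuming per-pixel features of shape (N, D) and integer labels of shape (N,); the exact loss forms are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def prototype_matching_loss(features, labels, old_prototypes):
    """Pull features of previously seen classes toward the prototypes
    stored at the end of the previous incremental step."""
    loss = features.new_zeros(())
    for cls, proto in old_prototypes.items():  # {class_id: (D,) tensor}
        mask = labels == cls
        if mask.any():
            loss = loss + F.mse_loss(features[mask], proto.expand_as(features[mask]))
    return loss

def sparsification_loss(features):
    """L1 penalty encouraging sparse activations, freeing latent
    dimensions for classes added in later steps (assumed form)."""
    return features.abs().mean()

def contrastive_loss(features, labels, margin=1.0):
    """Attract same-class features, repel different-class ones
    (a simple pairwise margin form, assumed for illustration)."""
    f = F.normalize(features, dim=1)
    dist = torch.cdist(f, f)                           # (N, N) pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # (N, N) class-equality mask
    attract = dist[same].pow(2).mean()                 # self-pairs contribute zero
    repel = F.relu(margin - dist[~same]).pow(2).mean()
    return attract + repel
```

In training, the three terms would be summed with weighting coefficients on top of the usual segmentation cross-entropy; the attraction/repulsion term is what gives the paper its "repulsion-attraction" name.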
Related papers
- Efficient Non-Exemplar Class-Incremental Learning with Retrospective Feature Synthesis [21.348252135252412]
Current Non-Exemplar Class-Incremental Learning (NECIL) methods mitigate forgetting by storing a single prototype per class.
We propose a more efficient NECIL method that replaces prototypes with synthesized retrospective features for old classes.
Our method significantly improves the efficiency of non-exemplar class-incremental learning and achieves state-of-the-art performance.
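The summary does not spell out how the retrospective features are produced. A common NECIL recipe is to sample around stored per-class statistics; the per-class Gaussian below is an assumption for illustration, not necessarily this paper's generator.

```python
import torch

def synthesize_old_features(class_means, class_stds, n_per_class=32):
    """class_means / class_stds: {class_id: (D,) tensor} saved at earlier steps."""
    feats, labels = [], []
    for cls, mu in class_means.items():
        eps = torch.randn(n_per_class, mu.numel())
        feats.append(mu + eps * class_stds[cls])  # sample around the class mean
        labels.append(torch.full((n_per_class,), cls, dtype=torch.long))
    return torch.cat(feats), torch.cat(labels)
```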
arXiv Detail & Related papers (2024-11-03T07:19:11Z)
- Tendency-driven Mutual Exclusivity for Weakly Supervised Incremental Semantic Segmentation [56.1776710527814]
Weakly Incremental Learning for Semantic Segmentation (WILSS) leverages a pre-trained segmentation model to segment new classes using cost-effective, readily available image-level labels.
A prevailing way to solve WILSS is the generation of seed areas for each new class, serving as a form of pixel-level supervision.
We propose a tendency-driven mutual-exclusivity constraint, tailored to govern the behavior of the seed areas.
arXiv Detail & Related papers (2024-04-18T08:23:24Z)
- ECLIPSE: Efficient Continual Learning in Panoptic Segmentation with Visual Prompt Tuning [54.68180752416519]
Panoptic segmentation unifies semantic and instance segmentation in a single task.
We introduce a novel and efficient method for continual panoptic segmentation based on Visual Prompt Tuning, dubbed ECLIPSE.
Our approach freezes the base model parameters and fine-tunes only a small set of prompt embeddings, addressing both catastrophic forgetting and plasticity.
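As a rough illustration of this recipe, the sketch below freezes a pre-trained backbone and trains only a small set of prompt embeddings; the module names and the prompt-injection point are assumptions, not ECLIPSE's actual architecture.

```python
import torch
import torch.nn as nn

class PromptedSegmenter(nn.Module):
    def __init__(self, base_model, num_prompts=10, embed_dim=256):
        super().__init__()
        self.base = base_model
        for p in self.base.parameters():  # freeze all pre-trained weights
            p.requires_grad_(False)
        # Only these embeddings receive gradients in the new step.
        self.prompts = nn.Parameter(torch.randn(num_prompts, embed_dim) * 0.02)

    def forward(self, tokens):            # tokens: (B, L, D) patch embeddings
        b = tokens.size(0)
        prompts = self.prompts.unsqueeze(0).expand(b, -1, -1)
        return self.base(torch.cat([prompts, tokens], dim=1))
```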
arXiv Detail & Related papers (2024-03-29T11:31:12Z)
- Activating the Discriminability of Novel Classes for Few-shot Segmentation [48.542627940781095]
We propose to activate the discriminability of novel classes explicitly in both the feature encoding stage and the prediction stage for segmentation.
In the prediction stage, we learn a Self-Refined Online Foreground-Background classifier (SROFB), which refines itself using the high-confidence pixels of the query image.
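A minimal sketch of such an online self-refining foreground-background classifier is given below, assuming prototype-based scoring and a confidence-thresholded momentum update; the actual SROFB refinement rule may differ.

```python
import torch
import torch.nn.functional as F

def refine_fg_bg(query_feats, fg_proto, bg_proto, thresh=0.9, momentum=0.9):
    """query_feats: (N, D) pixel features; fg_proto / bg_proto: (D,) prototypes."""
    logits = torch.stack([query_feats @ bg_proto, query_feats @ fg_proto], dim=1)
    conf, pred = F.softmax(logits, dim=1).max(dim=1)
    confident = conf > thresh                 # keep only trusted query pixels
    for cls, proto in ((1, fg_proto), (0, bg_proto)):
        mask = confident & (pred == cls)
        if mask.any():                        # momentum update of the prototype
            proto.mul_(momentum).add_(query_feats[mask].mean(0), alpha=1 - momentum)
    return pred                               # refined foreground/background map
```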
arXiv Detail & Related papers (2022-12-02T12:22:36Z)
- Prototypical quadruplet for few-shot class incremental learning [24.814045065163135]
We propose a novel method that improves classification robustness by identifying a better embedding space using an improved contrastive loss.
Our approach retains previously acquired knowledge in the embedding space, even when trained with new classes.
We demonstrate the effectiveness of our method by showing that the embedding space remains intact after training with new classes, and that it outperforms existing state-of-the-art algorithms in accuracy across different sessions.
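For reference, the classic quadruplet loss (an anchor, a positive, and two negatives drawn from distinct classes) is sketched below; whether the paper's prototypical variant uses exactly this form is an assumption.

```python
import torch
import torch.nn.functional as F

def quadruplet_loss(anchor, positive, neg1, neg2, margin1=1.0, margin2=0.5):
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, neg1)
    d_nn = F.pairwise_distance(neg1, neg2)  # negatives from two different classes
    # Keep the positive pair closer than both negative pairings.
    return (F.relu(d_ap - d_an + margin1) + F.relu(d_ap - d_nn + margin2)).mean()
```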
arXiv Detail & Related papers (2022-11-05T17:19:14Z)
- Continual Attentive Fusion for Incremental Learning in Semantic Segmentation [43.98082955427662]
Deep architectures trained with gradient-based techniques suffer from catastrophic forgetting.
We introduce a novel attentive feature distillation approach to mitigate catastrophic forgetting.
We also introduce a novel strategy to account for the background class in the distillation loss, thus preventing biased predictions.
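A hedged sketch of attention-weighted feature distillation with special treatment of the background class follows; the channel-attention weighting and the background down-weighting are illustrative assumptions, not the paper's exact loss.

```python
import torch

def attentive_distill_loss(new_feats, old_feats, labels, bg_index=0):
    """new_feats / old_feats: (B, C, H, W) from the current and previous models;
    labels: (B, H, W) ground truth, assumed to match the feature resolution."""
    # Channel attention derived from the old model's activation energy.
    attn = old_feats.pow(2).mean(dim=(2, 3), keepdim=True)
    attn = attn / (attn.sum(dim=1, keepdim=True) + 1e-6)
    diff = attn * (new_feats - old_feats).pow(2)
    # Down-weight background pixels, whose meaning shifts across steps.
    bg = (labels == bg_index).unsqueeze(1).to(new_feats.dtype)
    weight = 1.0 - 0.9 * bg  # background pixels weighted 0.1, others 1.0
    return (diff * weight).mean()
```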
arXiv Detail & Related papers (2022-02-01T14:38:53Z)
- Modeling the Background for Incremental and Weakly-Supervised Semantic Segmentation [39.025848280224785]
We introduce a novel incremental class learning approach for semantic segmentation.
Since each training step provides annotation only for a subset of all possible classes, pixels of the background class exhibit a semantic shift.
We demonstrate the effectiveness of our approach with an extensive evaluation on the Pascal-VOC, ADE20K, and Cityscapes datasets.
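A common remedy for this shift, sketched below in the spirit of this line of work, is an unbiased cross-entropy in which a background label counts as "background or any old class"; treat the exact form as an assumption.

```python
import torch
import torch.nn.functional as F

def unbiased_ce(logits, labels, old_classes, bg_index=0):
    """logits: (B, K, H, W); labels: (B, H, W); old_classes: iterable of ints."""
    probs = torch.softmax(logits, dim=1)
    merged = probs.clone()
    # A background label may hide any old class, so its likelihood is the
    # summed probability of background plus all old classes.
    merged[:, bg_index] = probs[:, [bg_index] + list(old_classes)].sum(dim=1)
    return F.nll_loss(torch.log(merged + 1e-8), labels)
```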
arXiv Detail & Related papers (2022-01-31T16:33:21Z)
- Few-Shot Object Detection via Association and DIscrimination [83.8472428718097]
Few-shot object detection via Association and DIscrimination (FADI) builds a discriminative feature space for each novel class in two integral steps.
Experiments on the Pascal VOC and MS-COCO datasets demonstrate that FADI achieves new SOTA performance, significantly improving the baseline by +18.7 in any shot/split.
arXiv Detail & Related papers (2021-11-23T05:04:06Z)
- Flip Learning: Erase to Segment [65.84901344260277]
Weakly-supervised segmentation (WSS) can help reduce time-consuming and cumbersome manual annotation.
We propose a novel and general WSS framework called Flip Learning, which requires only box annotations.
Our proposed approach achieves competitive performance and shows great potential to narrow the gap between fully-supervised and weakly-supervised learning.
arXiv Detail & Related papers (2021-08-02T09:56:10Z)
- Incremental Embedding Learning via Zero-Shot Translation [65.94349068508863]
Current state-of-the-art incremental learning methods tackle the catastrophic forgetting problem in traditional classification networks.
We propose a novel class-incremental method for embedding networks, named the zero-shot translation class-incremental method (ZSTCI).
In addition, ZSTCI can easily be combined with existing regularization-based incremental learning methods to further improve the performance of embedding networks.
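The sketch below illustrates the translation idea as the summary describes it: a small translator, trained only on current-step data, maps embeddings from the previous model's space into the current one so that old prototypes stay comparable; the architecture and training signal are assumptions.

```python
import torch
import torch.nn as nn

translator = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 256))
# e.g. optimizer = torch.optim.Adam(translator.parameters(), lr=1e-3)

def translation_step(old_model, new_model, images, optimizer):
    with torch.no_grad():
        z_old = old_model(images)  # embeddings in the previous model's space
        z_new = new_model(images)  # embeddings in the current model's space
    loss = (translator(z_old) - z_new).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```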
arXiv Detail & Related papers (2020-12-31T08:21:37Z)
- Class-Incremental Learning for Semantic Segmentation Re-Using Neither Old Data Nor Old Labels [35.586031601299034]
We present a class-incremental learning technique for semantic segmentation that reuses neither the labeled data the model was initially trained on nor the old labels, requiring annotations only for the new classes.
We evaluate our method on the Cityscapes dataset, where we exceed the mIoU performance of all baselines by 3.5% absolute.
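One way to train without old labels, sketched here as an assumption about the general recipe rather than this paper's exact procedure, is to let the frozen previous model's predictions serve as pseudo-labels for old classes and merge them with the new-class ground truth.

```python
import torch

def merge_labels(old_logits, new_gt, new_class_ids):
    """old_logits: (B, K_old, H, W) from the frozen previous model;
    new_gt: (B, H, W) ground truth annotating new classes only."""
    merged = old_logits.argmax(dim=1)  # pseudo-labels for the old classes
    for c in new_class_ids:            # new-class ground truth takes priority
        merged[new_gt == c] = c
    return merged
```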
arXiv Detail & Related papers (2020-05-12T21:03:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.