Uncertainty-aware Contrastive Distillation for Incremental Semantic
Segmentation
- URL: http://arxiv.org/abs/2203.14098v1
- Date: Sat, 26 Mar 2022 15:32:12 GMT
- Title: Uncertainty-aware Contrastive Distillation for Incremental Semantic
Segmentation
- Authors: Guanglei Yang, Enrico Fini, Dan Xu, Paolo Rota, Mingli Ding, Moin
Nabi, Xavier Alameda-Pineda, Elisa Ricci
- Abstract summary: Catastrophic forgetting is the tendency of neural networks to fail to preserve the knowledge acquired from old tasks when learning new tasks.
We propose a novel distillation framework, Uncertainty-aware Contrastive Distillation (UCD).
Our results demonstrate the advantage of the proposed distillation technique, which can be used in synergy with previous IL approaches.
- Score: 46.14545656625703
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A fundamental and challenging problem in deep learning is catastrophic
forgetting, i.e. the tendency of neural networks to fail to preserve the
knowledge acquired from old tasks when learning new tasks. This problem has
been widely investigated in the research community and several Incremental
Learning (IL) approaches have been proposed in the past years. While earlier
works in computer vision have mostly focused on image classification and object
detection, more recently some IL approaches for semantic segmentation have been
introduced. These previous works showed that, despite its simplicity, knowledge
distillation can be effectively employed to alleviate catastrophic forgetting.
In this paper, we follow this research direction and, inspired by recent
literature on contrastive learning, we propose a novel distillation framework,
Uncertainty-aware Contrastive Distillation (UCD). In a nutshell, UCD operates
by introducing a novel distillation loss that takes into account all the images
in a mini-batch, enforcing similarity between features associated with pixels
of the same class and pulling apart those corresponding to pixels of different
classes. To mitigate catastrophic forgetting, we
contrast features of the new model with features extracted by a frozen model
learned at the previous incremental step. Our experimental results demonstrate
the advantage of the proposed distillation technique, which can be used in
synergy with previous IL approaches, and leads to state-of-the-art performance
on three commonly adopted benchmarks for incremental semantic segmentation. The
code is available at https://github.com/ygjwd12345/UCD.
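To make the distillation objective concrete, here is a minimal PyTorch sketch of such a pixel-wise contrastive distillation loss: features from the current model are contrasted against features from the frozen previous-step model, with same-class pixels anywhere in the mini-batch acting as positives and different-class pixels as negatives. The function name, tensor shapes, ignore-index convention, and pixel subsampling are illustrative assumptions rather than the authors' implementation, and the per-pixel uncertainty weighting that gives UCD its name is omitted; see https://github.com/ygjwd12345/UCD for the actual method.

```python
import torch
import torch.nn.functional as F

def contrastive_distillation_loss(feat_new, feat_old, labels, temperature=0.1,
                                  max_pixels=1024, ignore_index=255):
    """Hypothetical pixel-wise contrastive distillation loss (not the UCD code).

    feat_new: (B, C, H, W) features from the current model (anchors).
    feat_old: (B, C, H, W) features from the frozen previous-step model (keys).
    labels:   (B, H, W) class indices, assumed already resized to (H, W).
    """
    B, C, H, W = feat_new.shape
    # Flatten every pixel in the mini-batch into one pool of feature vectors.
    q = F.normalize(feat_new.permute(0, 2, 3, 1).reshape(-1, C), dim=1)
    k = F.normalize(feat_old.permute(0, 2, 3, 1).reshape(-1, C), dim=1)
    y = labels.reshape(-1)

    valid = y != ignore_index                          # drop unlabeled pixels
    q, k, y = q[valid], k[valid], y[valid]
    idx = torch.randperm(y.numel())[:max_pixels]       # bound the N x N matrix
    q, k, y = q[idx], k[idx], y[idx]

    sim = q @ k.t() / temperature                      # anchor-key similarities
    pos = (y.unsqueeze(0) == y.unsqueeze(1)).float()   # same-class pairs are positives

    # InfoNCE-style: average log-likelihood of the positive keys per anchor.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    loss = -(pos * log_prob).sum(dim=1) / pos.sum(dim=1).clamp(min=1)
    return loss.mean()
```

Using the old model's features as the key set couples the contrastive term with distillation: the new model is rewarded for staying close to the frozen representation of same-class pixels anywhere in the batch, rather than matching each pixel's old feature in isolation.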
Related papers
- ECLIPSE: Efficient Continual Learning in Panoptic Segmentation with Visual Prompt Tuning [54.68180752416519]
Panoptic segmentation, which unifies semantic and instance segmentation, is a challenging computer vision task.
We introduce a novel and efficient method for continual panoptic segmentation based on Visual Prompt Tuning, dubbed ECLIPSE.
Our approach involves freezing the base model parameters and fine-tuning only a small set of prompt embeddings, addressing both catastrophic forgetting and plasticity (a generic sketch of this freeze-and-prompt recipe appears after this list).
arXiv Detail & Related papers (2024-03-29T11:31:12Z) - Fine-Grained Knowledge Selection and Restoration for Non-Exemplar Class
Incremental Learning [64.14254712331116]
Non-exemplar class incremental learning aims to learn both the new and old tasks without accessing any training data from the past.
We propose a novel framework of fine-grained knowledge selection and restoration.
arXiv Detail & Related papers (2023-12-20T02:34:11Z) - Multi-to-Single Knowledge Distillation for Point Cloud Semantic
Segmentation [41.02741249858771]
We propose a novel multi-to-single knowledge distillation framework for the 3D point cloud semantic segmentation task.
Instead of fusing all the points of multi-scans directly, only the instances that belong to the previously defined hard classes are fused.
arXiv Detail & Related papers (2023-04-28T12:17:08Z) - Continual Attentive Fusion for Incremental Learning in Semantic
Segmentation [43.98082955427662]
Deep architectures trained with gradient-based techniques suffer from catastrophic forgetting.
We introduce a novel attentive feature distillation approach to mitigate catastrophic forgetting.
We also introduce a novel strategy to account for the background class in the distillation loss, thus preventing biased predictions.
arXiv Detail & Related papers (2022-02-01T14:38:53Z) - A Contrastive Distillation Approach for Incremental Semantic
Segmentation in Aerial Images [15.75291664088815]
A major issue concerning current deep neural architectures is known as catastrophic forgetting.
We propose a contrastive regularization, where any given input is compared with its augmented version.
We show the effectiveness of our solution on the Potsdam dataset, outperforming the incremental baseline in every test.
arXiv Detail & Related papers (2021-12-07T16:44:45Z) - Continual Semantic Segmentation via Repulsion-Attraction of Sparse and
Disentangled Latent Representations [18.655840060559168]
This paper focuses on class incremental continual learning in semantic segmentation.
New categories are made available over time while previous training data is not retained.
The proposed continual learning scheme shapes the latent space to reduce forgetting whilst improving the recognition of novel classes.
arXiv Detail & Related papers (2021-03-10T21:02:05Z) - Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
The Prototype-centered Attentive Learning (PAL) model is composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective.
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impacts of outliers.
arXiv Detail & Related papers (2021-01-20T11:48:12Z) - Contrastive Rendering for Ultrasound Image Segmentation [59.23915581079123]
The lack of sharp boundaries in US images remains an inherent challenge for segmentation.
We propose a novel and effective framework to improve boundary estimation in US images.
Our proposed method outperforms state-of-the-art methods and has the potential to be used in clinical practice.
arXiv Detail & Related papers (2020-10-10T07:14:03Z) - Two-Level Residual Distillation based Triple Network for Incremental
Object Detection [21.725878050355824]
We propose a novel incremental object detector based on Faster R-CNN to continuously learn from new object classes without using old data.
It is a triple network in which an old model and a residual model act as assistants, helping the incremental model learn new classes without forgetting previously learned knowledge.
arXiv Detail & Related papers (2020-07-27T11:04:57Z) - Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new-classes and scales to high-capacity models for object detection.
arXiv Detail & Related papers (2020-03-17T13:40:00Z)
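As referenced in the ECLIPSE entry above, here is a generic sketch of the freeze-and-prompt recipe that entry describes: all base-model parameters are frozen and only a small set of prompt embeddings is trained at each incremental step. The class name, shapes, and token-prepending scheme are assumptions for illustration; the actual ECLIPSE method is built on a panoptic segmentation architecture and differs in detail.

```python
import torch
import torch.nn as nn

class PromptTunedModel(nn.Module):
    """Hypothetical visual-prompt-tuning wrapper: frozen base, trainable prompts."""

    def __init__(self, base_model: nn.Module, num_prompts: int, embed_dim: int):
        super().__init__()
        self.base = base_model
        for p in self.base.parameters():
            p.requires_grad = False                  # freeze the base model
        # Only these embeddings receive gradients at each incremental step.
        self.prompts = nn.Parameter(torch.randn(num_prompts, embed_dim) * 0.02)

    def forward(self, tokens):                       # tokens: (B, N, D)
        prompts = self.prompts.unsqueeze(0).expand(tokens.shape[0], -1, -1)
        # Prepend the learnable prompts to the token sequence.
        return self.base(torch.cat([prompts, tokens], dim=1))
```

Because only the prompts are updated, knowledge stored in the frozen weights cannot be overwritten (mitigating forgetting), while the new prompts provide the capacity to adapt to new classes (plasticity).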
This list is automatically generated from the titles and abstracts of the papers in this site.