A Contrastive Distillation Approach for Incremental Semantic
Segmentation in Aerial Images
- URL: http://arxiv.org/abs/2112.03814v1
- Date: Tue, 7 Dec 2021 16:44:45 GMT
- Title: A Contrastive Distillation Approach for Incremental Semantic
Segmentation in Aerial Images
- Authors: Edoardo Arnaudo, Fabio Cermelli, Antonio Tavera, Claudio Rossi,
Barbara Caputo
- Abstract summary: A major issue concerning current deep neural architectures is known as catastrophic forgetting.
We propose a contrastive regularization, where any given input is compared with its augmented version.
We show the effectiveness of our solution on the Potsdam dataset, outperforming the incremental baseline in every test.
- Score: 15.75291664088815
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Incremental learning represents a crucial task in aerial image processing,
especially given the limited availability of large-scale annotated datasets. A
major issue concerning current deep neural architectures is known as
catastrophic forgetting, namely the inability to faithfully maintain past
knowledge once a new set of data is provided for retraining. Over the years,
several techniques have been proposed to mitigate this problem for image
classification and object detection. However, only recently has the focus
shifted towards more complex downstream tasks such as instance or semantic
segmentation. Starting from incremental-class learning for semantic
segmentation tasks, our goal is to adapt this strategy to the aerial domain,
exploiting a peculiar feature that differentiates it from natural images,
namely the orientation. In addition to the standard knowledge distillation
approach, we propose a contrastive regularization, where any given input is
compared with its augmented version (i.e. flipping and rotations) in order to
minimize the difference between the segmentation features produced by both
inputs. We show the effectiveness of our solution on the Potsdam dataset,
outperforming the incremental baseline in every test. Code available at:
https://github.com/edornd/contrastive-distillation.
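The contrastive regularization described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: `feat_fn` is a stand-in for the segmentation backbone, and a single 90-degree rotation stands in for the full set of flips and rotations. Features of the augmented view are mapped back through the inverse rotation so both feature grids align, and their difference is penalized.

```python
import numpy as np

def contrastive_distillation_loss(feat_fn, image, k):
    """Hypothetical sketch of the contrastive regularization.

    feat_fn : callable mapping an (H, W, C) array to an (H, W, D)
              feature map (stands in for the segmentation network).
    image   : input image as an (H, W, C) array.
    k       : number of 90-degree rotations used as the augmentation.
    """
    feats = feat_fn(image)                       # features of the original input
    augmented = np.rot90(image, k, axes=(0, 1))  # augmented view (rotation)
    feats_aug = feat_fn(augmented)               # features of the augmented view
    # Undo the rotation on the feature map so the two grids are comparable
    feats_aligned = np.rot90(feats_aug, -k, axes=(0, 1))
    # Penalize disagreement between the two feature maps
    return float(np.mean((feats - feats_aligned) ** 2))
```

With a perfectly rotation-equivariant feature extractor (e.g. the identity) the loss is zero; during training, a term of this form would be added to the standard cross-entropy and knowledge-distillation objectives.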
Related papers
- SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation [69.42764583465508]
We explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks.
To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation.
arXiv Detail & Related papers (2024-03-25T10:30:22Z)
- Cross-Level Distillation and Feature Denoising for Cross-Domain Few-Shot Classification [49.36348058247138]
We tackle the problem of cross-domain few-shot classification by making a small proportion of unlabeled images in the target domain accessible in the training stage.
We meticulously design a cross-level knowledge distillation method, which can strengthen the ability of the model to extract more discriminative features in the target dataset.
Our approach can surpass the previous state-of-the-art method, Dynamic-Distillation, by 5.44% on 1-shot and 1.37% on 5-shot classification tasks.
arXiv Detail & Related papers (2023-11-04T12:28:04Z)
- Few-shot Image Classification based on Gradual Machine Learning [6.935034849731568]
Few-shot image classification aims to accurately classify unlabeled images using only a few labeled samples.
We propose a novel approach based on the non-i.i.d. paradigm of gradual machine learning (GML).
We show that the proposed approach can improve the SOTA performance by 1-5% in terms of accuracy.
arXiv Detail & Related papers (2023-07-28T12:30:41Z)
- Uncertainty-aware Contrastive Distillation for Incremental Semantic Segmentation [46.14545656625703]
Catastrophic forgetting is the tendency of neural networks to fail to preserve the knowledge acquired from old tasks when learning new tasks.
We propose a novel distillation framework, Uncertainty-aware Contrastive Distillation.
Our results demonstrate the advantage of the proposed distillation technique, which can be used in synergy with previous IL approaches.
arXiv Detail & Related papers (2022-03-26T15:32:12Z)
- SATS: Self-Attention Transfer for Continual Semantic Segmentation [50.51525791240729]
Continual semantic segmentation suffers from the same catastrophic forgetting issue as continual classification learning.
This study proposes to transfer a new type of knowledge-relevant information, i.e. the relationships between elements within each image.
The relationship information can be effectively obtained from the self-attention maps in a Transformer-style segmentation model.
arXiv Detail & Related papers (2022-03-15T06:09:28Z)
- Modeling the Background for Incremental and Weakly-Supervised Semantic Segmentation [39.025848280224785]
We introduce a novel incremental class learning approach for semantic segmentation.
Since each training step provides annotation only for a subset of all possible classes, pixels of the background class exhibit a semantic shift.
We demonstrate the effectiveness of our approach with an extensive evaluation on the Pascal-VOC, ADE20K, and Cityscapes datasets.
arXiv Detail & Related papers (2022-01-31T16:33:21Z)
- A Simple Baseline for Semi-supervised Semantic Segmentation with Strong Data Augmentation [74.8791451327354]
We propose a simple yet effective semi-supervised learning framework for semantic segmentation.
A set of simple design and training techniques can collectively improve the performance of semi-supervised semantic segmentation significantly.
Our method achieves state-of-the-art results in the semi-supervised settings on the Cityscapes and Pascal VOC datasets.
arXiv Detail & Related papers (2021-04-15T06:01:39Z)
- A Few Guidelines for Incremental Few-Shot Segmentation [57.34237650765928]
Given a pretrained segmentation model and few images containing novel classes, our goal is to learn to segment novel classes while retaining the ability to segment previously seen ones.
We identify the main problems of end-to-end training in this scenario:
i) the drift of the batch-normalization statistics toward novel classes, which we fix with batch renormalization, and
ii) the forgetting of old classes, which we fix with regularization strategies.
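Fix (i) can be illustrated with a batch-renormalization step. This is a generic NumPy sketch of the technique (Ioffe, 2017), not this paper's code: batch statistics are corrected toward the running statistics through clipped factors `r` and `d`, so a few novel-class batches cannot drift the effective normalization far from what the old classes saw.

```python
import numpy as np

def batch_renorm(x, moving_mean, moving_var, r_max=3.0, d_max=5.0, eps=1e-5):
    """Sketch of batch renormalization over a (batch, features) array.

    moving_mean / moving_var are the running statistics accumulated
    during earlier training steps; r and d pull the batch statistics
    back toward them, within clipped bounds.
    """
    batch_mean = x.mean(axis=0)
    batch_std = np.sqrt(x.var(axis=0) + eps)
    moving_std = np.sqrt(moving_var + eps)
    # Correction factors, treated as constants (no gradient) in training
    r = np.clip(batch_std / moving_std, 1.0 / r_max, r_max)
    d = np.clip((batch_mean - moving_mean) / moving_std, -d_max, d_max)
    # Normalize with batch statistics, then correct toward running statistics
    return (x - batch_mean) / batch_std * r + d
```

When the batch statistics already match the running statistics, r = 1 and d = 0 and the step reduces to ordinary batch normalization; the clipping bounds limit how far a skewed novel-class batch can move the normalization.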
arXiv Detail & Related papers (2020-11-30T20:45:56Z)
- Mining Cross-Image Semantics for Weakly Supervised Semantic Segmentation [128.03739769844736]
Two neural co-attentions are incorporated into the classifier to capture cross-image semantic similarities and differences.
In addition to boosting object pattern learning, the co-attention can leverage context from other related images to improve localization map inference.
Our algorithm sets new state-of-the-art results in all these settings, demonstrating its efficacy and generalizability.
arXiv Detail & Related papers (2020-07-03T21:53:46Z)
- Modeling the Background for Incremental Learning in Semantic Segmentation [39.025848280224785]
Deep architectures are vulnerable to catastrophic forgetting.
This paper addresses this problem in the context of semantic segmentation.
We propose a new distillation-based framework which explicitly accounts for this shift.
arXiv Detail & Related papers (2020-02-03T13:30:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.