Evidential Deep Learning for Class-Incremental Semantic Segmentation
- URL: http://arxiv.org/abs/2212.02863v1
- Date: Tue, 6 Dec 2022 10:13:30 GMT
- Title: Evidential Deep Learning for Class-Incremental Semantic Segmentation
- Authors: Karl Holmquist, Lena Klasén, Michael Felsberg
- Abstract summary: Class-Incremental Learning is a challenging problem in machine learning that aims to extend previously trained neural networks with new classes.
In this paper, we address the problem of how to model unlabeled classes while avoiding spurious feature clustering of future uncorrelated classes.
Our method factorizes the problem into a separate foreground class probability, calculated as the expected value of the Dirichlet distribution, and an unknown class (background) probability corresponding to the uncertainty of the estimate.
- Score: 15.563703446465823
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Class-Incremental Learning is a challenging problem in machine learning that
aims to extend previously trained neural networks with new classes. This is
especially useful if the system is able to classify new objects despite the
original training data being unavailable. While the semantic segmentation
problem has received less attention than classification, it poses distinct
problems and challenges since previous and future target classes can be
unlabeled in the images of a single increment. In this case, the background, past, and future classes are correlated, and there exists a background shift. In
this paper, we address the problem of how to model unlabeled classes while
avoiding spurious feature clustering of future uncorrelated classes. We propose
to use Evidential Deep Learning to model the evidence of the classes as a
Dirichlet distribution. Our method factorizes the problem into a separate foreground class probability, calculated as the expected value of the Dirichlet distribution, and an unknown class (background) probability corresponding to the uncertainty of the estimate. In our novel formulation, the background
probability is implicitly modeled, avoiding the feature space clustering that
comes from forcing the model to output a high background score for pixels that
are not labeled as objects. Experiments on the incremental Pascal VOC and ADE20k benchmarks show that our method is superior to the state of the art, especially when repeatedly learning new classes over an increasing number of increments.
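To make the factorization concrete, here is a minimal sketch in the standard evidential-deep-learning setting, where non-negative evidence is mapped to Dirichlet concentration parameters. The exact activation, architecture, and loss in the paper may differ; the function and variable names below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def dirichlet_factorization(logits: torch.Tensor):
    """Illustrative per-pixel Dirichlet factorization.

    logits: (B, K, H, W) network outputs for the K known foreground classes.
    Returns the expected foreground class probabilities and an implicit
    background probability given by the Dirichlet uncertainty mass.
    """
    # Map raw outputs to non-negative evidence (softplus is one common choice).
    evidence = F.softplus(logits)
    # Dirichlet concentration parameters: alpha_k = e_k + 1.
    alpha = evidence + 1.0
    strength = alpha.sum(dim=1, keepdim=True)  # S = sum_k alpha_k
    # Foreground: expected value of the Dirichlet, E[p_k] = alpha_k / S.
    p_foreground = alpha / strength
    # Background: the uncertainty mass u = K / S, which is large whenever
    # little evidence supports any of the known classes.
    p_background = logits.shape[1] / strength
    return p_foreground, p_background
```

Because the background probability falls out of the uncertainty term rather than a dedicated output node, unlabeled pixels are never explicitly pushed toward a shared background cluster in feature space, which is the failure mode the abstract describes.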
Related papers
- Covariance-based Space Regularization for Few-shot Class Incremental Learning [25.435192867105552]
Few-shot Class Incremental Learning (FSCIL) requires the model to continually learn new classes with limited labeled data.
Due to the limited data in incremental sessions, models are prone to overfitting new classes and suffering catastrophic forgetting of base classes.
Recent advancements resort to prototype-based approaches to constrain the base class distribution and learn discriminative representations of new classes.
arXiv Detail & Related papers (2024-11-02T08:03:04Z)
- Happy: A Debiased Learning Framework for Continual Generalized Category Discovery [54.54153155039062]
This paper explores the underexplored task of Continual Generalized Category Discovery (C-GCD).
C-GCD aims to incrementally discover new classes from unlabeled data while maintaining the ability to recognize previously learned classes.
We introduce a debiased learning framework, namely Happy, characterized by Hardness-aware prototype sampling and soft entropy regularization.
arXiv Detail & Related papers (2024-10-09T04:18:51Z)
- Tendency-driven Mutual Exclusivity for Weakly Supervised Incremental Semantic Segmentation [56.1776710527814]
Weakly Incremental Learning for Semantic Segmentation (WILSS) leverages a pre-trained segmentation model to segment new classes using cost-effective and readily available image-level labels.
A prevailing way to solve WILSS is the generation of seed areas for each new class, serving as a form of pixel-level supervision.
We propose a tendency-driven mutual-exclusivity relationship tailored to govern the behavior of the seed areas.
arXiv Detail & Related papers (2024-04-18T08:23:24Z)
- Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration [67.69532794049445]
We find that existing methods tend to misclassify samples of new classes into base classes, which leads to poor performance on new classes.
We propose a simple yet effective Training-frEE calibratioN (TEEN) strategy to enhance the discriminability of new classes.
arXiv Detail & Related papers (2023-12-08T18:24:08Z)
- RanPAC: Random Projections and Pre-trained Models for Continual Learning [59.07316955610658]
Continual learning (CL) aims to learn different tasks (such as classification) in a non-stationary data stream without forgetting old ones.
We propose a concise and effective approach for CL with pre-trained models.
arXiv Detail & Related papers (2023-07-05T12:49:02Z)
- Generalization Bounds for Few-Shot Transfer Learning with Pretrained Classifiers [26.844410679685424]
We study the ability of foundation models to learn representations for classification that are transferable to new, unseen classes.
We show that the few-shot error of the learned feature map on new classes is small in case of class-feature-variability collapse.
arXiv Detail & Related papers (2022-12-23T18:46:05Z)
- GMM-IL: Image Classification using Incrementally Learnt, Independent Probabilistic Models for Small Sample Sizes [0.4511923587827301]
We present a novel two-stage architecture which couples visual feature learning with probabilistic models to represent each class.
We outperform a benchmark of an equivalent network with a Softmax head, obtaining increased accuracy for sample sizes smaller than 12 and increased weighted F1 score for 3 imbalanced class profiles.
arXiv Detail & Related papers (2022-12-01T15:19:42Z)
- Few-shot Open-set Recognition Using Background as Unknowns [58.04165813493666]
Few-shot open-set recognition aims to classify both seen and novel images given only limited training data of seen classes.
Our proposed method not only outperforms multiple baselines but also sets new results on three popular benchmarks.
arXiv Detail & Related papers (2022-07-19T04:19:29Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- Class-incremental Learning with Pre-allocated Fixed Classifiers [20.74548175713497]
In class-incremental learning, a learning agent faces a stream of data with the goal of learning new classes while not forgetting previous ones.
We propose a novel fixed classifier in which a number of pre-allocated output nodes are subject to the classification loss right from the beginning of the learning phase.
arXiv Detail & Related papers (2020-10-16T22:40:28Z)
- Modeling the Background for Incremental Learning in Semantic Segmentation [39.025848280224785]
Deep architectures are vulnerable to catastrophic forgetting.
This paper addresses this problem in the context of semantic segmentation.
We propose a new distillation-based framework which explicitly accounts for the semantic shift of the background class.
arXiv Detail & Related papers (2020-02-03T13:30:38Z)
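The last entry above hinges on a distillation term that keeps the new model's predictions on old classes close to those of the frozen previous model. As a hedged illustration only, the sketch below shows a generic output-distillation loss for incremental segmentation; it is not the cited paper's specific background-aware formulation, and all names are assumptions.

```python
import torch
import torch.nn.functional as F

def generic_distillation_loss(new_logits: torch.Tensor,
                              old_logits: torch.Tensor) -> torch.Tensor:
    """Generic output-distillation term for incremental segmentation.

    new_logits: (B, K_new, H, W) from the current model.
    old_logits: (B, K_old, H, W) from the frozen previous model, K_old <= K_new.
    Only the first K_old channels of the new model are matched against the
    old model's soft predictions.
    """
    k_old = old_logits.shape[1]
    old_probs = F.softmax(old_logits, dim=1)
    # Renormalize the new model over the old classes before comparing.
    new_log_probs = F.log_softmax(new_logits[:, :k_old], dim=1)
    # Cross-entropy against the old soft targets, averaged over pixels.
    return -(old_probs * new_log_probs).sum(dim=1).mean()
```

In practice such a term is added to the standard segmentation loss with a weighting factor, so old-class behavior is preserved while new classes are learned.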