Incremental Few-Shot Semantic Segmentation via Embedding Adaptive-Update
and Hyper-class Representation
- URL: http://arxiv.org/abs/2207.12964v1
- Date: Tue, 26 Jul 2022 15:20:07 GMT
- Title: Incremental Few-Shot Semantic Segmentation via Embedding Adaptive-Update
and Hyper-class Representation
- Authors: Guangchen Shi, Yirui Wu, Jun Liu, Shaohua Wan, Wenhai Wang, Tong Lu
- Abstract summary: Experiments on PASCAL-5i and COCO datasets show that EHNet achieves new state-of-the-art performance with remarkable advantages.
- Score: 30.558312809285905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Incremental few-shot semantic segmentation (IFSS) aims to incrementally
expand a model's capacity to segment new classes of images supervised by only a
few samples. However, features learned on old classes can drift significantly,
causing catastrophic forgetting. Moreover, the few samples available for
pixel-level segmentation of new classes lead to notorious overfitting in each
learning session. In this paper, we explicitly represent class-based knowledge
for semantic segmentation as a category embedding and a hyper-class embedding,
where the former describes exclusive semantic properties, and the latter
expresses hyper-class knowledge as class-shared semantic properties. To solve
IFSS problems, we present EHNet, i.e., an Embedding adaptive-update and
Hyper-class representation Network, built on two designs. First, we propose an
embedding adaptive-update strategy to avoid feature drift, which maintains old
knowledge through the hyper-class representation and adaptively updates category
embeddings with a class-attention scheme to incorporate new classes learned in
individual sessions. Second, to resist overfitting caused by few training
samples, a hyper-class embedding is learned by clustering all category
embeddings for initialization and is aligned with the category embedding of the
new class for enhancement, so that learned knowledge assists in acquiring new
knowledge, thus alleviating the dependence of performance on training data
scale. Together, these two designs provide representation capability for classes
with sufficient semantics and limited bias, enabling the model to perform
segmentation tasks with high semantic dependence. Experiments on PASCAL-5i and
COCO datasets show that EHNet achieves new state-of-the-art performance with
remarkable advantages.
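The two designs described in the abstract, initializing a hyper-class embedding by clustering category embeddings and adaptively updating a new class's embedding with a class-attention scheme, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual formulation: the dimensions, the single-cluster mean used in place of a full clustering, and the mixing coefficient `alpha` are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 previously learned categories, 64-dim embeddings.
num_old, dim = 5, 64
category_emb = rng.normal(size=(num_old, dim))  # learned category embeddings

# (1) Hyper-class embedding initialized by clustering all category
#     embeddings (here reduced to a single-cluster mean for illustration).
hyper_emb = category_emb.mean(axis=0)

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - x.max())
    return e / e.sum()

# (2) Class-attention adaptive update: the few-shot embedding of a new
#     class attends over existing category embeddings; old embeddings
#     stay frozen, so old knowledge is not overwritten.
new_emb = rng.normal(size=dim)                          # few-shot estimate
attn = softmax(category_emb @ new_emb / np.sqrt(dim))   # attention weights
adapted_new = new_emb + attn @ category_emb             # adaptive update

# (3) Align the hyper-class embedding with the new class's embedding so
#     class-shared semantics assist the data-scarce new class.
alpha = 0.1  # assumed mixing coefficient
hyper_emb = (1 - alpha) * hyper_emb + alpha * adapted_new
```

The key property the sketch preserves is that old category embeddings are never modified directly; only the new class's embedding and the shared hyper-class embedding are updated, which is how the drift described in the abstract is avoided.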
Related papers
- Dual Consolidation for Pre-Trained Model-Based Domain-Incremental Learning [64.1745161657794]
Domain-Incremental Learning (DIL) involves the progressive adaptation of a model to new concepts across different domains.
Recent advances in pre-trained models provide a solid foundation for DIL.
However, learning new concepts often results in the catastrophic forgetting of pre-trained knowledge.
We propose DUal ConsolidaTion (Duct) to unify and consolidate historical knowledge.
arXiv Detail & Related papers (2024-10-01T17:58:06Z)
- Cs2K: Class-specific and Class-shared Knowledge Guidance for Incremental Semantic Segmentation [31.82132159867097]
Incremental semantic segmentation endeavors to segment newly encountered classes while maintaining knowledge of old classes.
We propose the Class-specific and Class-shared Knowledge (Cs2K) guidance for incremental semantic segmentation.
Our proposed Cs2K significantly improves segmentation performance and is plug-and-play.
arXiv Detail & Related papers (2024-07-12T07:15:26Z)
- Dynamic Feature Learning and Matching for Class-Incremental Learning [20.432575325147894]
Class-incremental learning (CIL) has emerged as a means to learn new classes without catastrophic forgetting of previous classes.
We propose the Dynamic Feature Learning and Matching (DFLM) model in this paper.
Our proposed model achieves significant performance improvements over existing methods.
arXiv Detail & Related papers (2024-05-14T12:17:19Z)
- Tendency-driven Mutual Exclusivity for Weakly Supervised Incremental Semantic Segmentation [56.1776710527814]
Weakly Incremental Learning for Semantic Segmentation (WILSS) leverages a pre-trained segmentation model to segment new classes using cost-effective and readily available image-level labels.
A prevailing way to solve WILSS is the generation of seed areas for each new class, serving as a form of pixel-level supervision.
We propose an innovative, tendency-driven relationship of mutual exclusivity, meticulously tailored to govern the behavior of the seed areas.
arXiv Detail & Related papers (2024-04-18T08:23:24Z)
- Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning [65.57123249246358]
We propose ExpAndable Subspace Ensemble (EASE) for PTM-based CIL.
We train a distinct lightweight adapter module for each new task, aiming to create task-specific subspaces.
Our prototype complement strategy synthesizes old classes' new features without using any old class instance.
arXiv Detail & Related papers (2024-03-18T17:58:13Z)
- RaSP: Relation-aware Semantic Prior for Weakly Supervised Incremental Segmentation [28.02204928717511]
We propose a weakly supervised approach to transfer objectness prior from the previously learned classes into the new ones.
We show how even a simple pairwise interaction between classes can significantly improve the segmentation mask quality of both old and new classes.
arXiv Detail & Related papers (2023-05-31T14:14:21Z)
- Advancing Incremental Few-shot Semantic Segmentation via Semantic-guided Relation Alignment and Adaptation [98.51938442785179]
Incremental few-shot semantic segmentation aims to incrementally extend a semantic segmentation model to novel classes.
This task faces a severe semantic-aliasing issue between base and novel classes due to data imbalance.
We propose the Semantic-guided Relation Alignment and Adaptation (SRAA) method that fully considers the guidance of prior semantic information.
arXiv Detail & Related papers (2023-05-18T10:40:52Z)
- Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks [59.12108527904171]
A model should recognize new classes and maintain discriminability over old classes.
The task of recognizing few-shot new classes without forgetting old classes is called few-shot class-incremental learning (FSCIL).
We propose a new paradigm for FSCIL based on meta-learning by LearnIng Multi-phase Incremental Tasks (LIMIT).
arXiv Detail & Related papers (2022-03-31T13:46:41Z)
- Incremental Few-Shot Learning via Implanting and Compressing [13.122771115838523]
Incremental Few-Shot Learning requires a model to continually learn novel classes from only a few examples.
We propose a two-step learning strategy referred to as Implanting and Compressing.
Specifically, in the Implanting step, we propose to mimic the data distribution of novel classes with the assistance of the data-abundant base set.
In the Compressing step, we adapt the feature extractor to precisely represent each novel class, enhancing intra-class compactness.
arXiv Detail & Related papers (2022-03-19T11:04:43Z)
- Novel Class Discovery in Semantic Segmentation [104.30729847367104]
We introduce a new setting of Novel Class Discovery in Semantic Segmentation (NCDSS).
It aims at segmenting unlabeled images containing new classes given prior knowledge from a labeled set of disjoint classes.
In NCDSS, we need to distinguish the objects and background, and to handle the existence of multiple classes within an image.
We propose the Entropy-based Uncertainty Modeling and Self-training (EUMS) framework to overcome noisy pseudo-labels.
arXiv Detail & Related papers (2021-12-03T13:31:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of this information and is not responsible for any consequences arising from its use.