Mining Unseen Classes via Regional Objectness: A Simple Baseline for
Incremental Segmentation
- URL: http://arxiv.org/abs/2211.06866v2
- Date: Tue, 15 Nov 2022 08:05:24 GMT
- Title: Mining Unseen Classes via Regional Objectness: A Simple Baseline for
Incremental Segmentation
- Authors: Zekang Zhang, Guangyu Gao, Zhiyuan Fang, Jianbo Jiao, Yunchao Wei
- Abstract summary: Incremental or continual learning has been extensively studied for image classification tasks to alleviate catastrophic forgetting.
We propose a simple yet effective method in this paper, named Mining unseen Classes via Regional Objectness for Segmentation (MicroSeg).
Our MicroSeg is based on the assumption that background regions with strong objectness possibly belong to those concepts in the historical or future stages.
In this way, the distribution characteristics of old concepts in the feature space can be better perceived, relieving the catastrophic forgetting caused by the background shift.
- Score: 57.80416375466496
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Incremental or continual learning has been extensively studied for image
classification tasks to alleviate catastrophic forgetting, a phenomenon that
earlier learned knowledge is forgotten when learning new concepts. For class
incremental semantic segmentation, such a phenomenon often becomes much worse
due to the background shift, i.e., some concepts learned at previous stages are
assigned to the background class at the current training stage, therefore,
significantly reducing the performance of these old concepts. To address this
issue, we propose a simple yet effective method in this paper, named Mining
unseen Classes via Regional Objectness for Segmentation (MicroSeg). Our
MicroSeg is based on the assumption that background regions with strong
objectness possibly belong to those concepts in the historical or future
stages. Therefore, to avoid forgetting old knowledge at the current training
stage, our MicroSeg first splits the given image into hundreds of segment
proposals with a proposal generator. Those segment proposals with strong
objectness from the background are then clustered and assigned newly-defined
labels during the optimization. In this way, the distribution characteristics of
old concepts in the feature space can be better perceived, relieving the
catastrophic forgetting caused by the background shift. Extensive experiments on
the Pascal VOC and ADE20K datasets show results competitive with the state of
the art, validating the effectiveness of the proposed MicroSeg.
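
As a rough illustration of the mining step described in the abstract, the sketch below clusters high-objectness proposals that fall in the annotated background and assigns them newly defined pseudo-label ids. It is a minimal reading of the abstract, not the authors' implementation; the tensor layout, the masked-average-pooling features, the plain k-means, and all names (mine_background_pseudo_labels, objectness_thresh, first_pseudo_id) are assumptions.

```python
# Hypothetical sketch (not the authors' code): cluster background proposals with
# strong objectness and assign them newly defined pseudo-labels.
import torch

def mine_background_pseudo_labels(
    proposal_masks: torch.Tensor,   # (P, H, W) binary masks from a proposal generator
    objectness: torch.Tensor,       # (P,) objectness score per proposal
    gt_mask: torch.Tensor,          # (H, W) current-step labels; 0 = background
    features: torch.Tensor,         # (C, H, W) feature map of the current model
    num_clusters: int = 5,
    objectness_thresh: float = 0.5,
    first_pseudo_id: int = 100,     # label ids from here on are "newly defined"
) -> torch.Tensor:
    """Return a label map where clustered background proposals get pseudo-label ids."""
    augmented = gt_mask.clone()

    # 1) keep proposals that look like objects but lie almost entirely in the background
    area = proposal_masks.flatten(1).sum(1).clamp(min=1)
    bg_fraction = (proposal_masks * (gt_mask == 0)).flatten(1).sum(1) / area
    keep = (objectness > objectness_thresh) & (bg_fraction > 0.9)
    if keep.sum() == 0:
        return augmented

    # 2) masked average pooling -> one feature vector per kept proposal
    masks = proposal_masks[keep].float()                  # (K, H, W)
    feats = torch.einsum('chw,khw->kc', features, masks)  # (K, C)
    feats = feats / masks.flatten(1).sum(1, keepdim=True).clamp(min=1)

    # 3) a plain k-means over proposal features (stand-in for the clustering step)
    k = min(num_clusters, feats.shape[0])
    centers = feats[torch.randperm(feats.shape[0])[:k]].clone()
    for _ in range(10):
        assign = torch.cdist(feats, centers).argmin(dim=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = feats[assign == j].mean(dim=0)

    # 4) write cluster ids back as pseudo-labels on background pixels only
    for i in range(masks.shape[0]):
        region = (masks[i] > 0) & (augmented == 0)
        augmented[region] = first_pseudo_id + int(assign[i])
    return augmented
```

In an actual training loop, the augmented label map would presumably replace the current-step ground truth, so that background pixels covered by mined proposals provide a separate supervision signal instead of being lumped into a single background class.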
Related papers
- Mitigating Background Shift in Class-Incremental Semantic Segmentation [18.604420743751643]
Class-Incremental Semantic Segmentation (CISS) aims to learn new classes without forgetting the old ones.
We propose a background-class separation framework for CISS.
arXiv Detail & Related papers (2024-07-16T15:44:37Z)
- Tendency-driven Mutual Exclusivity for Weakly Supervised Incremental Semantic Segmentation [56.1776710527814]
Weakly Incremental Learning for Semantic Segmentation (WILSS) leverages a pre-trained segmentation model to segment new classes using cost-effective and readily available image-level labels.
A prevailing way to solve WILSS is the generation of seed areas for each new class, serving as a form of pixel-level supervision.
We propose an innovative, tendency-driven relationship of mutual exclusivity, meticulously tailored to govern the behavior of the seed areas.
arXiv Detail & Related papers (2024-04-18T08:23:24Z)
- Attribution-aware Weight Transfer: A Warm-Start Initialization for Class-Incremental Semantic Segmentation [38.52441363934223]
In class-incremental semantic segmentation (CISS), deep learning architectures suffer from the critical problems of catastrophic forgetting and semantic background shift.
We propose a novel method which employs gradient-based attribution to identify the most relevant weights for new classes.
Our experiments demonstrate significant improvement in mIoU compared to the state-of-the-art CISS methods on the Pascal-VOC 2012, ADE20K and Cityscapes datasets.
arXiv Detail & Related papers (2022-10-13T17:32:12Z)
- Self-Supervised Video Object Segmentation via Cutout Prediction and Tagging [117.73967303377381]
We propose a novel self-supervised Video Object Segmentation (VOS) approach that strives to achieve better object-background discriminability.
Our approach is based on a discriminative learning loss formulation that takes into account both object and background information.
Our proposed approach, CT-VOS, achieves state-of-the-art results on two challenging benchmarks: DAVIS-2017 and Youtube-VOS.
arXiv Detail & Related papers (2022-04-22T17:53:27Z)
- Modeling the Background for Incremental and Weakly-Supervised Semantic Segmentation [39.025848280224785]
We introduce a novel incremental class learning approach for semantic segmentation.
Since each training step provides annotation only for a subset of all possible classes, pixels of the background class exhibit a semantic shift.
We demonstrate the effectiveness of our approach with an extensive evaluation on the Pascal-VOC, ADE20K, and Cityscapes datasets.
arXiv Detail & Related papers (2022-01-31T16:33:21Z)
- A Simple Baseline for Semi-supervised Semantic Segmentation with Strong Data Augmentation [74.8791451327354]
We propose a simple yet effective semi-supervised learning framework for semantic segmentation.
A set of simple design and training techniques can collectively improve the performance of semi-supervised semantic segmentation significantly.
Our method achieves state-of-the-art results in the semi-supervised settings on the Cityscapes and Pascal VOC datasets.
arXiv Detail & Related papers (2021-04-15T06:01:39Z)
- Unsupervised Semantic Segmentation by Contrasting Object Mask Proposals [78.12377360145078]
We introduce a novel two-step framework that adopts a predetermined prior in a contrastive optimization objective to learn pixel embeddings.
This marks a large deviation from existing works that relied on proxy tasks or end-to-end clustering.
In particular, when fine-tuning the learned representations using just 1% of labeled examples on PASCAL, we outperform supervised ImageNet pre-training by 7.1% mIoU.
arXiv Detail & Related papers (2021-02-11T18:54:47Z)
- A Few Guidelines for Incremental Few-Shot Segmentation [57.34237650765928]
Given a pretrained segmentation model and few images containing novel classes, our goal is to learn to segment novel classes while retaining the ability to segment previously seen ones.
We show that the main problems of end-to-end training in this scenario are:
i) the drift of the batch-normalization statistics toward the novel classes, which we can fix with batch renormalization (see the sketch below), and
ii) the forgetting of old classes, which we can fix with regularization strategies.
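
For item i), batch renormalization (Ioffe, 2017) normalizes with corrected batch statistics so that a handful of novel-class images cannot drag the normalization far from the running statistics learned on the old classes. The module below is an illustrative, self-contained implementation, not the cited paper's code; the defaults (momentum, r_max, d_max) are placeholders.

```python
# Hypothetical sketch: batch renormalization as a drop-in replacement for BatchNorm2d.
import torch
import torch.nn as nn

class BatchRenorm2d(nn.Module):
    def __init__(self, num_features, eps=1e-5, momentum=0.01, r_max=3.0, d_max=5.0):
        super().__init__()
        self.eps, self.momentum, self.r_max, self.d_max = eps, momentum, r_max, d_max
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        self.register_buffer("running_mean", torch.zeros(num_features))
        self.register_buffer("running_std", torch.ones(num_features))

    def forward(self, x):                       # x: (B, C, H, W)
        if not self.training:
            x_hat = (x - self.running_mean[None, :, None, None]) / \
                    (self.running_std[None, :, None, None] + self.eps)
        else:
            mean = x.mean(dim=(0, 2, 3))
            std = x.std(dim=(0, 2, 3), unbiased=False) + self.eps
            # correction factors are treated as constants (no gradient)
            with torch.no_grad():
                r = (std / (self.running_std + self.eps)).clamp(1 / self.r_max, self.r_max)
                d = ((mean - self.running_mean) / (self.running_std + self.eps)) \
                    .clamp(-self.d_max, self.d_max)
            x_hat = (x - mean[None, :, None, None]) / std[None, :, None, None] \
                    * r[None, :, None, None] + d[None, :, None, None]
            with torch.no_grad():
                self.running_mean += self.momentum * (mean - self.running_mean)
                self.running_std += self.momentum * (std - self.running_std)
        return self.weight[None, :, None, None] * x_hat + self.bias[None, :, None, None]
```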
arXiv Detail & Related papers (2020-11-30T20:45:56Z)
- Modeling the Background for Incremental Learning in Semantic Segmentation [39.025848280224785]
Deep architectures are vulnerable to catastrophic forgetting.
This paper addresses this problem in the context of semantic segmentation.
We propose a new distillation-based framework which explicitly accounts for this shift.
arXiv Detail & Related papers (2020-02-03T13:30:38Z)
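
Several of the entries above describe distillation-based frameworks that explicitly account for the background shift. One plausible instantiation is sketched below, assuming channel 0 is the background and the new classes are appended after the old ones: the probability mass of the new classes is folded into the background channel before distilling from the frozen previous-step model. This is an illustrative loss in that spirit, not necessarily the exact formulation of any cited paper.

```python
# Hypothetical sketch of a background-aware distillation term.
import torch

def background_aware_distillation(new_logits: torch.Tensor,
                                  old_logits: torch.Tensor) -> torch.Tensor:
    """new_logits: (B, C_old + C_new, H, W) from the current model.
    old_logits:   (B, C_old, H, W) from the frozen previous-step model.
    Channel 0 is assumed to be the background in both models."""
    c_old = old_logits.shape[1]
    new_prob = new_logits.softmax(dim=1)

    # Pixels the old model called "background" may now belong to new classes, so fold
    # the new-class probability mass into the background channel before comparing.
    bg = new_prob[:, :1] + new_prob[:, c_old:].sum(dim=1, keepdim=True)
    merged = torch.cat([bg, new_prob[:, 1:c_old]], dim=1)   # (B, C_old, H, W)

    old_log_prob = old_logits.log_softmax(dim=1)
    # Cross-entropy of the old model's soft predictions under the merged current
    # distribution; minimizing it discourages forgetting without penalizing new classes.
    return -(old_log_prob.exp() * merged.clamp(min=1e-8).log()).sum(dim=1).mean()
```

Folding the new-class mass into the background keeps the old model's "background" predictions consistent with pixels that have since been relabeled as new classes, which is one way to make the distillation term robust to the background shift.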