Interclass Prototype Relation for Few-Shot Segmentation
- URL: http://arxiv.org/abs/2211.08681v1
- Date: Wed, 16 Nov 2022 05:27:52 GMT
- Title: Interclass Prototype Relation for Few-Shot Segmentation
- Authors: Atsuro Okazawa
- Abstract summary: In few-shot segmentation, the target-class data distribution in the feature space is sparse and has low coverage because the few available samples vary only slightly.
This study proposes the Interclass Prototype Relation Network (IPRNet), which improves separation performance by reducing the similarity between the target class and other classes.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional semantic segmentation requires a large labeled image dataset, and
predictions can only be made within the predefined classes. To address this,
few-shot segmentation, which requires only a handful of annotations for a new
target class, is important. However, in the few-shot setting the target-class
data distribution in the feature space is sparse and has low coverage because
the few available samples vary only slightly. This makes it very difficult to
set a classification boundary that properly separates the target class from
other classes; in particular, classes similar to the target class are hard to
distinguish near the boundary. This study proposes the Interclass Prototype
Relation Network (IPRNet), which improves separation performance by reducing
the similarity between the target class and other classes. Extensive
experiments on Pascal-5i and COCO-20i show that IPRNet provides the best
segmentation performance compared with previous research.
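The abstract describes the recipe common to prototype-based few-shot segmentation: build a class prototype from the support features under the support mask, score the query against it, and add a term that reduces similarity between classes. Below is a minimal sketch of that recipe in a PyTorch style; the function names (`masked_average_pooling`, `cosine_segmentation`, `interclass_penalty`) and the specific penalty are illustrative assumptions, not the IPRNet implementation or loss.

```python
# Sketch only: prototype-based few-shot segmentation with an interclass
# separation term. Not the authors' code; the penalty below only illustrates
# the idea of reducing similarity between prototypes of different classes.
import torch
import torch.nn.functional as F


def masked_average_pooling(features, mask):
    """Average support features over the foreground mask to get a class prototype.

    features: (B, C, H, W) support feature maps
    mask:     (B, 1, h, w) binary foreground mask as a float tensor (0/1)
    returns:  (B, C) one prototype per support image
    """
    # Resize the mask to the feature resolution before pooling.
    mask = F.interpolate(mask, size=features.shape[-2:], mode="nearest")
    num = (features * mask).sum(dim=(2, 3))
    den = mask.sum(dim=(2, 3)).clamp(min=1e-6)
    return num / den


def cosine_segmentation(query_features, prototype):
    """Score every query location by cosine similarity to the prototype.

    query_features: (B, C, H, W); prototype: (B, C)
    returns: (B, H, W) similarity map usable as foreground logits
    """
    q = F.normalize(query_features, dim=1)
    p = F.normalize(prototype, dim=1)[:, :, None, None]
    return (q * p).sum(dim=1)


def interclass_penalty(prototypes):
    """Illustrative penalty that discourages similarity between prototypes of
    different classes (the general idea of interclass separation, not the
    exact IPRNet loss).

    prototypes: (K, C) one prototype per class
    """
    p = F.normalize(prototypes, dim=1)
    sim = p @ p.t()                                     # (K, K) pairwise cosine similarity
    sim = sim - torch.eye(p.size(0), device=p.device)   # drop self-similarity (diagonal is 1)
    return sim.clamp(min=0).mean()                      # penalize positive cross-class similarity
```

In a 1-shot episode one would pool a prototype from each support image, segment the query with the similarity map, and add the penalty (computed over the prototypes of all sampled classes) to the segmentation loss so that prototypes of different classes are pushed apart.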
Related papers
- Relevant Intrinsic Feature Enhancement Network for Few-Shot Semantic Segmentation [34.257289290796315]
  We propose the Relevant Intrinsic Feature Enhancement Network (RiFeNet) to improve the semantic consistency of foreground instances.
  RiFeNet surpasses state-of-the-art methods on the PASCAL-5i and COCO benchmarks.
  arXiv Detail & Related papers (2023-12-11T16:02:57Z)
- PDiscoNet: Semantically consistent part discovery for fine-grained recognition [62.12602920807109]
  We propose PDiscoNet to discover object parts using only image-level class labels, together with priors on the properties the discovered parts should have.
  Our results on CUB, CelebA, and PartImageNet show that the proposed method provides substantially better part discovery performance than previous methods.
  arXiv Detail & Related papers (2023-09-06T17:19:29Z)
- Efficient Subclass Segmentation in Medical Images [3.383033695275859]
  One feasible way to reduce annotation cost is to annotate with coarse-grained superclass labels while using limited fine-grained annotations as a complement.
  There is a lack of research on efficient learning of fine-grained subclasses in semantic segmentation tasks.
  With limited subclass annotations and sufficient superclass annotations, our approach achieves accuracy comparable to a model trained with full subclass annotations.
  arXiv Detail & Related papers (2023-07-01T07:39:08Z)
- Harmonizing Base and Novel Classes: A Class-Contrastive Approach for Generalized Few-Shot Segmentation [78.74340676536441]
  We propose a class contrastive loss and a class relationship loss to regulate prototype updates and encourage a large distance between prototypes.
  Our proposed approach achieves new state-of-the-art performance for the generalized few-shot segmentation task on the PASCAL VOC and MS COCO datasets.
  arXiv Detail & Related papers (2023-03-24T00:30:25Z)
- APANet: Adaptive Prototypes Alignment Network for Few-Shot Semantic Segmentation [56.387647750094466]
  Few-shot semantic segmentation aims to segment novel-class objects in a given query image with only a few labeled support images.
  Most advanced solutions exploit a metric-learning framework that performs segmentation by matching each query feature to a learned class-specific prototype.
  We present an adaptive prototype representation by introducing class-specific and class-agnostic prototypes.
  arXiv Detail & Related papers (2021-11-24T04:38:37Z)
- Dual Prototypical Contrastive Learning for Few-shot Semantic Segmentation [55.339405417090084]
  We propose a dual prototypical contrastive learning approach tailored to the few-shot semantic segmentation (FSS) task.
  The main idea is to make the prototypes more discriminative by increasing the inter-class distance while reducing the intra-class distance in the prototype feature space (a generic sketch of this idea follows after this list).
  We demonstrate that the proposed dual contrastive learning approach outperforms state-of-the-art FSS methods on the PASCAL-5i and COCO-20i datasets.
  arXiv Detail & Related papers (2021-11-09T08:14:50Z)
- Multi-dataset Pretraining: A Unified Model for Semantic Segmentation [97.61605021985062]
  We propose a unified framework, termed Multi-Dataset Pretraining, to take full advantage of the fragmented annotations of different datasets.
  This is achieved by first pretraining the network via the proposed pixel-to-prototype contrastive loss over multiple datasets.
  To better model the relationships among images and classes from different datasets, we extend the pixel-level embeddings via cross-dataset mixing.
  arXiv Detail & Related papers (2021-06-08T06:13:11Z)
- Scaling Semantic Segmentation Beyond 1K Classes on a Single GPU [87.48110331544885]
  We propose a novel training methodology to train and scale existing semantic segmentation models.
  We demonstrate a clear benefit of our approach on a dataset with 1284 classes, bootstrapped from LVIS and COCO annotations, achieving three times better mIoU than the DeepLabV3+ model.
  arXiv Detail & Related papers (2020-12-14T13:12:38Z)
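Several entries above (the class-contrastive approach for generalized few-shot segmentation and the dual prototypical contrastive learning work) share the idea of increasing inter-class distance while reducing intra-class distance between prototypes. The following is a minimal, hedged sketch of that shared idea as an InfoNCE-style loss over prototypes; the formulation and the name `prototype_contrastive_loss` are assumptions for illustration, not the losses actually used in those papers.

```python
# Sketch only: supervised contrastive loss over class prototypes.
# Same-class prototypes are pulled together, different-class prototypes pushed apart.
import torch
import torch.nn.functional as F


def prototype_contrastive_loss(prototypes, labels, temperature=0.1):
    """InfoNCE-style contrastive loss over class prototypes.

    prototypes: (N, C) prototypes collected from several episodes/images
    labels:     (N,)   class id of each prototype
    """
    p = F.normalize(prototypes, dim=1)
    sim = p @ p.t() / temperature                         # (N, N) scaled cosine similarities
    n = p.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=p.device)
    pos = (labels[:, None] == labels[None, :]) & ~eye     # same-class pairs, excluding self

    # Softmax over all other prototypes, then average the log-probability of positives.
    logits = sim.masked_fill(eye, float("-inf"))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_count = pos.sum(dim=1).clamp(min=1)
    per_anchor = log_prob.masked_fill(~pos, 0.0).sum(dim=1) / pos_count
    return -per_anchor.mean()
```

For example, `prototype_contrastive_loss(torch.randn(8, 256), torch.tensor([0, 0, 1, 1, 2, 2, 3, 3]))` computes the loss over eight prototypes from four classes.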
This list is automatically generated from the titles and abstracts of the papers on this site.