Elimination of Non-Novel Segments at Multi-Scale for Few-Shot
Segmentation
- URL: http://arxiv.org/abs/2211.02300v1
- Date: Fri, 4 Nov 2022 07:52:54 GMT
- Title: Elimination of Non-Novel Segments at Multi-Scale for Few-Shot
Segmentation
- Authors: Alper Kayabaşı, Gülin Tüfekci, İlkay Ulusoy
- Abstract summary: Few-shot segmentation aims to devise a generalizing model that segments query images from unseen classes during training.
We simultaneously address two vital problems for the first time and achieve state-of-the-art performances on both PASCAL-5i and COCO-20i datasets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot segmentation aims to devise a generalizing model that
segments query images from classes unseen during training, guided by a few
support images whose classes match the class of the query. Previous works
identify two domain-specific problems, namely spatial inconsistency and bias
towards seen classes. To address the former, our method compares the support
feature map with the query feature map at multiple scales to become
scale-agnostic. As a solution to the latter, a supervised model, called the
base learner, is trained on the available classes to accurately identify
pixels belonging to seen classes. A subsequent meta learner can then discard
areas belonging to seen classes with the help of an ensemble learning model
that coordinates the meta learner with the base learner. We simultaneously
address these two vital problems for the first time and achieve
state-of-the-art performance on both the PASCAL-5i and COCO-20i datasets.
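The two mechanisms in the abstract — multi-scale comparison of support and query feature maps, and an ensemble step in which the base learner's seen-class predictions suppress the meta learner's output — can be sketched as follows. This is a minimal NumPy illustration under simplifying assumptions, not the paper's actual architecture: the function names are hypothetical, matching is reduced to cosine similarity against a single support prototype, and downsampling/upsampling use plain average pooling and nearest-neighbor expansion.

```python
import numpy as np

def cosine_sim_map(query_feats, prototype):
    """Per-pixel cosine similarity between query features (H, W, C)
    and a support prototype vector (C,)."""
    q = query_feats / (np.linalg.norm(query_feats, axis=-1, keepdims=True) + 1e-8)
    p = prototype / (np.linalg.norm(prototype) + 1e-8)
    return q @ p  # (H, W)

def downsample(feats, factor):
    """Average-pool a feature map by an integer factor (simple
    stand-in for a multi-scale feature pyramid)."""
    H, W, C = feats.shape
    h, w = H // factor, W // factor
    return feats[:h * factor, :w * factor].reshape(
        h, factor, w, factor, C).mean(axis=(1, 3))

def multi_scale_score(query_feats, prototype, scales=(1, 2, 4)):
    """Compare support prototype and query features at several scales,
    upsample each similarity map back, and average them."""
    H, W, _ = query_feats.shape
    maps = []
    for s in scales:
        sim = cosine_sim_map(downsample(query_feats, s), prototype)
        maps.append(np.kron(sim, np.ones((s, s)))[:H, :W])  # nearest upsample
    return np.mean(maps, axis=0)

def suppress_seen(novel_score, base_seen_prob, alpha=1.0):
    """Ensemble step: damp novel-class scores wherever the base learner
    is confident the pixel belongs to an already-seen class."""
    return novel_score * (1.0 - alpha * base_seen_prob)
```

A query pixel thus ends up with a high final score only if it matches the support class at several scales and the base learner does not claim it for a seen class.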
Related papers
- A Joint Framework Towards Class-aware and Class-agnostic Alignment for
Few-shot Segmentation [11.47479526463185]
Few-shot segmentation aims to segment objects of unseen classes given only a few annotated support images.
Most existing methods simply stitch query features with independent support prototypes and segment the query image by feeding the mixed features to a decoder.
We propose a joint framework that combines more valuable class-aware and class-agnostic alignment guidance to facilitate the segmentation.
arXiv Detail & Related papers (2022-11-02T17:33:25Z)
- Few-shot Open-set Recognition Using Background as Unknowns [58.04165813493666]
Few-shot open-set recognition aims to classify both seen and novel images given only limited training data of seen classes.
Our proposed method not only outperforms multiple baselines but also sets new results on three popular benchmarks.
arXiv Detail & Related papers (2022-07-19T04:19:29Z)
- Integrative Few-Shot Learning for Classification and Segmentation [37.50821005917126]
We introduce the integrative task of few-shot classification and segmentation (FS-CS).
FS-CS aims to classify and segment target objects in a query image when the target classes are given with a few examples.
We propose the integrative few-shot learning framework for FS-CS, which trains a learner to construct class-wise foreground maps.
arXiv Detail & Related papers (2022-03-29T16:14:40Z)
- CAD: Co-Adapting Discriminative Features for Improved Few-Shot Classification [11.894289991529496]
Few-shot classification is a challenging problem that aims to learn a model that can adapt to unseen classes given a few labeled samples.
Recent approaches pre-train a feature extractor, and then fine-tune for episodic meta-learning.
We propose a strategy to cross-attend and re-weight discriminative features for few-shot classification.
arXiv Detail & Related papers (2022-03-25T06:14:51Z)
- Learning What Not to Segment: A New Perspective on Few-Shot Segmentation [63.910211095033596]
Recently, few-shot segmentation (FSS) has been extensively developed.
This paper proposes a fresh and straightforward insight to alleviate the problem.
In light of the unique nature of the proposed approach, we also extend it to a more realistic but challenging setting.
arXiv Detail & Related papers (2022-03-15T03:08:27Z)
- BriNet: Towards Bridging the Intra-class and Inter-class Gaps in One-Shot Segmentation [84.2925550033094]
Few-shot segmentation focuses on the generalization of models to segment unseen object instances with limited training samples.
We propose a framework, BriNet, to bridge the gaps between the extracted features of the query and support images.
Experimental results demonstrate the effectiveness of our framework, which outperforms other competitive methods.
arXiv Detail & Related papers (2020-08-14T07:45:50Z)
- Mining Cross-Image Semantics for Weakly Supervised Semantic Segmentation [128.03739769844736]
Two neural co-attentions are incorporated into the classifier to capture cross-image semantic similarities and differences.
In addition to boosting object pattern learning, the co-attention can leverage context from other related images to improve localization map inference.
Our algorithm sets new state-of-the-art results in all these settings, demonstrating its efficacy and generalizability.
arXiv Detail & Related papers (2020-07-03T21:53:46Z)
- A Few-Shot Sequential Approach for Object Counting [63.82757025821265]
We introduce a class attention mechanism that sequentially attends to objects in the image and extracts their relevant features.
The proposed technique is trained on point-level annotations and uses a novel loss function that disentangles class-dependent and class-agnostic aspects of the model.
We present our results on a variety of object-counting/detection datasets, including FSOD and MS COCO.
arXiv Detail & Related papers (2020-07-03T18:23:39Z)
- Improving Few-shot Learning by Spatially-aware Matching and CrossTransformer [116.46533207849619]
We study the impact of scale and location mismatch in the few-shot learning scenario.
We propose a novel Spatially-aware Matching scheme to effectively perform matching across multiple scales and locations.
arXiv Detail & Related papers (2020-01-06T14:10:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.