SML: Semantic Meta-learning for Few-shot Semantic Segmentation
- URL: http://arxiv.org/abs/2009.06680v1
- Date: Mon, 14 Sep 2020 18:26:46 GMT
- Title: SML: Semantic Meta-learning for Few-shot Semantic Segmentation
- Authors: Ayyappa Kumar Pambala, Titir Dutta, Soma Biswas
- Abstract summary: We propose a novel meta-learning framework, Semantic Meta-Learning (SML), which incorporates class-level semantic descriptions into the generated prototypes for this problem.
In addition, we propose to use the well-established technique of ridge regression not only to bring in the class-level semantic information, but also to effectively utilise the information available from the multiple images present in the training data for prototype computation.
- Score: 27.773396307292497
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The significant amount of training data required for training Convolutional
Neural Networks has become a bottleneck for applications like semantic
segmentation. Few-shot semantic segmentation algorithms address this problem,
with an aim to achieve good performance in the low-data regime, with few
annotated training images. Recently, approaches based on class-prototypes
computed from available training data have achieved immense success for this
task. In this work, we propose a novel meta-learning framework, Semantic
Meta-Learning (SML), which incorporates class-level semantic descriptions in
the generated prototypes for this problem. In addition, we propose to use the
well-established technique of ridge regression not only to bring in the
class-level semantic information, but also to effectively utilise the
information available from multiple images present in the training data for
prototype computation.
This has a simple closed-form solution, and thus can be implemented easily and
efficiently. Extensive experiments on the benchmark PASCAL-5i dataset under
different experimental settings show the effectiveness of the proposed
framework.
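Since the abstract stresses that the ridge-regression step admits a simple closed-form solution, a minimal sketch may help fix ideas. This is an illustrative reading only, not the paper's exact formulation: the NumPy implementation, the function name `ridge_prototype`, the shape conventions, and the way the mean visual feature is concatenated with its semantic projection are all assumptions.

```python
import numpy as np

def ridge_prototype(support_feats, class_semantic, lam=0.1):
    """Toy closed-form ridge-regression prototype (illustrative only).

    support_feats : (n, d) features of the n support images of one class
    class_semantic: (s,)   class-level semantic description (e.g. word vector)
    lam           : ridge regularisation strength
    """
    n, d = support_feats.shape
    # Targets: every support feature should map to the class semantics.
    S = np.tile(class_semantic, (n, 1))                     # (n, s)
    # Closed-form ridge solution: W = (F^T F + lam I)^{-1} F^T S
    W = np.linalg.solve(support_feats.T @ support_feats + lam * np.eye(d),
                        support_feats.T @ S)                # (d, s)
    # Prototype combines the mean visual feature with its semantic projection.
    visual = support_feats.mean(axis=0)                     # (d,)
    return np.concatenate([visual, visual @ W])             # (d + s,)
```

A segmentation model would then score pixel embeddings against such per-class prototypes (for instance by cosine similarity), though the precise matching scheme is the paper's own.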
Related papers
- Semantic Meta-Split Learning: A TinyML Scheme for Few-Shot Wireless Image Classification [50.28867343337997]
This work presents a TinyML-based semantic communication framework for few-shot wireless image classification.
We exploit split learning to limit the computation performed by end-users while preserving privacy (see the split-learning sketch after this entry).
Meta-learning overcomes data-availability concerns and speeds up training by utilizing similarly trained tasks.
arXiv Detail & Related papers (2024-09-03T05:56:55Z)
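The split-learning scheme mentioned in this entry, where the end-user device runs only a small stem of the network and transmits the intermediate activation to a server that completes the forward pass, might look roughly as follows; the layer sizes, the split point, and the module names are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Client side: a small stem that the end-user device can afford to run.
client = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                       nn.MaxPool2d(2))
# Server side: the heavier remainder of the classifier.
server = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16 * 16, 10))

x = torch.randn(1, 3, 32, 32)   # raw image never leaves the device
smashed = client(x)             # only this activation is transmitted
logits = server(smashed)        # server finishes the forward pass
```

Because only the "smashed" activation crosses the wire, the raw image stays on the device, which is where the privacy argument comes from.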
- Unsupervised Pre-training with Language-Vision Prompts for Low-Data Instance Segmentation [105.23631749213729]
We propose a novel method for unsupervised pre-training in low-data regimes.
Inspired by the recently successful prompting technique, we introduce a new method, Unsupervised Pre-training with Language-Vision Prompts.
We show that our method can converge faster and perform better than CNN-based models in low-data regimes.
arXiv Detail & Related papers (2024-05-22T06:48:43Z)
- A Simple-but-effective Baseline for Training-free Class-Agnostic Counting [30.792198686654075]
Class-Agnostic Counting (CAC) seeks to accurately count objects in a given image with only a few reference examples.
Recent efforts have shown that it is possible to accomplish this without training by utilizing pre-existing foundation models.
We present a training-free solution that effectively bridges this performance gap, serving as a strong baseline.
arXiv Detail & Related papers (2024-03-03T07:19:50Z)
- Improving Meta-learning for Low-resource Text Classification and Generation via Memory Imitation [87.98063273826702]
We propose a memory imitation meta-learning (MemIML) method that enhances the model's reliance on support sets for task adaptation.
A theoretical analysis is provided to prove the effectiveness of our method.
arXiv Detail & Related papers (2022-03-22T12:41:55Z)
- Beyond Simple Meta-Learning: Multi-Purpose Models for Multi-Domain, Active and Continual Few-Shot Learning [41.07029317930986]
We propose a variance-sensitive class of models that operates in a low-label regime.
The first method, Simple CNAPS, employs a hierarchically regularized Mahalanobis-distance based classifier (sketched after this entry).
We further extend this approach to a transductive learning setting, proposing Transductive CNAPS.
arXiv Detail & Related papers (2022-01-13T18:59:02Z)
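A bare-bones Mahalanobis-distance prototype classifier in the spirit of Simple CNAPS could look like the sketch below; note that the hierarchical regularisation of the class covariance is reduced here to simple shrinkage toward the identity, which is an assumption for illustration, not the paper's estimator.

```python
import numpy as np

def mahalanobis_classify(query, support, labels, shrink=0.1):
    """Assign each query to the class with the smallest Mahalanobis
    distance to its prototype (illustrative sketch, not Simple CNAPS).

    query  : (q, d) query features
    support: (n, d) support features
    labels : (n,)   integer class labels of the support features
    """
    classes = np.unique(labels)
    d = support.shape[1]
    dists = []
    for c in classes:
        feats = support[labels == c]
        mu = feats.mean(axis=0)
        # Shrinkage-regularised class covariance, standing in for the
        # hierarchical regularisation used by Simple CNAPS
        # (assumes at least two support examples per class).
        cov = np.cov(feats, rowvar=False) + shrink * np.eye(d)
        diff = query - mu                                   # (q, d)
        m = np.einsum('qd,de,qe->q', diff, np.linalg.inv(cov), diff)
        dists.append(m)
    return classes[np.argmin(np.stack(dists, axis=1), axis=1)]
```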
- A Representation Learning Perspective on the Importance of Train-Validation Splitting in Meta-Learning [14.720411598827365]
We study the practice of splitting the data from each task into train and validation sets during meta-training (a toy episode split is sketched after this entry).
We argue that the train-validation split encourages the learned representation to be low-rank without compromising on expressivity.
Since sample efficiency benefits from low-rankness, the splitting strategy will require very few samples to solve unseen test tasks.
arXiv Detail & Related papers (2021-06-29T17:59:33Z)
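Under the reading that each task's examples are divided into a train (support) part used for within-task adaptation and a validation (query) part used to compute the meta-objective, a toy episode constructor might be the following; the function name and the fixed split size are assumptions.

```python
import random

def make_episode(task_examples, n_train):
    """Split one task's labelled examples into a train (support) part,
    used for within-task adaptation, and a validation (query) part,
    used to compute the meta-objective. Purely illustrative.
    """
    examples = list(task_examples)
    random.shuffle(examples)
    return examples[:n_train], examples[n_train:]

# Usage: adapt on `train`, evaluate the adapted model on `val`, and
# backpropagate the validation loss into the meta-parameters.
train, val = make_episode([(f"x{i}", i % 5) for i in range(20)], n_train=10)
```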
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Region Comparison Network for Interpretable Few-shot Image Classification [97.97902360117368]
Few-shot image classification has been proposed to effectively use only a limited number of labeled examples to train models for new classes.
We propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
arXiv Detail & Related papers (2020-09-08T07:29:05Z)
- Pre-training Text Representations as Meta Learning [113.3361289756749]
We introduce a learning algorithm which directly optimizes the model's ability to learn text representations for effective learning on downstream tasks.
We show that there is an intrinsic connection between multi-task pre-training and model-agnostic meta-learning with a sequence of meta-train steps (a minimal meta-train step is sketched below).
arXiv Detail & Related papers (2020-04-12T09:05:47Z)
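The connection drawn in this entry between multi-task pre-training and model-agnostic meta-learning can be made concrete with a single MAML-style meta-train step; the toy 1-D parameter and quadratic task losses below are assumptions for illustration, not the paper's algorithm.

```python
import torch

def meta_train_step(theta, tasks, inner_lr=0.1, outer_lr=0.01):
    """One MAML-style meta-train step on a toy parameter (illustrative).

    theta: leaf tensor with requires_grad=True (the meta-parameters)
    tasks: list of loss functions, each mapping parameters to a scalar loss
    """
    meta_grads = []
    for task_loss in tasks:
        # Inner step: adapt theta to the task, keeping the graph so the
        # meta-gradient can flow through the adaptation.
        g, = torch.autograd.grad(task_loss(theta), theta, create_graph=True)
        adapted = theta - inner_lr * g
        # Outer objective: the adapted parameters' loss on the same task,
        # differentiated with respect to the original meta-parameters.
        mg, = torch.autograd.grad(task_loss(adapted), theta)
        meta_grads.append(mg)
    with torch.no_grad():
        theta -= outer_lr * sum(meta_grads) / len(tasks)
    return theta

# Three toy tasks with different optima; repeated meta-train steps pull
# theta toward a point from which one gradient step adapts well.
theta = torch.tensor([0.0], requires_grad=True)
tasks = [lambda p, c=c: ((p - c) ** 2).sum() for c in (1.0, 2.0, 3.0)]
theta = meta_train_step(theta, tasks)
```

Multi-task pre-training corresponds to dropping the inner adaptation step and averaging the task losses directly, which is what makes the two procedures comparable.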