Finding Significant Features for Few-Shot Learning using Dimensionality
Reduction
- URL: http://arxiv.org/abs/2107.06992v1
- Date: Tue, 6 Jul 2021 16:36:57 GMT
- Title: Finding Significant Features for Few-Shot Learning using Dimensionality
Reduction
- Authors: Mauricio Mendez-Ruiz, Ivan Garcia, Jorge Gonzalez-Zapata, Gilberto
Ochoa-Ruiz, Andres Mendez-Vazquez
- Abstract summary: This module improves classification accuracy by giving the similarity function, provided by the metric learning method, more discriminative features to work with.
Our method outperforms the metric learning baselines on the miniImageNet dataset by around 2% in accuracy.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot learning is a relatively new technique that specializes in problems
where only small amounts of data are available. The goal of these methods is to classify
categories that have not been seen before with just a handful of samples.
Recent approaches, such as metric learning, adopt the meta-learning strategy, in
which episodic tasks are composed of support (training) data and query
(test) data. Metric learning methods have demonstrated that simple models can
achieve good performance by learning a similarity function to compare the
support and the query data. However, the feature space learned by a given
metric learning approach may not exploit the information given by a specific
few-shot task. In this work, we explore the use of dimensionality reduction
techniques as a way to find task-significant features that help make better
predictions. We measure the performance of the reduced features by assigning a
score based on the intra-class and inter-class distances, selecting the
feature reduction method for which instances of different classes are far apart
and instances of the same class are close together. This module improves
classification accuracy by giving the similarity function, provided by the
metric learning method, more discriminative features to work with. Our method
outperforms the metric learning baselines on the miniImageNet dataset by
around 2% in accuracy.
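As a rough illustration of the scoring module the abstract describes, the sketch below chooses among candidate dimensionality reduction techniques by an inter-class/intra-class distance ratio computed on the support set. The candidate reducers, the exact form of the score, and all names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

def separability_score(feats, labels):
    """Higher when classes are far apart (inter) and internally tight (intra)."""
    classes = np.unique(labels)
    centroids = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    intra = np.mean([
        np.linalg.norm(feats[labels == c] - centroids[i], axis=1).mean()
        for i, c in enumerate(classes)
    ])
    diffs = centroids[:, None, :] - centroids[None, :, :]
    inter = np.linalg.norm(diffs, axis=-1)[np.triu_indices(len(classes), k=1)].mean()
    return inter / (intra + 1e-8)

def best_reduction(support_feats, support_labels, n_components=8):
    """Fit each candidate on the episode's support set; keep the best-scoring one."""
    candidates = {"pca": PCA(n_components=n_components),
                  "isomap": Isomap(n_components=n_components)}
    def score(reducer):
        reduced = reducer.fit_transform(support_feats)
        return separability_score(reduced, support_labels)
    name = max(candidates, key=lambda n: score(candidates[n]))
    return name, candidates[name]  # call candidates[name].transform() on query features
```

For a 5-way 5-shot episode (25 support embeddings), the defaults above keep the reduced dimension below the sample count, which both PCA and Isomap require.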
Related papers
- SuSana Distancia is all you need: Enforcing class separability in metric learning via two novel distance-based loss functions for few-shot image classification [0.9236074230806579]
We propose two loss functions that weigh the importance of the embedding vectors by looking at the intra-class and inter-class distances between the few available samples.
Our results show a significant improvement in accuracy on the miniImageNet benchmark over other metric-based few-shot learning methods, by a margin of 2%.
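A minimal sketch of a distance-based separability loss in the spirit of this summary; the actual SuSana losses differ in detail, and every name here is illustrative:

```python
import torch

def separability_loss(embeddings, labels, margin=1.0):
    """Pull same-class pairs together, push different-class pairs past a margin.
    Assumes every class appears at least twice in the batch."""
    dists = torch.cdist(embeddings, embeddings)           # pairwise L2 distances
    same = labels[:, None] == labels[None, :]
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    intra = dists[same & ~self_mask].mean()               # same class, no self-pairs
    inter = dists[~same].mean()                           # different classes
    return intra + torch.clamp(margin - inter, min=0.0)   # hinge keeps classes apart
```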
arXiv Detail & Related papers (2023-05-15T23:12:09Z)
- Learning Context-aware Classifier for Semantic Segmentation [88.88198210948426]
In this paper, contextual hints are exploited via learning a context-aware classifier.
Our method is model-agnostic and can be easily applied to generic segmentation models.
With only negligible additional parameters and +2% inference time, a decent performance gain is achieved on both small and large models.
arXiv Detail & Related papers (2023-03-21T07:00:35Z)
- CAD: Co-Adapting Discriminative Features for Improved Few-Shot Classification [11.894289991529496]
Few-shot classification is a challenging problem that aims to learn a model that can adapt to unseen classes given a few labeled samples.
Recent approaches pre-train a feature extractor, and then fine-tune for episodic meta-learning.
We propose a strategy to cross-attend and re-weight discriminative features for few-shot classification.
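The cross-attention re-weighting could look roughly like the following; the real CAD architecture is not specified in this summary, so treat this purely as an illustrative stand-in:

```python
import torch
import torch.nn.functional as F

def cross_attend(query_feats, support_feats, temperature=0.1):
    """Re-weight query features with attention over the support embeddings."""
    # query_feats: (Q, D); support_feats: (S, D)
    attn = F.softmax(query_feats @ support_feats.T / temperature, dim=-1)
    context = attn @ support_feats        # support context for each query
    return query_feats + context          # residual re-weighting (an assumption)
```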
arXiv Detail & Related papers (2022-03-25T06:14:51Z)
- Self-Supervised Metric Learning in Multi-View Data: A Downstream Task Perspective [2.01243755755303]
We study how self-supervised metric learning can benefit downstream tasks in the context of multi-view data.
We show that the target distance of metric learning satisfies several desired properties for the downstream tasks.
Our analysis characterizes the improvement by self-supervised metric learning on four commonly used downstream tasks.
arXiv Detail & Related papers (2021-06-14T02:34:33Z)
- How Fine-Tuning Allows for Effective Meta-Learning [50.17896588738377]
We present a theoretical framework for analyzing representations derived from a MAML-like algorithm.
We provide risk bounds on the best predictor found by fine-tuning via gradient descent, demonstrating that the algorithm can provably leverage the shared structure.
The resulting separation underscores the benefit of fine-tuning-based methods, such as MAML, over methods with "frozen representation" objectives in few-shot learning.
arXiv Detail & Related papers (2021-05-05T17:56:00Z)
- MetricOpt: Learning to Optimize Black-Box Evaluation Metrics [21.608384691401238]
We study the problem of optimizing arbitrary non-differentiable task evaluation metrics such as misclassification rate and recall.
Our method, named MetricOpt, operates in a black-box setting where the computational details of the target metric are unknown.
We achieve this by learning a differentiable value function, which maps compact task-specific model parameters to metric observations.
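One way to realize this value-function idea is to fit a small differentiable surrogate from compact parameter vectors to observed metric values and then ascend its gradient. The architecture, sizes, and training loop below are assumptions for illustration, not MetricOpt itself:

```python
import torch
import torch.nn as nn

# Hypothetical sizes: 16-dim compact parameter vectors, scalar metric values.
value_fn = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))

def refine(params, observed_metrics, steps=200, ascent_lr=1e-2):
    """params: (N, 16) tried parameter vectors; observed_metrics: (N,) black-box scores."""
    opt = torch.optim.Adam(value_fn.parameters(), lr=1e-3)
    for _ in range(steps):                # fit the surrogate to the observations
        opt.zero_grad()
        loss = ((value_fn(params).squeeze(-1) - observed_metrics) ** 2).mean()
        loss.backward()
        opt.step()
    best = params[observed_metrics.argmax()].clone().requires_grad_(True)
    value_fn(best.unsqueeze(0)).sum().backward()
    return (best + ascent_lr * best.grad).detach()  # one ascent step on the surrogate
```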
arXiv Detail & Related papers (2021-04-21T16:50:01Z)
- Conditional Meta-Learning of Linear Representations [57.90025697492041]
Standard meta-learning for representation learning aims to find a common representation to be shared across multiple tasks.
In this work, we overcome the limitations of a single shared representation by inferring a conditioning function that maps the tasks' side information into a representation tailored to the task at hand.
We propose a meta-algorithm capable of leveraging this advantage in practice.
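A bare-bones version of such a conditioning function, with all dimensions and names invented for illustration:

```python
import torch
import torch.nn as nn

class ConditionedRepresentation(nn.Module):
    """Maps task side information to the weights of a task-specific linear map."""
    def __init__(self, side_dim, feat_dim, rep_dim):
        super().__init__()
        self.condition = nn.Linear(side_dim, feat_dim * rep_dim)
        self.feat_dim, self.rep_dim = feat_dim, rep_dim

    def forward(self, x, side_info):
        # x: (N, feat_dim); side_info: (side_dim,) for the current task
        W = self.condition(side_info).view(self.rep_dim, self.feat_dim)
        return x @ W.T                    # task-tailored linear representation
```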
arXiv Detail & Related papers (2021-03-30T12:02:14Z)
- Meta-Generating Deep Attentive Metric for Few-shot Classification [53.07108067253006]
We present a novel deep metric meta-generation method to generate a specific metric for a new few-shot learning task.
In this study, we structure the metric using a three-layer deep attentive network that is flexible enough to produce a discriminative metric for each task.
We obtain clear performance improvements over state-of-the-art competitors, especially in challenging cases.
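Going only by the "three-layer deep attentive network" phrase, one could imagine a metric along these lines; the depth split, attention form, and names are all guesses, and the sketch omits the meta-generation of the weights:

```python
import torch
import torch.nn as nn

class AttentiveMetric(nn.Module):
    """Scores query/support similarity with attention plus a three-layer head."""
    def __init__(self, dim):              # dim must be divisible by num_heads
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.score = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(),  # layer 1
            nn.Linear(dim, dim), nn.ReLU(),  # layer 2
            nn.Linear(dim, 1),               # layer 3: scalar similarity
        )

    def forward(self, query, support):
        # query: (B, dim); support: (B, K, dim) shots or prototypes per query
        ctx, _ = self.attn(query.unsqueeze(1), support, support)
        return self.score(ctx.squeeze(1))    # (B, 1) task-conditioned similarity
```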
arXiv Detail & Related papers (2020-12-03T02:07:43Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Region Comparison Network for Interpretable Few-shot Image Classification [97.97902360117368]
Few-shot image classification aims to train models for new classes using only a limited number of labeled examples.
We propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
arXiv Detail & Related papers (2020-09-08T07:29:05Z)
- Inverse Feature Learning: Feature learning based on Representation Learning of Error [12.777440204911022]
This paper proposes inverse feature learning as a novel supervised feature learning technique that learns a set of high-level features for classification based on an error representation approach.
The proposed method achieves significantly better performance than state-of-the-art classification techniques on several popular datasets.
arXiv Detail & Related papers (2020-03-08T00:22:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.