Few and Fewer: Learning Better from Few Examples Using Fewer Base Classes
- URL: http://arxiv.org/abs/2401.15834v1
- Date: Mon, 29 Jan 2024 01:52:49 GMT
- Title: Few and Fewer: Learning Better from Few Examples Using Fewer Base Classes
- Authors: Raphael Lafargue, Yassir Bendou, Bastien Pasdeloup, Jean-Philippe
Diguet, Ian Reid, Vincent Gripon and Jack Valmadre
- Abstract summary: Fine-tuning is ineffective for few-shot learning, since the target dataset contains only a handful of examples.
This paper investigates whether better features for the target dataset can be obtained by training on fewer base classes.
To our knowledge, this is the first demonstration that fine-tuning on a subset of carefully selected base classes can significantly improve few-shot learning.
- Score: 12.742052888217543
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When training data is scarce, it is common to make use of a feature extractor
that has been pre-trained on a large base dataset, either by fine-tuning its
parameters on the "target" dataset or by directly adopting its representation
as features for a simple classifier. Fine-tuning is ineffective for few-shot
learning, since the target dataset contains only a handful of examples.
However, directly adopting the features without fine-tuning relies on the base
and target distributions being similar enough that these features achieve
separability and generalization. This paper investigates whether better
features for the target dataset can be obtained by training on fewer base
classes, seeking to identify a more useful base dataset for a given task. We
consider cross-domain few-shot image classification in eight different domains
from Meta-Dataset and entertain multiple real-world settings (domain-informed,
task-informed and uninformed) where progressively less detail is known about
the target task. To our knowledge, this is the first demonstration that
fine-tuning on a subset of carefully selected base classes can significantly
improve few-shot learning. Our contributions are simple and intuitive methods
that can be implemented in any few-shot solution. We also give insights into
the conditions in which these solutions are likely to provide a boost in
accuracy. We release the code to reproduce all experiments from this paper on
GitHub. https://github.com/RafLaf/Few-and-Fewer.git
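To make the idea concrete, here is a minimal sketch of domain-informed base-class selection: score each base class by the cosine similarity between its mean embedding and the mean target-domain embedding, keep the top-k classes, and fine-tune the feature extractor on that subset only. The scoring rule, the `select_base_classes`/`fine_tune_on_subset` names, and all hyperparameters are illustrative assumptions, not the paper's exact procedure.
```python
# Minimal sketch: pick base classes "closest" to the target domain, then
# fine-tune the feature extractor on that subset only. The similarity
# heuristic and all names here are illustrative assumptions.
import torch
import torch.nn.functional as F

def select_base_classes(base_feats, base_labels, target_feats, k=50):
    """Rank base classes by cosine similarity between their mean
    embedding and the mean target embedding; keep the top-k."""
    target_centroid = F.normalize(target_feats.mean(dim=0), dim=0)
    scores = {}
    for c in base_labels.unique().tolist():
        class_centroid = F.normalize(base_feats[base_labels == c].mean(dim=0), dim=0)
        scores[c] = torch.dot(class_centroid, target_centroid).item()
    return sorted(scores, key=scores.get, reverse=True)[:k]

def fine_tune_on_subset(backbone, head, loader, keep, epochs=5, lr=1e-4):
    """Fine-tune on base examples whose label is in the selected subset."""
    keep = torch.tensor(keep)
    opt = torch.optim.Adam(list(backbone.parameters()) + list(head.parameters()), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            mask = torch.isin(y, keep)      # drop examples outside the subset
            if mask.sum() == 0:
                continue
            loss = F.cross_entropy(head(backbone(x[mask])), y[mask])
            opt.zero_grad(); loss.backward(); opt.step()
```
The same skeleton would cover the task-informed setting by computing the target centroid from a specific episode's support examples instead of the whole target domain.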
Related papers
- Improve Meta-learning for Few-Shot Text Classification with All You Can Acquire from the Tasks [10.556477506959888]
Existing methods often encounter difficulties in drawing accurate class prototypes from support set samples.
Recent approaches attempt to incorporate external knowledge or pre-trained language models to augment data, but this requires additional resources.
We propose a novel solution by adequately leveraging the information within the task itself.
arXiv Detail & Related papers (2024-10-14T12:47:11Z)
- Cross-Level Distillation and Feature Denoising for Cross-Domain Few-Shot Classification [49.36348058247138]
We tackle the problem of cross-domain few-shot classification by making a small proportion of unlabeled images in the target domain accessible in the training stage.
We meticulously design a cross-level knowledge distillation method, which can strengthen the ability of the model to extract more discriminative features in the target dataset.
Our approach can surpass the previous state-of-the-art method, Dynamic-Distillation, by 5.44% on 1-shot and 1.37% on 5-shot classification tasks.
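For intuition, a generic feature-distillation step on unlabeled target images might look like the sketch below; the MSE feature loss and EMA teacher update are common defaults used here as stand-ins, not the paper's exact cross-level pairing or denoising design.
```python
# Generic feature distillation on unlabeled target images (illustrative;
# the paper's cross-level pairing and denoising steps are not reproduced).
# Typical setup: teacher = copy.deepcopy(student); teacher.requires_grad_(False)
import torch
import torch.nn.functional as F

def distill_step(student, teacher, x_unlabeled, optimizer, ema=0.99):
    with torch.no_grad():
        t_feat = teacher(x_unlabeled)        # distillation target
    s_feat = student(x_unlabeled)
    loss = F.mse_loss(s_feat, t_feat)        # match student to teacher
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    # Slowly track the student with an exponential-moving-average teacher.
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(ema).add_(p_s, alpha=1 - ema)
    return loss.item()
```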
arXiv Detail & Related papers (2023-11-04T12:28:04Z)
- Project and Probe: Sample-Efficient Domain Adaptation by Interpolating Orthogonal Features [119.22672589020394]
We propose a lightweight, sample-efficient approach that learns a diverse set of features and adapts to a target distribution by interpolating these features.
Our experiments on four datasets, with multiple distribution shift settings for each, show that Pro² improves performance by 5-15% when given limited target data.
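A bare-bones reading of the project-then-probe recipe, assuming a bank of source-trained linear directions `W`: orthogonalize it, project the limited target data, and fit a small linear probe. QR orthogonalization and logistic regression are illustrative stand-ins for the method's actual training.
```python
# Sketch of project-then-probe: orthogonalize linear directions learned on
# source data, project target features onto them, fit a small linear probe.
import numpy as np
from sklearn.linear_model import LogisticRegression

def project_and_probe(W, target_x, target_y):
    # W: (m, d) bank of source-trained directions; target_x: (n, d) features.
    Q, _ = np.linalg.qr(W.T)          # orthonormal feature directions (d, m)
    z = target_x @ Q                  # project the limited target data (n, m)
    probe = LogisticRegression(max_iter=1000).fit(z, target_y)
    return Q, probe
```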
arXiv Detail & Related papers (2023-02-10T18:58:03Z)
- BaseTransformers: Attention over base data-points for One Shot Learning [6.708284461619745]
Few-shot classification aims to learn to recognize novel categories using only limited samples per category.
Most current few-shot methods use a base dataset rich in labeled examples to train an encoder that is used for obtaining representations of support instances for novel classes.
In this paper we propose to make use of the well-trained feature representations of the base dataset that are closest to each support instance to improve its representation during meta-test time.
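A crude version of this retrieval idea, with the attention temperature, neighborhood size, and mixing weight all assumed rather than taken from the BaseTransformers architecture:
```python
# Illustrative sketch: refine each support embedding with an attention-
# weighted average of its nearest base-dataset features.
import torch
import torch.nn.functional as F

def refine_support(support, base_feats, k=10, tau=0.1, alpha=0.5):
    support = F.normalize(support, dim=1)        # (n_support, d)
    base = F.normalize(base_feats, dim=1)        # (n_base, d)
    sims = support @ base.t()                    # cosine similarities
    top_sims, idx = sims.topk(k, dim=1)          # k closest base points each
    attn = F.softmax(top_sims / tau, dim=1)      # (n_support, k)
    neighbors = base[idx]                        # (n_support, k, d)
    context = (attn.unsqueeze(-1) * neighbors).sum(dim=1)
    return alpha * support + (1 - alpha) * context
```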
arXiv Detail & Related papers (2022-10-05T18:00:24Z)
- Lightweight Conditional Model Extrapolation for Streaming Data under Class-Prior Shift [27.806085423595334]
We introduce LIMES, a new method for learning with non-stationary streaming data.
We learn a single set of model parameters from which a classifier for any specific data distribution can be derived.
Experiments on a set of exemplary tasks using Twitter data show that LIMES achieves higher accuracy than alternative approaches.
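One classical way to derive a distribution-specific classifier from shared parameters is to shift the logits by the log class priors of the current distribution; the sketch below shows that bias-correction view purely as an illustrative stand-in for LIMES itself.
```python
# Sketch: derive a distribution-specific classifier from shared weights by
# shifting logits with the log class priors of the current distribution.
# This is a generic bias-correction trick, not the LIMES algorithm.
import torch

def adapted_logits(shared_logits, class_priors):
    # shared_logits: (batch, n_classes); class_priors: (n_classes,) summing to 1
    return shared_logits + torch.log(class_priors).unsqueeze(0)
```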
arXiv Detail & Related papers (2022-06-10T15:19:52Z)
- Finding Significant Features for Few-Shot Learning using Dimensionality Reduction [0.0]
The proposed module improves accuracy by giving the similarity function of the metric-learning method more discriminative features for classification.
Our method outperforms metric-learning baselines on the miniImageNet dataset by around 2% in accuracy.
arXiv Detail & Related papers (2021-07-06T16:36:57Z)
- Partial Is Better Than All: Revisiting Fine-tuning Strategy for Few-shot Learning [76.98364915566292]
A common practice is to train a model on the base set first and then transfer to novel classes through fine-tuning.
We propose to transfer partial knowledge by freezing or fine-tuning particular layer(s) in the base model.
We conduct extensive experiments on CUB and mini-ImageNet to demonstrate the effectiveness of our proposed method.
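In PyTorch terms, transferring partial knowledge amounts to freezing some blocks and fine-tuning the rest, as in the sketch below; the split point shown is an arbitrary placeholder, whereas the paper's contribution is deciding which layer(s) to freeze or tune.
```python
# Sketch: freeze the early blocks of a pretrained backbone and fine-tune
# only the later ones on the novel classes. The cutoff here is arbitrary.
import torch
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
frozen = ["conv1", "bn1", "layer1", "layer2"]    # placeholder split point
for name, param in model.named_parameters():
    param.requires_grad = not any(name.startswith(f) for f in frozen)

model.fc = torch.nn.Linear(model.fc.in_features, 5)  # e.g. 5-way episode head
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9)
```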
arXiv Detail & Related papers (2021-02-08T03:27:05Z)
- Low-Resource Domain Adaptation for Compositional Task-Oriented Semantic Parsing [85.35582118010608]
Task-oriented semantic parsing is a critical component of virtual assistants.
Recent advances in deep learning have enabled several approaches to successfully parse more complex queries.
We propose a novel method that outperforms a supervised neural model at a 10-fold data reduction.
arXiv Detail & Related papers (2020-10-07T17:47:53Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Selecting Relevant Features from a Multi-domain Representation for Few-shot Classification [91.67977602992657]
We propose a new strategy based on feature selection, which is both simpler and more effective than previous feature adaptation approaches.
We show that a simple non-parametric classifier built on top of such features produces high accuracy and generalizes to domains never seen during training.
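As a minimal illustration, the sketch below selects a subset of feature dimensions and classifies queries with a nearest-centroid rule; the variance-based selection is an assumed stand-in for the paper's actual criterion.
```python
# Sketch: keep the most informative feature dimensions for the episode,
# then classify queries with a nearest-centroid rule. Variance-based
# selection is an assumption standing in for the paper's criterion.
import numpy as np

def nearest_centroid(support, support_y, query, n_keep=64):
    # Score each dimension by its variance across support examples.
    keep = np.argsort(support.var(axis=0))[::-1][:n_keep]
    support, query = support[:, keep], query[:, keep]
    classes = np.unique(support_y)
    centroids = np.stack([support[support_y == c].mean(axis=0) for c in classes])
    dists = ((query[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return classes[dists.argmin(axis=1)]
```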
arXiv Detail & Related papers (2020-03-20T15:44:17Z)