Learning to Select Base Classes for Few-shot Classification
- URL: http://arxiv.org/abs/2004.00315v1
- Date: Wed, 1 Apr 2020 09:55:18 GMT
- Title: Learning to Select Base Classes for Few-shot Classification
- Authors: Linjun Zhou, Peng Cui, Xu Jia, Shiqiang Yang, Qi Tian
- Abstract summary: We use the Similarity Ratio as an indicator for the generalization performance of a few-shot model.
We then formulate the base class selection problem as a submodular optimization problem over Similarity Ratio.
- Score: 96.92372639495551
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot learning has attracted intensive research attention in recent years.
Many methods have been proposed to generalize a model learned from provided
base classes to novel classes, but no previous work studies how to select base
classes, or even whether different base classes will result in different
generalization performance of the learned model. In this paper, we utilize a
simple yet effective measure, the Similarity Ratio, as an indicator for the
generalization performance of a few-shot model. We then formulate the base
class selection problem as a submodular optimization problem over Similarity
Ratio. We further provide a theoretical analysis of the optimization lower bounds
of different optimization methods, which can be used to identify the most
appropriate algorithm for a given experimental setting. Extensive experiments
on ImageNet, Caltech256, and CUB-200-2011 demonstrate that the proposed method
is effective in selecting a better base dataset.
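Concretely, a submodular selection objective admits the standard greedy routine; the Python sketch below shows that routine under a facility-location-style coverage objective. The objective and the `sim_novel` similarity matrix are illustrative placeholders, not the paper's actual Similarity Ratio definition.

```python
import numpy as np

def greedy_select(sim_novel, k):
    """Greedily pick k base classes by maximizing a monotone submodular
    coverage objective. sim_novel[i, j] is the similarity of base class i
    to novel class j. This facility-location objective is only a stand-in
    for the paper's Similarity Ratio.
    """
    selected, remaining = [], set(range(sim_novel.shape[0]))

    def coverage(classes):
        # Each novel class is credited with its most similar selected base
        # class; the sum is monotone and submodular in the selected set.
        return sim_novel[classes].max(axis=0).sum() if classes else 0.0

    for _ in range(k):
        # Greedy maximization of a monotone submodular function enjoys the
        # classic (1 - 1/e) approximation guarantee.
        best = max(remaining, key=lambda c: coverage(selected + [c]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: choose 5 of 50 base classes for 10 novel classes.
rng = np.random.default_rng(0)
print(greedy_select(rng.random((50, 10)), k=5))
```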
Related papers
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization model to capture potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z)
- Achieving More with Less: A Tensor-Optimization-Powered Ensemble Method [53.170053108447455]
Ensemble learning is a method that leverages weak learners to produce a strong learner.
We design a smooth and convex objective function that leverages the concept of margin, making the strong learner more discriminative.
We then compare our algorithm with random forests of ten times the size and other classical methods across numerous datasets.
arXiv Detail & Related papers (2024-08-06T03:42:38Z)
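As a gloss on the margin-based objective mentioned above, here is a minimal NumPy sketch that fits ensemble weights by minimizing a smooth, convex logistic margin loss. This generic surrogate stands in for the paper's tensor-optimization formulation, which is not reproduced here.

```python
import numpy as np

def fit_ensemble_weights(H, y, lam=1e-2, lr=0.1, steps=500):
    """Fit ensemble weights w by gradient descent on a smooth, convex,
    margin-based loss: mean(log(1 + exp(-y * (H @ w)))) + lam * ||w||^2.
    H: (n, m) scores of m weak learners on n samples; y: labels in {-1, +1}.
    """
    n, m = H.shape
    w = np.full(m, 1.0 / m)                 # start from uniform weighting
    for _ in range(steps):
        margins = y * (H @ w)               # larger margin = more confident
        # Gradient of the logistic margin loss plus the L2 regularizer.
        g = -(H * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0) + 2 * lam * w
        w -= lr * g
    return w
```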
- An Adaptive Cost-Sensitive Learning and Recursive Denoising Framework for Imbalanced SVM Classification [12.986535715303331]
Category imbalance is one of the most common and important issues in classification.
An emotion classification model trained on an imbalanced dataset easily produces unreliable predictions.
arXiv Detail & Related papers (2024-03-13T09:43:14Z)
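A minimal scikit-learn sketch of the cost-sensitive ingredient described above: class_weight="balanced" rescales the SVM penalty inversely to class frequency. The toy data is illustrative, and the paper's recursive denoising component is omitted.

```python
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Toy imbalanced problem (roughly 9:1 class ratio).
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" reweights the penalty C inversely to class
# frequency, a standard cost-sensitive adjustment for imbalanced SVMs.
clf = SVC(kernel="rbf", class_weight="balanced").fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```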
- Class-Incremental Learning with Strong Pre-trained Models [97.84755144148535]
Class-incremental learning (CIL) has been widely studied under the setting of starting from a small number of classes (base classes).
We explore an understudied real-world setting of CIL that starts with a strong model pre-trained on a large number of base classes.
Our proposed method is robust and generalizes to all analyzed CIL settings.
arXiv Detail & Related papers (2022-04-07T17:58:07Z)
- EASY: Ensemble Augmented-Shot Y-shaped Learning: State-Of-The-Art Few-Shot Classification with Simple Ingredients [2.0935101589828244]
Few-shot learning aims at leveraging knowledge learned by one or more deep learning models in order to obtain good classification performance on new problems.
We propose a simple methodology that reaches or even beats state-of-the-art performance on multiple standardized benchmarks in the field.
arXiv Detail & Related papers (2022-01-24T14:08:23Z)
- Partial Is Better Than All: Revisiting Fine-tuning Strategy for Few-shot Learning [76.98364915566292]
A common practice is to train a model on the base set first and then transfer to novel classes through fine-tuning.
We propose to transfer partial knowledge by freezing or fine-tuning particular layer(s) in the base model.
We conduct extensive experiments on CUB and mini-ImageNet to demonstrate the effectiveness of our proposed method.
arXiv Detail & Related papers (2021-02-08T03:27:05Z)
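A minimal PyTorch sketch of the freeze-or-fine-tune strategy above. The choice of ResNet-18 and of unfreezing only the last residual stage is an assumption made for illustration; the paper studies which particular layer(s) to transfer.

```python
import torch
import torchvision

# Start from a pre-trained backbone (stand-in for the base-set model).
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")

# Freeze everything, then selectively unfreeze the last residual stage.
for p in model.parameters():
    p.requires_grad = False
for p in model.layer4.parameters():   # fine-tune only this stage
    p.requires_grad = True

# Replace the classifier head for a hypothetical 5-way novel task
# (a fresh Linear layer is trainable by default).
model.fc = torch.nn.Linear(model.fc.in_features, 5)

# Optimize only the parameters left trainable.
optim = torch.optim.SGD((p for p in model.parameters() if p.requires_grad), lr=1e-2)
```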
- Ensemble Learning Based Classification Algorithm Recommendation [8.94752302607367]
This paper proposes an ensemble learning-based algorithm recommendation method.
To evaluate the proposed recommendation method, extensive experiments with 13 well-known candidate classification algorithms and five different kinds of meta-features are conducted on 1090 benchmark classification problems.
arXiv Detail & Related papers (2021-01-15T07:14:51Z)
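To make the recommendation setup concrete, here is a small hypothetical stand-in: each historical dataset is described by meta-features, and the algorithm recommended for a new dataset is a majority vote over its nearest neighbours in meta-feature space. The meta-features, candidate algorithms, and k-NN voter are all assumptions, not the paper's ensemble model.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical meta-dataset: one row of meta-features per historical
# problem, plus the algorithm that performed best on it.
rng = np.random.default_rng(0)
meta_features = rng.random((100, 8))     # e.g. n_samples, n_classes, skewness, ...
best_algorithm = rng.choice(["kNN", "SVM", "RF"], size=100)

def recommend(new_meta, k=5):
    """Recommend an algorithm for a new dataset by majority vote over its
    k nearest neighbours in meta-feature space."""
    nn = NearestNeighbors(n_neighbors=k).fit(meta_features)
    _, idx = nn.kneighbors(new_meta.reshape(1, -1))
    labels, counts = np.unique(best_algorithm[idx[0]], return_counts=True)
    return labels[np.argmax(counts)]

print(recommend(rng.random(8)))
```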
- Few-shot Classification via Adaptive Attention [93.06105498633492]
We propose a novel few-shot learning method via optimizing and fast adapting the query sample representation based on very few reference samples.
As demonstrated experimentally, the proposed model achieves state-of-the-art classification results on various benchmark few-shot classification and fine-grained recognition datasets.
arXiv Detail & Related papers (2020-08-06T05:52:59Z)
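As a rough illustration of adapting query representations with attention over a few reference (support) samples, the PyTorch snippet below refines query embeddings with similarity-weighted support features and scores them against class prototypes. It is a generic sketch under assumed embedding inputs, not the paper's exact adaptive-attention architecture.

```python
import torch
import torch.nn.functional as F

def adapt_query(query, support, support_labels, n_way, tau=10.0):
    """query: (q, d) and support: (s, d) embeddings; support_labels: (s,)
    integer labels in [0, n_way). Returns (q, n_way) class logits."""
    q = F.normalize(query, dim=-1)
    s = F.normalize(support, dim=-1)
    attn = F.softmax(tau * q @ s.t(), dim=-1)        # (q, s) attention weights
    adapted = F.normalize(q + attn @ s, dim=-1)      # residual refinement of queries
    # Classify by cosine similarity to per-class mean prototypes.
    protos = torch.stack([s[support_labels == c].mean(0) for c in range(n_way)])
    return adapted @ F.normalize(protos, dim=-1).t()
```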
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.