Impact of base dataset design on few-shot image classification
- URL: http://arxiv.org/abs/2007.08872v1
- Date: Fri, 17 Jul 2020 09:58:50 GMT
- Title: Impact of base dataset design on few-shot image classification
- Authors: Othman Sbai, Camille Couprie and Mathieu Aubry
- Abstract summary: We systematically study the effect of variations in the training data by evaluating deep features trained on different image sets in a few-shot classification setting.
We show how the base dataset design can improve performance in few-shot classification more drastically than replacing a simple baseline with an advanced state-of-the-art algorithm.
- Score: 33.31817928613412
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The quality and generality of deep image features is crucially determined by
the data they have been trained on, but little is known about this often
overlooked effect. In this paper, we systematically study the effect of
variations in the training data by evaluating deep features trained on
different image sets in a few-shot classification setting. The experimental
protocol we define allows us to explore key practical questions. What is the
influence of the similarity between base and test classes? Given a fixed
annotation budget, what is the optimal trade-off between the number of images
per class and the number of classes? Given a fixed dataset, can features be
improved by splitting or combining different classes? Should simple or diverse
classes be annotated? In a wide range of experiments, we provide clear answers
to these questions on the miniImageNet, ImageNet and CUB-200 benchmarks. We
also show how the base dataset design can improve performance in few-shot
classification more drastically than replacing a simple baseline with an advanced state-of-the-art algorithm.
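As a point of reference, the few-shot evaluation setting described above can be sketched as follows: a feature extractor is trained on a chosen base dataset, its features are frozen, and accuracy is averaged over many N-way K-shot episodes sampled from the test classes. The nearest-centroid classifier, episode sizes, and placeholder features in the sketch below are illustrative assumptions, not the paper's exact baseline or protocol.

```python
# Minimal NumPy sketch of N-way K-shot evaluation over frozen features.
# The nearest-centroid classifier and the episode sizes are illustrative
# assumptions, not the exact baseline or protocol used in the paper.
import numpy as np

def sample_episode(features, labels, n_way=5, k_shot=5, n_query=15, rng=None):
    """Sample one few-shot episode from a pool of precomputed (frozen) features."""
    rng = rng or np.random.default_rng()
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query, query_y = [], [], []
    for i, c in enumerate(classes):
        idx = rng.permutation(np.flatnonzero(labels == c))[: k_shot + n_query]
        support.append(features[idx[:k_shot]])       # (k_shot, d) support set
        query.append(features[idx[k_shot:]])         # (n_query, d) query set
        query_y.append(np.full(n_query, i))
    return np.stack(support), np.concatenate(query), np.concatenate(query_y)

def nearest_centroid_accuracy(support, query, query_y):
    """Classify queries by cosine similarity to L2-normalized class centroids."""
    centroids = support.mean(axis=1)                 # (n_way, d)
    centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)
    query = query / np.linalg.norm(query, axis=1, keepdims=True)
    preds = (query @ centroids.T).argmax(axis=1)
    return (preds == query_y).mean()

# Placeholder test-class features standing in for a real frozen backbone's output.
rng = np.random.default_rng(0)
features = rng.normal(size=(2000, 64)).astype(np.float32)
labels = rng.integers(0, 20, size=2000)
accs = [nearest_centroid_accuracy(*sample_episode(features, labels, rng=rng))
        for _ in range(200)]                         # average over many episodes
print(f"mean 5-way 5-shot accuracy: {np.mean(accs):.3f}")
```

Keeping this evaluation loop fixed while varying which base classes, and how many images per class, are used to train the feature extractor is the kind of comparison the questions above call for.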
Related papers
- Federated Learning Over Images: Vertical Decompositions and Pre-Trained
Backbones Are Difficult to Beat [17.30751773894676]
We evaluate a number of algorithms for learning in a federated environment.
We consider whether learning over datasets that lack diverse sets of images affects the results.
We find that vertically decomposing a neural network seems to give the best results.
arXiv Detail & Related papers (2023-09-06T02:09:14Z) - Mixture of Self-Supervised Learning [2.191505742658975]
Self-supervised learning works by training the model on a pretext task before applying it to a specific downstream task.
Previous studies have only used one type of transformation as a pretext task.
This raises the question of how performance is affected when more than one pretext task is used and a gating network combines all pretext tasks.
arXiv Detail & Related papers (2023-07-27T14:38:32Z) - Bi-directional Feature Reconstruction Network for Fine-Grained Few-Shot
Image Classification [61.411869453639845]
We introduce a bi-reconstruction mechanism that can simultaneously accommodate inter-class and intra-class variations.
This design effectively helps the model to explore more subtle and discriminative features.
Experimental results on three widely used fine-grained image classification datasets consistently show considerable improvements.
arXiv Detail & Related papers (2022-11-30T16:55:14Z) - Rectifying the Shortcut Learning of Background: Shared Object
Concentration for Few-Shot Image Recognition [101.59989523028264]
Few-shot image classification aims to utilize pretrained knowledge learned from a large-scale dataset to tackle a series of downstream classification tasks.
We propose COSOC, a novel few-shot learning framework, to automatically figure out foreground objects at both the pretraining and evaluation stages.
arXiv Detail & Related papers (2021-07-16T07:46:41Z) - Revisiting Deep Local Descriptor for Improved Few-Shot Classification [56.74552164206737]
We show how one can improve the quality of embeddings by leveraging Dense Classification and Attentive Pooling.
We suggest pooling feature maps with attentive pooling instead of the widely used global average pooling (GAP) to prepare embeddings for few-shot classification (see the sketch after this list).
arXiv Detail & Related papers (2021-03-30T00:48:28Z) - Learning to Focus: Cascaded Feature Matching Network for Few-shot Image
Recognition [38.49419948988415]
Deep networks can learn to accurately recognize objects of a category by training on a large number of images.
The meta-learning challenge known as low-shot image recognition arises when only a few annotated images are available for learning a recognition model for a category.
Our method, called Cascaded Feature Matching Network (CFMN), is proposed to solve this problem.
Experiments for few-shot learning on two standard datasets, miniImageNet and Omniglot, have confirmed the effectiveness of our method.
arXiv Detail & Related papers (2021-01-13T11:37:28Z) - Dataset Bias in Few-shot Image Recognition [57.25445414402398]
We first investigate the impact of transferable capabilities learned from base categories.
Second, we investigate performance differences across datasets arising from dataset structure and from different few-shot learning methods.
arXiv Detail & Related papers (2020-08-18T14:46:23Z) - SCAN: Learning to Classify Images without Labels [73.69513783788622]
We advocate a two-step approach where feature learning and clustering are decoupled.
A self-supervised task from representation learning is employed to obtain semantically meaningful features.
We obtain promising results on ImageNet, and outperform several semi-supervised learning methods in the low-data regime.
arXiv Detail & Related papers (2020-05-25T18:12:33Z) - One-Shot Image Classification by Learning to Restore Prototypes [11.448423413463916]
One-shot image classification aims to train image classifiers over a dataset with only one image per category.
For one-shot learning, existing metric learning approaches can suffer from poor performance because the single training image may not be representative of the class.
We propose a simple yet effective regression model, denoted RestoreNet, which learns a class transformation on the image feature to move the image closer to the class center in the feature space (a minimal sketch appears after this list).
arXiv Detail & Related papers (2020-05-04T02:11:30Z) - Learning to Compare Relation: Semantic Alignment for Few-Shot Learning [48.463122399494175]
We present a novel semantic alignment model to compare relations, which is robust to content misalignment.
We conduct extensive experiments on several few-shot learning datasets.
arXiv Detail & Related papers (2020-02-29T08:37:02Z)
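To make the attentive-pooling idea from the "Revisiting Deep Local Descriptor" entry above concrete, here is a minimal sketch contrasting it with global average pooling. The single 1x1-convolution scoring head, the AttentivePool2d name, and the toy tensor shapes are assumptions for illustration, not the cited paper's exact module.

```python
# Minimal PyTorch sketch: attentive pooling of a convolutional feature map vs. GAP.
import torch
import torch.nn as nn

class AttentivePool2d(nn.Module):
    """Softmax-weighted spatial pooling; the 1x1-conv scoring head is an assumption."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # per-location score

    def forward(self, x):                       # x: (B, C, H, W)
        attn = self.score(x).flatten(2)         # (B, 1, H*W) unnormalized scores
        attn = attn.softmax(dim=-1)             # attention weights over locations
        feats = x.flatten(2)                    # (B, C, H*W)
        return (feats * attn).sum(dim=-1)       # (B, C) attention-weighted embedding

x = torch.randn(4, 64, 5, 5)                    # toy feature map
gap = x.mean(dim=(2, 3))                        # (4, 64) global average pooling
attentive = AttentivePool2d(64)(x)              # (4, 64) attentive pooling
print(gap.shape, attentive.shape)
```

Compared with GAP, the softmax-weighted sum lets the most informative spatial locations dominate the pooled embedding rather than averaging them away.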
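Likewise, the RestoreNet entry describes a regression model that moves an image feature toward its class center. The sketch below uses a ridge-regression linear map as that transformation, which is an assumption for illustration rather than the cited paper's actual model.

```python
# Minimal NumPy sketch: learn a linear map that pulls features toward class centroids.
import numpy as np

def fit_restore_transform(features, labels, reg=1.0):
    """Ridge-regression fit of W minimizing ||features @ W - class_centers||^2 + reg*||W||^2."""
    classes = np.unique(labels)                              # sorted unique class ids
    centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
    targets = centroids[np.searchsorted(classes, labels)]    # each sample's class center
    d = features.shape[1]
    return np.linalg.solve(features.T @ features + reg * np.eye(d), features.T @ targets)

rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 32))              # placeholder base-class features
labs = rng.integers(0, 10, size=500)
W = fit_restore_transform(feats, labs)
restored = feats @ W                            # transformed features lie nearer class centers
```

Applied to the single support image of a novel class, such a transform yields a feature that stands in better for the class prototype than the raw feature, which is the motivation given in that entry.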
This list is automatically generated from the titles and abstracts of the papers on this site.