Dataset Bias in Few-shot Image Recognition
- URL: http://arxiv.org/abs/2008.07960v3
- Date: Tue, 16 Mar 2021 03:23:18 GMT
- Title: Dataset Bias in Few-shot Image Recognition
- Authors: Shuqiang Jiang, Yaohui Zhu, Chenlong Liu, Xinhang Song, Xiangyang Li,
and Weiqing Min
- Abstract summary: We first investigate the impact of transferable capabilities learned from base categories.
Second, we investigate performance differences across datasets from the perspectives of dataset structure and few-shot learning methods.
- Score: 57.25445414402398
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of few-shot image recognition (FSIR) is to identify novel categories
with a small number of annotated samples by exploiting transferable knowledge
from training data (base categories). Most current studies assume that the
transferable knowledge can be effectively exploited to identify novel categories. However,
such transferable capability may be affected by dataset bias, a problem that has
rarely been investigated. Moreover, most few-shot learning methods are biased
toward different datasets, another important issue that calls for in-depth
investigation. In this paper, we first investigate the impact of the transferable
capabilities learned from base categories. Specifically, we use relevance to measure
the relationship between base categories and novel categories, and depict the
distribution of base categories via instance density and category diversity. The FSIR
model learns better transferable knowledge from relevant training data, and within
relevant data, dense instances or diverse categories further enrich the learned
knowledge. Experimental results on different sub-datasets of ImageNet demonstrate
that category relevance, instance density, and category diversity can characterize the
transferable bias from base categories. Second, we investigate performance
differences across datasets from the perspectives of dataset structure and few-shot
learning methods.
Specifically, we introduce image complexity, intra-concept visual consistency,
and inter-concept visual similarity to quantify characteristics of dataset
structures. We use these quantitative characteristics together with four few-shot
learning methods to analyze performance differences across five datasets. Based on
this experimental analysis, we obtain several insightful observations from the
perspectives of both dataset structure and few-shot learning methods.
We hope these observations are useful to guide future FSIR research.
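The abstract names several dataset-structure measures without giving their formulas. As a rough, non-authoritative illustration, the sketch below estimates intra-concept visual consistency and inter-concept visual similarity from pre-extracted image features (e.g., embeddings from a pretrained backbone) using cosine similarity; the function names and the cosine-similarity formulation are assumptions for illustration, not the paper's definitions.

```python
import numpy as np


def _normalize(feats):
    # L2-normalize rows so that dot products become cosine similarities.
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)


def intra_concept_consistency(feats):
    # Mean pairwise cosine similarity among images of a single category
    # (assumed definition; the paper's exact formulation may differ).
    f = _normalize(np.asarray(feats, dtype=np.float64))
    sims = f @ f.T
    n = len(f)
    return (sims.sum() - n) / (n * (n - 1))  # drop diagonal self-similarities


def inter_concept_similarity(feats_by_class):
    # Mean cosine similarity between category-mean features
    # (assumed definition; the paper's exact formulation may differ).
    centers = _normalize(np.stack([np.asarray(f, dtype=np.float64).mean(axis=0)
                                   for f in feats_by_class]))
    sims = centers @ centers.T
    c = len(centers)
    return (sims.sum() - c) / (c * (c - 1))


if __name__ == "__main__":
    # Toy usage: random vectors stand in for backbone embeddings (e.g. 512-d).
    rng = np.random.default_rng(0)
    class_feats = [rng.normal(size=(20, 512)) for _ in range(5)]
    print("intra-concept consistency:", intra_concept_consistency(class_feats[0]))
    print("inter-concept similarity: ", inter_concept_similarity(class_feats))
```

Instance density and category diversity of the base set could be quantified in a similar feature-space fashion (e.g., samples per category and the spread of category-mean features), but the exact definitions should be taken from the paper itself.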
Related papers
- Preview-based Category Contrastive Learning for Knowledge Distillation [53.551002781828146]
We propose a novel preview-based category contrastive learning method for knowledge distillation (PCKD).
It first distills the structural knowledge of both instance-level feature correspondence and the relation between instance features and category centers.
It can explicitly optimize the category representation and explore the distinct correlation between representations of instances and categories.
arXiv Detail & Related papers (2024-10-18T03:31:00Z) - Learning Exemplar Representations in Single-Trial EEG Category Decoding [0.0]
We show that when trials relating to a single object are allowed to appear in both the training and testing sets, almost any classification algorithm is capable of learning the representation of an object given only category labels.
We demonstrate the ability of both simple classification algorithms, and sophisticated deep learning models, to learn object representations given only category labels.
arXiv Detail & Related papers (2024-05-31T18:51:10Z) - Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
arXiv Detail & Related papers (2020-11-13T10:53:27Z) - PK-GCN: Prior Knowledge Assisted Image Classification using Graph
Convolution Networks [3.4129083593356433]
Similarity between classes can influence the performance of classification.
We propose a method that incorporates class similarity knowledge into convolutional neural network models.
Experimental results show that our model can improve classification accuracy, especially when the amount of available data is small.
arXiv Detail & Related papers (2020-09-24T18:31:35Z) - Region Comparison Network for Interpretable Few-shot Image
Classification [97.97902360117368]
Few-shot image classification has been proposed to effectively use only a limited number of labeled examples to train models for new classes.
We propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
arXiv Detail & Related papers (2020-09-08T07:29:05Z) - Impact of base dataset design on few-shot image classification [33.31817928613412]
We systematically study the effect of variations in the training data by evaluating deep features trained on different image sets in a few-shot classification setting.
We show that base dataset design can improve few-shot classification performance more drastically than replacing a simple baseline with an advanced state-of-the-art algorithm.
arXiv Detail & Related papers (2020-07-17T09:58:50Z) - Learning What Makes a Difference from Counterfactual Examples and
Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z) - Learning to Compare Relation: Semantic Alignment for Few-Shot Learning [48.463122399494175]
We present a novel semantic alignment model to compare relations, which is robust to content misalignment.
We conduct extensive experiments on several few-shot learning datasets.
arXiv Detail & Related papers (2020-02-29T08:37:02Z)