Looking back to lower-level information in few-shot learning
- URL: http://arxiv.org/abs/2005.13638v2
- Date: Thu, 16 Jul 2020 02:32:21 GMT
- Title: Looking back to lower-level information in few-shot learning
- Authors: Zhongjie Yu and Sebastian Raschka
- Abstract summary: We propose the utilization of lower-level, supporting information, namely the feature embeddings of the hidden neural network layers, to improve classification accuracy.
Our experiments on two popular few-shot learning datasets, miniImageNet and tieredImageNet, show that our method can utilize the lower-level information in the network to improve state-of-the-art classification performance.
- Score: 4.873362301533825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans are capable of learning new concepts from small numbers of examples.
In contrast, supervised deep learning models usually lack the ability to
extract reliable predictive rules from limited data scenarios when attempting
to classify new examples. This challenging scenario is commonly known as
few-shot learning. Few-shot learning has garnered increased attention in recent
years due to its significance for many real-world problems. Recently, new
methods relying on meta-learning paradigms combined with graph-based
structures, which model the relationship between examples, have shown promising
results on a variety of few-shot classification tasks. However, existing work
on few-shot learning focuses only on the feature embeddings produced by the
last layer of the neural network. In this work, we propose the utilization of
lower-level, supporting information, namely the feature embeddings of the
hidden neural network layers, to improve classification accuracy. Based on a
graph-based meta-learning framework, we develop a method called Looking-Back,
where such lower-level information is used to construct additional graphs for
label propagation in limited data settings. Our experiments on two popular
few-shot learning datasets, miniImageNet and tieredImageNet, show that our
method can utilize the lower-level information in the network to improve
state-of-the-art classification performance.
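The method description above lends itself to a compact illustration: build one affinity graph per layer's embeddings, run closed-form label propagation on each graph, and combine the resulting class scores. The sketch below is a hypothetical reconstruction under those assumptions; the function names, the Gaussian kernel, the score averaging, and the hyperparameters alpha and sigma are illustrative choices, not the authors' implementation.

```python
# Minimal sketch: label propagation over graphs built from several layers'
# embeddings, in the spirit of the Looking-Back idea described above.
import numpy as np

def build_graph(embeddings: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Gaussian-kernel affinity matrix with symmetric normalization."""
    sq_dists = np.square(embeddings[:, None, :] - embeddings[None, :, :]).sum(-1)
    w = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(w, 0.0)                              # no self-loops
    d_inv_sqrt = 1.0 / np.sqrt(w.sum(axis=1) + 1e-12)
    return d_inv_sqrt[:, None] * w * d_inv_sqrt[None, :]  # D^-1/2 W D^-1/2

def propagate_labels(s: np.ndarray, y: np.ndarray, alpha: float = 0.99) -> np.ndarray:
    """Closed-form label propagation: F = (I - alpha * S)^{-1} Y."""
    return np.linalg.solve(np.eye(s.shape[0]) - alpha * s, y)

def looking_back_scores(layer_embeddings, y_onehot, alpha=0.99):
    """Average propagated class scores over the per-layer graphs."""
    return np.mean([propagate_labels(build_graph(e), y_onehot, alpha)
                    for e in layer_embeddings], axis=0)

# Toy usage: embeddings from two layers for 10 examples, 2 of them labeled.
rng = np.random.default_rng(0)
layers = [rng.normal(size=(10, 16)), rng.normal(size=(10, 32))]
y = np.zeros((10, 2))
y[0, 0] = 1.0                                             # labeled example, class 0
y[1, 1] = 1.0                                             # labeled example, class 1
scores = looking_back_scores(layers, y)                   # (10, 2) class scores
```

Unlabeled examples are then assigned to the argmax class of the combined scores; averaging the per-layer score matrices is just one simple fusion scheme.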
Related papers
- Two-level Graph Network for Few-Shot Class-Incremental Learning [7.815043173207539]
Few-shot class-incremental learning (FSCIL) aims to design machine learning algorithms that can continually learn new concepts from a few data points.
Existing FSCIL methods ignore the semantic relationships between sample-level and class-level representations.
In this paper, we design a two-level graph network for FSCIL named Sample-level and Class-level Graph Neural Network (SCGN).
arXiv Detail & Related papers (2023-03-24T08:58:08Z)
- Semantic Enhanced Knowledge Graph for Large-Scale Zero-Shot Learning [74.6485604326913]
We provide a new semantic enhanced knowledge graph that contains both expert knowledge and semantic correlations between categories.
To propagate information on the knowledge graph, we propose a novel Residual Graph Convolutional Network (ResGCN).
Experiments conducted on the widely used large-scale ImageNet-21K dataset and AWA2 dataset show the effectiveness of our method.
arXiv Detail & Related papers (2022-12-26T13:18:36Z)
- A Simple Yet Effective Pretraining Strategy for Graph Few-shot Learning [38.66690010054665]
We propose a simple transductive fine-tuning based framework as a new paradigm for graph few-shot learning.
For pretraining, we propose a supervised contrastive learning framework with data augmentation strategies specific to few-shot node classification.
arXiv Detail & Related papers (2022-03-29T22:30:00Z)
- Budget-aware Few-shot Learning via Graph Convolutional Network [56.41899553037247]
This paper tackles the problem of few-shot learning, which aims to learn new visual concepts from a few examples.
A common problem setting in few-shot classification assumes a random sampling strategy for acquiring data labels.
We introduce a new budget-aware few-shot learning problem that aims to learn novel object categories under a limited annotation budget.
arXiv Detail & Related papers (2022-01-07T02:46:35Z)
- Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model that takes features as input and outputs predicted labels; 2) a graph neural network as an upper model that learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data (a minimal sketch of this two-module setup appears after this list).
arXiv Detail & Related papers (2021-10-09T09:02:45Z)
- Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z)
- Region Comparison Network for Interpretable Few-shot Image Classification [97.97902360117368]
Few-shot image classification has been proposed to effectively use only a limited number of labeled examples to train models for new classes.
We propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
arXiv Detail & Related papers (2020-09-08T07:29:05Z)
- Learning from Few Samples: A Survey [1.4146420810689422]
We study the existing few-shot meta-learning techniques in the computer vision domain based on their method and evaluation metrics.
We provide a taxonomy for the techniques and categorize them as data-augmentation, embedding, optimization and semantics based learning for few-shot, one-shot and zero-shot settings.
arXiv Detail & Related papers (2020-07-30T14:28:57Z)
- Complementing Representation Deficiency in Few-shot Image Classification: A Meta-Learning Approach [27.350615059290348]
We propose a meta-learning approach with complemented representations network (MCRNet) for few-shot image classification.
In particular, we embed a latent space, where latent codes are reconstructed with extra representation information to complement the representation deficiency.
Our end-to-end framework achieves state-of-the-art performance in image classification on three standard few-shot learning datasets.
arXiv Detail & Related papers (2020-07-21T13:25:54Z)
- Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification [53.735029033681435]
Transfer learning is a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models.
arXiv Detail & Related papers (2020-07-11T22:48:42Z)
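The two-module setup in the open-world feature extrapolation entry above can be illustrated with a short, hypothetical sketch: feature embeddings are refined by one round of message passing over a feature-data graph, here represented as a dense incidence matrix. The class name FeatureDataGNN and all shapes are illustrative assumptions, not the paper's code.

```python
# Hypothetical sketch of message passing over a feature-data graph,
# following the two-module description in the entry above.
import torch
import torch.nn as nn

class FeatureDataGNN(nn.Module):
    """One round of feature -> instance -> feature message passing."""
    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, feat_emb: torch.Tensor, incidence: torch.Tensor) -> torch.Tensor:
        # incidence[i, j]: value of feature j in data instance i.
        deg = incidence.sum(dim=0).clamp(min=1.0)            # per-feature degree
        inst_msg = incidence @ feat_emb                      # instances gather feature embeddings
        feat_msg = incidence.t() @ inst_msg / deg[:, None]   # features gather back, normalized
        return torch.relu(self.lin(feat_msg))                # updated feature embeddings

# Toy usage: 5 instances, 4 observed features, embedding dimension 8.
x = torch.rand(5, 4)                        # feature-data incidence matrix
feat_emb = torch.randn(4, 8)                # initial per-feature embeddings
new_emb = FeatureDataGNN(8)(feat_emb, x)    # refined embeddings, shape (4, 8)
```

In the paper's setting, such a message-passing step would let embeddings be produced for features unseen during training, since the update depends only on the graph structure; the backbone (lower model) would then consume the refined embeddings.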
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.