Graph-based Interpolation of Feature Vectors for Accurate Few-Shot
Classification
- URL: http://arxiv.org/abs/2001.09849v4
- Date: Thu, 28 Jan 2021 07:56:12 GMT
- Title: Graph-based Interpolation of Feature Vectors for Accurate Few-Shot
Classification
- Authors: Yuqing Hu, Vincent Gripon, Stéphane Pateux
- Abstract summary: In few-shot classification, the aim is to learn models able to discriminate classes using only a small number of labeled examples.
We propose a new method that instead relies on graphs only to interpolate feature vectors, resulting in a transductive learning setting.
- Score: 2.922007656878633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In few-shot classification, the aim is to learn models able to
discriminate classes using only a small number of labeled examples. In this
context, prior works have proposed to introduce Graph Neural Networks (GNNs)
that exploit the information contained in other samples treated concurrently,
which is commonly referred to as the transductive setting in the literature.
These GNNs are trained jointly with a backbone feature extractor. In this
paper, we propose a new method that instead relies on graphs only to
interpolate feature vectors, resulting in a transductive learning setting
with no additional parameters to train. Our proposed method thus exploits two
levels of information: a) transfer features obtained on generic datasets, and
b) transductive information obtained from the other samples to be classified.
Using standard few-shot vision classification datasets, we demonstrate its
ability to bring significant gains compared to other works.
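To make the idea concrete, below is a minimal sketch of graph-based feature interpolation in PyTorch: build a cosine-similarity kNN graph over all support and query features from a frozen backbone, smooth the features over the graph, then label queries by nearest class mean. The function names, the choice of k, and the mixing coefficient alpha are illustrative assumptions, not the authors' exact algorithm.

    import torch
    import torch.nn.functional as F

    def graph_smooth_features(feats, k=10, alpha=0.5, steps=1):
        """Interpolate feature vectors over a similarity graph (sketch).

        feats: (n, d) support + query features from a frozen backbone.
        Builds a cosine-similarity kNN graph, then mixes each feature
        with its neighbours: x <- (1 - alpha) * x + alpha * A_norm @ x.
        """
        x = F.normalize(feats, dim=1)              # cosine geometry
        sim = x @ x.t()
        sim.fill_diagonal_(-float("inf"))          # no self-loops
        topk = sim.topk(k, dim=1)
        adj = torch.zeros_like(sim)
        adj.scatter_(1, topk.indices, topk.values.clamp(min=0))
        adj = (adj + adj.t()) / 2                  # symmetrize
        a_norm = adj / adj.sum(1, keepdim=True).clamp(min=1e-8)
        out = feats
        for _ in range(steps):
            out = (1 - alpha) * out + alpha * (a_norm @ out)
        return out

    def nearest_class_mean(feats, support_idx, support_y, query_idx):
        """Label queries by distance to class means of smoothed supports."""
        classes = support_y.unique()
        means = torch.stack([feats[support_idx][support_y == c].mean(0)
                             for c in classes])
        d = torch.cdist(feats[query_idx], means)   # (n_query, n_classes)
        return classes[d.argmin(1)]

Since only the frozen backbone's features are manipulated, no additional parameters are trained, which matches the transductive setting described above.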
Related papers
- Transductive Linear Probing: A Novel Framework for Few-Shot Node
Classification [56.17097897754628]
We show that transductive linear probing with self-supervised graph contrastive pretraining can outperform the state-of-the-art fully supervised meta-learning based methods under the same protocol.
We hope this work can shed new light on few-shot node classification problems and foster future research on learning from scarcely labeled instances on graphs.
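As a rough illustration, the probing step can be sketched as fitting only a linear head on frozen node embeddings; the embeddings are assumed to come from a contrastively pretrained graph encoder, whose pretraining loop is out of scope here.

    import torch
    import torch.nn as nn

    def linear_probe(embeddings, labels, train_idx, test_idx,
                     epochs=100, lr=0.01, weight_decay=5e-4):
        """Fit a linear classifier on frozen node embeddings (sketch)."""
        z = embeddings.detach()                    # encoder stays frozen
        head = nn.Linear(z.size(1), int(labels.max()) + 1)
        opt = torch.optim.Adam(head.parameters(), lr=lr,
                               weight_decay=weight_decay)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(head(z[train_idx]), labels[train_idx])
            loss.backward()
            opt.step()
        with torch.no_grad():
            return head(z[test_idx]).argmax(1)     # predicted labels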
arXiv Detail & Related papers (2022-12-11T21:10:34Z)
- Mutual Information Learned Classifiers: an Information-theoretic
Viewpoint of Training Deep Learning Classification Systems [9.660129425150926]
Cross-entropy loss can easily lead to models that exhibit severe overfitting behavior.
In this paper, we prove that the existing cross entropy loss minimization for training DNN classifiers essentially learns the conditional entropy of the underlying data distribution.
We propose a mutual information learning framework where we train DNN classifiers via learning the mutual information between the label and input.
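A standard batch estimator of this mutual information decomposes it as I(Y; X) ≈ H(Y) - H(Y|X); the sketch below implements that estimator over a mini-batch of logits and is not necessarily the exact objective of the cited paper.

    import torch
    import torch.nn.functional as F

    def mutual_information_loss(logits, eps=1e-12):
        """Negative batch estimate of I(Y; X) from class logits (sketch).

        Marginal entropy of the averaged prediction minus the average
        conditional entropy: maximizing it favors predictions that are
        confident per sample yet diverse across the batch.
        """
        p = F.softmax(logits, dim=1)               # (batch, classes)
        p_bar = p.mean(0)                          # marginal over the batch
        h_marginal = -(p_bar * (p_bar + eps).log()).sum()
        h_conditional = -(p * (p + eps).log()).sum(1).mean()
        return -(h_marginal - h_conditional)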
arXiv Detail & Related papers (2022-10-03T15:09:19Z)
- A Simple Yet Effective Pretraining Strategy for Graph Few-shot Learning [38.66690010054665]
We propose a simple transductive fine-tuning based framework as a new paradigm for graph few-shot learning.
For pretraining, we propose a supervised contrastive learning framework with data augmentation strategies specific for few-shot node classification.
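For reference, a generic supervised contrastive loss over a batch of projected embeddings looks roughly as follows; the augmentation strategies specific to few-shot node classification in the cited paper are omitted, and the temperature value is an assumption.

    import torch
    import torch.nn.functional as F

    def supervised_contrastive_loss(z, y, temperature=0.5):
        """Pull same-label embeddings together, push others apart (sketch)."""
        z = F.normalize(z, dim=1)
        sim = z @ z.t() / temperature
        eye = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
        pos = (y.unsqueeze(0) == y.unsqueeze(1)) & ~eye  # same-label pairs
        sim = sim.masked_fill(eye, -float("inf"))        # drop self-pairs
        log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
        loss = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
        return loss.mean()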
arXiv Detail & Related papers (2022-03-29T22:30:00Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Towards Open-World Feature Extrapolation: An Inductive Graph Learning
Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model takes features as input and outputs predicted labels; 2) a graph neural network as an upper model learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
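The message passing over a feature-data graph can be pictured as alternating averaging between data nodes and feature nodes, so that a feature never seen during training inherits an embedding from the data points that exhibit it. The sketch below is a deliberate simplification; the matrix x, the embedding table feat_emb, and the mean aggregation are illustrative assumptions rather than the paper's exact model.

    import torch

    def message_pass_feature_data_graph(x, feat_emb, steps=2):
        """Alternate data <- features and features <- data averaging (sketch).

        x: (n, d) nonnegative matrix marking which features each datum has.
        feat_emb: (d, h) feature-node embeddings; new features may start at 0.
        """
        row = x / x.sum(1, keepdim=True).clamp(min=1e-8)        # data side
        col = (x / x.sum(0, keepdim=True).clamp(min=1e-8)).t()  # feature side
        for _ in range(steps):
            data_emb = row @ feat_emb      # datum = mean of its features
            feat_emb = col @ data_emb      # feature = mean of its data
        return feat_emb

A downstream head can then score a datum by pooling the embeddings of whichever features it exhibits, including features unseen during training.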
arXiv Detail & Related papers (2021-10-09T09:02:45Z)
- From Canonical Correlation Analysis to Self-supervised Graph Neural
Networks [99.44881722969046]
We introduce a conceptually simple yet effective model for self-supervised representation learning with graph data.
We optimize an innovative feature-level objective inspired by classical Canonical Correlation Analysis.
Our method performs competitively on seven public graph datasets.
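Such a feature-level objective can be sketched as an invariance term between two augmented views plus decorrelation terms that push each view's feature covariance toward the identity; the standardization and the weight lam below are assumptions, not the paper's exact hyperparameters.

    import torch

    def cca_style_loss(z1, z2, lam=1e-3):
        """CCA-inspired self-supervised loss on two graph views (sketch)."""
        n = z1.size(0)
        z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)  # per-dim standardize
        z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
        invariance = ((z1 - z2) ** 2).sum() / n      # align the two views
        eye = torch.eye(z1.size(1), device=z1.device)
        c1, c2 = (z1.t() @ z1) / n, (z2.t() @ z2) / n
        decorrelation = ((c1 - eye) ** 2).sum() + ((c2 - eye) ** 2).sum()
        return invariance + lam * decorrelation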
arXiv Detail & Related papers (2021-06-23T15:55:47Z)
- ECKPN: Explicit Class Knowledge Propagation Network for Transductive
Few-shot Learning [53.09923823663554]
Class-level knowledge can be easily learned by humans from just a handful of samples.
We propose an Explicit Class Knowledge Propagation Network (ECKPN) to address this problem.
We conduct extensive experiments on four few-shot classification benchmarks, and the experimental results show that the proposed ECKPN significantly outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-06-16T02:29:43Z)
- Few-Shot Object Detection via Knowledge Transfer [21.3564383157159]
Conventional methods for object detection usually require substantial amounts of training data and annotated bounding boxes.
In this paper, we introduce a few-shot object detection approach based on knowledge transfer, which aims to detect objects from only a few training examples.
arXiv Detail & Related papers (2020-08-28T06:35:27Z)
- Geometric graphs from data to aid classification tasks with graph
convolutional networks [0.0]
We show that, even if additional relational information is not available in the data set, one can improve classification by constructing geometric graphs from the features themselves.
The improvement in classification accuracy is maximized by graphs that capture sample similarity with relatively low edge density.
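In code, this amounts to building a sparse similarity graph from the feature matrix and feeding it to a graph convolution; the kNN construction, the value of k, and the single-layer GCN below are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def knn_graph(x, k=5):
        """Sparse geometric graph from features; small k keeps density low."""
        z = F.normalize(x, dim=1)
        sim = z @ z.t()
        sim.fill_diagonal_(-float("inf"))          # exclude self-similarity
        idx = sim.topk(k, dim=1).indices
        adj = torch.zeros_like(sim)
        adj.scatter_(1, idx, 1.0)
        return ((adj + adj.t()) > 0).float()       # undirected kNN graph

    def gcn_layer(adj, h, weight):
        """One graph-convolution step with symmetric normalization."""
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)  # self-loops
        d = a_hat.sum(1).rsqrt()                   # D^{-1/2}
        a_norm = d.unsqueeze(1) * a_hat * d.unsqueeze(0)
        return torch.relu(a_norm @ h @ weight)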
arXiv Detail & Related papers (2020-05-08T15:00:45Z)
- Learning What Makes a Difference from Counterfactual Examples and
Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrastive examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
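One way to turn such pairs into an auxiliary objective is to align the model's input gradient with the edit direction that flips the label; the sketch below is a hedged reconstruction of this gradient-supervision idea, not necessarily the paper's exact loss.

    import torch
    import torch.nn.functional as F

    def gradient_supervision_loss(model, x, x_cf, y, y_cf):
        """Align input gradients with counterfactual edits (sketch).

        x, x_cf: minimally-different input pairs with labels y != y_cf.
        """
        x = x.clone().requires_grad_(True)
        logits = model(x)
        # score margin of the counterfactual class over the original class
        score = (logits.gather(1, y_cf.unsqueeze(1))
                 - logits.gather(1, y.unsqueeze(1))).sum()
        grad = torch.autograd.grad(score, x, create_graph=True)[0]
        direction = (x_cf - x).detach()            # the label-flipping edit
        cos = F.cosine_similarity(grad.flatten(1),
                                  direction.flatten(1), dim=1)
        return (1 - cos).mean()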
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.