Few-shot Learning with LSSVM Base Learner and Transductive Modules
- URL: http://arxiv.org/abs/2009.05786v1
- Date: Sat, 12 Sep 2020 13:16:55 GMT
- Title: Few-shot Learning with LSSVM Base Learner and Transductive Modules
- Authors: Haoqing Wang, Zhi-Hong Deng
- Abstract summary: We introduce a multi-class least squares support vector machine as our base learner, which obtains better generalization than existing ones with less computational overhead.
We also propose two simple and effective transductive modules which modify the support set using the query samples.
Our model, denoted as FSLSTM, achieves state-of-the-art performance on miniImageNet and CIFAR-FS few-shot learning benchmarks.
- Score: 20.323443723115275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of meta-learning approaches for few-shot learning generally
depends on three aspects: features suitable for comparison, the classifier
(base learner) suitable for low-data scenarios, and valuable information from
the samples to classify. In this work, we make improvements for the last two
aspects: 1) although there are many effective base learners, there is a
trade-off between generalization performance and computational overhead, so we
introduce multi-class least squares support vector machine as our base learner
which obtains better generalization than existing ones with less computational
overhead; 2) further, in order to utilize the information from the query
samples, we propose two simple and effective transductive modules which modify
the support set using the query samples, i.e., adjusting the support samples
based on the attention mechanism and adding the prototypes of the query set
with pseudo labels to the support set as the pseudo support samples. These two
modules significantly improve the few-shot classification accuracy, especially
for the difficult 1-shot setting. Our model, denoted as FSLSTM (Few-Shot
learning with LSsvm base learner and Transductive Modules), achieves
state-of-the-art performance on miniImageNet and CIFAR-FS few-shot learning
benchmarks.
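The two components the abstract describes can be illustrated concretely. A minimal NumPy sketch follows, under stated assumptions: a linear kernel, a ridge-regression-style multi-class LSSVM solved in closed form without a bias term, and only the second transductive module (pseudo-support prototypes from pseudo-labeled queries); the attention-based support adjustment is omitted. All function names are illustrative, not taken from the paper's code.

```python
import numpy as np

def lssvm_fit(K, Y, lam=0.1):
    # Closed-form multi-class LSSVM (simplified: no bias term):
    # solve (K + lam*I) alpha = Y, with Y the one-hot support labels.
    n = K.shape[0]
    return np.linalg.solve(K + lam * np.eye(n), Y)

def lssvm_predict(K_qs, alpha):
    # Class scores for queries via the kernel between queries and support.
    return K_qs @ alpha

def pseudo_support_module(X_s, y_s, X_q, n_way, lam=0.1):
    # Sketch of the pseudo-support transductive step from the abstract:
    # 1) classify the queries with the initial LSSVM,
    # 2) average query embeddings per pseudo-label into class prototypes,
    # 3) append the prototypes to the support set and refit.
    Y_s = np.eye(n_way)[y_s]                       # one-hot support labels
    K_ss = X_s @ X_s.T                             # linear kernel (assumption)
    alpha = lssvm_fit(K_ss, Y_s, lam)
    pseudo = np.argmax(lssvm_predict(X_q @ X_s.T, alpha), axis=1)
    # Prototype per class; fall back to the support mean if a class
    # received no pseudo-labeled queries.
    protos = np.stack([X_q[pseudo == c].mean(axis=0)
                       if np.any(pseudo == c) else X_s[y_s == c].mean(axis=0)
                       for c in range(n_way)])
    X_aug = np.vstack([X_s, protos])
    y_aug = np.concatenate([y_s, np.arange(n_way)])
    alpha2 = lssvm_fit(X_aug @ X_aug.T, np.eye(n_way)[y_aug], lam)
    return np.argmax(lssvm_predict(X_q @ X_aug.T, alpha2), axis=1)
```

Because the LSSVM reduces to a single linear solve, refitting after augmenting the support set is cheap, which is what makes this kind of transductive refinement practical in the episodic setting.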
Related papers
- Dual Adaptive Representation Alignment for Cross-domain Few-shot Learning [58.837146720228226]
Few-shot learning aims to recognize novel queries with limited support samples by learning from base knowledge.
Recent progress in this setting assumes that the base knowledge and novel query samples are distributed in the same domains.
We propose to address the cross-domain few-shot learning problem where only extremely few samples are available in target domains.
arXiv Detail & Related papers (2023-06-18T09:52:16Z)
- USER: Unified Semantic Enhancement with Momentum Contrast for Image-Text Retrieval [115.28586222748478]
Image-Text Retrieval (ITR) aims at searching for the target instances that are semantically relevant to the given query from the other modality.
Existing approaches typically suffer from two major limitations.
arXiv Detail & Related papers (2023-01-17T12:42:58Z)
- CAD: Co-Adapting Discriminative Features for Improved Few-Shot Classification [11.894289991529496]
Few-shot classification is a challenging problem that aims to learn a model that can adapt to unseen classes given a few labeled samples.
Recent approaches pre-train a feature extractor, and then fine-tune for episodic meta-learning.
We propose a strategy to cross-attend and re-weight discriminative features for few-shot classification.
arXiv Detail & Related papers (2022-03-25T06:14:51Z)
- Beyond Simple Meta-Learning: Multi-Purpose Models for Multi-Domain, Active and Continual Few-Shot Learning [41.07029317930986]
We propose a variance-sensitive class of models that operates in a low-label regime.
The first method, Simple CNAPS, employs a hierarchically regularized Mahalanobis-distance based classifier.
We further extend this approach to a transductive learning setting, proposing Transductive CNAPS.
arXiv Detail & Related papers (2022-01-13T18:59:02Z)
- Contrastive Prototype Learning with Augmented Embeddings for Few-Shot Learning [58.2091760793799]
We propose a novel contrastive prototype learning with augmented embeddings (CPLAE) model.
With a class prototype as an anchor, CPL aims to pull the query samples of the same class closer and those of different classes further away.
Extensive experiments on several benchmarks demonstrate that our proposed CPLAE achieves new state-of-the-art.
arXiv Detail & Related papers (2021-01-23T13:22:44Z)
- Few-Shot Named Entity Recognition: A Comprehensive Study [92.40991050806544]
We investigate three schemes to improve the model generalization ability for few-shot settings.
We perform empirical comparisons on 10 public NER datasets with various proportions of labeled data.
We create new state-of-the-art results on both few-shot and training-free settings.
arXiv Detail & Related papers (2020-12-29T23:43:16Z)
- Revisiting Unsupervised Meta-Learning: Amplifying or Compensating for the Characteristics of Few-Shot Tasks [30.893785366366078]
We develop a practical approach towards few-shot image classification, where a visual recognition system is constructed with limited data.
We find that the base class set labels are not necessary, and discriminative embeddings could be meta-learned in an unsupervised manner.
Experiments on few-shot learning benchmarks verify our approaches outperform previous methods by a 4-10% performance gap.
arXiv Detail & Related papers (2020-11-30T10:08:35Z)
- Multi-scale Adaptive Task Attention Network for Few-Shot Learning [5.861206243996454]
The goal of few-shot learning is to classify unseen categories with few labeled samples.
This paper proposes a novel Multi-scale Adaptive Task Attention Network (MATANet) for few-shot learning.
arXiv Detail & Related papers (2020-11-30T00:36:01Z)
- Few-shot Classification via Adaptive Attention [93.06105498633492]
We propose a novel few-shot learning method via optimizing and fast adapting the query sample representation based on very few reference samples.
As demonstrated experimentally, the proposed model achieves state-of-the-art classification results on various benchmark few-shot classification and fine-grained recognition datasets.
arXiv Detail & Related papers (2020-08-06T05:52:59Z)
- UniT: Unified Knowledge Transfer for Any-shot Object Detection and Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision.
arXiv Detail & Related papers (2020-06-12T22:45:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.