A Unified Framework with Meta-dropout for Few-shot Learning
- URL: http://arxiv.org/abs/2210.06409v1
- Date: Wed, 12 Oct 2022 17:05:06 GMT
- Title: A Unified Framework with Meta-dropout for Few-shot Learning
- Authors: Shaobo Lin, Xingyu Zeng, Rui Zhao
- Abstract summary: In this paper, we utilize the idea of meta-learning to explain two very different streams of few-shot learning.
We propose a simple yet effective strategy named meta-dropout, which is applied to the transferable knowledge generalized from base categories to novel categories.
- Score: 25.55782263169028
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional training of deep neural networks usually requires a substantial
amount of data with expensive human annotations. In this paper, we utilize the
idea of meta-learning to explain two very different streams of few-shot
learning, i.e., the episodic meta-learning-based and the pretrain-finetune-based
few-shot learning, and form a unified meta-learning framework. In order to
improve the generalization power of our framework, we propose a simple yet
effective strategy named meta-dropout, which is applied to the transferable
knowledge generalized from base categories to novel categories. The proposed
strategy can effectively prevent neural units from co-adapting excessively in
the meta-training stage. Extensive experiments on the few-shot object detection
and few-shot image classification datasets, i.e., Pascal VOC, MS COCO, CUB, and
mini-ImageNet, validate the effectiveness of our method.
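As a rough illustration of the meta-dropout idea, the PyTorch sketch below applies dropout to backbone features standing in for the transferable knowledge during a meta-training step. The architecture, feature dimension, and dropout rate are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MetaDropoutNet(nn.Module):
    """Toy network with dropout on the transferable (shared) features."""
    def __init__(self, feat_dim=64, n_way=5, p=0.5):
        super().__init__()
        # The backbone stands in for knowledge transferred from base to
        # novel categories (hypothetical architecture).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Meta-dropout on the transferable features: discourages neural
        # units from co-adapting excessively during meta-training.
        self.meta_dropout = nn.Dropout(p=p)
        self.head = nn.Linear(feat_dim, n_way)

    def forward(self, x):
        feat = self.backbone(x)
        feat = self.meta_dropout(feat)  # active only in model.train() mode
        return self.head(feat)

# One meta-training step on a toy 5-way, 2-shot episode.
model = MetaDropoutNet()
model.train()
images = torch.randn(10, 3, 32, 32)
labels = torch.arange(5).repeat_interleave(2)
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()
```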
Related papers
- Towards Few-Annotation Learning in Computer Vision: Application to Image
Classification and Object Detection tasks [3.5353632767823506]
In this thesis, we make theoretical, algorithmic, and experimental contributions to Machine Learning with limited labels.
As a first contribution, we bridge the gap between theory and practice for popular Meta-Learning algorithms used in Few-Shot Classification.
To leverage unlabeled data when training object detectors based on the Transformer architecture, we propose both an unsupervised pretraining and a semi-supervised learning method.
arXiv Detail & Related papers (2023-11-08T18:50:04Z)
- Unsupervised Meta-Learning via Few-shot Pseudo-supervised Contrastive Learning [72.3506897990639]
We propose a simple yet effective unsupervised meta-learning framework, coined Pseudo-supervised Contrast (PsCo) for few-shot classification.
PsCo outperforms existing unsupervised meta-learning methods under various in-domain and cross-domain few-shot classification benchmarks.
arXiv Detail & Related papers (2023-03-02T06:10:13Z)
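A loose sketch of the pseudo-supervised contrastive idea: embeddings from a momentum queue that are most similar to each online embedding are treated as pseudo-positives in a soft-target contrastive loss. The top-k pseudo-labeling rule, temperature, and shapes are assumptions; the paper's actual assignment scheme differs in detail.

```python
import torch
import torch.nn.functional as F

def psco_style_loss(queries, queue, k=4, tau=0.25):
    """queries: (B, D) online embeddings; queue: (Q, D) momentum embeddings."""
    q = F.normalize(queries, dim=1)
    m = F.normalize(queue, dim=1)
    sim = q @ m.t() / tau                        # (B, Q) scaled similarities
    # Pseudo-supervision: the k most similar queue entries per query
    # are treated as positives (hypothetical assignment rule).
    topk = sim.topk(k, dim=1).indices
    target = torch.zeros_like(sim).scatter_(1, topk, 1.0 / k)
    return F.cross_entropy(sim, target)          # soft-target cross-entropy

# Toy usage with random embeddings.
loss = psco_style_loss(torch.randn(8, 128), torch.randn(256, 128))
```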
- Trainable Class Prototypes for Few-Shot Learning [5.481942307939029]
We propose trainable prototypes for the distance measure, instead of artificial ones, within the meta-training and task-training framework.
To avoid the disadvantages brought by episodic meta-training, we adopt non-episodic meta-training based on self-supervised learning.
Our method achieves state-of-the-art performance on a variety of established few-shot tasks on standard few-shot visual classification datasets.
arXiv Detail & Related papers (2021-06-21T04:19:56Z)
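A minimal sketch of the contrast with conventional prototype-based classifiers: instead of computing prototypes as support-set means, the prototypes here are nn.Parameter tensors optimized by gradient descent. The cosine-similarity logits and dimensions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrainablePrototypes(nn.Module):
    """Classification head with learnable (not averaged) class prototypes."""
    def __init__(self, n_way=5, feat_dim=64):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_way, feat_dim))

    def forward(self, features):
        # Logits = cosine similarity to each trainable prototype.
        f = F.normalize(features, dim=1)
        p = F.normalize(self.prototypes, dim=1)
        return f @ p.t()                         # (B, n_way)

# Fit the prototypes to features from a (hypothetical) frozen encoder.
head = TrainablePrototypes()
feats = torch.randn(20, 64)
labels = torch.randint(0, 5, (20,))
loss = F.cross_entropy(head(feats), labels)
loss.backward()
```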
- Partial Is Better Than All: Revisiting Fine-tuning Strategy for Few-shot Learning [76.98364915566292]
A common practice is to train a model on the base set first and then transfer to novel classes through fine-tuning.
We propose to transfer partial knowledge by freezing or fine-tuning particular layer(s) in the base model.
We conduct extensive experiments on CUB and mini-ImageNet to demonstrate the effectiveness of our proposed method.
arXiv Detail & Related papers (2021-02-08T03:27:05Z)
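A minimal sketch of transferring partial knowledge by freezing particular layers: everything in a base-trained backbone is frozen except the last residual block and a fresh classification head. Which layer(s) to unfreeze is an assumption here; the paper studies that choice.

```python
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18()                               # stand-in for a base-trained model
# Freeze the whole backbone first ...
for p in model.parameters():
    p.requires_grad = False
# ... then unfreeze only the last block for fine-tuning on novel classes.
for p in model.layer4.parameters():
    p.requires_grad = True
# Replace the head for 5 novel classes; new modules are trainable by default.
model.fc = nn.Linear(model.fc.in_features, 5)

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
```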
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
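A generic sketch of an optimization-based meta-learner whose inner loop runs only a few iterations, in the spirit of the summary above. The linear base learner, step size, and step count are illustrative assumptions, not the paper's actual base-learner objective.

```python
import torch
import torch.nn.functional as F

def inner_loop(support_x, support_y, w, steps=3, lr=0.1):
    """Adapt base-learner weights w (D, C) with a few differentiable steps."""
    for _ in range(steps):
        loss = F.cross_entropy(support_x @ w, support_y)
        (grad,) = torch.autograd.grad(loss, w, create_graph=True)
        w = w - lr * grad                        # keeps the graph for meta-learning
    return w

# Meta-training step: adapt on the support set, evaluate on the query set.
w0 = torch.zeros(64, 5, requires_grad=True)      # meta-learned initialization
sx, sy = torch.randn(25, 64), torch.arange(5).repeat(5)
qx, qy = torch.randn(25, 64), torch.arange(5).repeat(5)
meta_loss = F.cross_entropy(qx @ inner_loop(sx, sy, w0), qy)
meta_loss.backward()                             # gradients flow back to w0
```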
- Meta-Learning with Network Pruning [40.07436648243748]
We propose a network-pruning-based meta-learning approach that reduces overfitting by explicitly controlling the capacity of the network.
We implement our approach on top of Reptile, combined with two network pruning routines: Dense-Sparse-Dense (DSD) and Iterative Hard Thresholding (IHT).
arXiv Detail & Related papers (2020-07-07T06:13:11Z)
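A small sketch of the Iterative Hard Thresholding (IHT) routine named above: after a phase of training, all but the largest-magnitude weights in each tensor are zeroed, and training then resumes on the sparse network. The 50% sparsity and per-tensor thresholding are assumptions, and the Reptile outer loop is omitted.

```python
import torch

def hard_threshold(model, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of each weight matrix."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.dim() < 2:                      # skip biases and norm params
                continue
            k = int(p.numel() * sparsity)
            if k == 0:
                continue
            threshold = p.abs().flatten().kthvalue(k).values
            p.mul_((p.abs() > threshold).float())

# Prune a toy network to roughly 50% sparsity between training phases.
net = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.Linear(64, 5))
hard_threshold(net, sparsity=0.5)
```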
- One-Shot Object Detection without Fine-Tuning [62.39210447209698]
We introduce a two-stage model consisting of a first-stage Matching-FCOS network and a second-stage Structure-Aware Relation Module.
We also propose novel training strategies that effectively improve detection performance.
Our method exceeds the state-of-the-art one-shot performance consistently on multiple datasets.
arXiv Detail & Related papers (2020-05-08T01:59:23Z)
- Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning [79.25478727351604]
We explore a simple process: meta-learning on top of a whole-classification pre-trained model, using its evaluation metric.
We observe that this simple method achieves performance competitive with state-of-the-art methods on standard benchmarks.
arXiv Detail & Related papers (2020-03-09T20:06:36Z)
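A compact sketch of the kind of meta-learning stage the summary describes: class centroids are computed from support embeddings of the pre-trained model, and queries are scored by scaled cosine similarity to those centroids. The temperature and embedding sizes are assumptions.

```python
import torch
import torch.nn.functional as F

def cosine_nearest_centroid(support, support_y, query, n_way=5, tau=10.0):
    """support: (S, D) and query: (Q, D) embeddings from a pre-trained encoder."""
    centroids = torch.stack(
        [support[support_y == c].mean(0) for c in range(n_way)]
    )
    # Scaled cosine similarity as classification logits.
    return tau * F.normalize(query, dim=1) @ F.normalize(centroids, dim=1).t()

# Toy 5-way, 1-shot episode with random embeddings.
s, sy = torch.randn(5, 64), torch.arange(5)
q = torch.randn(15, 64)
pred = cosine_nearest_centroid(s, sy, q).argmax(dim=1)
```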
- Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)