Cross-Domain Few-Shot Learning with Meta Fine-Tuning
- URL: http://arxiv.org/abs/2005.10544v4
- Date: Tue, 25 Aug 2020 15:45:55 GMT
- Title: Cross-Domain Few-Shot Learning with Meta Fine-Tuning
- Authors: John Cai, Sheng Mei Shen
- Abstract summary: We tackle the new Cross-Domain Few-Shot Learning benchmark proposed by the CVPR 2020 Challenge.
We build upon state-of-the-art methods in domain adaptation and few-shot learning to create a system that can be trained to perform both tasks.
- Score: 8.062394790518297
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we tackle the new Cross-Domain Few-Shot Learning benchmark
proposed by the CVPR 2020 Challenge. To this end, we build upon
state-of-the-art methods in domain adaptation and few-shot learning to create a
system that can be trained to perform both tasks. Inspired by the need to
create models designed to be fine-tuned, we explore the integration of
transfer-learning (fine-tuning) with meta-learning algorithms, to train a
network that has specific layers that are designed to be adapted at a later
fine-tuning stage. To do so, we modify the episodic training process to include
a first-order MAML-based meta-learning algorithm, and use a Graph Neural
Network model as the subsequent meta-learning module. We find that our proposed
method helps to boost accuracy significantly, especially when combined with
data augmentation. In our final results, we combine the novel method with the
baseline method in a simple ensemble, and achieve an average accuracy of 73.78%
on the benchmark. This is a 6.51% improvement over existing baseline results
from models trained solely on miniImagenet.
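To make the recipe concrete, the sketch below illustrates the episodic first-order MAML update described in the abstract: a shared body is meta-learned in the outer loop, while a designated head is adapted per episode on the support set. This is a minimal illustration under assumed names and hyperparameters, not the authors' code; the Graph Neural Network module from the paper is replaced here by a plain linear head.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvEncoder(nn.Module):
    """Toy backbone split into a shared body and a head designed to be adapted."""
    def __init__(self, n_way=5):
        super().__init__()
        self.body = nn.Sequential(          # meta-learned in the outer loop only
            nn.Conv2d(3, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(64, n_way)    # designated for inner-loop adaptation

def inner_adapt(head, feats, labels, steps=5, lr=0.01):
    """First-order inner loop: adapt a functional copy of the head on the support set."""
    feats = feats.detach()  # first-order: support gradients do not reach the body
    fast = dict(head.named_parameters())
    for _ in range(steps):
        logits = F.linear(feats, fast["weight"], fast["bias"])
        grads = torch.autograd.grad(F.cross_entropy(logits, labels),
                                    list(fast.values()))
        # Detaching the inner gradients is what makes this first-order MAML:
        # the outer loss is later backpropagated only through the identity
        # chain from the fast weights back to the initial head parameters.
        fast = {k: v - lr * g.detach() for (k, v), g in zip(fast.items(), grads)}
    return fast

def episode_loss(model, sx, sy, qx, qy):
    fast = inner_adapt(model.head, model.body(sx), sy)
    # The paper feeds adapted features to a Graph Neural Network module;
    # a linear classifier stands in for it in this sketch.
    logits = F.linear(model.body(qx), fast["weight"], fast["bias"])
    return F.cross_entropy(logits, qy)

model = ConvEncoder(n_way=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
sx, sy = torch.randn(25, 3, 84, 84), torch.arange(5).repeat_interleave(5)  # 5-way 5-shot
qx, qy = torch.randn(40, 3, 84, 84), torch.arange(5).repeat_interleave(8)
loss = episode_loss(model, sx, sy, qx, qy)
opt.zero_grad(); loss.backward(); opt.step()
```

At test time, the same inner loop doubles as the fine-tuning stage: the head is adapted on the labeled support set of the novel domain before classifying queries.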
Related papers
- Learning to Learn with Indispensable Connections [6.040904021861969]
We propose a novel meta-learning method called Meta-LTH that includes indispensable (necessary) connections.
Our method improves classification accuracy by approximately 2% (20-way 1-shot setting) on the Omniglot dataset.
arXiv Detail & Related papers (2023-04-06T04:53:13Z)
- Meta-Learning with Self-Improving Momentum Target [72.98879709228981]
We propose Self-improving Momentum Target (SiMT) to improve the performance of a meta-learner.
SiMT generates the target model by adapting from the temporal ensemble of the meta-learner.
We show that SiMT brings a significant performance gain when combined with a wide range of meta-learning methods.
arXiv Detail & Related papers (2022-10-11T06:45:15Z)
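The "temporal ensemble" at the heart of SiMT can be read as an exponential moving average over the meta-learner's parameters. Below is a minimal sketch of maintaining such a momentum target (names and decay rate are assumptions; SiMT's adaptation and distillation steps are omitted):

```python
import copy
import torch
import torch.nn as nn

def make_momentum_target(model: nn.Module) -> nn.Module:
    """Create a frozen copy of the meta-learner to serve as the momentum target."""
    target = copy.deepcopy(model)
    for p in target.parameters():
        p.requires_grad_(False)
    return target

@torch.no_grad()
def update_momentum_target(model, target, tau=0.995):
    """Temporal ensemble: EMA of the online meta-learner's parameters."""
    for p, tp in zip(model.parameters(), target.parameters()):
        tp.mul_(tau).add_(p, alpha=1.0 - tau)

model = nn.Linear(64, 5)            # stand-in for the meta-learner
target = make_momentum_target(model)
# ... after each meta-update of `model`:
update_momentum_target(model, target)
```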
- Adapting the Mean Teacher for keypoint-based lung registration under geometric domain shifts [75.51482952586773]
Deep neural networks generally require large amounts of labeled training data and are vulnerable to domain shifts between training and test data.
We present a novel approach to geometric domain adaptation for image registration, adapting a model from a labeled source to an unlabeled target domain.
Our method consistently improves on the baseline model by 50%/47%, even matching the accuracy of models trained on target data.
arXiv Detail & Related papers (2022-07-01T12:16:42Z)
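The Mean Teacher recipe trains a student to match the predictions of an EMA teacher on unlabeled target-domain data under perturbation. The sketch below is a generic rendering of that consistency step, with a toy regressor standing in for the keypoint-registration model; all names are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def consistency_step(student, teacher, optimizer, target_batch, noise=0.05):
    """One adaptation step on an unlabeled target-domain batch."""
    with torch.no_grad():
        pseudo = teacher(target_batch)                 # teacher prediction as target
    pred = student(target_batch + noise * torch.randn_like(target_batch))
    loss = F.mse_loss(pred, pseudo)                    # consistency under perturbation
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

student = nn.Linear(128, 6)   # e.g. predicts a small deformation parameterization
teacher = nn.Linear(128, 6)   # kept as an EMA copy of the student (see sketch above)
teacher.load_state_dict(student.state_dict())
for p in teacher.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(student.parameters(), lr=1e-4)
consistency_step(student, teacher, opt, torch.randn(16, 128))
```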
- Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn a prior for artifact-free CT images from training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z)
- Curriculum Meta-Learning for Few-shot Classification [1.5039745292757671]
We propose an adaptation of the curriculum training framework, applicable to state-of-the-art meta-learning techniques for few-shot classification.
Our experiments with the MAML algorithm on two few-shot image classification tasks show significant gains with the curriculum training framework.
arXiv Detail & Related papers (2021-12-06T10:29:23Z)
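A common way to instantiate a curriculum over few-shot episodes is to rank tasks by a difficulty proxy and progressively widen the pool of eligible tasks. The sketch below is one such schedule under assumed inputs, not the paper's exact framework:

```python
import random

def curriculum_sample(tasks, difficulties, progress):
    """Sample a task from the easiest fraction of the pool.

    tasks: list of episode specs; difficulties: matching difficulty scores
    (e.g. a baseline model's error on each task); progress: training
    fraction in [0, 1]. Early on, only easy tasks are eligible; by the
    end, the full task distribution is used.
    """
    order = sorted(range(len(tasks)), key=lambda i: difficulties[i])
    k = max(1, int(len(tasks) * min(1.0, 0.2 + 0.8 * progress)))
    return tasks[random.choice(order[:k])]

tasks = [f"episode_{i}" for i in range(100)]
difficulties = [random.random() for _ in tasks]
for step in range(5):
    print(curriculum_sample(tasks, difficulties, progress=step / 5))
```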
- SB-MTL: Score-based Meta Transfer-Learning for Cross-Domain Few-Shot Learning [3.6398662687367973]
We present a novel, flexible and effective method to address the Cross-Domain Few-Shot Learning problem.
Our method combines transfer-learning and meta-learning by using a MAML-optimized feature encoder and a score-based Graph Neural Network.
We observe significant improvements in accuracy across the 5-, 20- and 50-shot settings and across the four target domains.
arXiv Detail & Related papers (2020-12-03T09:29:35Z)
- MetaGater: Fast Learning of Conditional Channel Gated Networks via Federated Meta-Learning [46.79356071007187]
We propose a holistic approach to jointly train the backbone network and the channel gating.
We develop a federated meta-learning approach to jointly learn good meta-initializations for both backbone networks and gating modules.
arXiv Detail & Related papers (2020-11-25T04:26:23Z)
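Conditional channel gating typically computes an input-dependent per-channel mask and multiplies it into the feature map, so that weakly gated channels can be skipped at inference. A generic gating module in the squeeze-and-excite style is sketched below (illustrative only; not the paper's architecture):

```python
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    """Input-conditioned soft gate over channels (squeeze -> MLP -> sigmoid)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        g = self.net(x)                    # (N, C) gate values in (0, 1)
        # Soft gating; a hard/binary variant would threshold these values.
        return x * g.view(*g.shape, 1, 1)

x = torch.randn(8, 32, 28, 28)
print(ChannelGate(32)(x).shape)            # torch.Size([8, 32, 28, 28])
```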
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need? [72.00712736992618]
We show that a simple baseline (learning a supervised or self-supervised representation on the meta-training set) outperforms state-of-the-art few-shot learning methods.
An additional boost can be achieved through the use of self-distillation.
We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms.
arXiv Detail & Related papers (2020-03-25T17:58:42Z)
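The baseline this paper advocates reduces to: pre-train an embedding on the merged meta-training classes, then fit a simple linear classifier on frozen support features for each episode. A minimal sketch, assuming a pre-trained embedding function and using scikit-learn's logistic regression:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def few_shot_eval(embed, support_x, support_y, query_x):
    """Fit a linear classifier on frozen support embeddings, predict queries.

    `embed` maps a batch of images to feature vectors; here it is assumed
    to be a pre-trained network wrapped as a numpy-in/numpy-out function.
    """
    clf = LogisticRegression(max_iter=1000)
    clf.fit(embed(support_x), support_y)
    return clf.predict(embed(query_x))

# Toy stand-in: random "images" and an identity embedding.
rng = np.random.default_rng(0)
support_x, query_x = rng.normal(size=(25, 64)), rng.normal(size=(40, 64))
support_y = np.repeat(np.arange(5), 5)          # 5-way 5-shot
print(few_shot_eval(lambda x: x, support_x, support_y, query_x)[:10])
```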
- Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.