Advances in MetaDL: AAAI 2021 challenge and workshop
- URL: http://arxiv.org/abs/2202.01890v1
- Date: Tue, 1 Feb 2022 07:46:36 GMT
- Title: Advances in MetaDL: AAAI 2021 challenge and workshop
- Authors: Adrian El Baz, Isabelle Guyon (TAU), Zhengying Liu (TAU), Jan van Rijn
(LIACS), Sebastien Treguer, Joaquin Vanschoren (TU/e)
- Abstract summary: This paper presents the design of the challenge and its results, and summarizes the presentations made at the workshop.
The challenge focused on few-shot learning classification tasks on small images.
Winning methods featured various classifiers trained on top of the second-to-last layer of popular CNN backbones, fine-tuned on the meta-training data, then trained on the labeled support sets and tested on the unlabeled query sets of the meta-test data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To stimulate advances in metalearning using deep learning techniques
(MetaDL), we organized in 2021 a challenge and an associated workshop. This
paper presents the design of the challenge and its results, and summarizes
presentations made at the workshop. The challenge focused on few-shot learning
classification tasks of small images. Participants' code submissions were run
in a uniform manner, under tight computational constraints. This put pressure
on solution designs to use existing architecture backbones and/or pre-trained
networks. Winning methods featured various classifiers trained on top of the
second-to-last layer of popular CNN backbones, fine-tuned on the meta-training
data (not necessarily in an episodic manner), then trained on the labeled
support and tested on the unlabeled query sets of the meta-test data.
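A minimal sketch of this recipe, assuming a torchvision ResNet-18 backbone and a scikit-learn logistic-regression head (both illustrative stand-ins; the winners used a variety of backbones and classifiers):

```python
# Features from the second-to-last layer of a pre-trained CNN backbone, a
# simple classifier fitted on the labeled support set, predictions on the
# unlabeled query set.
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

# Pre-trained backbone; replacing the classification head with the identity
# makes the forward pass return second-to-last-layer features.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()
# The backbone could additionally be fine-tuned on the meta-training data
# here, not necessarily in an episodic manner.

@torch.no_grad()
def embed(images: torch.Tensor):
    """Map an (N, 3, H, W) image batch to backbone features."""
    return backbone(images).numpy()

def solve_episode(support_x, support_y, query_x):
    """Fit a classifier on the labeled support set, then label the query set."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(embed(support_x), support_y.numpy())
    return clf.predict(embed(query_x))

# Illustrative 5-way 1-shot episode, random tensors standing in for images.
support_x, support_y = torch.randn(5, 3, 32, 32), torch.arange(5)
query_x = torch.randn(10, 3, 32, 32)
print(solve_episode(support_x, support_y, query_x))
```

Keeping the heavy lifting in a pre-trained (or lightly fine-tuned) backbone leaves only a small classifier to fit per episode, which suits the challenge's tight computational constraints.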
Related papers
- Boosting Meta-Training with Base Class Information for Few-Shot Learning [35.144099160883606]
We propose an end-to-end training paradigm consisting of two alternating loops (sketched after this entry).
In the outer loop, we calculate a cross-entropy loss on the entire training set while updating only the final linear layer.
This training paradigm not only converges quickly but also outperforms existing baselines, indicating that information from the overall training set and the episodic meta-learning paradigm can reinforce one another.
arXiv Detail & Related papers (2024-03-06T05:13:23Z)
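A minimal sketch of the two alternating loops described above, using a toy encoder and random tensors as stand-ins for the episodes and the training set (the paper's actual architecture and episodic objective differ):

```python
# Two alternating loops: an episodic meta-update of the encoder, and a
# whole-set cross-entropy update that touches only the final linear layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

feature_dim, num_base_classes = 64, 10
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, feature_dim))
final_linear = nn.Linear(feature_dim, num_base_classes)
meta_opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
linear_opt = torch.optim.Adam(final_linear.parameters(), lr=1e-3)

def episodic_loss(episode):
    """Placeholder episodic objective: nearest-prototype cross-entropy."""
    sx, sy, qx, qy = episode
    z_s, z_q = encoder(sx), encoder(qx)
    protos = torch.stack([z_s[sy == c].mean(0) for c in sy.unique()])
    return F.cross_entropy(-torch.cdist(z_q, protos), qy)

for step in range(3):  # a few illustrative iterations
    # Meta-learning loop: episodic update of the encoder.
    episode = (torch.randn(5, 3, 32, 32), torch.arange(5),
               torch.randn(10, 3, 32, 32), torch.arange(10) % 5)
    meta_opt.zero_grad()
    episodic_loss(episode).backward()
    meta_opt.step()

    # Outer loop: cross-entropy over the whole training set (a random batch
    # stands in here), updating only the final linear layer.
    x = torch.randn(32, 3, 32, 32)
    y = torch.randint(0, num_base_classes, (32,))
    linear_opt.zero_grad()
    F.cross_entropy(final_linear(encoder(x).detach()), y).backward()
    linear_opt.step()
```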
- Architecture, Dataset and Model-Scale Agnostic Data-free Meta-Learning [119.70303730341938]
We propose ePisode cUrriculum inveRsion (ECI) during data-free meta training and invErsion calibRation following inner loop (ICFIL) during meta testing.
ECI adaptively increases the difficulty level of pseudo episodes according to the real-time feedback of the meta model.
We formulate the optimization process of meta training with ECI in an adversarial form, in an end-to-end manner (a simplified feedback loop is sketched after this entry).
arXiv Detail & Related papers (2023-03-20T15:10:41Z)
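A heavily simplified sketch of the difficulty-feedback idea behind ECI; the scalar difficulty knob, threshold, and update rule are assumptions for illustration, whereas the paper casts the process in an adversarial form:

```python
# Raise the difficulty of the next pseudo episode when the meta model handles
# the current one well, and ease off when it struggles. All constants here
# are illustrative assumptions, not values from the paper.
def update_difficulty(difficulty: float, query_accuracy: float,
                      step: float = 0.05, target: float = 0.7) -> float:
    """Adjust episode difficulty from real-time meta-model feedback."""
    difficulty += step if query_accuracy > target else -step
    return min(max(difficulty, 0.0), 1.0)

difficulty = 0.1
for accuracy in [0.9, 0.8, 0.6, 0.75]:  # stand-in query accuracies
    difficulty = update_difficulty(difficulty, accuracy)
    print(f"next episode difficulty: {difficulty:.2f}")
```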
- Unsupervised Meta-Learning via Few-shot Pseudo-supervised Contrastive Learning [72.3506897990639]
We propose a simple yet effective unsupervised meta-learning framework, coined Pseudo-supervised Contrast (PsCo), for few-shot classification.
PsCo outperforms existing unsupervised meta-learning methods under various in-domain and cross-domain few-shot classification benchmarks.
arXiv Detail & Related papers (2023-03-02T06:10:13Z)
- NeurIPS'22 Cross-Domain MetaDL competition: Design and baseline results [0.0]
We present the design and baseline results for a new challenge in the ChaLearn meta-learning series, accepted at NeurIPS'22.
This competition challenges the participants to solve "any-way" and "any-shot" problems drawn from various domains.
arXiv Detail & Related papers (2022-08-31T08:31:02Z)
- Lessons learned from the NeurIPS 2021 MetaDL challenge: Backbone fine-tuning without episodic meta-learning dominates for few-shot learning image classification [40.901760230639496]
We describe the design of the MetaDL competition series, the datasets, the best experimental results, and the top-ranked methods in the NeurIPS 2021 challenge.
The solutions of the top participants have been open-sourced.
arXiv Detail & Related papers (2022-06-15T10:27:23Z)
- Team Cogitat at NeurIPS 2021: Benchmarks for EEG Transfer Learning Competition [55.34407717373643]
Building subject-independent deep learning models for EEG decoding faces the challenge of strong covariate shift across subjects.
Our approach is to explicitly align feature distributions at various layers of the deep learning model (a simplified form is sketched after this entry).
The methodology won first place in the 2021 Benchmarks in EEG Transfer Learning competition, hosted at the NeurIPS conference.
arXiv Detail & Related papers (2022-02-01T11:11:08Z)
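A minimal sketch of one generic way to align feature distributions, standardizing each subject's features with that subject's own statistics (in the spirit of adaptive normalization); the summary does not specify the entry's exact alignment scheme, so this is an assumption for illustration:

```python
# Per-subject standardization of features at one layer, so that downstream
# layers see a shared distribution across subjects.
import torch

def align_features(features: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Standardize a (trials, channels) feature batch from a single subject."""
    mean = features.mean(dim=0, keepdim=True)
    std = features.std(dim=0, keepdim=True)
    return (features - mean) / (std + eps)

# Two subjects whose raw features differ in offset and scale ...
subject_a = torch.randn(100, 16) * 2.0 + 5.0
subject_b = torch.randn(100, 16) * 0.5 - 1.0
# ... land on comparable zero-mean, unit-variance distributions afterwards.
print(align_features(subject_a).mean().item(),
      align_features(subject_b).mean().item())
```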
- Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
We propose a Prototype-centered Attentive Learning (PAL) model composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective (one plausible form is sketched after this entry).
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impacts of outliers.
arXiv Detail & Related papers (2021-01-20T11:48:12Z)
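One plausible instantiation of a prototype-centered contrastive loss, in which each class prototype treats same-class queries as positives and all other queries as negatives; the cosine similarity, temperature, and masking are assumptions, not necessarily PAL's exact formulation:

```python
# Prototype-centered contrast: each prototype "classifies" the queries, the
# reverse of the conventional query-centered objective.
import torch
import torch.nn.functional as F

def prototype_centered_loss(protos, query_z, query_y, tau: float = 0.1):
    """protos: (C, D) class prototypes; query_z: (Q, D); query_y: (Q,)."""
    protos = F.normalize(protos, dim=-1)
    query_z = F.normalize(query_z, dim=-1)
    log_p = F.log_softmax(protos @ query_z.t() / tau, dim=1)  # (C, Q)
    # Average log-probability mass each prototype assigns to its own queries.
    pos = query_y.unsqueeze(0) == torch.arange(len(protos)).unsqueeze(1)
    return (-(log_p * pos).sum(1) / pos.sum(1).clamp(min=1)).mean()

# Illustrative 5-way episode with 10 random query embeddings.
protos, query_z = torch.randn(5, 64), torch.randn(10, 64)
query_y = torch.arange(10) % 5
print(prototype_centered_loss(protos, query_z, query_y))
```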
- Expert Training: Task Hardness Aware Meta-Learning for Few-Shot Classification [62.10696018098057]
We propose an easy-to-hard expert meta-training strategy to arrange the training tasks properly (a simplified ordering is sketched after this entry).
A task-hardness-aware module is designed and integrated into the training procedure to estimate the hardness of a task.
Experimental results on the miniImageNet and tieredImageNetSketch datasets show that meta-learners obtain better results with our expert training strategy.
arXiv Detail & Related papers (2020-07-13T08:49:00Z)
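A simplified sketch of the easy-to-hard ordering, using the current model's loss on each task as a stand-in for the paper's learned task-hardness module:

```python
# Score each candidate task by the loss the current model incurs on it, then
# meta-train on tasks sorted from easy to hard. The loss-as-hardness proxy is
# an illustrative assumption, not the paper's learned hardness estimator.
import torch

def task_hardness(model, loss_fn, task) -> float:
    """Higher loss on a task's data = harder task (illustrative proxy)."""
    x, y = task
    with torch.no_grad():
        return loss_fn(model(x), y).item()

def expert_training_order(model, loss_fn, tasks):
    """Arrange tasks from easy to hard for curriculum-style meta-training."""
    return sorted(tasks, key=lambda t: task_hardness(model, loss_fn, t))

# Illustrative setup: a linear model and random classification "tasks".
model = torch.nn.Linear(16, 5)
loss_fn = torch.nn.CrossEntropyLoss()
tasks = [(torch.randn(20, 16), torch.randint(0, 5, (20,))) for _ in range(8)]
for x, y in expert_training_order(model, loss_fn, tasks):
    pass  # meta-train on each task, easiest first
```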
- Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.