Lessons learned from the NeurIPS 2021 MetaDL challenge: Backbone
fine-tuning without episodic meta-learning dominates for few-shot learning
image classification
- URL: http://arxiv.org/abs/2206.08138v1
- Date: Wed, 15 Jun 2022 10:27:23 GMT
- Title: Lessons learned from the NeurIPS 2021 MetaDL challenge: Backbone
fine-tuning without episodic meta-learning dominates for few-shot learning
image classification
- Authors: Adrian El Baz, André Carvalho, Hong Chen, Fabio Ferreira, Henry
Gouk, Shell Hu, Frank Hutter, Zhengying Liu, Felix Mohr, Jan van Rijn, Xin
Wang, Isabelle Guyon (TAU, LISN)
- Abstract summary: We describe the design of the MetaDL competition series, the datasets, the best experimental results, and the top-ranked methods in the NeurIPS 2021 challenge.
The solutions of the top participants have been open-sourced.
- Score: 40.901760230639496
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although deep neural networks are capable of achieving performance superior
to humans on various tasks, they are notorious for requiring large amounts of
data and computing resources, restricting their success to domains where such
resources are available. Meta-learning methods can address this problem by
transferring knowledge from related tasks, thus reducing the amount of data and
computing resources needed to learn new tasks. We organize the MetaDL
competition series, which provides opportunities for research groups all over
the world to create and experimentally assess new meta-(deep)learning solutions
for real problems. In this paper, authored collaboratively by the
competition organizers and the top-ranked participants, we describe the design
of the competition, the datasets, the best experimental results, as well as the
top-ranked methods in the NeurIPS 2021 challenge, which attracted 15 active
teams who made it to the final phase (by outperforming the baseline), making
over 100 code submissions during the feedback phase. The solutions of the top
participants have been open-sourced. The lessons learned include that learning
good representations is essential for effective transfer learning.
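
The central lesson behind the title can be made concrete. Below is a minimal sketch of the recipe that dominated the challenge: fine-tune a pre-trained backbone on the meta-training data with plain supervised learning (no episodic meta-learning), then solve each few-shot task by fitting a lightweight classifier on frozen penultimate-layer features. The specific choices here (a torchvision ResNet-18, a hypothetical 64-class meta-training split, a nearest-centroid classifier) are illustrative assumptions; the actual top-ranked, open-sourced solutions used a variety of backbones and classifiers.

```python
import torch
import torch.nn as nn
import torchvision

# Illustrative assumptions: an ImageNet-pretrained ResNet-18 backbone and a
# hypothetical 64-class meta-training split (real splits differ per dataset).
backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
num_meta_train_classes = 64
backbone.fc = nn.Linear(backbone.fc.in_features, num_meta_train_classes)

optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()


def fine_tune(meta_train_loader, epochs=5):
    """Step 1: plain supervised fine-tuning on meta-training data (no episodes)."""
    backbone.train()
    for _ in range(epochs):
        for images, labels in meta_train_loader:
            optimizer.zero_grad()
            loss = criterion(backbone(images), labels)
            loss.backward()
            optimizer.step()


@torch.no_grad()
def embed(images):
    """Penultimate-layer features: the backbone with its classification head removed."""
    backbone.eval()
    feature_extractor = nn.Sequential(*list(backbone.children())[:-1])
    return feature_extractor(images).flatten(1)  # shape (N, 512) for ResNet-18


@torch.no_grad()
def solve_task(support_x, support_y, query_x):
    """Step 2: per-task nearest-centroid classifier on frozen features."""
    support_feats = embed(support_x)
    query_feats = embed(query_x)
    classes = support_y.unique()
    centroids = torch.stack(
        [support_feats[support_y == c].mean(dim=0) for c in classes]
    )
    distances = torch.cdist(query_feats, centroids)  # (n_query, n_way)
    return classes[distances.argmin(dim=1)]          # predicted query labels
```

The point of the sketch is that the support/query split only appears in the final, cheap per-task step; the heavy lifting is ordinary transfer learning aimed at producing good representations.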
Related papers
- Reinforcement Learning Based Multi-modal Feature Fusion Network for
Novel Class Discovery [47.28191501836041]
In this paper, we employ a Reinforcement Learning framework to simulate the cognitive processes of humans.
We also deploy a Member-to-Leader Multi-Agent framework to extract and fuse features from multi-modal information.
We demonstrate the performance of our approach in both the 3D and 2D domains by employing the OS-MN40, OS-MN40-Miss, and Cifar10 datasets.
arXiv Detail & Related papers (2023-08-26T07:55:32Z)
- NLP-LTU at SemEval-2023 Task 10: The Impact of Data Augmentation and
Semi-Supervised Learning Techniques on Text Classification Performance on an
Imbalanced Dataset [1.3445335428144554]
We propose a methodology for task 10 of SemEval23, focusing on detecting and classifying online sexism in social media posts.
Our solution for this task is based on an ensemble of fine-tuned transformer-based models.
arXiv Detail & Related papers (2023-04-25T14:19:46Z)
- NEVIS'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision
Research [96.53307645791179]
We introduce the Never-Ending VIsual-classification Stream (NEVIS'22), a benchmark consisting of a stream of over 100 visual classification tasks.
Despite being limited to classification, the resulting stream has a rich diversity of tasks, from OCR to texture analysis, scene recognition, and so forth.
Overall, NEVIS'22 poses an unprecedented challenge for current sequential learning approaches due to the scale and diversity of tasks.
arXiv Detail & Related papers (2022-11-15T18:57:46Z)
- NeurIPS'22 Cross-Domain MetaDL competition: Design and baseline results [0.0]
We present the design and baseline results for a new challenge in the ChaLearn meta-learning series, accepted at NeurIPS'22.
This competition challenges the participants to solve "any-way" and "any-shot" problems drawn from various domains.
arXiv Detail & Related papers (2022-08-31T08:31:02Z)
- Continual Prune-and-Select: Class-incremental learning with specialized
subnetworks [66.4795381419701]
Continual-Prune-and-Select (CP&S) is capable of sequentially learning 10 tasks from ImageNet-1000 while keeping an accuracy of around 94% with negligible forgetting.
This is a first-of-its-kind result in class-incremental learning.
arXiv Detail & Related papers (2022-08-09T10:49:40Z)
- Retrospective on the 2021 BASALT Competition on Learning from Human
Feedback [92.37243979045817]
The goal of the competition was to promote research towards agents that use learning from human feedback (LfHF) techniques to solve open-world tasks.
Rather than mandating the use of LfHF techniques, we described four tasks in natural language to be accomplished in the video game Minecraft.
Teams developed a diverse range of LfHF algorithms across a variety of possible human feedback types.
arXiv Detail & Related papers (2022-04-14T17:24:54Z)
- Advances in MetaDL: AAAI 2021 challenge and workshop [0.0]
This paper presents the design of the challenge and its results, and summarizes the presentations made at the workshop.
The challenge focused on few-shot learning classification tasks of small images.
Winning methods featured various classifiers trained on top of the second-to-last layer of popular CNN backbones, fine-tuned on the meta-training data, then trained on the labeled support sets and tested on the unlabeled query sets of the meta-test data.
arXiv Detail & Related papers (2022-02-01T07:46:36Z)
- Winning solutions and post-challenge analyses of the ChaLearn AutoDL
challenge 2019 [112.36155380260655]
This paper reports the results and post-challenge analyses of ChaLearn's AutoDL challenge series.
Results show that DL methods dominated, though popular Neural Architecture Search (NAS) was impractical.
A high-level modular organization emerged, featuring a "meta-learner", "data ingestor", "model selector", "model/learner", and "evaluator".
arXiv Detail & Related papers (2022-01-11T06:21:18Z)
- ZeroVL: A Strong Baseline for Aligning Vision-Language Representations
with Limited Resources [13.30815073857842]
We provide comprehensive training guidance, which allows us to conduct dual-encoder multi-modal representation alignment with limited resources.
We collect 100M web data for pre-training and achieve results comparable or superior to state-of-the-art methods.
Our code and pre-trained models will be released to facilitate the research community.
arXiv Detail & Related papers (2021-12-17T05:40:28Z)
- The MineRL 2020 Competition on Sample Efficient Reinforcement Learning
using Human Priors [62.9301667732188]
We propose a second iteration of the MineRL Competition.
The primary goal of the competition is to foster the development of algorithms which can efficiently leverage human demonstrations.
The competition is structured into two rounds in which competitors are provided several paired versions of the dataset and environment.
At the end of each round, competitors submit containerized versions of their learning algorithms to the AIcrowd platform.
arXiv Detail & Related papers (2021-01-26T20:32:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.