NeurIPS'22 Cross-Domain MetaDL competition: Design and baseline results
- URL: http://arxiv.org/abs/2208.14686v1
- Date: Wed, 31 Aug 2022 08:31:02 GMT
- Title: NeurIPS'22 Cross-Domain MetaDL competition: Design and baseline results
- Authors: Dustin Carrión-Ojeda (LISN, TAU), Hong Chen (CST), Adrian El Baz,
Sergio Escalera (CVC), Chaoyu Guan (CST), Isabelle Guyon (LISN, TAU), Ihsan
Ullah (LISN, TAU), Xin Wang (CST), Wenwu Zhu (CST)
- Abstract summary: We present the design and baseline results for a new challenge in the ChaLearn meta-learning series, accepted at NeurIPS'22.
This competition challenges the participants to solve "any-way" and "any-shot" problems drawn from various domains.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present the design and baseline results for a new challenge in the
ChaLearn meta-learning series, accepted at NeurIPS'22, focusing on
"cross-domain" meta-learning. Meta-learning aims to leverage experience gained
from previous tasks to solve new tasks efficiently (i.e., with better
performance, little training data, and/or modest computational resources).
While previous challenges in the series focused on within-domain few-shot
learning problems, with the aim of learning efficiently N-way k-shot tasks
(i.e., N class classification problems with k training examples), this
competition challenges the participants to solve "any-way" and "any-shot"
problems drawn from various domains (healthcare, ecology, biology,
manufacturing, and others), chosen for their humanitarian and societal impact.
To that end, we created Meta-Album, a meta-dataset of 40 image classification
datasets from 10 domains, from which we carve out tasks with any number of
"ways" (within the range 2-20) and any number of "shots" (within the range
1-20). The competition is with code submission, fully blind-tested on the
CodaLab challenge platform. The code of the winners will be open-sourced,
enabling the deployment of automated machine learning solutions for few-shot
image classification across several domains.
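The any-way/any-shot episode construction described above can be sketched in a few lines. This is an illustrative assumption about the sampling protocol, not the competition's actual API: the function name and the dict-of-lists dataset interface are hypothetical.

```python
import random

def sample_any_way_any_shot_task(dataset, min_ways=2, max_ways=20,
                                 min_shots=1, max_shots=20, n_query=5):
    """Draw one episode with a random number of ways and shots.

    `dataset` is assumed to map class label -> list of examples
    (hypothetical interface, not the competition code).
    """
    n_ways = random.randint(min_ways, max_ways)    # "any-way": 2-20
    k_shots = random.randint(min_shots, max_shots)  # "any-shot": 1-20
    classes = random.sample(list(dataset), n_ways)
    support, query = [], []
    for episode_label, cls in enumerate(classes):
        # Draw disjoint support and query examples for this class
        picks = random.sample(dataset[cls], k_shots + n_query)
        support += [(x, episode_label) for x in picks[:k_shots]]
        query += [(x, episode_label) for x in picks[k_shots:]]
    return support, query
```

A solver is then scored on the query set after adapting to the support set, with the number of ways and shots unknown in advance.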
Related papers
- Learning Site-specific Styles for Multi-institutional Unsupervised
Cross-modality Domain Adaptation [7.282377515210211]
We present our solution to tackle the multi-institutional unsupervised domain adaptation for the crossMoDA 2023 challenge.
Our solution achieved the 1st place during both the validation and testing phases of the challenge.
arXiv Detail & Related papers (2023-11-21T08:47:08Z)
- Unsupervised Meta-Learning via Few-shot Pseudo-supervised Contrastive
Learning [72.3506897990639]
We propose a simple yet effective unsupervised meta-learning framework, coined Pseudo-supervised Contrast (PsCo) for few-shot classification.
PsCo outperforms existing unsupervised meta-learning methods under various in-domain and cross-domain few-shot classification benchmarks.
arXiv Detail & Related papers (2023-03-02T06:10:13Z)
- NEVIS'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision
Research [96.53307645791179]
We introduce the Never-Ending VIsual-classification Stream (NEVIS'22), a benchmark consisting of a stream of over 100 visual classification tasks.
Despite being limited to classification, the resulting stream has a rich diversity of tasks from OCR, to texture analysis, scene recognition, and so forth.
Overall, NEVIS'22 poses an unprecedented challenge for current sequential learning approaches due to the scale and diversity of tasks.
arXiv Detail & Related papers (2022-11-15T18:57:46Z)
- Extending nnU-Net is all you need [2.1729722043371016]
We use nnU-Net to participate in the AMOS2022 challenge, which comes with a unique set of tasks.
The dataset is one of the largest ever created and boasts 15 target structures.
Our final ensemble achieves Dice scores of 90.13 for Task 1 (CT) and 89.06 for Task 2 (CT+MRI) in a 5-fold cross-validation.
arXiv Detail & Related papers (2022-08-23T07:54:29Z)
- Continual Prune-and-Select: Class-incremental learning with specialized
subnetworks [66.4795381419701]
Continual-Prune-and-Select (CP&S) sequentially learns 10 tasks from ImageNet-1000 while maintaining an accuracy of around 94% with negligible forgetting.
This is a first-of-its-kind result in class-incremental learning.
arXiv Detail & Related papers (2022-08-09T10:49:40Z)
- Lessons learned from the NeurIPS 2021 MetaDL challenge: Backbone
fine-tuning without episodic meta-learning dominates for few-shot learning
image classification [40.901760230639496]
We describe the design of the MetaDL competition series, the datasets, the best experimental results, and the top-ranked methods in the NeurIPS 2021 challenge.
The solutions of the top participants have been open-sourced.
arXiv Detail & Related papers (2022-06-15T10:27:23Z)
- Advances in MetaDL: AAAI 2021 challenge and workshop [0.0]
This paper presents the design of the challenge and its results, and summarizes the presentations made at the workshop.
The challenge focused on few-shot learning classification tasks of small images.
Winning methods featured various classifiers trained on top of the second-to-last layer of popular CNN backbones, which were fine-tuned on the meta-training data; the classifiers were then trained on the labeled support sets and tested on the unlabeled query sets of the meta-test data.
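One concrete instance of the "classifier on top of backbone features" recipe is a nearest-centroid classifier over penultimate-layer embeddings. The sketch below is an assumption for illustration only: it presumes features have already been extracted by some fine-tuned backbone, and the function name is hypothetical, not code from any winning entry.

```python
import numpy as np

def nearest_centroid_predict(support_feats, support_labels, query_feats):
    """Classify query embeddings by nearest class centroid.

    A minimal stand-in for the support/query step: `support_feats` and
    `query_feats` are assumed to be penultimate-layer embeddings from a
    (hypothetical) fine-tuned CNN backbone, one row per example.
    """
    classes = np.unique(support_labels)
    # Mean embedding per class over the labeled support set
    centroids = np.stack([support_feats[support_labels == c].mean(axis=0)
                          for c in classes])
    # Euclidean distance from each query embedding to each centroid
    dists = np.linalg.norm(query_feats[:, None, :] - centroids[None, :, :],
                           axis=-1)
    return classes[dists.argmin(axis=1)]
```

Any lightweight classifier (logistic regression, cosine similarity) could be swapped in at this stage; the point of the finding is that episodic meta-learning was not needed on top of a well-tuned backbone.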
arXiv Detail & Related papers (2022-02-01T07:46:36Z)
- Winning solutions and post-challenge analyses of the ChaLearn AutoDL
challenge 2019 [112.36155380260655]
This paper reports the results and post-challenge analyses of ChaLearn's AutoDL challenge series.
Results show that DL methods dominated, though popular Neural Architecture Search (NAS) was impractical.
A high-level modular organization emerged featuring a "meta-learner", "data ingestor", "model selector", "model/learner", and "evaluator".
arXiv Detail & Related papers (2022-01-11T06:21:18Z)
- Cross-Domain Few-Shot Classification via Adversarial Task Augmentation [16.112554109446204]
Few-shot classification aims to recognize unseen classes with few labeled samples from each class.
Many meta-learning models for few-shot classification elaborately design various task-shared inductive bias (meta-knowledge) to solve such tasks.
In this work, we aim to improve the robustness of the inductive bias through task augmentation.
arXiv Detail & Related papers (2021-04-29T14:51:53Z)
- LID 2020: The Learning from Imperfect Data Challenge Results [242.86700551532272]
Learning from Imperfect Data workshop aims to inspire and facilitate the research in developing novel approaches.
We organize three challenges to find the state-of-the-art approaches in weakly supervised learning setting.
This technical report summarizes the highlights from the challenge.
arXiv Detail & Related papers (2020-10-17T13:06:12Z)
- Expert Training: Task Hardness Aware Meta-Learning for Few-Shot
Classification [62.10696018098057]
We propose an easy-to-hard expert meta-training strategy to arrange the training tasks properly.
A task hardness aware module is designed and integrated into the training procedure to estimate the hardness of a task.
Experimental results on the miniImageNet and tieredImageNetSketch datasets show that the meta-learners can obtain better results with our expert training strategy.
arXiv Detail & Related papers (2020-07-13T08:49:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.