CVPR 2020 Continual Learning in Computer Vision Competition: Approaches,
Results, Current Challenges and Future Directions
- URL: http://arxiv.org/abs/2009.09929v1
- Date: Mon, 14 Sep 2020 08:53:05 GMT
- Authors: Vincenzo Lomonaco, Lorenzo Pellegrini, Pau Rodriguez, Massimo Caccia,
Qi She, Yu Chen, Quentin Jodelet, Ruiping Wang, Zheda Mai, David Vazquez,
German I. Parisi, Nikhil Churamani, Marc Pickett, Issam Laradji, Davide
Maltoni
- Abstract summary: The first Continual Learning in Computer Vision challenge, held at CVPR in 2020, was one of the first opportunities to evaluate different continual learning algorithms.
We report the main results of the competition, which drew more than 79 registered teams, 11 finalists, and $2,300 in prizes.
- Score: 25.791936837340877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the last few years, we have witnessed a renewed and fast-growing interest
in continual learning with deep neural networks with the shared objective of
making current AI systems more adaptive, efficient and autonomous. However,
despite the significant and undoubted progress of the field in addressing the
issue of catastrophic forgetting, benchmarking different continual learning
approaches is a difficult task by itself. In fact, given the proliferation of
different settings, training and evaluation protocols, metrics and
nomenclature, it is often tricky to properly characterize a continual learning
algorithm, relate it to other solutions and gauge its real-world applicability.
The first Continual Learning in Computer Vision challenge, held at CVPR in 2020,
was one of the first opportunities to evaluate different continual learning
algorithms on common hardware, with a large set of shared evaluation metrics
and three different settings based on the realistic CORe50 video benchmark.
In this paper, we report the main results of the competition, which drew
more than 79 registered teams, 11 finalists, and $2,300 in prizes. We also
summarize the winning approaches, current challenges and future research
directions.
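To make the evaluation setup concrete, below is a minimal sketch of loading the CORe50 benchmark with the open-source Avalanche continual learning library. Avalanche is not described in this paper, so the library choice and the scenario names ("ni", "nc", "nic") are assumptions for illustration, not the competition's own code.

```python
# Hypothetical sketch: loading CORe50 through the Avalanche library.
# This is an assumed tooling choice for illustration; the paper does
# not prescribe any particular loader.
from avalanche.benchmarks.classic import CORe50

# Avalanche exposes CORe50 scenarios such as "ni" (new instances),
# "nc" (new classes) and "nic" (new instances and classes).
benchmark = CORe50(scenario="nic", run=0)

# Iterate over the stream of training experiences: the batches of
# the video benchmark that arrive sequentially over time.
for experience in benchmark.train_stream:
    print(
        f"Experience {experience.current_experience}: "
        f"{len(experience.dataset)} samples, "
        f"classes {experience.classes_in_this_experience}"
    )
```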
Related papers
- How green is continual learning, really? Analyzing the energy consumption in continual training of vision foundation models [10.192658261639549]
We aim to gain a systematic understanding of the energy efficiency of continual learning algorithms.
We performed experiments on three standard datasets: CIFAR-100, ImageNet-R, and DomainNet.
We propose a novel metric, the Energy NetScore, which we use to measure algorithm efficiency in terms of the energy-accuracy trade-off.
arXiv Detail & Related papers (2024-09-27T11:50:10Z)
- Are we making progress in unlearning? Findings from the first NeurIPS unlearning competition [70.60872754129832]
The first NeurIPS competition on unlearning sought to stimulate the development of novel algorithms.
Nearly 1,200 teams from across the world participated.
We analyze top solutions and delve into discussions on benchmarking unlearning.
arXiv Detail & Related papers (2024-06-13T12:58:00Z)
- Benchmarking Robustness and Generalization in Multi-Agent Systems: A Case Study on Neural MMO [50.58083807719749]
We present the results of the second Neural MMO challenge, hosted at IJCAI 2022, which received 1600+ submissions.
This competition targets robustness and generalization in multi-agent systems.
We will open-source our benchmark, including the environment wrapper, baselines, a visualization tool, and selected policies, for further research.
arXiv Detail & Related papers (2023-08-30T07:16:11Z)
- 3rd Continual Learning Workshop Challenge on Egocentric Category and Instance Level Object Understanding [20.649762891903602]
This paper summarizes the ideas, design choices, rules, and results of the challenge held at the 3rd Continual Learning in Computer Vision (CLVision) Workshop at CVPR 2022.
The focus of this competition is the complex continual object detection task, which is still underexplored in the literature compared to classification tasks.
arXiv Detail & Related papers (2022-12-13T11:51:03Z)
- NEVIS'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision Research [96.53307645791179]
We introduce the Never-Ending VIsual-classification Stream (NEVIS'22), a benchmark consisting of a stream of over 100 visual classification tasks.
Despite being limited to classification, the resulting stream has a rich diversity of tasks, from OCR to texture analysis, scene recognition, and so forth.
Overall, NEVIS'22 poses an unprecedented challenge for current sequential learning approaches due to the scale and diversity of tasks.
arXiv Detail & Related papers (2022-11-15T18:57:46Z)
- CLAD: A realistic Continual Learning benchmark for Autonomous Driving [33.95470797472666]
This paper describes the design and the ideas motivating a new Continual Learning benchmark for Autonomous Driving.
The benchmark uses SODA10M, a recently released large-scale dataset concerned with autonomous driving problems.
We introduce CLAD-C, an online classification benchmark realised through a chronological data stream that poses both class- and domain-incremental challenges.
We examine the inherent difficulties and challenges posed by the benchmark through a survey of the techniques and methods used by the top-3 participants in the CLAD challenge workshop at ICCV 2021.
arXiv Detail & Related papers (2022-10-07T12:08:25Z)
- 2021 BEETL Competition: Advancing Transfer Learning for Subject Independence & Heterogenous EEG Data Sets [89.84774119537087]
We design two transfer learning challenges around diagnostics and Brain-Computer Interfacing (BCI).
Task 1 is centred on medical diagnostics, addressing automatic sleep stage annotation across subjects.
Task 2 is centred on Brain-Computer Interfacing (BCI), addressing motor imagery decoding across both subjects and data sets.
arXiv Detail & Related papers (2022-02-14T12:12:20Z)
- The MineRL 2020 Competition on Sample Efficient Reinforcement Learning using Human Priors [62.9301667732188]
We propose a second iteration of the MineRL Competition.
The primary goal of the competition is to foster the development of algorithms which can efficiently leverage human demonstrations.
The competition is structured into two rounds in which competitors are provided several paired versions of the dataset and environment.
At the end of each round, competitors submit containerized versions of their learning algorithms to the AIcrowd platform.
arXiv Detail & Related papers (2021-01-26T20:32:30Z)
- Analysing Affective Behavior in the First ABAW 2020 Competition [49.90617840789334]
The Affective Behavior Analysis in-the-wild (ABAW) 2020 Competition is the first competition aimed at the automatic analysis of the three main behavior tasks.
We describe this Competition, to be held in conjunction with the IEEE Conference on Face and Gesture Recognition, May 2020, in Buenos Aires, Argentina.
We outline the evaluation metrics, present the baseline system and the top-3 performing teams' methodologies per challenge, and finally present their results.
arXiv Detail & Related papers (2020-01-30T15:41:14Z)