NEVIS'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision
Research
- URL: http://arxiv.org/abs/2211.11747v2
- Date: Tue, 16 May 2023 22:15:39 GMT
- Title: NEVIS'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision
Research
- Authors: Jorg Bornschein, Alexandre Galashov, Ross Hemsley, Amal Rannen-Triki,
Yutian Chen, Arslan Chaudhry, Xu Owen He, Arthur Douillard, Massimo Caccia,
Qixuang Feng, Jiajun Shen, Sylvestre-Alvise Rebuffi, Kitty Stacpoole, Diego
de las Casas, Will Hawkins, Angeliki Lazaridou, Yee Whye Teh, Andrei A. Rusu,
Razvan Pascanu and Marc'Aurelio Ranzato
- Abstract summary: We introduce the Never-Ending VIsual-classification Stream (NEVIS'22), a benchmark consisting of a stream of over 100 visual classification tasks.
Despite being limited to classification, the resulting stream has a rich diversity of tasks from OCR, to texture analysis, scene recognition, and so forth.
Overall, NEVIS'22 poses an unprecedented challenge for current sequential learning approaches due to the scale and diversity of tasks.
- Score: 96.53307645791179
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A shared goal of several machine learning communities like continual
learning, meta-learning and transfer learning, is to design algorithms and
models that efficiently and robustly adapt to unseen tasks. An even more
ambitious goal is to build models that never stop adapting, and that become
increasingly more efficient through time by suitably transferring the accrued
knowledge. Beyond the study of the actual learning algorithm and model
architecture, there are several hurdles towards our quest to build such models,
such as the choice of learning protocol, metric of success and data needed to
validate research hypotheses. In this work, we introduce the Never-Ending
VIsual-classification Stream (NEVIS'22), a benchmark consisting of a stream of
over 100 visual classification tasks, sorted chronologically and extracted from
papers sampled uniformly from computer vision proceedings spanning the last
three decades. The resulting stream reflects what the research community
thought was meaningful at any point in time, and it serves as an ideal test bed
to assess how well models can adapt to new tasks, and do so better and more
efficiently as time goes by. Despite being limited to classification, the
resulting stream has a rich diversity of tasks from OCR, to texture analysis,
scene recognition, and so forth. The diversity is also reflected in the wide
range of dataset sizes, spanning over four orders of magnitude. Overall,
NEVIS'22 poses an unprecedented challenge for current sequential learning
approaches due to the scale and diversity of tasks, yet with a low entry
barrier as it is limited to a single modality and well understood supervised
learning problems. Moreover, we provide a reference implementation including
strong baselines and an evaluation protocol to compare methods in terms of
their trade-off between accuracy and compute.
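The evaluation protocol described above, training sequentially on a chronologically ordered task stream while tracking accuracy against cumulative compute, can be sketched as a minimal loop. The `Task` fields, the toy stream, and the per-example compute proxy below are illustrative assumptions, not the actual NEVIS'22 API or its reference baselines.

```python
# Hedged sketch of a NEVIS'22-style sequential evaluation loop.
# Task names, sizes, and the compute proxy are made-up placeholders.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Task:
    name: str
    year: int            # publication year the task was sampled from
    num_examples: int    # dataset sizes span ~4 orders of magnitude

def evaluate_stream(tasks: List[Task]) -> List[Tuple[str, float, int]]:
    """Visit tasks in chronological order, logging a per-task accuracy
    placeholder and a cumulative compute proxy (examples processed)."""
    results = []
    total_compute = 0
    for task in sorted(tasks, key=lambda t: t.year):
        # Stand-in for actual training; compute scales with data size.
        total_compute += task.num_examples
        accuracy = 0.0  # placeholder: a real learner reports test accuracy
        results.append((task.name, accuracy, total_compute))
    return results

stream = [
    Task("texture-analysis", 1999, 1_000),
    Task("ocr-digits", 1994, 60_000),
    Task("scene-recognition", 2010, 2_500_000),
]
for name, acc, compute in evaluate_stream(stream):
    print(name, acc, compute)
```

A real method would be compared by the curve of per-task accuracy against this cumulative compute, so that approaches transferring knowledge forward become cheaper per task over time.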
Related papers
- A Multitask Deep Learning Model for Classification and Regression of Hyperspectral Images: Application to the large-scale dataset [44.94304541427113]
We propose a multitask deep learning model to perform multiple classification and regression tasks simultaneously on hyperspectral images.
We validated our approach on a large hyperspectral dataset called TAIGA.
A comprehensive qualitative and quantitative analysis of the results shows that the proposed method significantly outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-23T11:14:54Z)
- Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves an average performance increase of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets.
arXiv Detail & Related papers (2024-05-23T08:43:09Z)
- A Shapelet-based Framework for Unsupervised Multivariate Time Series Representation Learning [29.511632089649552]
We propose a novel URL framework for multivariate time series by learning time-series-specific shapelet-based representation.
To the best of our knowledge, this is the first work that explores the shapelet-based embedding in the unsupervised general-purpose representation learning.
A unified shapelet-based encoder and a novel learning objective with multi-grained contrasting and multi-scale alignment are particularly designed to achieve our goal.
arXiv Detail & Related papers (2023-05-30T09:31:57Z)
- Hierarchically Structured Task-Agnostic Continual Learning [0.0]
We take a task-agnostic view of continual learning and develop a hierarchical information-theoretic optimality principle.
We propose a neural network layer, called the Mixture-of-Variational-Experts layer, that alleviates forgetting by creating a set of information processing paths.
Our approach can operate in a task-agnostic way, i.e., it does not require task-specific knowledge, as is the case with many existing continual learning algorithms.
arXiv Detail & Related papers (2022-11-14T19:53:15Z)
- An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems [4.675744559395732]
Multitask learning assumes that models capable of learning from multiple tasks can achieve better quality and efficiency via knowledge transfer.
State of the art ML models rely on high customization for each task and leverage size and data scale rather than scaling the number of tasks.
We propose an evolutionary method that can generate a large scale multitask model and can support the dynamic and continuous addition of new tasks.
arXiv Detail & Related papers (2022-05-25T13:10:47Z)
- Continual Object Detection via Prototypical Task Correlation Guided Gating Mechanism [120.1998866178014]
We present a flexible framework for continual object detection via pRotOtypical taSk corrElaTion guided gaTing mechAnism (ROSETTA).
Concretely, a unified framework is shared by all tasks while task-aware gates are introduced to automatically select sub-models for specific tasks.
Experiments on COCO-VOC, KITTI-Kitchen, class-incremental detection on VOC and sequential learning of four tasks show that ROSETTA yields state-of-the-art performance.
arXiv Detail & Related papers (2022-05-06T07:31:28Z)
- What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks? [59.51457877578138]
We study contrastive learning on the wearable-based activity recognition task.
This paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers.
arXiv Detail & Related papers (2022-02-12T06:10:15Z)
- The Effect of Diversity in Meta-Learning [79.56118674435844]
Few-shot learning aims to learn representations that can tackle novel tasks given a small number of examples.
Recent studies show that task distribution plays a vital role in the model's performance.
We study different task distributions on a myriad of models and datasets to evaluate the effect of task diversity on meta-learning algorithms.
arXiv Detail & Related papers (2022-01-27T19:39:07Z)
- A System for Real-Time Interactive Analysis of Deep Learning Training [66.06880335222529]
Currently available systems are limited to monitoring only the logged data that must be specified before the training process starts.
We present a new system that enables users to perform interactive queries on live processes generating real-time information.
arXiv Detail & Related papers (2020-01-05T11:33:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.