Avalanche: an End-to-End Library for Continual Learning
- URL: http://arxiv.org/abs/2104.00405v1
- Date: Thu, 1 Apr 2021 11:31:46 GMT
- Title: Avalanche: an End-to-End Library for Continual Learning
- Authors: Vincenzo Lomonaco, Lorenzo Pellegrini, Andrea Cossu, Antonio Carta,
Gabriele Graffieti, Tyler L. Hayes, Matthias De Lange, Marc Masana, Jary
Pomponi, Gido van de Ven, Martin Mundt, Qi She, Keiland Cooper, Jeremy
Forest, Eden Belouadah, Simone Calderara, German I. Parisi, Fabio Cuzzolin,
Andreas Tolias, Simone Scardapane, Luca Antiga, Subutai Ahmad, Adrian
Popescu, Christopher Kanan, Joost van de Weijer, Tinne Tuytelaars, Davide
Bacciu, Davide Maltoni
- Abstract summary: We propose Avalanche, an open-source library for continual learning research based on PyTorch.
Avalanche is designed to provide a shared and collaborative codebase for fast prototyping, training, and reproducible evaluation of continual learning algorithms.
- Score: 81.84325803942811
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning continually from non-stationary data streams is a long-standing goal
and a challenging problem in machine learning. Recently, we have witnessed a
renewed and fast-growing interest in continual learning, especially within the
deep learning community. However, algorithmic solutions are often difficult to
re-implement, evaluate and port across different settings, where even results
on standard benchmarks are hard to reproduce. In this work, we propose
Avalanche, an open-source end-to-end library for continual learning research
based on PyTorch. Avalanche is designed to provide a shared and collaborative
codebase for fast prototyping, training, and reproducible evaluation of
continual learning algorithms.
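The library itself is not reproduced here, but the problem it targets can be sketched in plain Python: a model is trained on a sequence of "experiences" without revisiting past data, and naive sequential training forgets earlier tasks. Everything below (the stream, the model, the names) is a toy illustration, not Avalanche's actual API.

```python
# Toy sketch of catastrophic forgetting under naive sequential training.
# All names here are illustrative; they are not Avalanche's API.

def make_stream(num_experiences=3, per_class=4):
    # experience t contains only examples of class t
    return [[(i, t) for i in range(per_class)] for t in range(num_experiences)]

class LastTaskModel:
    """A deliberately forgetful 'model': it only retains the labels
    from the most recent experience, mimicking catastrophic forgetting."""
    def __init__(self):
        self.known = set()

    def fit(self, experience):
        self.known = {y for _, y in experience}  # overwrites earlier knowledge

    def accuracy(self, experience):
        return sum(1 for _, y in experience if y in self.known) / len(experience)

stream = make_stream()
model = LastTaskModel()
for experience in stream:        # train sequentially, never revisiting past data
    model.fit(experience)

# After the whole stream, only the final task is remembered.
accs = [model.accuracy(exp) for exp in stream]
print(accs)  # [0.0, 0.0, 1.0]: earlier tasks are forgotten
```

Continual-learning libraries exist to make this protocol (streams of experiences, sequential training, per-experience evaluation) reusable and reproducible rather than re-implemented ad hoc.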
Related papers
- Continual Learning with Deep Streaming Regularized Discriminant Analysis [0.0]
We propose a streaming version of regularized discriminant analysis as a solution to the continual learning challenge.
We combine our algorithm with a convolutional neural network and demonstrate that it outperforms both batch learning and existing streaming learning algorithms.
arXiv Detail & Related papers (2023-09-15T12:25:42Z)
- A Comprehensive Empirical Evaluation on Online Continual Learning [20.39495058720296]
We evaluate methods from the literature that tackle online continual learning.
We focus on the class-incremental setting in the context of image classification.
We compare these methods on the Split-CIFAR100 and Split-TinyImageNet benchmarks.
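"Split" benchmarks such as Split-CIFAR100 construct a class-incremental stream by partitioning a dataset's label set into disjoint tasks. A minimal sketch of that partitioning, with bare integer class ids standing in for the real dataset:

```python
# Sketch of how a "Split" class-incremental benchmark partitions class
# labels into disjoint tasks (e.g. Split-CIFAR100: 100 classes into
# 10 tasks of 10 classes each). Class shuffling and actual dataset
# loading are omitted; class ids are just integers here.

def split_classes(num_classes, num_tasks):
    assert num_classes % num_tasks == 0, "classes must divide evenly"
    per_task = num_classes // num_tasks
    return [list(range(t * per_task, (t + 1) * per_task))
            for t in range(num_tasks)]

tasks = split_classes(100, 10)
print(tasks[0])  # [0, 1, ..., 9]
print(tasks[9])  # [90, 91, ..., 99]
```

In the class-incremental setting the learner sees these tasks one at a time and, at test time, must classify over the union of all classes seen so far without being told which task an example came from.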
arXiv Detail & Related papers (2023-08-20T17:52:02Z)
- Katakomba: Tools and Benchmarks for Data-Driven NetHack [52.0035089982277]
NetHack is known as the frontier of reinforcement learning research.
We argue that there are three major obstacles for adoption: resource-wise, implementation-wise, and benchmark-wise.
We develop an open-source library that provides workflow fundamentals familiar to the offline reinforcement learning community.
arXiv Detail & Related papers (2023-06-14T22:50:25Z)
- SequeL: A Continual Learning Library in PyTorch and JAX [50.33956216274694]
SequeL is a library for Continual Learning that supports both PyTorch and JAX frameworks.
It provides a unified interface for a wide range of Continual Learning algorithms, including regularization-based approaches, replay-based approaches, and hybrid approaches.
We release SequeL as an open-source library, enabling researchers and developers to easily experiment and extend the library for their own purposes.
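A core building block of the replay-based approaches mentioned above is a fixed-size memory of past examples. One standard way to maintain it is reservoir sampling, sketched below; this is a generic technique, not SequeL's specific implementation.

```python
import random

class ReservoirBuffer:
    """Fixed-size replay buffer via reservoir sampling, a common
    ingredient of replay-based continual learning: every item seen
    so far has an equal chance of being in the buffer."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(item)          # fill phase
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:           # replace with prob capacity/seen
                self.data[j] = item

    def sample(self, k):
        return self.rng.sample(self.data, min(k, len(self.data)))

buf = ReservoirBuffer(capacity=50)
for x in range(1000):
    buf.add(x)
print(len(buf.data))  # 50: the buffer never grows past capacity
```

During training, each new batch is typically mixed with a sample drawn from the buffer so that gradients also reflect earlier tasks.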
arXiv Detail & Related papers (2023-04-21T10:00:22Z)
- Avalanche: A PyTorch Library for Deep Continual Learning [12.238684710313168]
Continual learning is the problem of learning from a nonstationary stream of data.
Avalanche is an open source library maintained by the ContinualAI non-profit organization.
arXiv Detail & Related papers (2023-02-02T10:45:20Z)
- NEVIS'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision Research [96.53307645791179]
We introduce the Never-Ending VIsual-classification Stream (NEVIS'22), a benchmark consisting of a stream of over 100 visual classification tasks.
Despite being limited to classification, the resulting stream has a rich diversity of tasks from OCR, to texture analysis, scene recognition, and so forth.
Overall, NEVIS'22 poses an unprecedented challenge for current sequential learning approaches due to the scale and diversity of tasks.
arXiv Detail & Related papers (2022-11-15T18:57:46Z)
- Online Continual Learning with Natural Distribution Shifts: An Empirical Study with Visual Data [101.6195176510611]
"Online" continual learning enables evaluating both information retention and online learning efficacy.
In online continual learning, each incoming small batch of data is first used for testing and then added to the training set, making the problem truly online.
We introduce a new benchmark for online continual visual learning that exhibits large scale and natural distribution shifts.
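The test-then-train protocol described above is simple to state precisely: each incoming batch is first scored by the current model and only then used for training. A toy sketch, with a trivial memorizing predictor standing in for a real model:

```python
# Sketch of the "test-then-train" online protocol: each incoming
# batch is evaluated before it is trained on. The stream and the
# predictor here are toy stand-ins, not the benchmark's actual data.

def online_protocol(stream, predict, update):
    """Evaluate each batch before training on it; return per-batch accuracy."""
    accuracies = []
    for batch in stream:
        correct = sum(1 for x, y in batch if predict(x) == y)
        accuracies.append(correct / len(batch))
        for x, y in batch:      # only now does the batch join the training data
            update(x, y)
    return accuracies

# Toy predictor: remembers the last label seen for each input.
memory = {}
def predict(x): return memory.get(x)
def update(x, y): memory[x] = y

stream = [[(0, "a"), (1, "b")], [(0, "a"), (1, "b")]]
accs = online_protocol(stream, predict, update)
print(accs)  # [0.0, 1.0]: unseen on the first pass, remembered on the second
```

Because every example is scored before the model has trained on it, the accuracy curve measures genuine online generalization rather than memorization.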
arXiv Detail & Related papers (2021-08-20T06:17:20Z)
- Bayesian active learning for production, a systematic study and a reusable library [85.32971950095742]
In this paper, we analyse the main drawbacks of current active learning techniques.
We do a systematic study on the effects of the most common issues of real-world datasets on the deep active learning process.
We derive two techniques that can speed up the active learning loop such as partial uncertainty sampling and larger query size.
arXiv Detail & Related papers (2020-06-17T14:51:11Z)
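The active-learning loop element mentioned in the last entry, uncertainty sampling with a configurable query size, can be sketched generically: rank unlabeled pool items by how uncertain the model is and query the top few. The scores below are synthetic, and this is a generic sketch rather than the paper's "partial uncertainty sampling" technique.

```python
# Generic sketch of uncertainty sampling with a configurable query
# size. For a binary classifier, a predicted probability near 0.5 is
# the least certain; the pool probabilities here are synthetic.

def select_queries(pool_probs, query_size):
    """Return the `query_size` pool indices whose predicted probability
    is closest to 0.5, i.e. where the model is least certain."""
    ranked = sorted(range(len(pool_probs)),
                    key=lambda i: abs(pool_probs[i] - 0.5))
    return ranked[:query_size]

probs = [0.95, 0.52, 0.10, 0.49, 0.80]
print(select_queries(probs, 2))  # [3, 1]: the two probabilities nearest 0.5
```

A larger `query_size` amortizes the cost of retraining between queries, which is one of the speed-ups the paper above studies.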
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.