vCLIMB: A Novel Video Class Incremental Learning Benchmark
- URL: http://arxiv.org/abs/2201.09381v1
- Date: Sun, 23 Jan 2022 22:14:17 GMT
- Title: vCLIMB: A Novel Video Class Incremental Learning Benchmark
- Authors: Andrés Villa, Kumail Alhamoud, Juan León Alcázar, Fabian Caba Heilbron, Victor Escorcia and Bernard Ghanem
- Abstract summary: We introduce vCLIMB, a novel video continual learning benchmark.
vCLIMB is a standardized test-bed to analyze catastrophic forgetting of deep models in video continual learning.
We propose a temporal consistency regularization that can be applied on top of memory-based continual learning methods.
- Score: 53.90485760679411
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continual learning (CL) is under-explored in the video domain. The few
existing works contain splits with imbalanced class distributions over the
tasks, or study the problem in unsuitable datasets. We introduce vCLIMB, a
novel video continual learning benchmark. vCLIMB is a standardized test-bed to
analyze catastrophic forgetting of deep models in video continual learning. In
contrast to previous work, we focus on class incremental continual learning
with models trained on a sequence of disjoint tasks, and distribute the number
of classes uniformly across the tasks. We perform in-depth evaluations of
existing CL methods in vCLIMB, and observe two unique challenges in video data.
First, the selection of instances to store in episodic memory is performed at the
frame level. Second, untrimmed training data influences the effectiveness of
frame sampling strategies. We address these two challenges by proposing a
temporal consistency regularization that can be applied on top of memory-based
continual learning methods. Our approach significantly improves the baseline
by up to 24% on the untrimmed continual learning task. To streamline and foster
future research in video continual learning, we will publicly release the code
for our benchmark and method.
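As an illustration of the proposed idea, a temporal consistency regularizer of this kind can be written as a penalty on feature drift between consecutive frames, added to whatever memory-based CL objective is in use. The sketch below is a minimal PyTorch version under our own assumptions (the `backbone`, the batch layout, and the weight `lam` are hypothetical, not the authors' released code):

```python
# Minimal sketch (not the authors' released code): a temporal consistency
# regularizer applied on top of a memory-based continual learning loss.
import torch


def temporal_consistency_loss(frame_features: torch.Tensor) -> torch.Tensor:
    """frame_features: (batch, time, dim) per-frame features of each video.

    Penalizes feature drift between consecutive frames so the few frames
    kept in episodic memory remain representative of the whole clip.
    """
    diffs = frame_features[:, 1:] - frame_features[:, :-1]  # (B, T-1, D)
    return diffs.pow(2).mean()


def training_step(backbone, classifier, frames, labels, base_loss_fn, lam=0.1):
    # frames: (B, T, C, H, W); lam is a hypothetical trade-off weight
    b, t = frames.shape[:2]
    feats = backbone(frames.flatten(0, 1)).view(b, t, -1)  # (B, T, D)
    logits = classifier(feats.mean(dim=1))  # average-pool features over time
    loss = base_loss_fn(logits, labels)     # e.g. an iCaRL/BiC-style objective
    return loss + lam * temporal_consistency_loss(feats)
```

Keeping nearby frames close in feature space makes the sampled frames stored in memory better proxies for the full video, which is the failure mode the abstract points at for frame-level memory selection.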
Related papers
- Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC)
ARC achieves an average performance increase of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets, respectively.
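The blurb does not spell out ARC's procedure; a common baseline remedy for the same recency bias is a post-hoc rescaling of the newest task's logits, sketched below purely as an illustration (this is not the ARC method; `new_class_ids` and `alpha` are hypothetical):

```python
# Generic illustration of recency-bias correction (NOT the ARC method):
# shrink the logits of the newest task's classes at inference so that old
# classes compete fairly. `new_class_ids` and `alpha` are hypothetical.
import torch


def correct_recency_bias(logits: torch.Tensor, new_class_ids, alpha: float = 0.8):
    """logits: (batch, num_classes); alpha < 1 downscales new-task logits."""
    corrected = logits.clone()
    corrected[:, new_class_ids] *= alpha
    return corrected.argmax(dim=1)
```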
arXiv Detail & Related papers (2024-05-23T08:43:09Z)
- Learning from One Continuous Video Stream [70.30084026960819]
We introduce a framework for online learning from a single continuous video stream.
This poses great challenges given the high correlation between consecutive video frames.
We employ pixel-to-pixel modelling as a practical and flexible way to switch between pre-training and single-stream evaluation.
arXiv Detail & Related papers (2023-12-01T14:03:30Z)
- A Comprehensive Empirical Evaluation on Online Continual Learning [20.39495058720296]
We evaluate methods from the literature that tackle online continual learning.
We focus on the class-incremental setting in the context of image classification.
We compare these methods on the Split-CIFAR100 and Split-TinyImagenet benchmarks.
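For reference, a Split-CIFAR100 class-incremental benchmark is typically built by partitioning the 100 classes into disjoint, equally sized tasks; a minimal sketch using torchvision (the 10-task split and fixed seed are assumptions):

```python
# Sketch of a Split-CIFAR100 class-incremental split: 100 classes divided
# into disjoint, equal-size tasks. The 10-task split and seed are assumptions.
import numpy as np
from torchvision.datasets import CIFAR100


def make_split_cifar100(root="./data", num_tasks=10, seed=0):
    train = CIFAR100(root, train=True, download=True)
    rng = np.random.default_rng(seed)
    class_order = rng.permutation(100)                    # shuffled class order
    task_classes = np.array_split(class_order, num_tasks) # disjoint class sets
    targets = np.asarray(train.targets)
    # one array of sample indices per task
    tasks = [np.flatnonzero(np.isin(targets, c)) for c in task_classes]
    return tasks, task_classes
```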
arXiv Detail & Related papers (2023-08-20T17:52:02Z)
- Class-Incremental Learning: A Survey [84.30083092434938]
Class-Incremental Learning (CIL) enables the learner to incorporate the knowledge of new classes incrementally.
In CIL, the learner tends to catastrophically forget the characteristics of former classes, and its performance drastically degrades.
We provide a rigorous and unified evaluation of 17 methods in benchmark image classification tasks to find out the characteristics of different algorithms.
arXiv Detail & Related papers (2023-02-07T17:59:05Z)
- Learning to Prompt for Continual Learning [34.609384246149325]
This work presents a new paradigm for continual learning that aims to train a more succinct memory system without accessing task identity at test time.
Our method learns to dynamically prompt (L2P) a pre-trained model to learn tasks sequentially under different task transitions.
The objective is to optimize prompts to instruct the model prediction and explicitly manage task-invariant and task-specific knowledge while maintaining model plasticity.
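Reading the abstract loosely, the prompt pool can be sketched as a set of learnable (key, prompt) pairs queried by a frozen pre-trained feature; this is a simplified illustration, not the official L2P implementation:

```python
# Simplified reading of an L2P-style prompt pool (not the official code):
# a frozen pre-trained feature acts as a query that selects the closest
# learnable prompts, which are then prepended to the input tokens.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PromptPool(nn.Module):
    def __init__(self, pool_size=10, prompt_len=5, dim=768, top_k=3):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(pool_size, dim))
        self.prompts = nn.Parameter(torch.randn(pool_size, prompt_len, dim))
        self.top_k = top_k

    def forward(self, query: torch.Tensor) -> torch.Tensor:
        # query: (B, D) feature from the frozen pre-trained encoder
        sim = F.cosine_similarity(query[:, None], self.keys[None], dim=-1)
        idx = sim.topk(self.top_k, dim=-1).indices   # (B, top_k) prompt ids
        chosen = self.prompts[idx]                   # (B, top_k, L, D)
        return chosen.flatten(1, 2)                  # prepend to token sequence
```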
arXiv Detail & Related papers (2021-12-16T06:17:07Z)
- When Video Classification Meets Incremental Classes [12.322018693269952]
We propose a framework to address the challenge of catastrophic forgetting.
To do so, we utilize some characteristics of videos. First, we decompose the spatio-temporal knowledge before distillation.
Second, we propose a dual-granularity exemplar selection method to select and store representative video instances of old classes and key-frames inside videos under a tight storage budget.
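A rough sketch of such dual-level selection under a frame budget (an illustration of the stated idea, not the paper's exact algorithm): keep the videos nearest the class mean, then the key-frames of each kept video nearest its own clip mean:

```python
# Illustration of dual-level exemplar selection under a tight budget (not
# the paper's exact algorithm): keep the videos nearest the class mean,
# then the key-frames of each kept video nearest its own clip mean.
import numpy as np


def select_exemplars(video_feats, frame_feats, n_videos=4, frames_per_video=8):
    """video_feats: (V, D) clip features of one class;
    frame_feats: list of (T_i, D) frame features, one array per video."""
    class_mean = video_feats.mean(axis=0)
    nearest = np.argsort(np.linalg.norm(video_feats - class_mean, axis=1))
    kept = {}
    for v in nearest[:n_videos]:                       # representative videos
        clip_mean = frame_feats[v].mean(axis=0)
        dist = np.linalg.norm(frame_feats[v] - clip_mean, axis=1)
        kept[v] = np.argsort(dist)[:frames_per_video]  # key-frame indices
    return kept
```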
arXiv Detail & Related papers (2021-06-30T06:12:33Z)
- Knowledge Consolidation based Class Incremental Online Learning with Limited Data [41.87919913719975]
We propose a novel approach for class incremental online learning in a limited data setting.
We learn robust representations that are generalizable across tasks without suffering from the problems of catastrophic forgetting and overfitting.
arXiv Detail & Related papers (2021-06-12T15:18:29Z)
- Bilevel Continual Learning [76.50127663309604]
We present a novel framework of continual learning named "Bilevel Continual Learning" (BCL)
Our experiments on continual learning benchmarks demonstrate the efficacy of the proposed BCL compared to many state-of-the-art methods.
arXiv Detail & Related papers (2020-07-30T16:00:23Z)
- Generalized Few-Shot Video Classification with Video Retrieval and Feature Generation [132.82884193921535]
We argue that previous methods underestimate the importance of video feature learning and propose a two-stage approach.
We show that this simple baseline approach outperforms prior few-shot video classification methods by over 20 points on existing benchmarks.
We present two novel approaches that yield further improvement.
arXiv Detail & Related papers (2020-07-09T13:05:32Z)