SimCS: Simulation for Domain Incremental Online Continual Segmentation
- URL: http://arxiv.org/abs/2211.16234v2
- Date: Thu, 15 Feb 2024 12:12:05 GMT
- Title: SimCS: Simulation for Domain Incremental Online Continual Segmentation
- Authors: Motasem Alfarra, Zhipeng Cai, Adel Bibi, Bernard Ghanem, Matthias
Müller
- Abstract summary: Existing continual learning approaches mostly focus on image classification in the class-incremental setup.
We propose SimCS, a parameter-free method complementary to existing ones that uses simulated data to regularize continual learning.
- Score: 60.18777113752866
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continual Learning is a step towards lifelong intelligence where models
continuously learn from recently collected data without forgetting previous
knowledge. Existing continual learning approaches mostly focus on image
classification in the class-incremental setup with clear task boundaries and
unlimited computational budget. This work explores the problem of Online
Domain-Incremental Continual Segmentation (ODICS), where the model is
continually trained over batches of densely labeled images from different
domains, with limited computation and no information about the task boundaries.
ODICS arises in many practical applications. In autonomous driving, this may
correspond to the realistic scenario of training a segmentation model over time
on a sequence of cities. We analyze several existing continual learning methods
and show that they perform poorly in this setting despite working well in
class-incremental segmentation. We propose SimCS, a parameter-free method
complementary to existing ones that uses simulated data to regularize continual
learning. Experiments show that SimCS provides consistent improvements when
combined with different CL methods.
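The core idea of SimCS, as described in the abstract, is to regularize online continual training by interleaving simulated data with the real stream. A minimal sketch of that idea follows; the function name, the fixed mixing fraction, and the batch representation are illustrative assumptions, not the paper's actual implementation.

```python
import random

def mix_with_simulated(real_batch, sim_pool, sim_fraction=0.25, rng=None):
    """Replace a fraction of each incoming batch with simulated samples.

    real_batch:   list of (image, label) pairs from the current domain stream
    sim_pool:     list of (image, label) pairs rendered by a simulator
    sim_fraction: fraction of the batch drawn from simulation (assumed value)
    """
    rng = rng or random.Random(0)
    n_sim = int(len(real_batch) * sim_fraction)
    # Keep most of the real batch; the simulated samples act as a
    # domain-agnostic anchor against forgetting as the domain shifts.
    kept_real = real_batch[: len(real_batch) - n_sim]
    sim_samples = rng.sample(sim_pool, n_sim)
    return kept_real + sim_samples

# Example: a batch of 8 real samples mixed with 2 simulated ones.
real = [(f"real_img_{i}", f"real_lbl_{i}") for i in range(8)]
sim = [(f"sim_img_{i}", f"sim_lbl_{i}") for i in range(100)]
batch = mix_with_simulated(real, sim, sim_fraction=0.25)
```

Because the mixing happens per batch and needs no task boundaries or extra parameters, a scheme of this shape is compatible with the parameter-free, online setting the abstract describes.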
Related papers
- ECLIPSE: Efficient Continual Learning in Panoptic Segmentation with Visual Prompt Tuning [54.68180752416519]
Panoptic segmentation is a cutting-edge computer vision task.
We introduce a novel and efficient method for continual panoptic segmentation based on Visual Prompt Tuning, dubbed ECLIPSE.
Our approach involves freezing the base model parameters and fine-tuning only a small set of prompt embeddings, addressing both catastrophic forgetting and plasticity.
arXiv Detail & Related papers (2024-03-29T11:31:12Z) - Action-Quantized Offline Reinforcement Learning for Robotic Skill
Learning [68.16998247593209]
The offline reinforcement learning (RL) paradigm provides a recipe to convert static behavior datasets into policies that can perform better than the policy that collected the data.
In this paper, we propose an adaptive scheme for action quantization.
We show that several state-of-the-art offline RL methods such as IQL, CQL, and BRAC improve in performance on benchmarks when combined with our proposed discretization scheme.
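The action-quantization idea in the blurb above (setting the adaptive part aside) can be illustrated with a minimal uniform discretizer; the bin count, range, and interface here are assumptions for illustration only.

```python
def quantize_action(a, low=-1.0, high=1.0, n_bins=11):
    """Map a continuous scalar action to the nearest of n_bins uniform bin centers."""
    a = max(low, min(high, a))            # clamp into the valid range
    step = (high - low) / (n_bins - 1)    # spacing between bin centers
    idx = round((a - low) / step)         # nearest bin index
    return idx, low + idx * step          # discrete index and its center value

idx, center = quantize_action(0.13)       # snaps to the bin centered at 0.2
```

Discretizing actions this way lets value-based offline RL methods treat the action space as categorical; an adaptive scheme would place bins non-uniformly based on the dataset's action distribution rather than using the fixed grid shown here.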
arXiv Detail & Related papers (2023-10-18T06:07:10Z) - Mitigating Catastrophic Forgetting in Task-Incremental Continual
Learning with Adaptive Classification Criterion [50.03041373044267]
We propose a Supervised Contrastive learning framework with adaptive classification criterion for Continual Learning.
Experiments show that CFL achieves state-of-the-art performance and has a stronger ability to overcome catastrophic forgetting than the classification baselines.
arXiv Detail & Related papers (2023-05-20T19:22:40Z) - Continual Vision-Language Representation Learning with Off-Diagonal
Information [112.39419069447902]
Multi-modal contrastive learning frameworks like CLIP typically require a large amount of image-text samples for training.
This paper discusses the feasibility of continual CLIP training using streaming data.
arXiv Detail & Related papers (2023-05-11T08:04:46Z) - Building a Subspace of Policies for Scalable Continual Learning [21.03369477853538]
We introduce Continual Subspace of Policies (CSP), a new approach that incrementally builds a subspace of policies for training a reinforcement learning agent on a sequence of tasks.
CSP outperforms a number of popular baselines on a wide range of scenarios from two challenging domains, Brax (locomotion) and Continual World (manipulation).
arXiv Detail & Related papers (2022-11-18T14:59:42Z) - Learning an evolved mixture model for task-free continual learning [11.540150938141034]
We address the Task-Free Continual Learning (TFCL) in which a model is trained on non-stationary data streams with no explicit task information.
We introduce two simple dropout mechanisms to selectively remove stored examples in order to avoid memory overload.
arXiv Detail & Related papers (2022-07-11T16:01:27Z) - vCLIMB: A Novel Video Class Incremental Learning Benchmark [53.90485760679411]
We introduce vCLIMB, a novel video continual learning benchmark.
vCLIMB is a standardized test-bed to analyze catastrophic forgetting of deep models in video continual learning.
We propose a temporal consistency regularization that can be applied on top of memory-based continual learning methods.
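A temporal consistency regularizer of the kind mentioned above typically penalizes disagreement between a model's predictions on consecutive video frames. A toy sketch over flattened per-pixel scores, with an assumed mean-squared-difference form, might look like:

```python
def temporal_consistency_loss(pred_t, pred_t1):
    """Mean squared difference between per-pixel scores of consecutive frames.

    pred_t, pred_t1: flat lists of prediction scores for frames t and t+1.
    The penalty is small when predictions change little between frames.
    """
    assert len(pred_t) == len(pred_t1)
    return sum((a - b) ** 2 for a, b in zip(pred_t, pred_t1)) / len(pred_t)

# Only the middle "pixel" changes between the two frames.
loss = temporal_consistency_loss([0.2, 0.8, 0.5], [0.2, 0.6, 0.5])
```

Added on top of a memory-based continual learner, a term like this discourages predictions from drifting between adjacent frames, which is one plausible reading of the regularization the benchmark paper proposes.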
arXiv Detail & Related papers (2022-01-23T22:14:17Z) - Online Constrained Model-based Reinforcement Learning [13.362455603441552]
A key requirement is the ability to handle continuous state and action spaces while remaining within a limited time and resource budget.
We propose a model-based approach that combines Gaussian Process regression and Receding Horizon Control.
We test our approach on a cart pole swing-up environment and demonstrate the benefits of online learning on an autonomous racing task.
arXiv Detail & Related papers (2020-04-07T15:51:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.