International Workshop on Continual Semi-Supervised Learning:
Introduction, Benchmarks and Baselines
- URL: http://arxiv.org/abs/2110.14613v1
- Date: Wed, 27 Oct 2021 17:34:40 GMT
- Title: International Workshop on Continual Semi-Supervised Learning:
Introduction, Benchmarks and Baselines
- Authors: Ajmal Shahbaz, Salman Khan, Mohammad Asiful Hossain, Vincenzo
Lomonaco, Kevin Cannons, Zhan Xu and Fabio Cuzzolin
- Abstract summary: The aim of this paper is to formalize a new continual semi-supervised learning (CSSL) paradigm.
The paper introduces two new benchmarks specifically designed to assess CSSL on two important computer vision tasks.
We describe the Continual Activity Recognition (CAR) and Continual Crowd Counting (CCC) challenges built upon those benchmarks, the baseline models proposed for the challenges, and a simple CSSL baseline.
- Score: 20.852277473776617
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The aim of this paper is to formalize a new continual semi-supervised
learning (CSSL) paradigm, proposed to the attention of the machine learning
community via the IJCAI 2021 International Workshop on Continual
Semi-Supervised Learning (CSSL-IJCAI), with the goal of raising awareness of
this problem in the field and mobilizing efforts in this direction. After a formal
definition of continual semi-supervised learning and the appropriate training
and testing protocols, the paper introduces two new benchmarks specifically
designed to assess CSSL on two important computer vision tasks: activity
recognition and crowd counting. We describe the Continual Activity Recognition
(CAR) and Continual Crowd Counting (CCC) challenges built upon those
benchmarks, the baseline models proposed for the challenges, and a simple CSSL
baseline which consists of applying batch self-training in temporal sessions
for a limited number of rounds. The results show that learning from
unlabelled data streams is extremely challenging, and stimulate the search for
methods that can encode the dynamics of the data stream.
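As a rough illustration of the baseline described in the abstract, the sketch below applies batch self-training separately to each temporal session of an unlabelled stream, for a limited number of rounds. This is a minimal sketch only: the PyTorch-style interfaces, the confidence-thresholded pseudo-labelling, and all function and parameter names (self_train_session, fine_tune, confidence_threshold) are assumptions for illustration, not the authors' released implementation.

```python
# Minimal sketch of batch self-training over temporal sessions (assumed details).
from typing import Callable, Iterable

import torch


def self_train_session(
    model: torch.nn.Module,
    unlabelled_session: torch.Tensor,   # one temporal session of the stream
    fine_tune: Callable[[torch.nn.Module, torch.Tensor, torch.Tensor], None],
    num_rounds: int = 3,                # "limited number of rounds"
    confidence_threshold: float = 0.9,  # assumed pseudo-label filtering heuristic
) -> torch.nn.Module:
    """Run a few rounds of batch self-training on a single temporal session."""
    for _ in range(num_rounds):
        model.eval()
        with torch.no_grad():
            logits = model(unlabelled_session)
            probs = torch.softmax(logits, dim=1)
            conf, pseudo_labels = probs.max(dim=1)

        # Keep only confidently pseudo-labelled samples for this round.
        keep = conf >= confidence_threshold
        if keep.sum() == 0:
            break

        model.train()
        fine_tune(model, unlabelled_session[keep], pseudo_labels[keep])
    return model


def continual_self_training(
    model: torch.nn.Module,
    sessions: Iterable[torch.Tensor],
    fine_tune: Callable[[torch.nn.Module, torch.Tensor, torch.Tensor], None],
) -> torch.nn.Module:
    """Process the unlabelled stream one temporal session at a time."""
    for session in sessions:
        model = self_train_session(model, session, fine_tune)
    return model
```

In this reading, each session is processed in turn and the only state carried across sessions is the model's weights, reflecting the sequential nature of the unlabelled data stream in the CSSL protocol.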
Related papers
- Investigating the Pre-Training Dynamics of In-Context Learning: Task Recognition vs. Task Learning [99.05401042153214]
In-context learning (ICL) is potentially attributed to two major abilities: task recognition (TR) and task learning (TL)
We take the first step by examining the pre-training dynamics of the emergence of ICL.
We propose a simple yet effective method to better integrate these two abilities for ICL at inference time.
arXiv Detail & Related papers (2024-06-20T06:37:47Z) - Recent Advances of Foundation Language Models-based Continual Learning: A Survey [31.171203978742447]
Foundation language models (LMs) have marked significant achievements in the domains of natural language processing (NLP) and computer vision (CV)
However, they cannot emulate human-like continuous learning due to catastrophic forgetting.
Various continual learning (CL)-based methodologies have been developed to refine LMs, enabling them to adapt to new tasks without forgetting previous knowledge.
arXiv Detail & Related papers (2024-05-28T23:32:46Z) - Scalable Language Model with Generalized Continual Learning [58.700439919096155]
The Joint Adaptive Re-Parameterization (JARe) is integrated with Dynamic Task-related Knowledge Retrieval (DTKR) to enable adaptive adjustment of language models based on specific downstream tasks.
Our method demonstrates state-of-the-art performance on diverse backbones and benchmarks, achieving effective continual learning in both full-set and few-shot scenarios with minimal forgetting.
arXiv Detail & Related papers (2024-04-11T04:22:15Z) - DELTA: Decoupling Long-Tailed Online Continual Learning [7.507868991415516]
Long-Tailed Online Continual Learning (LTOCL) aims to learn new tasks from sequentially arriving class-imbalanced data streams.
We present DELTA, a decoupled learning approach designed to enhance learning representations.
We demonstrate that DELTA improves the capacity for incremental learning, surpassing existing OCL methods.
arXiv Detail & Related papers (2024-04-06T02:33:04Z) - Iterative Forward Tuning Boosts In-Context Learning in Language Models [88.25013390669845]
In this study, we introduce a novel two-stage framework to boost in-context learning in large language models (LLMs)
Specifically, our framework delineates the ICL process into two distinct stages: a Deep-Thinking stage and a test stage.
The Deep-Thinking stage incorporates a unique attention mechanism, i.e., iterative enhanced attention, which enables multiple rounds of information accumulation.
arXiv Detail & Related papers (2023-05-22T13:18:17Z) - CLAD: A realistic Continual Learning benchmark for Autonomous Driving [33.95470797472666]
This paper describes the design and the ideas motivating a new Continual Learning benchmark for Autonomous Driving.
The benchmark uses SODA10M, a recently released large-scale dataset that concerns autonomous driving related problems.
We introduce CLAD-C, an online classification benchmark realised through a chronological data stream that poses both class and domain incremental challenges.
We examine the inherent difficulties and challenges posed by the benchmark, through a survey of the techniques and methods used by the top-3 participants in a CLAD-challenge workshop at ICCV 2021.
arXiv Detail & Related papers (2022-10-07T12:08:25Z) - Two-Stream Consensus Network: Submission to HACS Challenge 2021
Weakly-Supervised Learning Track [78.64815984927425]
The goal of weakly-supervised temporal action localization is to temporally locate and classify action of interest in untrimmed videos.
We adopt the two-stream consensus network (TSCN) as the main framework in this challenge.
Our solution ranked 2nd in this challenge, and we hope our method can serve as a baseline for future academic research.
arXiv Detail & Related papers (2021-06-21T03:36:36Z) - Continual Learning From Unlabeled Data Via Deep Clustering [7.704949298975352]
Continual learning aims to learn new tasks incrementally, using less computation and memory than retraining the model from scratch whenever a new task arrives.
We introduce a new framework to make continual learning feasible in unsupervised mode by using pseudo-labels obtained from cluster assignments to update the model.
arXiv Detail & Related papers (2021-04-14T23:46:17Z) - Incremental Embedding Learning via Zero-Shot Translation [65.94349068508863]
Current state-of-the-art incremental learning methods tackle the catastrophic forgetting problem in traditional classification networks.
We propose a novel class-incremental method for embedding networks, named the zero-shot translation class-incremental method (ZSTCI)
In addition, ZSTCI can easily be combined with existing regularization-based incremental learning methods to further improve performance of embedding networks.
arXiv Detail & Related papers (2020-12-31T08:21:37Z) - Bilevel Continual Learning [76.50127663309604]
We present a novel framework of continual learning named "Bilevel Continual Learning" (BCL)
Our experiments on continual learning benchmarks demonstrate the efficacy of the proposed BCL compared to many state-of-the-art methods.
arXiv Detail & Related papers (2020-07-30T16:00:23Z)