COOLer: Class-Incremental Learning for Appearance-Based Multiple Object Tracking
- URL: http://arxiv.org/abs/2310.03006v2
- Date: Thu, 5 Oct 2023 05:54:34 GMT
- Title: COOLer: Class-Incremental Learning for Appearance-Based Multiple Object Tracking
- Authors: Zhizheng Liu, Mattia Segu, Fisher Yu
- Abstract summary: This paper extends the scope of continual learning research to class-incremental learning for multiple object tracking (MOT).
Previous solutions for continual learning of object detectors do not address the data association stage of appearance-based trackers.
We introduce COOLer, a COntrastive- and cOntinual-Learning-based tracker, which incrementally learns to track new categories while preserving past knowledge.
- Score: 32.47215340215641
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continual learning allows a model to learn multiple tasks sequentially while
retaining the old knowledge without the training data of the preceding tasks.
This paper extends the scope of continual learning research to
class-incremental learning for multiple object tracking (MOT), which is
desirable to accommodate the continuously evolving needs of autonomous systems.
Previous solutions for continual learning of object detectors do not address
the data association stage of appearance-based trackers, leading to
catastrophic forgetting of previous classes' re-identification features. We
introduce COOLer, a COntrastive- and cOntinual-Learning-based tracker, which
incrementally learns to track new categories while preserving past knowledge by
training on a combination of currently available ground truth labels and
pseudo-labels generated by the past tracker. To further enhance the
disentanglement of instance representations, we introduce a novel contrastive
class-incremental instance representation learning technique. Finally, we
propose a practical evaluation protocol for continual learning for MOT and
conduct experiments on the BDD100K and SHIFT datasets. Experimental results
demonstrate that COOLer continually learns while effectively addressing
catastrophic forgetting of both tracking and detection. The code is available
at https://github.com/BoSmallEar/COOLer.
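To make the association-stage training concrete, below is a minimal sketch of a prototype-based contrastive class-incremental instance representation loss in PyTorch. The function name, the prototype bookkeeping, and the temperature are illustrative assumptions rather than the authors' implementation; in COOLer, such a term would sit on top of a tracker trained on the mix of ground-truth labels for new classes and pseudo-labels from the frozen past tracker for old ones.

```python
import torch
import torch.nn.functional as F

def contrastive_incremental_loss(embeddings, labels, prototypes, temperature=0.1):
    """Pull each instance embedding toward its class prototype and push it
    away from the prototypes of all other classes seen so far (old and new).

    embeddings: (N, D) instance features from the association head
    labels:     (N,)   class index per instance (ground truth or pseudo-label)
    prototypes: (C, D) one running prototype per class learned so far
    """
    z = F.normalize(embeddings, dim=1)
    p = F.normalize(prototypes, dim=1)
    logits = z @ p.t() / temperature        # (N, C) scaled cosine similarities
    return F.cross_entropy(logits, labels)  # softmax contrast over prototypes

# Toy usage: 8 instances, 64-d features, 5 classes seen so far.
emb = torch.randn(8, 64, requires_grad=True)
lbl = torch.randint(0, 5, (8,))
protos = torch.randn(5, 64)
contrastive_incremental_loss(emb, lbl, protos).backward()
```

Contrasting every instance against the prototypes of all classes seen so far is one way to keep old and new classes' embeddings disentangled without storing old training data.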
Related papers
- FACT: Feature Adaptive Continual-learning Tracker for Multiple Object Tracking [22.53374351982883]
We propose a new MOT framework called the Feature Adaptive Continual-learning Tracker (FACT).
FACT enables real-time tracking and feature learning for targets by utilizing all past tracking information.
We demonstrate that the framework can be integrated with various state-of-the-art feature-based trackers.
arXiv Detail & Related papers (2024-09-12T10:14:48Z)
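The FACT summary does not state how past tracking information is used, so the following is only a plausible sketch: each track keeps an appearance template refreshed from every new observation, so all past observations influence future matching. The `TrackFeatureBank` class and its EMA update are hypothetical illustrations, not FACT's actual mechanism.

```python
import torch
import torch.nn.functional as F

class TrackFeatureBank:
    """One appearance template per track, refreshed online so that all past
    observations of a target influence future matching. The EMA update and
    `momentum` value are hypothetical, for illustration only."""

    def __init__(self, momentum: float = 0.9):
        self.momentum = momentum
        self.templates: dict[int, torch.Tensor] = {}

    def update(self, track_id: int, feature: torch.Tensor) -> None:
        feature = F.normalize(feature, dim=0)
        if track_id in self.templates:
            blended = (self.momentum * self.templates[track_id]
                       + (1.0 - self.momentum) * feature)
            self.templates[track_id] = F.normalize(blended, dim=0)
        else:
            self.templates[track_id] = feature

    def best_match(self, feature: torch.Tensor) -> tuple[int, float]:
        """Return (track_id, cosine similarity) of the closest template."""
        feature = F.normalize(feature, dim=0)
        sims = {tid: float(t @ feature) for tid, t in self.templates.items()}
        tid = max(sims, key=sims.get)
        return tid, sims[tid]
```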
- Exploiting the Semantic Knowledge of Pre-trained Text-Encoders for Continual Learning [70.64617500380287]
Continual learning allows models to learn from new data while retaining previously learned knowledge.
The label information of images offers semantic knowledge that can be related to previously acquired knowledge of semantic classes.
We propose integrating semantic guidance within and across tasks by capturing semantic similarity using text embeddings.
arXiv Detail & Related papers (2024-08-02T07:51:44Z)
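As a rough illustration of semantic guidance from text embeddings, the sketch below turns pairwise similarity between class-name embeddings into soft targets and blends them into the classification loss. The helper names, temperature, and blending weight `alpha` are assumptions; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def semantic_soft_targets(label_embeddings, temperature=0.07):
    """Turn pairwise similarity between class-name text embeddings into a
    soft target distribution per class (rows sum to 1)."""
    e = F.normalize(label_embeddings, dim=1)
    return F.softmax(e @ e.t() / temperature, dim=1)  # (C, C)

def semantically_guided_loss(logits, labels, soft_targets, alpha=0.5):
    """Blend cross-entropy with a KL term pulling predictions toward
    classes that are semantically close to the ground-truth label."""
    ce = F.cross_entropy(logits, labels)
    kl = F.kl_div(F.log_softmax(logits, dim=1),
                  soft_targets[labels], reduction="batchmean")
    return alpha * ce + (1.0 - alpha) * kl

# Toy usage: 10 classes with 512-d text embeddings, batch of 4.
targets = semantic_soft_targets(torch.randn(10, 512))
loss = semantically_guided_loss(torch.randn(4, 10), torch.tensor([0, 3, 9, 1]), targets)
```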
- Beyond Prompt Learning: Continual Adapter for Efficient Rehearsal-Free Continual Learning [22.13331870720021]
We propose an approach that goes beyond prompt learning for rehearsal-free continual learning (RFCL), called Continual Adapter (C-ADA).
C-ADA flexibly extends specific weights in CAL to learn new knowledge for each task and freezes old weights to preserve prior knowledge.
Our approach achieves significantly improved performance and training speed, outperforming the current state-of-the-art (SOTA) method.
arXiv Detail & Related papers (2024-07-14T17:40:40Z)
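A minimal sketch of the extend-and-freeze idea, assuming a low-rank adapter whose projections gain a fresh trainable block per task while earlier blocks are frozen; the class name, rank, and initialization are illustrative rather than C-ADA's actual design.

```python
import torch
import torch.nn as nn

class ContinualAdapter(nn.Module):
    """Low-rank adapter that gains a fresh trainable block per task while
    all previously learned blocks are frozen."""

    def __init__(self, dim: int, rank_per_task: int = 8):
        super().__init__()
        self.dim, self.rank = dim, rank_per_task
        self.down = nn.ParameterList()  # one (dim, rank) block per task
        self.up = nn.ParameterList()    # one (rank, dim) block per task

    def add_task(self) -> None:
        for p in self.parameters():     # freeze everything learned so far
            p.requires_grad_(False)
        self.down.append(nn.Parameter(torch.randn(self.dim, self.rank) * 0.01))
        self.up.append(nn.Parameter(torch.zeros(self.rank, self.dim)))  # start as no-op

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = x                         # residual sum over all task branches
        for d, u in zip(self.down, self.up):
            out = out + x @ d @ u
        return out

adapter = ContinualAdapter(dim=256)
adapter.add_task()                      # call once before each new task
y = adapter(torch.randn(4, 256))
```

Zero-initializing each new up-projection makes a freshly added branch a no-op at first, so adding a task cannot disturb what the adapter already computes.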
- Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves average performance increases of 2.7% and 2.6% on the CIFAR-100 and Imagenet-R datasets, respectively.
arXiv Detail & Related papers (2024-05-23T08:43:09Z)
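The summary does not describe the correction step itself, so the snippet below shows only the generic idea of post-hoc recency-bias correction: damping the logits of the newest task's classes at inference time. The function and its `scale` factor are hypothetical calibration choices, not the ARC procedure.

```python
import torch

def correct_recency_bias(logits, recent_class_ids, scale=0.8):
    """Damp the over-confident logits of the most recent task's classes at
    inference time. `scale` is a hypothetical calibration factor."""
    corrected = logits.clone()
    corrected[:, recent_class_ids] *= scale
    return corrected

# Toy usage: 100 classes, the last 10 were just learned.
logits = torch.randn(4, 100)
preds = correct_recency_bias(logits, list(range(90, 100))).argmax(dim=1)
```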
- Incremental Object Detection with CLIP [36.478530086163744]
We propose using a visual-language model such as CLIP to generate text feature embeddings for different class sets.
We then employ super-classes to replace the unavailable novel classes in the early learning stage to simulate the incremental scenario.
We incorporate the finely recognized detection boxes as pseudo-annotations into the training process, thereby further improving the detection performance.
arXiv Detail & Related papers (2023-10-13T01:59:39Z)
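A minimal sketch of the text-embedding classifier this setup implies: detection box features are scored against one text embedding per class name, so a new class set is handled by appending embedding rows instead of retraining classifier weights. How `text_features` is produced (e.g., a CLIP text encoder over class-name prompts) is left out, and the missing logit scaling is a simplification.

```python
import torch
import torch.nn.functional as F

def classify_boxes(box_features, text_features):
    """Score each detected box against one text embedding per class name.

    box_features:  (N, D) pooled RoI features from the detector
    text_features: (C, D) text embeddings of the class names
    """
    b = F.normalize(box_features, dim=1)
    t = F.normalize(text_features, dim=1)
    return b @ t.t()  # (N, C) cosine-similarity logits

# Toy usage: 5 boxes scored against 12 class names in a 512-d space.
scores = classify_boxes(torch.randn(5, 512), torch.randn(12, 512))
```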
- Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud-side.
During training, the task learner instructs the lightweight active learner, which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z)
- Contrastive Learning with Boosted Memorization [36.957895270908324]
Self-supervised learning has achieved great success in the representation learning of visual and textual data.
Recent attempts at self-supervised long-tailed learning rebalance from either the loss perspective or the model perspective.
We propose a novel Boosted Contrastive Learning (BCL) method to enhance long-tailed learning in the label-unaware context.
arXiv Detail & Related papers (2022-05-25T11:54:22Z)
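One plausible reading of loss-aware boosting in a label-unaware setting, sketched below: keep a per-sample momentum (EMA) statistic of the training loss as a memorization proxy and use it to modulate augmentation strength, so persistently hard (likely tail) samples are boosted. The class and its normalization are assumptions, not BCL's exact formulation.

```python
import torch

class MomentumLossTracker:
    """Per-sample EMA of the training loss as a crude memorization signal:
    samples whose loss stays high are treated as under-represented (tail)
    and receive stronger augmentation."""

    def __init__(self, num_samples: int, beta: float = 0.97):
        self.beta = beta
        self.ema = torch.zeros(num_samples)

    def update(self, indices: torch.Tensor, losses: torch.Tensor) -> None:
        self.ema[indices] = (self.beta * self.ema[indices]
                             + (1.0 - self.beta) * losses.detach())

    def augment_strength(self, indices: torch.Tensor) -> torch.Tensor:
        """Map each sample's EMA loss to [0, 1] to modulate augmentation."""
        lo, hi = self.ema.min(), self.ema.max()
        return (self.ema[indices] - lo) / (hi - lo + 1e-8)
```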
- Online Continual Learning with Natural Distribution Shifts: An Empirical Study with Visual Data [101.6195176510611]
"Online" continual learning enables evaluating both information retention and online learning efficacy.
In online continual learning, each incoming small batch of data is first used for testing and then added to the training set, making the problem truly online.
We introduce a new benchmark for online continual visual learning that exhibits large scale and natural distribution shifts.
arXiv Detail & Related papers (2021-08-20T06:17:20Z)
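The test-then-train protocol is simple enough to state directly. A minimal sketch, assuming a classification stream and standard PyTorch training objects; the function name and accuracy metric are illustrative:

```python
import torch

def online_continual_eval(model, stream, optimizer, loss_fn):
    """Test-then-train: every incoming batch is evaluated before the model
    is allowed to train on it, so accuracy is always measured on unseen data."""
    correct, total = 0, 0
    for inputs, targets in stream:     # small batches in arrival order
        model.eval()                   # 1) test on the new batch first
        with torch.no_grad():
            preds = model(inputs).argmax(dim=1)
        correct += int((preds == targets).sum())
        total += len(targets)
        model.train()                  # 2) then train on the same batch
        optimizer.zero_grad()
        loss_fn(model(inputs), targets).backward()
        optimizer.step()
    return correct / max(total, 1)
```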
- Bilevel Continual Learning [76.50127663309604]
We present a novel continual learning framework named "Bilevel Continual Learning" (BCL).
Our experiments on continual learning benchmarks demonstrate the efficacy of the proposed BCL compared to many state-of-the-art methods.
arXiv Detail & Related papers (2020-07-30T16:00:23Z)
- A Survey on Self-supervised Pre-training for Sequential Transfer Learning in Neural Networks [1.1802674324027231]
Self-supervised pre-training for transfer learning is becoming an increasingly popular technique to improve state-of-the-art results using unlabeled data.
We provide an overview of the taxonomy for self-supervised learning and transfer learning, and highlight some prominent methods for designing pre-training tasks across different domains.
arXiv Detail & Related papers (2020-07-01T22:55:48Z)
- Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new-classes and scales to high-capacity models for object detection.
arXiv Detail & Related papers (2020-03-17T13:40:00Z)