BSDP: Brain-inspired Streaming Dual-level Perturbations for Online Open
World Object Detection
- URL: http://arxiv.org/abs/2403.02637v1
- Date: Tue, 5 Mar 2024 04:00:50 GMT
- Title: BSDP: Brain-inspired Streaming Dual-level Perturbations for Online Open
World Object Detection
- Authors: Yu Chen, Liyan Ma, Liping Jing, Jian Yu
- Abstract summary: We aim to make deep learning models simulate the way people learn.
Existing OWOD approaches focus mainly on identifying unknown categories, even though the incremental-learning part is equally important.
In this paper, we use the dual-level information of old samples as perturbations on new samples so that the model learns new knowledge without forgetting the old.
- Score: 31.467501311528498
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans can easily distinguish known from unknown categories and can
recognize an unknown object after learning it only once, without forgetting
previously learned objects. Hence, we aim to make deep learning models simulate
the way people learn. We refer to such a learning manner as OnLine Open World
Object Detection (OLOWOD). Existing OWOD approaches focus mainly on identifying
unknown categories, even though the incremental-learning part is equally
important. Besides, some neuroscience research shows that specific noise allows
the brain to form new connections and neural pathways, which may improve
learning speed and efficiency. In this paper, we use the dual-level information
of old samples as perturbations on new samples so that the model learns new
knowledge without forgetting the old. We therefore propose a simple
plug-and-play method, called Brain-inspired Streaming Dual-level Perturbations
(BSDP), to solve the OLOWOD problem. Specifically, (1) we first calculate the
prototypes of previous categories and use the distance between samples and
these prototypes as the sample-selection strategy for choosing old samples to
replay; (2) we then take the prototypes as streaming feature-level
perturbations of new samples, improving the plasticity of the model by
revisiting old knowledge; (3) and we use the distribution of the features of
old-category samples to generate adversarial data in the form of streams as
data-level perturbations, enhancing the robustness of the model to new
categories. We empirically evaluate BSDP on PASCAL VOC and MS-COCO, and the
results demonstrate the promising performance of our proposed method and
learning manner.
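To give a concrete feel for the three steps described in the abstract, the snippet below is a minimal NumPy sketch, not the authors' implementation: the function names, the Euclidean distance metric, the mixing coefficient `alpha`, the diagonal-Gaussian model of old-category features, and the sign-based step of size `eps` are all illustrative assumptions.

```python
import numpy as np

def class_prototypes(features, labels):
    """Step 1: mean feature vector (prototype) of each previously seen category."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def select_replay_samples(features, labels, prototypes, per_class=10):
    """Step 1: for each old category, keep the indices of the samples closest
    to its prototype (Euclidean distance is an assumption) for replay."""
    keep = []
    for c, proto in prototypes.items():
        idx = np.where(labels == c)[0]
        dists = np.linalg.norm(features[idx] - proto, axis=1)
        keep.extend(idx[np.argsort(dists)[:per_class]].tolist())
    return keep

def feature_level_perturbation(new_feats, prototypes, alpha=0.1, rng=None):
    """Step 2: streaming feature-level perturbation -- mix a randomly drawn
    old-class prototype into each new-sample feature."""
    rng = rng or np.random.default_rng()
    protos = np.stack(list(prototypes.values()))
    picked = protos[rng.integers(len(protos), size=len(new_feats))]
    return (1.0 - alpha) * new_feats + alpha * picked

def data_level_perturbation(new_inputs, old_mean, old_std, eps=0.05, rng=None):
    """Step 3: data-level perturbation -- draw noise from a diagonal-Gaussian
    model of the old-category feature distribution and add a small sign-only
    step to the new inputs, in the spirit of adversarial streaming data."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(old_mean, old_std, size=new_inputs.shape)
    return np.clip(new_inputs + eps * np.sign(noise), 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    old_feats = rng.normal(size=(200, 64))      # features of old-category samples
    old_labels = rng.integers(0, 5, size=200)   # 5 old categories
    new_feats = rng.normal(size=(32, 64))       # features of a new-category batch

    protos = class_prototypes(old_feats, old_labels)
    replay_idx = select_replay_samples(old_feats, old_labels, protos, per_class=5)
    perturbed_feats = feature_level_perturbation(new_feats, protos, alpha=0.1)
    perturbed_inputs = data_level_perturbation(
        rng.uniform(size=(32, 64)), old_feats.mean(axis=0), old_feats.std(axis=0))
    print(len(replay_idx), perturbed_feats.shape, perturbed_inputs.shape)
```

In the actual method the perturbations are injected in a streaming fashion during detector training; the sketch only shows where the old-class prototypes and feature statistics enter the pipeline.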
Related papers
- Towards Non-Exemplar Semi-Supervised Class-Incremental Learning [33.560003528712414]
Class-incremental learning aims to gradually recognize new classes while maintaining the discriminability of old ones.
We propose a non-exemplar semi-supervised CIL framework with contrastive learning and semi-supervised incremental prototype classifier (Semi-IPC)
Semi-IPC learns a prototype for each class with unsupervised regularization, enabling the model to incrementally learn from partially labeled new data.
arXiv Detail & Related papers (2024-03-27T06:28:19Z) - Class incremental learning with probability dampening and cascaded gated classifier [4.285597067389559]
We propose a novel incremental regularisation approach called Margin Dampening and Cascaded Scaling.
The first combines a soft constraint and a knowledge distillation approach to preserve past knowledge while still allowing new patterns to be learned.
We empirically show that our approach performs well on multiple benchmarks against well-established baselines.
arXiv Detail & Related papers (2024-02-02T09:33:07Z) - Continual Learning with Pre-Trained Models: A Survey [61.97613090666247]
Continual Learning aims to overcome catastrophic forgetting of former knowledge when learning new tasks.
This paper presents a comprehensive survey of the latest advancements in PTM-based CL.
arXiv Detail & Related papers (2024-01-29T18:27:52Z) - Learning Prompt with Distribution-Based Feature Replay for Few-Shot Class-Incremental Learning [56.29097276129473]
We propose a simple yet effective framework, named Learning Prompt with Distribution-based Feature Replay (LP-DiF)
To prevent the learnable prompt from forgetting old knowledge in the new session, we propose a pseudo-feature replay approach.
When progressing to a new session, pseudo-features are sampled from old-class distributions combined with training images of the current session to optimize the prompt.
arXiv Detail & Related papers (2024-01-03T07:59:17Z) - Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning
Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient-based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence demonstrating that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z) - Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection
Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method utilizes a mask to identify the memorized atypical samples, and then finetunes the model or prunes it with the introduced mask to forget them.
arXiv Detail & Related papers (2023-06-06T14:23:34Z) - Continual Learning with Bayesian Model based on a Fixed Pre-trained
Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
arXiv Detail & Related papers (2022-04-28T08:41:51Z) - Online Deep Metric Learning via Mutual Distillation [9.363111089877625]
Deep metric learning aims to transform input data into an embedding space, where similar samples are close while dissimilar samples are far apart from each other.
Existing solutions either retrain the model from scratch or require the replay of old samples during the training.
This paper proposes a complete online deep metric learning framework based on mutual distillation for both one-task and multi-task scenarios.
arXiv Detail & Related papers (2022-03-10T07:24:36Z) - Two-Level Residual Distillation based Triple Network for Incremental
Object Detection [21.725878050355824]
We propose a novel incremental object detector based on Faster R-CNN to continuously learn from new object classes without using old data.
It is a triple network in which an old model and a residual model serve as assistants, helping the incremental model learn new classes without forgetting previously learned knowledge.
arXiv Detail & Related papers (2020-07-27T11:04:57Z) - Automatic Recall Machines: Internal Replay, Continual Learning and the
Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z) - Self-Supervised Learning Aided Class-Incremental Lifelong Learning [17.151579393716958]
We study the issue of catastrophic forgetting in class-incremental learning (Class-IL).
In the training procedure of Class-IL, since the model has no knowledge of subsequent tasks, it only extracts features necessary for the tasks learned so far, and this information is insufficient for joint classification.
We propose to combine self-supervised learning, which can provide effective representations without requiring labels, with Class-IL to partly get around this problem.
arXiv Detail & Related papers (2020-06-10T15:15:27Z)