Teach YOLO to Remember: A Self-Distillation Approach for Continual Object Detection
- URL: http://arxiv.org/abs/2503.04688v1
- Date: Thu, 06 Mar 2025 18:31:41 GMT
- Title: Teach YOLO to Remember: A Self-Distillation Approach for Continual Object Detection
- Authors: Riccardo De Monte, Davide Dalle Pezze, Gian Antonio Susto
- Abstract summary: Real-time object detectors like YOLO achieve exceptional performance when trained on large datasets for multiple epochs. In real-world scenarios where data arrives incrementally, neural networks suffer from catastrophic forgetting. We introduce YOLO LwF, a self-distillation approach tailored for YOLO-based continual object detection.
- Score: 5.6148728159802035
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Real-time object detectors like YOLO achieve exceptional performance when trained on large datasets for multiple epochs. However, in real-world scenarios where data arrives incrementally, neural networks suffer from catastrophic forgetting, leading to a loss of previously learned knowledge. To address this, prior research has explored strategies for Class Incremental Learning (CIL) in Continual Learning for Object Detection (CLOD), with most approaches focusing on two-stage object detectors. However, existing work suggests that Learning without Forgetting (LwF) may be ineffective for one-stage anchor-free detectors like YOLO due to noisy regression outputs, which risk transferring corrupted knowledge. In this work, we introduce YOLO LwF, a self-distillation approach tailored for YOLO-based continual object detection. We demonstrate that when coupled with a replay memory, YOLO LwF significantly mitigates forgetting. Compared to previous approaches, it achieves state-of-the-art performance, improving mAP by +2.1% and +2.9% on the VOC and COCO benchmarks, respectively.
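The abstract names the two ingredients of the approach, self-distillation in the style of Learning without Forgetting plus a replay memory, without spelling out how they combine. The sketch below is a rough, hypothetical illustration of that recipe for a YOLO-like detector: the model and head interfaces (`student`, `teacher`, `task_loss_fn`), the reservoir-sampled `ReplayMemory`, and the exact form of the distillation loss are assumptions for illustration, not the authors' YOLO LwF implementation.

```python
import random
import torch
import torch.nn.functional as F

# Hypothetical sketch: LwF-style self-distillation plus a replay memory for a
# YOLO-like detector. Model interfaces and loss form are assumptions.

class ReplayMemory:
    """Fixed-size buffer of images from earlier tasks (assumption: reservoir sampling)."""
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0

    def add(self, sample):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.buffer[idx] = sample

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))


def distillation_loss(student_cls, teacher_cls, student_box, teacher_box, beta=1.0):
    """Distill the previous model's outputs into the current one.

    Classification: binary cross-entropy against the teacher's sigmoid scores.
    Regression: a plain L1 term here, purely illustrative; the paper argues raw
    regression outputs are noisy, so its actual handling of boxes is more careful.
    """
    cls_term = F.binary_cross_entropy_with_logits(student_cls, teacher_cls.sigmoid())
    box_term = F.l1_loss(student_box, teacher_box)
    return cls_term + beta * box_term


def train_step(student, teacher, optimizer, new_batch, memory, task_loss_fn, lam=1.0):
    """One continual-learning step: task loss on new data + distillation on replayed data."""
    images, targets = new_batch
    loss = task_loss_fn(student(images), targets)   # standard detection loss on the new task

    replayed = memory.sample(len(images))
    if replayed:
        old_images = torch.stack(replayed)
        with torch.no_grad():                        # teacher: frozen copy of the pre-update model
            t_cls, t_box = teacher(old_images)
        s_cls, s_box = student(old_images)
        loss = loss + lam * distillation_loss(s_cls, t_cls, s_box, t_box)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    for img in images:                               # keep the memory up to date
        memory.add(img)
    return loss.item()
```

Here the teacher is simply a frozen copy of the detector from before the current task; the paper's specific contribution lies in how the distillation handles YOLO's noisy regression outputs, which this sketch only gestures at with a plain L1 term.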
Related papers
- CLDA-YOLO: Visual Contrastive Learning Based Domain Adaptive YOLO Detector [10.419327930845922]
Unsupervised domain adaptive (UDA) algorithms can markedly enhance the performance of object detectors under domain shift. We present an unsupervised domain adaptive YOLO detector based on visual contrastive learning (CLDA-YOLO).
arXiv Detail & Related papers (2024-12-16T14:25:52Z)
- SEEKR: Selective Attention-Guided Knowledge Retention for Continual Learning of Large Language Models [27.522743690956315]
We propose a SElective attEntion-guided Knowledge Retention method (SEEKR) for data-efficient replay-based continual learning of large language models (LLMs).
SEEKR performs attention distillation on the selected attention heads for finer-grained knowledge retention (a minimal sketch of this idea appears after this list).
Experimental results on two continual learning benchmarks for LLMs demonstrate the superiority of SEEKR over the existing methods on both performance and efficiency.
arXiv Detail & Related papers (2024-11-09T13:02:36Z)
- ALTBI: Constructing Improved Outlier Detection Models via Optimization of Inlier-Memorization Effect [2.3961612657966946]
Outlier detection (OD) is the task of identifying unusual observations (outliers) in given or incoming data.
The inlier-memorization (IM) effect suggests that generative models memorize inliers before outliers in the early stages of training.
We propose a theoretically principled method for unsupervised outlier detection (UOD) that maximally exploits the IM effect.
arXiv Detail & Related papers (2024-08-19T08:40:53Z)
- YOLOv10: Real-Time End-to-End Object Detection [68.28699631793967]
YOLOs have emerged as the predominant paradigm in the field of real-time object detection.
Reliance on non-maximum suppression (NMS) for post-processing hampers the end-to-end deployment of YOLOs.
We introduce a holistic efficiency-accuracy-driven model design strategy for YOLOs.
arXiv Detail & Related papers (2024-05-23T11:44:29Z)
- YOLO-World: Real-Time Open-Vocabulary Object Detection [87.08732047660058]
We introduce YOLO-World, an innovative approach that enhances YOLO with open-vocabulary detection capabilities.
Our method excels in detecting a wide range of objects in a zero-shot manner with high efficiency.
YOLO-World achieves 35.4 AP with 52.0 FPS on V100, which outperforms many state-of-the-art methods in terms of both accuracy and speed.
arXiv Detail & Related papers (2024-01-30T18:59:38Z)
- YOLO-MS: Rethinking Multi-Scale Representation Learning for Real-time Object Detection [63.36722419180875]
We provide an efficient and performant object detector, termed YOLO-MS.
We train our YOLO-MS on the MS COCO dataset from scratch without relying on any other large-scale datasets.
Our work can also serve as a plug-and-play module for other YOLO models.
arXiv Detail & Related papers (2023-08-10T10:12:27Z)
- A lightweight and accurate YOLO-like network for small target detection in Aerial Imagery [94.78943497436492]
We present YOLO-S, a simple, fast and efficient network for small target detection.
YOLO-S exploits a small feature extractor based on Darknet20, together with skip connections via both bypass and concatenation.
YOLO-S has 87% fewer parameters and roughly half the FLOPs of YOLOv3, making deployment practical for low-power industrial applications.
arXiv Detail & Related papers (2022-04-05T16:29:49Z)
- Learning-based Point Cloud Registration for 6D Object Pose Estimation in the Real World [55.7340077183072]
We tackle the task of estimating the 6D pose of an object from point cloud data.
Recent learning-based approaches to this task have shown great success on synthetic datasets but tend to fail on real-world data.
We analyze the causes of these failures, which we trace back to the difference between the feature distributions of the source and target point clouds.
arXiv Detail & Related papers (2022-03-29T07:55:04Z)
- One-Shot Object Detection without Fine-Tuning [62.39210447209698]
We introduce a two-stage model consisting of a first-stage Matching-FCOS network and a second-stage Structure-Aware Relation Module.
We also propose novel training strategies that effectively improve detection performance.
Our method consistently exceeds state-of-the-art one-shot performance on multiple datasets.
arXiv Detail & Related papers (2020-05-08T01:59:23Z)
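As referenced in the SEEKR entry above, the following is a minimal, hypothetical sketch of attention distillation restricted to a selected subset of heads. The tensor shapes, the KL-based loss, and the idea of picking heads by an importance score are assumptions for illustration; SEEKR's actual head-selection criterion and loss are defined in the paper.

```python
import torch
import torch.nn.functional as F

def attention_distillation_loss(student_attn, teacher_attn, selected_heads):
    """Distill teacher attention maps into the student on selected heads only.

    Assumptions: both tensors have shape (batch, heads, seq, seq) and hold
    attention probabilities (each row sums to 1); `selected_heads` is a list of
    head indices chosen by some importance score, which is where a method like
    SEEKR would plug in its selection criterion.
    """
    s = student_attn[:, selected_heads]            # (batch, k, seq, seq)
    t = teacher_attn[:, selected_heads]
    # KL divergence between teacher and student attention distributions per row.
    return F.kl_div(torch.log(s + 1e-8), t, reduction="batchmean")
```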
This list is automatically generated from the titles and abstracts of the papers on this site.