Beyond Prompt Degradation: Prototype-guided Dual-pool Prompting for Incremental Object Detection
- URL: http://arxiv.org/abs/2603.02286v1
- Date: Mon, 02 Mar 2026 12:09:38 GMT
- Title: Beyond Prompt Degradation: Prototype-guided Dual-pool Prompting for Incremental Object Detection
- Authors: Yaoteng Zhang, Zhou Qing, Junyu Gao, Qi Wang
- Abstract summary: We propose a novel prompt-decoupled framework called PDP. It explicitly separates task-general and task-specific prompts, preventing interference between prompts and mitigating prompt coupling. It achieves state-of-the-art performance on MS-COCO and PASCAL VOC benchmarks, highlighting its potential in balancing stability and plasticity.
- Score: 18.985709082532992
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Incremental Object Detection (IOD) aims to continuously learn new object categories without forgetting previously learned ones. Recently, prompt-based methods have gained popularity for their replay-free design and parameter efficiency. However, due to prompt coupling and prompt drift, these methods often suffer from prompt degradation during continual adaptation. To address these issues, we propose a novel prompt-decoupled framework called PDP. PDP innovatively designs a dual-pool prompt decoupling paradigm, which consists of a shared pool used to capture task-general knowledge for forward transfer, and a private pool used to learn task-specific discriminative features. This paradigm explicitly separates task-general and task-specific prompts, preventing interference between prompts and mitigating prompt coupling. In addition, to counteract prompt drift resulting from inconsistent supervision where old foreground objects are treated as background in subsequent tasks, PDP introduces a Prototypical Pseudo-Label Generation (PPG) module. PPG can dynamically update the class prototype space during training and use the class prototypes to further filter valuable pseudo-labels, maintaining supervisory signal consistency throughout the incremental process. PDP achieves state-of-the-art performance on MS-COCO (with a 9.2% AP improvement) and PASCAL VOC (with a 3.3% AP improvement) benchmarks, highlighting its potential in balancing stability and plasticity. The code and dataset are released at: https://github.com/zyt95579/PDP_IOD/tree/main
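The two mechanisms the abstract describes, a shared/private prompt pool split and prototype-filtered pseudo-labels, can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's actual implementation; the pool sizes, feature dimension, momentum, and similarity threshold are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # toy feature dimension (illustrative)

# Dual-pool prompt decoupling: a shared pool captures task-general
# knowledge reused by every task; each task owns a private pool of
# task-specific prompts.
shared_pool = rng.normal(size=(4, DIM))                           # task-general
private_pools = {t: rng.normal(size=(2, DIM)) for t in range(3)}  # task-specific

def assemble_prompts(task_id):
    """Concatenate the shared prompts with the current task's private prompts."""
    return np.vstack([shared_pool, private_pools[task_id]])

# Prototypical pseudo-label filtering: keep a running prototype per class
# (momentum update) and accept a pseudo-label only if its feature is close
# enough, by cosine similarity, to that class's prototype.
prototypes = {}

def update_prototype(cls, feat, momentum=0.9):
    prev = prototypes.get(cls, feat)
    prototypes[cls] = momentum * prev + (1.0 - momentum) * feat

def accept_pseudo_label(cls, feat, threshold=0.5):
    if cls not in prototypes:
        return False
    p = prototypes[cls]
    cos = float(p @ feat / (np.linalg.norm(p) * np.linalg.norm(feat) + 1e-8))
    return cos >= threshold

# Usage: a feature near the stored prototype passes the filter.
update_prototype("person", np.ones(DIM))
print(accept_pseudo_label("person", np.ones(DIM) + 0.01 * rng.normal(size=DIM)))
# prints True
```

The point of the decoupling is that gradient updates for a new task touch only that task's private pool, so task-general prompts in the shared pool are shielded from interference; the prototype filter plays the role the abstract assigns to PPG, rejecting pseudo-detections whose features have drifted away from the class they claim.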
Related papers
- SENTINEL: Stagewise Integrity Verification for Pipeline Parallel Decentralized Training [54.8494905524997]
Decentralized training introduces critical security risks when executed across untrusted, geographically distributed nodes. We propose SENTINEL, a verification mechanism for pipeline parallelism (PP) training without duplication. Experiments demonstrate successful training of up to 4B-parameter LLMs across untrusted distributed environments with up to 176 workers while maintaining model convergence and performance.
arXiv Detail & Related papers (2026-03-03T23:51:10Z) - GFlowPO: Generative Flow Network as a Language Model Prompt Optimizer [51.31263673158136]
GFlowPO casts prompt search as a posterior inference problem over latent prompts regularized by a meta-prompted reference-LM prior. GFlowPO consistently outperforms recent discrete prompt optimization baselines.
arXiv Detail & Related papers (2026-02-03T10:30:03Z) - Parameterized Prompt for Incremental Object Detection [40.077943384096805]
Existing prompt-pool-based approaches assume disjoint class sets across incremental tasks. In co-occurring scenarios, unlabeled objects from previous tasks may appear in current-task images, leading to confusion in the prompt pool. In this paper, we hold that prompt structures should exhibit adaptive consolidation properties across tasks, with constrained updates to prevent catastrophic forgetting.
arXiv Detail & Related papers (2025-10-31T09:41:49Z) - EntroPE: Entropy-Guided Dynamic Patch Encoder for Time Series Forecasting [50.794700596484894]
We propose EntroPE (Entropy-Guided Dynamic Patch Encoder), a novel, temporally informed framework that dynamically detects transition points via conditional entropy. This preserves temporal structure while retaining the computational benefits of patching. Experiments across long-term forecasting benchmarks demonstrate that EntroPE improves both accuracy and efficiency.
arXiv Detail & Related papers (2025-09-30T12:09:56Z) - DDP: Dual-Decoupled Prompting for Multi-Label Class-Incremental Learning [37.76339545010501]
We propose Dual-Decoupled Prompting (DDP) as a replay-free and parameter-efficient framework for multi-label class-incremental learning (MLCIL). DDP addresses semantic confusion from co-occurring categories and true-negative/false-positive confusion caused by partial labeling. It is the first replay-free MLCIL approach to exceed 80% mAP and 70% F1 under the standard MS-COCO B40-C10 benchmark.
arXiv Detail & Related papers (2025-09-27T14:39:43Z) - Towards Robust Incremental Learning under Ambiguous Supervision [22.9111210739047]
We propose a novel weakly-supervised learning paradigm called Incremental Partial Label Learning (IPLL). IPLL aims to handle sequential fully-supervised learning problems where novel classes emerge from time to time. We develop a memory replay technique that collects well-disambiguated samples while maintaining representativeness and diversity.
arXiv Detail & Related papers (2025-01-23T11:52:53Z) - LW2G: Learning Whether to Grow for Prompt-based Continual Learning [55.552510632228326]
Recent prompt-based continual learning (PCL) has achieved remarkable performance with pre-trained models. These approaches expand a prompt pool by adding a new set of prompts while learning, and select the correct set during inference. Previous studies have revealed that learning task-wise prompt sets individually and low selection accuracy pose challenges to the performance of PCL.
arXiv Detail & Related papers (2024-09-27T15:55:13Z) - Bidirectional Decoding: Improving Action Chunking via Guided Test-Time Sampling [51.38330727868982]
We show how action chunking impacts the divergence between a learner and a demonstrator. We propose Bidirectional Decoding (BID), a test-time inference algorithm that bridges action chunking with closed-loop adaptation. Our method boosts the performance of two state-of-the-art generative policies across seven simulation benchmarks and two real-world tasks.
arXiv Detail & Related papers (2024-08-30T15:39:34Z) - Continual Learning for Remote Physiological Measurement: Minimize Forgetting and Simplify Inference [4.913049603343811]
Existing remote physiological measurement methods often overlook the incremental learning scenario.
Most existing class-incremental learning approaches are unsuitable for remote physiological measurement.
We present a novel method named ADDP to tackle continual learning for remote physiological measurement.
arXiv Detail & Related papers (2024-07-19T01:49:09Z) - PECTP: Parameter-Efficient Cross-Task Prompts for Incremental Vision Transformer [76.39111896665585]
Incremental Learning (IL) aims to learn deep models on sequential tasks continually.
Recent large pre-trained models (PTMs) have achieved outstanding performance via prompt techniques in practical IL without access to old samples.
arXiv Detail & Related papers (2024-07-04T10:37:58Z) - Prompt-augmented Temporal Point Process for Streaming Event Sequence [18.873915278172095]
We present a novel framework for continuous monitoring of a neural Temporal Point Process (TPP) model.
PromptTPP consistently achieves state-of-the-art performance across three real user behavior datasets.
arXiv Detail & Related papers (2023-10-08T03:41:16Z) - Steering Prototypes with Prompt-tuning for Rehearsal-free Continual Learning [47.83442130744575]
Prototypes as representative class embeddings offer advantages in memory conservation and the mitigation of catastrophic forgetting.
In this study, we introduce the Contrastive Prototypical Prompt (CPP) approach.
CPP achieves a significant 4% to 6% improvement over state-of-the-art methods.
arXiv Detail & Related papers (2023-03-16T16:23:13Z) - Plug-and-Play Few-shot Object Detection with Meta Strategy and Explicit Localization Inference [78.41932738265345]
This paper proposes a plug detector that can accurately detect objects of novel categories without a fine-tuning process.
We introduce two explicit inferences into the localization process to reduce its dependence on annotated data.
It shows a significant lead in efficiency, precision, and recall under varied evaluation protocols.
arXiv Detail & Related papers (2021-10-26T03:09:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.