DDP: Dual-Decoupled Prompting for Multi-Label Class-Incremental Learning
- URL: http://arxiv.org/abs/2509.23335v1
- Date: Sat, 27 Sep 2025 14:39:43 GMT
- Title: DDP: Dual-Decoupled Prompting for Multi-Label Class-Incremental Learning
- Authors: Kaile Du, Zihan Ye, Junzhou Xie, Fan Lyu, Yixi Shen, Yuyang Li, Miaoxuan Zhu, Fuyuan Hu, Ling Shao, Guangcan Liu
- Abstract summary: We propose Dual-Decoupled Prompting (DDP), a replay-free and parameter-efficient framework for multi-label class-incremental learning. DDP addresses semantic confusion from co-occurring categories and true-negative-false-positive confusion caused by partial labeling. It is the first replay-free MLCIL approach to exceed 80% mAP and 70% F1 under the standard MS-COCO B40-C10 benchmark.
- Score: 37.76339545010501
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prompt-based methods have shown strong effectiveness in single-label class-incremental learning, but their direct extension to multi-label class-incremental learning (MLCIL) performs poorly due to two intrinsic challenges: semantic confusion from co-occurring categories and true-negative-false-positive confusion caused by partial labeling. We propose Dual-Decoupled Prompting (DDP), a replay-free and parameter-efficient framework that explicitly addresses both issues. DDP assigns class-specific positive-negative prompts to disentangle semantics and introduces Progressive Confidence Decoupling (PCD), a curriculum-inspired decoupling strategy that suppresses false positives. Past prompts are frozen as knowledge anchors, and interlayer prompting enhances efficiency. On MS-COCO and PASCAL VOC, DDP consistently outperforms prior methods and is the first replay-free MLCIL approach to exceed 80% mAP and 70% F1 under the standard MS-COCO B40-C10 benchmark.
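The class-specific positive-negative prompts and frozen "knowledge anchors" described in the abstract can be illustrated with a toy bookkeeping sketch. This is hypothetical code, not the authors' implementation; the class `DualPromptPool` and its methods are invented for illustration, and the prompt vectors are plain Python lists rather than learnable tensors.

```python
import random

class DualPromptPool:
    """Hypothetical sketch of DDP-style bookkeeping: each class owns a
    positive and a negative prompt vector, and prompts from earlier
    tasks are frozen as knowledge anchors when new classes arrive."""

    def __init__(self, embed_dim: int):
        self.embed_dim = embed_dim
        self.pos = []        # per-class positive prompt vectors
        self.neg = []        # per-class negative prompt vectors
        self.trainable = []  # True while a class's prompts may update

    def _new_prompt(self):
        return [random.gauss(0.0, 0.02) for _ in range(self.embed_dim)]

    def add_classes(self, num_new: int):
        # Freeze everything learned so far, then append fresh prompts.
        self.trainable = [False] * len(self.trainable)
        for _ in range(num_new):
            self.pos.append(self._new_prompt())
            self.neg.append(self._new_prompt())
            self.trainable.append(True)

    def prompts_for(self, class_idx: int):
        return self.pos[class_idx], self.neg[class_idx]

pool = DualPromptPool(embed_dim=8)
pool.add_classes(3)    # task 1 introduces classes 0-2
pool.add_classes(2)    # task 2 introduces classes 3-4; 0-2 become anchors
print(pool.trainable)  # → [False, False, False, True, True]
```

The per-class positive/negative split mirrors the abstract's claim that semantics are disentangled per category, while the freeze-on-new-task step is what makes old prompts act as anchors against forgetting.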
Related papers
- Beyond Prompt Degradation: Prototype-guided Dual-pool Prompting for Incremental Object Detection [18.985709082532992]
We propose a novel prompt-decoupled framework called PDP. It explicitly separates task-general and task-specific prompts, preventing interference between prompts and mitigating prompt coupling. It achieves state-of-the-art performance on the MS-COCO and PASCAL VOC benchmarks, highlighting its potential in balancing stability and plasticity.
arXiv Detail & Related papers (2026-03-02T12:09:38Z) - FDBPL: Faster Distillation-Based Prompt Learning for Region-Aware Vision-Language Models Adaptation [17.51747913191231]
We propose Faster Distillation-Based Prompt Learning (FDBPL), which addresses these issues by sharing soft supervision contexts across multiple training stages and implementing accelerated I/O. Comprehensive evaluations across 11 datasets demonstrate superior performance in base-to-new generalization, cross-dataset transfer, and robustness tests, achieving a $2.2\times$ faster training speed.
arXiv Detail & Related papers (2025-05-23T15:57:16Z) - Dual-Label Learning With Irregularly Present Labels [15.701169587084047]
In multi-task learning, labels are often missing irregularly across samples, which can be fully labeled, partially labeled, or unlabeled. This work focuses on the dual-label learning task and proposes a novel training and inference framework, Dual-Label Learning (DLL). DLL features a dual-tower model architecture that allows for explicit information exchange between labels, aimed at maximizing the utility of partially available labels.
arXiv Detail & Related papers (2024-10-18T11:07:26Z) - Enhancing Visual Continual Learning with Language-Guided Supervision [76.38481740848434]
Continual learning aims to empower models to learn new tasks without forgetting previously acquired knowledge.
We argue that the scarce semantic information conveyed by the one-hot labels hampers the effective knowledge transfer across tasks.
Specifically, we use PLMs to generate semantic targets for each class, which are frozen and serve as supervision signals.
arXiv Detail & Related papers (2024-03-24T12:41:58Z) - Appeal: Allow Mislabeled Samples the Chance to be Rectified in Partial Label Learning [55.4510979153023]
In partial label learning (PLL), each instance is associated with a set of candidate labels among which only one is ground-truth.
To help these mislabeled samples "appeal," we propose the first appeal-based framework.
arXiv Detail & Related papers (2023-12-18T09:09:52Z) - D2CSE: Difference-aware Deep continuous prompts for Contrastive Sentence Embeddings [3.04585143845864]
This paper describes Difference-aware Deep continuous prompt for Contrastive Sentence Embeddings (D2CSE) that learns sentence embeddings.
Compared to state-of-the-art approaches, D2CSE computes sentence vectors that are exceptionally good at distinguishing subtle differences between similar sentences.
arXiv Detail & Related papers (2023-04-18T13:45:07Z) - Unreliable Partial Label Learning with Recursive Separation [44.901941653899264]
The paper considers Unreliable Partial Label Learning (UPLL), a setting in which the true label may not be in the candidate label set.
We propose a two-stage framework named Unreliable Partial Label Learning with Recursive Separation (UPLLRS)
Our method demonstrates state-of-the-art performance as evidenced by experimental results.
arXiv Detail & Related papers (2023-02-20T10:39:31Z) - Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
We propose a Prototype-centered Attentive Learning (PAL) model composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective.
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impacts of outliers.
arXiv Detail & Related papers (2021-01-20T11:48:12Z) - Two-phase Pseudo Label Densification for Self-training based Domain Adaptation [93.03265290594278]
We propose a novel Two-phase Pseudo Label Densification framework, referred to as TPLD.
In the first phase, we use sliding window voting to propagate the confident predictions, utilizing intrinsic spatial-correlations in the images.
In the second phase, we perform a confidence-based easy-hard classification.
To ease the training process and avoid noisy predictions, we introduce the bootstrapping mechanism to the original self-training loss.
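The sliding-window voting step described above can be sketched as a single pass over a pseudo-label map: each unlabeled cell adopts the majority label among its labeled neighbors, exploiting spatial correlation. This is an illustrative toy version (the function name, the `-1` unlabeled marker, and the 2D list representation are assumptions, not the paper's code, which omits confidence weighting and operates on real prediction maps).

```python
def densify_pseudo_labels(labels, window=1):
    """Toy sliding-window voting sketch: each unlabeled cell (marked -1)
    adopts the majority label among labeled neighbors within the window.
    Votes are read from the original map, so the pass does not cascade."""
    h, w = len(labels), len(labels[0])
    out = [row[:] for row in labels]
    for i in range(h):
        for j in range(w):
            if labels[i][j] != -1:
                continue
            votes = {}
            for di in range(-window, window + 1):
                for dj in range(-window, window + 1):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w and labels[ni][nj] != -1:
                        votes[labels[ni][nj]] = votes.get(labels[ni][nj], 0) + 1
            if votes:  # leave the cell unlabeled if no labeled neighbor
                out[i][j] = max(votes, key=votes.get)
    return out

grid = [
    [0, 0, -1],
    [0, 1, 1],
    [-1, 1, 1],
]
print(densify_pseudo_labels(grid))  # → [[0, 0, 1], [0, 1, 1], [1, 1, 1]]
```

Reading votes from the input map rather than the output keeps the densification a single propagation step; the paper's second phase (confidence-based easy-hard classification) would then decide which densified labels to trust.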
arXiv Detail & Related papers (2020-12-09T02:35:25Z) - Provably Consistent Partial-Label Learning [120.4734093544867]
Partial-label learning (PLL) is a multi-class classification problem, where each training example is associated with a set of candidate labels.
In this paper, we propose the first generation model of candidate label sets, and develop two novel methods that are guaranteed to be consistent.
Experiments on benchmark and real-world datasets validate the effectiveness of the proposed generation model and two methods.
arXiv Detail & Related papers (2020-07-17T12:19:16Z)
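The candidate-label-set setting in the last entry can be made concrete with a toy generation model (an illustrative sketch under a simple uniform assumption, not the paper's exact model): the true label is always in the candidate set, and every other label joins it independently with some probability.

```python
import random

def generate_candidate_set(true_label, num_classes, q=0.3, rng=random):
    """Toy candidate-set generation model for partial-label learning:
    the ground-truth label is always included, and each other label is
    added independently with probability q (a stand-in assumption)."""
    cands = {true_label}
    for c in range(num_classes):
        if c != true_label and rng.random() < q:
            cands.add(c)
    return sorted(cands)

random.seed(0)
print(generate_candidate_set(true_label=2, num_classes=5, q=0.5))
```

With q=0 the candidate set collapses to ordinary supervised learning; larger q makes the labels more ambiguous, which is the regime PLL methods are designed for.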
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.