Reinforced Curriculum Pre-Alignment for Domain-Adaptive VLMs
- URL: http://arxiv.org/abs/2602.10740v1
- Date: Wed, 11 Feb 2026 11:04:37 GMT
- Title: Reinforced Curriculum Pre-Alignment for Domain-Adaptive VLMs
- Authors: Yuming Yan, Shuo Yang, Kai Tang, Sihong Chen, Yang Zhang, Ke Xu, Dan Hu, Qun Yu, Pengfei Hu, Edith C. H. Ngai
- Abstract summary: Vision-Language Models (VLMs) demonstrate remarkable general-purpose capabilities but often fall short in specialized domains. We propose Reinforced Curriculum Pre-Alignment (RCPA), a novel post-training paradigm that introduces a curriculum-aware progressive modulation mechanism.
- Score: 21.190823331753464
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vision-Language Models (VLMs) demonstrate remarkable general-purpose capabilities but often fall short in specialized domains such as medical imaging or geometric problem-solving. Supervised Fine-Tuning (SFT) can enhance performance within a target domain, but it typically causes catastrophic forgetting, limiting its generalization. The central challenge, therefore, is to adapt VLMs to new domains while preserving their general-purpose capabilities. Continual pretraining is effective for expanding knowledge in Large Language Models (LLMs), but it is less feasible for VLMs due to prohibitive computational costs and the unavailability of pretraining data for most open-source models. This necessitates efficient post-training adaptation methods. Reinforcement learning (RL)-based approaches such as Group Relative Policy Optimization (GRPO) have shown promise in preserving general abilities, yet they often fail in domain adaptation scenarios where the model initially lacks sufficient domain knowledge, leading to optimization collapse. To bridge this gap, we propose Reinforced Curriculum Pre-Alignment (RCPA), a novel post-training paradigm that introduces a curriculum-aware progressive modulation mechanism. In the early phase, RCPA applies partial output constraints to safely expose the model to new domain concepts. As the model's domain familiarity increases, training gradually transitions to full generation optimization, refining responses and aligning them with domain-specific preferences. This staged adaptation balances domain knowledge acquisition with the preservation of general multimodal capabilities. Extensive experiments across specialized domains and general benchmarks validate the effectiveness of RCPA, establishing a practical pathway toward building high-performing and domain-adaptive VLMs.
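The abstract describes RCPA's staged training only at a high level, so the sketch below is a minimal, non-authoritative illustration of how a curriculum-aware schedule could blend a teacher-forced objective on a constrained prefix with GRPO-style group-relative advantages on the freely generated remainder. The function names, the linear schedule, and the blending rule are our assumptions rather than the paper's released code; only the group-relative advantage normalization follows GRPO's standard form.

```python
import numpy as np

def constraint_ratio(step, total_steps, start=0.9, end=0.0):
    """Hypothetical curriculum schedule: the fraction of each target response
    that is teacher-forced, annealed linearly from `start` to `end`, so that
    training moves from partially constrained decoding to full generation."""
    t = min(step / max(total_steps, 1), 1.0)
    return start + (end - start) * t

def grpo_advantages(rewards, eps=1e-8):
    """GRPO-style group-relative advantages: each sampled response's reward is
    normalized against its group's mean and std (no learned value network)."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

def rcpa_step_loss(logp_constrained, logp_free, rewards, step, total_steps):
    """Blend a supervised (teacher-forced) term on the constrained prefix with
    a REINFORCE-style surrogate on the freely generated suffix. The `logp_*`
    arguments are per-sample summed log-probs under the current policy."""
    rho = constraint_ratio(step, total_steps)
    adv = grpo_advantages(rewards)
    sft_term = -np.mean(logp_constrained)            # imitate domain targets
    rl_term = -np.mean(adv * np.asarray(logp_free))  # refine free generation
    return rho * sft_term + (1.0 - rho) * rl_term

# Toy usage: a group of 4 sampled responses at an early and a late training step.
logp_c = [-12.3, -10.8, -11.5, -13.0]
logp_f = [-25.1, -22.4, -27.9, -24.0]
rewards = [0.2, 0.9, 0.1, 0.6]
print(rcpa_step_loss(logp_c, logp_f, rewards, step=100, total_steps=10_000))
print(rcpa_step_loss(logp_c, logp_f, rewards, step=9_000, total_steps=10_000))
```

As `constraint_ratio` anneals to zero, the objective shifts from imitating domain targets to purely preference-driven refinement, mirroring the staged adaptation the abstract describes.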
Related papers
- Steering Vision-Language Pre-trained Models for Incremental Face Presentation Attack Detection [62.89126207012712]
Face Presentation Attack Detection (PAD) demands incremental learning to keep pace with evolving spoofing tactics and domains. Privacy regulations forbid retaining past data, necessitating rehearsal-free incremental learning (RF-IL).
arXiv Detail & Related papers (2025-12-22T04:30:11Z) - Set Pivot Learning: Redefining Generalized Segmentation with Vision Foundation Models [15.321114178936554]
We introduce Set Pivot Learning (SPL), a paradigm shift that redefines domain generalization (DG) around Vision Foundation Models (VFMs). Traditional DG assumes that the target domain is inaccessible during training, but the emergence of VFMs renders this assumption obsolete. SPL re-defines the domain-migration task on top of VFMs, making it better suited to current research and application requirements.
arXiv Detail & Related papers (2025-08-03T04:20:35Z) - Exploring Probabilistic Modeling Beyond Domain Generalization for Semantic Segmentation [37.724608645202466]
Domain Generalized Semantic Segmentation (DGSS) is a critical yet challenging task, as domain shifts in unseen environments can severely compromise model performance. This paper introduces PDAF, a Probabilistic Diffusion Alignment Framework that enhances the generalization of existing segmentation networks. Experiments validate the effectiveness of PDAF across diverse and challenging urban scenes.
arXiv Detail & Related papers (2025-07-28T22:27:58Z) - ixi-GEN: Efficient Industrial sLLMs through Domain Adaptive Continual Pretraining [3.976980328606434]
Open-source large language models (LLMs) have expanded opportunities for enterprise applications. Many organizations still lack the infrastructure to deploy and maintain large-scale models. Small large language models (sLLMs) have become a practical alternative despite inherent performance limitations.
arXiv Detail & Related papers (2025-07-09T12:30:42Z) - Adversarial Data Augmentation for Single Domain Generalization via Lyapunov Exponent-Guided Optimization [6.619253289031494]
Single Domain Generalization aims to develop models capable of generalizing to unseen target domains using only one source domain. We propose LEAwareSGD, a novel Lyapunov Exponent (LE)-guided optimization approach inspired by dynamical systems theory (a toy sketch of LE-guided learning-rate modulation appears after this list). Experiments on PACS, OfficeHome, and DomainNet demonstrate that LEAwareSGD yields substantial generalization gains.
arXiv Detail & Related papers (2025-07-06T09:03:08Z) - General-Reasoner: Advancing LLM Reasoning Across All Domains [64.70599911897595]
Reinforcement learning (RL) has recently demonstrated strong potential in enhancing the reasoning capabilities of large language models (LLMs). We propose General-Reasoner, a novel training paradigm designed to enhance LLM reasoning capabilities across diverse domains. We train a series of models and evaluate them on a wide range of datasets spanning domains such as physics, chemistry, finance, and electronics.
arXiv Detail & Related papers (2025-05-20T17:41:33Z) - Demystifying Domain-adaptive Post-training for Financial LLMs [87.28855088465197]
FINDAP is a systematic and fine-grained investigation into domain-adaptive post-training of large language models. Our approach consists of four key components: FinCap, FinRec, FinTrain and FinEval. The resulting model, Llama-Fin, achieves state-of-the-art performance across a wide range of financial tasks.
arXiv Detail & Related papers (2025-01-09T04:26:15Z) - Enhancing Robustness of Vision-Language Models through Orthogonality Learning and Self-Regularization [77.62516752323207]
We introduce an orthogonal fine-tuning method that efficiently adapts pretrained weights while enabling enhanced robustness and generalization (a Cayley-transform sketch of orthogonal fine-tuning appears after this list).
A self-regularization strategy is further exploited to maintain the stability of the zero-shot generalization of VLMs; the resulting method is dubbed OrthSR.
For the first time, we revisit CLIP and CoOp with our method to effectively improve the model in the few-shot image classification scenario.
arXiv Detail & Related papers (2024-07-11T10:35:53Z) - Investigating Continual Pretraining in Large Language Models: Insights and Implications [9.660013084324817]
Continual learning in large language models (LLMs) is an evolving domain that focuses on developing efficient and sustainable training strategies. We introduce a new benchmark designed to measure the adaptability of LLMs to changing pretraining data landscapes. Our findings uncover several key insights: (i) continual pretraining consistently improves the 1.5B-parameter models studied in this work and is also superior to domain adaptation, (ii) larger models always achieve better perplexity than smaller ones when continually pretrained on the same corpus, and (iii) smaller models are particularly sensitive to continual pretraining, showing the most significant rates of both learning and forgetting.
arXiv Detail & Related papers (2024-02-27T10:47:24Z) - Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution for identifying target-domain classes absent from the source domain during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z) - Unsupervised Domain Generalization for Person Re-identification: A Domain-specific Adaptive Framework [50.88463458896428]
Domain generalization (DG) has attracted much attention in person re-identification (ReID) recently.
Existing methods usually need the source domains to be labeled, which could be a significant burden for practical ReID tasks.
We propose a simple and efficient domain-specific adaptive framework, and realize it with an adaptive normalization module (a toy sketch of such a module appears after this list).
arXiv Detail & Related papers (2021-11-30T02:35:51Z) - Universal Source-Free Domain Adaptation [57.37520645827318]
We propose a novel two-stage learning process for domain adaptation.
In the Procurement stage, we aim to equip the model for future source-free deployment, assuming no prior knowledge of the upcoming category-gap and domain-shift.
In the Deployment stage, the goal is to design a unified adaptation algorithm capable of operating across a wide range of category-gaps.
arXiv Detail & Related papers (2020-04-09T07:26:20Z)
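Several entries above name concrete mechanisms without spelling them out; the short sketches below illustrate plausible realizations under clearly stated assumptions. First, the LEAwareSGD entry: a Lyapunov exponent (LE) measures whether nearby optimization trajectories diverge or contract, and the toy below estimates a finite-time LE on a fixed quadratic landscape and uses it to modulate the learning rate. The modulation rule, constants, and toy loss are all our own assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.diag([0.5, 2.0, 5.0])  # toy loss L(w) = 0.5 * w^T A w

def grad(w):
    # gradient of the toy quadratic loss
    return A @ w

w = rng.normal(size=3)
w_pert = w + 1e-5 * rng.normal(size=3)  # nearby "shadow" trajectory
lr = 0.1

for step in range(200):
    # one SGD step for both trajectories with the current learning rate
    w_next = w - lr * grad(w)
    w_pert_next = w_pert - lr * grad(w_pert)
    # finite-time Lyapunov exponent estimate: log growth of the separation
    d0 = np.linalg.norm(w_pert - w)
    d1 = np.linalg.norm(w_pert_next - w_next)
    lam = np.log((d1 + 1e-12) / (d0 + 1e-12))
    # hypothetical LE-aware rule: shrink the step when the dynamics expand
    # (lam > 0), relax it when they contract (lam < 0)
    lr = np.clip(lr * np.exp(-0.5 * lam), 1e-4, 0.5)
    # renormalize the perturbation so the estimate stays local
    w, w_pert = w_next, w_next + (d0 / (d1 + 1e-12)) * (w_pert_next - w_next)

print(f"final lr={lr:.4f}, |w|={np.linalg.norm(w):.2e}, last lambda={lam:.3f}")
```

On this toy landscape the rule self-regulates the learning rate toward the edge of stability, which is the general flavor of LE-guided optimization.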
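Second, the OrthSR entry does not specify how orthogonality is enforced. A common realization of orthogonal fine-tuning, assumed here purely as a stand-in, multiplies a frozen pretrained weight by a rotation built from trainable parameters via the Cayley transform; because the rotation is orthogonal, the norms of the pretrained weights are preserved, which is the intuition behind the robustness claim.

```python
import numpy as np

rng = np.random.default_rng(0)

def cayley_orthogonal(theta):
    """Build an orthogonal matrix R from an unconstrained parameter block
    via the Cayley transform: R = (I - S)(I + S)^{-1}, S skew-symmetric."""
    S = theta - theta.T  # skew-symmetrize the free parameters
    I = np.eye(theta.shape[0])
    return (I - S) @ np.linalg.inv(I + S)

d_out, d_in = 8, 16
W_pre = rng.normal(size=(d_out, d_in))           # frozen pretrained weight (stand-in)
theta = 1e-3 * rng.normal(size=(d_out, d_out))   # small trainable parameters

R = cayley_orthogonal(theta)
W_ft = R @ W_pre  # fine-tuned weight: rotate the pretrained weight, don't rescale it

# Orthogonality preserves the norms of the pretrained weights:
print(np.allclose(R.T @ R, np.eye(d_out), atol=1e-8))            # True
print(np.allclose(np.linalg.norm(W_pre), np.linalg.norm(W_ft)))  # True
```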
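Finally, the person re-identification entry's adaptive normalization module is likewise unspecified in the summary. One standard construction, shown here for illustration only, gates between batch statistics (domain-shared) and instance statistics (domain-specific) with a learned per-channel coefficient.

```python
import numpy as np

def adaptive_norm(x, alpha, eps=1e-5):
    """Toy adaptive normalization for feature maps x of shape (N, C, H, W):
    blend batch statistics with instance statistics using a per-channel
    gate alpha in [0, 1] (learned in a real module)."""
    # batch-norm statistics: over batch and spatial dims, per channel
    bn_mu = x.mean(axis=(0, 2, 3), keepdims=True)
    bn_var = x.var(axis=(0, 2, 3), keepdims=True)
    # instance-norm statistics: over spatial dims, per sample and channel
    in_mu = x.mean(axis=(2, 3), keepdims=True)
    in_var = x.var(axis=(2, 3), keepdims=True)
    a = alpha.reshape(1, -1, 1, 1)
    mu = a * bn_mu + (1 - a) * in_mu
    var = a * bn_var + (1 - a) * in_var
    return (x - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(4, 8, 16, 16))
alpha = np.full(8, 0.5)  # fixed here for the demo; trainable in practice
y = adaptive_norm(x, alpha)
print(y.shape, round(float(y.mean()), 4))
```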
This list is automatically generated from the titles and abstracts of the papers on this site.