STEP: Stepwise Curriculum Learning for Context-Knowledge Fusion in Conversational Recommendation
- URL: http://arxiv.org/abs/2508.10669v1
- Date: Thu, 14 Aug 2025 14:08:21 GMT
- Title: STEP: Stepwise Curriculum Learning for Context-Knowledge Fusion in Conversational Recommendation
- Authors: Zhenye Yang, Jinpeng Chen, Huan Li, Xiongnan Jin, Xuanyang Li, Junwei Zhang, Hongbo Gao, Kaimin Wei, Senzhang Wang
- Abstract summary: We introduce STEP, a conversational recommender centered on pre-trained language models. STEP combines curriculum-guided context-knowledge fusion with lightweight task-specific prompt tuning. Experimental results show that STEP outperforms mainstream methods in recommendation precision and dialogue quality on two public datasets.
- Score: 18.833994388759326
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conversational recommender systems (CRSs) aim to proactively capture user preferences through natural language dialogue and recommend high-quality items. To achieve this, a CRS gathers user preferences via a dialogue module and builds user profiles through a recommendation module to generate appropriate recommendations. However, existing CRSs face challenges in capturing the deep semantics of user preferences and dialogue context. In particular, the efficient integration of external knowledge graph (KG) information into dialogue generation and recommendation remains a pressing issue. Traditional approaches typically combine KG information directly with dialogue content, which often struggles with complex semantic relationships, resulting in recommendations that may not align with user expectations. To address these challenges, we introduce STEP, a conversational recommender centered on pre-trained language models that combines curriculum-guided context-knowledge fusion with lightweight task-specific prompt tuning. At its heart, an F-Former progressively aligns the dialogue context with knowledge-graph entities through a three-stage curriculum, thus resolving fine-grained semantic mismatches. The fused representation is then injected into the frozen language model via two minimal yet adaptive prefix prompts: a conversation prefix that steers response generation toward user intent and a recommendation prefix that biases item ranking toward knowledge-consistent candidates. This dual-prompt scheme allows the model to share cross-task semantics while respecting the distinct objectives of dialogue and recommendation. Experimental results show that STEP outperforms mainstream methods in recommendation precision and dialogue quality on two public datasets.
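The dual-prompt scheme described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the F-Former output is stood in for by a random "fused" vector, the frozen language model is reduced to a matrix of token embeddings, and the projection matrices `W_conv` and `W_rec` are hypothetical names. The sketch only shows the structural idea: one shared fused representation is mapped through two small, task-specific projections into soft-prompt embeddings that are prepended to the frozen model's input.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_prefix, d_fused, seq_len = 16, 4, 8, 10

# Stand-in for the frozen PLM's embeddings of a tokenized dialogue.
dialogue_emb = rng.normal(size=(seq_len, d_model))

# Stand-in for the F-Former's fused context-knowledge representation.
fused = rng.normal(size=(d_fused,))

# Two lightweight task-specific projections; in prefix tuning, only
# parameters like these would be trained while the PLM stays frozen.
W_conv = rng.normal(size=(d_fused, n_prefix * d_model)) * 0.02
W_rec = rng.normal(size=(d_fused, n_prefix * d_model)) * 0.02

def make_prefix(W):
    """Map the shared fused vector to n_prefix soft-prompt embeddings."""
    return (fused @ W).reshape(n_prefix, d_model)

# Prepend each task's prefix to the same frozen dialogue embeddings:
# both tasks share the fused semantics but get distinct conditioning.
conv_input = np.vstack([make_prefix(W_conv), dialogue_emb])
rec_input = np.vstack([make_prefix(W_rec), dialogue_emb])

print(conv_input.shape, rec_input.shape)  # (14, 16) (14, 16)
```

Note how the dialogue portion of both inputs is identical; only the learned prefixes differ, which is how the scheme shares cross-task semantics while keeping the two objectives separate.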
Related papers
- Integrating Vision-Centric Text Understanding for Conversational Recommender Systems [61.731947296510164]
STARCRS is a Screen-Text-AwaRe Conversational Recommender System. We propose a knowledge-anchored fusion framework that combines contrastive alignment, cross-attention interaction, and adaptive gating. Experiments on two widely used benchmarks demonstrate that STARCRS consistently improves both recommendation accuracy and generated response quality.
arXiv Detail & Related papers (2026-01-20T01:41:54Z)
- On Mitigating Data Sparsity in Conversational Recommender Systems [69.70761335240738]
Conversational recommender systems (CRSs) capture user preference through textual information in dialogues. They suffer from data sparsity on two fronts: the dialogue space is vast and linguistically diverse, while the item space exhibits long-tail and sparse distributions. Existing methods struggle with (1) generalizing to varied dialogue expressions due to underutilization of rich textual cues, and (2) learning informative item representations under severe sparsity.
arXiv Detail & Related papers (2025-07-01T06:54:51Z)
- Beyond Whole Dialogue Modeling: Contextual Disentanglement for Conversational Recommendation [22.213312621287482]
This paper proposes a novel model to introduce contextual disentanglement for improving conversational recommender systems. DisenCRS employs a dual disentanglement framework, including self-supervised contrastive disentanglement and counterfactual inference disentanglement. Experimental results on two widely used public datasets demonstrate that DisenCRS significantly outperforms existing conversational recommendation models.
arXiv Detail & Related papers (2025-04-24T10:33:26Z)
- Parameter-Efficient Conversational Recommender System as a Language Processing Task [52.47087212618396]
Conversational recommender systems (CRS) aim to recommend relevant items to users by eliciting user preference through natural language conversation.
Prior work often utilizes external knowledge graphs for items' semantic information, a language model for dialogue generation, and a recommendation module for ranking relevant items.
In this paper, we represent items in natural language and formulate CRS as a natural language processing task.
arXiv Detail & Related papers (2024-01-25T14:07:34Z)
- Variational Reasoning over Incomplete Knowledge Graphs for Conversational Recommendation [48.70062671767362]
We propose the Variational Reasoning over Incomplete KGs Conversational Recommender (VRICR)
Our key idea is to incorporate the large dialogue corpus naturally accompanied with CRSs to enhance the incomplete KGs.
We also denote the dialogue-specific subgraphs of KGs as latent variables with categorical priors for adaptive knowledge graphs.
arXiv Detail & Related papers (2022-12-22T17:02:21Z)
- Customized Conversational Recommender Systems [45.84713970070487]
Conversational recommender systems (CRS) aim to capture user's current intentions and provide recommendations through real-time multi-turn conversational interactions.
We propose a novel CRS model, coined Customized Conversational Recommender System (CCRS), which customizes the CRS model for users from three perspectives.
To provide personalized recommendations, we extract user's current fine-grained intentions from dialogue context with the guidance of user's inherent preferences.
arXiv Detail & Related papers (2022-06-30T09:45:36Z)
- Towards Unified Conversational Recommender Systems via Knowledge-Enhanced Prompt Learning [89.64215566478931]
Conversational recommender systems (CRS) aim to proactively elicit user preference and recommend high-quality items through natural language conversations.
To develop an effective CRS, it is essential to seamlessly integrate the two modules.
We propose a unified CRS model named UniCRS based on knowledge-enhanced prompt learning.
arXiv Detail & Related papers (2022-06-19T09:21:27Z)
- Improving Conversational Recommender Systems via Knowledge Graph based Semantic Fusion [77.21442487537139]
Conversational recommender systems (CRS) aim to recommend high-quality items to users through interactive conversations.
First, the conversation data itself lacks sufficient contextual information for accurately understanding users' preferences.
Second, there is a semantic gap between natural language expression and item-level user preference.
arXiv Detail & Related papers (2020-07-08T11:14:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.