Beyond Whole Dialogue Modeling: Contextual Disentanglement for Conversational Recommendation
- URL: http://arxiv.org/abs/2504.17427v1
- Date: Thu, 24 Apr 2025 10:33:26 GMT
- Title: Beyond Whole Dialogue Modeling: Contextual Disentanglement for Conversational Recommendation
- Authors: Guojia An, Jie Zou, Jiwei Wei, Chaoning Zhang, Fuming Sun, Yang Yang
- Abstract summary: This paper proposes a novel model, DisenCRS, which introduces contextual disentanglement to improve conversational recommender systems. DisenCRS employs a dual disentanglement framework, including self-supervised contrastive disentanglement and counterfactual inference disentanglement. Experimental results on two widely used public datasets demonstrate that DisenCRS significantly outperforms existing conversational recommendation models.
- Score: 22.213312621287482
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conversational recommender systems aim to provide personalized recommendations by analyzing and utilizing contextual information related to dialogue. However, existing methods typically model the dialogue context as a whole, neglecting the inherent complexity and entanglement within the dialogue. Specifically, a dialogue comprises both focus information and background information, which mutually influence each other. Current methods tend to model these two types of information mixedly, leading to misinterpretation of users' actual needs, thereby lowering the accuracy of recommendations. To address this issue, this paper proposes a novel model to introduce contextual disentanglement for improving conversational recommender systems, named DisenCRS. The proposed model DisenCRS employs a dual disentanglement framework, including self-supervised contrastive disentanglement and counterfactual inference disentanglement, to effectively distinguish focus information and background information from the dialogue context under unsupervised conditions. Moreover, we design an adaptive prompt learning module to automatically select the most suitable prompt based on the specific dialogue context, fully leveraging the power of large language models. Experimental results on two widely used public datasets demonstrate that DisenCRS significantly outperforms existing conversational recommendation models, achieving superior performance on both item recommendation and response generation tasks.
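The abstract's core technical idea is to separate the dialogue context into "focus" and "background" factors with a self-supervised contrastive signal. As a rough, hypothetical sketch of what such an objective can look like (not the paper's actual architecture, loss, or hyperparameters), the PyTorch snippet below projects a pooled dialogue encoding through two heads and applies an InfoNCE-style loss in which two dropout views of the focus encoding are positives, while background encodings and other dialogues' focus views act as negatives; the names DisentangleHeads and info_nce_disentangle, the dimensions, and the temperature are invented for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangleHeads(nn.Module):
    """Illustrative two-head projection: one head for focus information,
    one for background information (assumed, not DisenCRS's actual design)."""
    def __init__(self, hidden_dim: int = 768, out_dim: int = 128):
        super().__init__()
        self.focus_head = nn.Linear(hidden_dim, out_dim)
        self.background_head = nn.Linear(hidden_dim, out_dim)

    def forward(self, dialogue_repr: torch.Tensor):
        # dialogue_repr: (batch, hidden_dim) pooled encoding of the dialogue context
        focus = F.normalize(self.focus_head(dialogue_repr), dim=-1)
        background = F.normalize(self.background_head(dialogue_repr), dim=-1)
        return focus, background

def info_nce_disentangle(focus_a, focus_b, background, temperature: float = 0.1):
    """InfoNCE-style objective: two views of a dialogue's focus encoding are
    positives; background encodings (and other dialogues' focus views) are
    negatives, discouraging the two factors from collapsing into each other."""
    batch = focus_a.size(0)
    candidates = torch.cat([focus_b, background], dim=0)   # (2*batch, out_dim)
    logits = focus_a @ candidates.t() / temperature        # (batch, 2*batch)
    targets = torch.arange(batch)                          # positive = matching focus_b
    return F.cross_entropy(logits, targets)

# Toy usage: two stochastic "views" of the same dialogue via dropout
heads = DisentangleHeads()
dialogue_repr = torch.randn(4, 768)                        # stand-in dialogue encoding
view_a = F.dropout(dialogue_repr, p=0.1, training=True)
view_b = F.dropout(dialogue_repr, p=0.1, training=True)
focus_a, background_a = heads(view_a)
focus_b, _ = heads(view_b)
loss = info_nce_disentangle(focus_a, focus_b, background_a)
loss.backward()
```

In the full system described in the abstract, the disentangled representations would additionally feed the counterfactual inference branch and drive the adaptive prompt selection over a large language model; those components are not sketched here.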
Related papers
- Harmonizing Large Language Models with Collaborative Behavioral Signals for Conversational Recommendation [20.542601754190073]
This work presents a novel probabilistic framework that synergizes behavioral patterns with conversational interactions through latent preference modeling. The framework first derives latent preference representations through established collaborative filtering techniques, then employs these representations to jointly refine both the linguistic preference expressions and behavioral patterns.
arXiv Detail & Related papers (2025-03-12T09:01:09Z) - 'What are you referring to?' Evaluating the Ability of Multi-Modal Dialogue Models to Process Clarificational Exchanges [65.03196674816772]
Referential ambiguities arise in dialogue when a referring expression does not uniquely identify the intended referent for the addressee.
Addressees usually detect such ambiguities immediately and work with the speaker to repair them using meta-communicative Clarification Exchanges (CEs): a Clarification Request (CR) and a response.
Here, we argue that the ability to generate and respond to CRs imposes specific constraints on the architecture and objective functions of multi-modal, visually grounded dialogue models.
arXiv Detail & Related papers (2023-07-28T13:44:33Z) - EM Pre-training for Multi-party Dialogue Response Generation [86.25289241604199]
In multi-party dialogues, the addressee of a response utterance should be specified before it is generated.
We propose an Expectation-Maximization (EM) approach that iteratively performs the expectation steps to generate addressee labels.
arXiv Detail & Related papers (2023-05-21T09:22:41Z) - Towards Unified Conversational Recommender Systems via Knowledge-Enhanced Prompt Learning [89.64215566478931]
Conversational recommender systems (CRS) aim to proactively elicit user preference and recommend high-quality items through natural language conversations.
To develop an effective CRS, it is essential to seamlessly integrate its recommendation and conversation modules.
We propose a unified CRS model named UniCRS based on knowledge-enhanced prompt learning.
arXiv Detail & Related papers (2022-06-19T09:21:27Z) - Coreference-Aware Dialogue Summarization [24.986030179701405]
We investigate approaches to explicitly incorporate coreference information in neural abstractive dialogue summarization models.
Experimental results show that the proposed approaches achieve state-of-the-art performance.
Evaluation results on factual correctness suggest such coreference-aware models are better at tracing the information flow among interlocutors.
arXiv Detail & Related papers (2021-06-16T05:18:50Z) - CREAD: Combined Resolution of Ellipses and Anaphora in Dialogues [14.66729951223073]
Anaphora and ellipses are two common phenomena in dialogues.
Traditionally, anaphora is resolved by coreference resolution and ellipses by query rewrite.
We propose a novel joint learning framework of modeling coreference resolution and query rewriting.
arXiv Detail & Related papers (2021-05-20T17:17:26Z) - Dialogue History Matters! Personalized Response Selection in Multi-turn Retrieval-based Chatbots [62.295373408415365]
We propose a personalized hybrid matching network (PHMN) for context-response matching.
Our contributions are two-fold: 1) our model extracts personalized wording behaviors from user-specific dialogue history as extra matching information.
We evaluate our model on two large datasets with user identification, i.e., the personalized Ubuntu dialogue corpus (P-Ubuntu) and the personalized Weibo dataset (P-Weibo).
arXiv Detail & Related papers (2021-03-17T09:42:11Z) - Improving Conversational Recommender Systems via Knowledge Graph based Semantic Fusion [77.21442487537139]
Conversational recommender systems (CRS) aim to recommend high-quality items to users through interactive conversations.
However, existing approaches face two issues. First, the conversation data itself lacks sufficient contextual information for accurately understanding users' preferences.
Second, there is a semantic gap between natural language expression and item-level user preference.
arXiv Detail & Related papers (2020-07-08T11:14:23Z) - Dialogue-Based Relation Extraction [53.2896545819799]
We present the first human-annotated dialogue-based relation extraction (RE) dataset DialogRE.
We argue that speaker-related information plays a critical role in the proposed task, based on an analysis of similarities and differences between dialogue-based and traditional RE tasks.
Experimental results demonstrate that a speaker-aware extension on the best-performing model leads to gains in both the standard and conversational evaluation settings.
arXiv Detail & Related papers (2020-04-17T03:51:57Z)