Dialogue History Matters! Personalized Response Selection in Multi-turn
Retrieval-based Chatbots
- URL: http://arxiv.org/abs/2103.09534v1
- Date: Wed, 17 Mar 2021 09:42:11 GMT
- Title: Dialogue History Matters! Personalized Response Selection in Multi-turn
Retrieval-based Chatbots
- Authors: Juntao Li, Chang Liu, Chongyang Tao, Zhangming Chan, Dongyan Zhao, Min
Zhang, Rui Yan
- Abstract summary: We propose a personalized hybrid matching network (PHMN) for context-response matching.
Our contributions are two-fold: 1) our model extracts personalized wording behaviors from user-specific dialogue history as extra matching information.
We evaluate our model on two large datasets with user identification, i.e., the personalized Ubuntu Dialogue Corpus (P-Ubuntu) and the personalized Weibo dataset (P-Weibo).
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing multi-turn context-response matching methods mainly concentrate on
obtaining multi-level and multi-dimension representations and better
interactions between context utterances and response. However, in real-world
conversation scenarios, whether a response candidate is suitable depends not
only on the given dialogue context but also on other background information,
e.g., wording habits and user-specific dialogue history content. To fill the
gap between these
up-to-date methods and the real-world applications, we incorporate
user-specific dialogue history into the response selection and propose a
personalized hybrid matching network (PHMN). Our contributions are two-fold: 1)
our model extracts personalized wording behaviors from user-specific dialogue
history as extra matching information; 2) we perform hybrid representation
learning on context-response utterances and explicitly incorporate a customized
attention mechanism to extract vital information from context-response
interactions so as to improve the accuracy of matching. We evaluate our model
on two large datasets with user identification, i.e., the personalized Ubuntu
Dialogue Corpus (P-Ubuntu) and the personalized Weibo dataset (P-Weibo).
Experimental results confirm that our method significantly outperforms several
strong models by combining personalized attention, wording behaviors, and
hybrid representation learning.
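As a rough illustration of the general idea (not the authors' actual PHMN architecture), the sketch below scores a response candidate by attending over context utterance embeddings and adding a similarity term against the centroid of the user's dialogue-history embeddings as a crude proxy for wording habits; the function name and the additive scoring form are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def personalized_match_score(context, response, history):
    """Toy personalized context-response matching score.

    context:  (T, d) embeddings of context utterances
    response: (d,)   embedding of the response candidate
    history:  (H, d) embeddings of the user's past utterances
    """
    # Attention of the response over the context utterances
    attn = softmax(context @ response)        # (T,)
    ctx_vec = attn @ context                  # (d,) attended context vector

    # Personalization term: similarity to the user's history centroid
    persona = history.mean(axis=0)            # (d,)

    # Combine context match and wording-habit match
    return float(ctx_vec @ response + persona @ response)
```

Candidates would then be ranked by this score; a learned model replaces the fixed centroid and dot products with trained attention and matching layers.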
Related papers
- UniMS-RAG: A Unified Multi-source Retrieval-Augmented Generation for Personalized Dialogue Systems [43.266153244137215]
Large Language Models (LLMs) have shown exceptional capabilities in many natural language understanding and generation tasks.
We decompose the use of multiple sources in generating personalized response into three sub-tasks: Knowledge Source Selection, Knowledge Retrieval, and Response Generation.
We propose a novel Unified Multi-Source Retrieval-Augmented Generation system (UniMS-RAG)
arXiv Detail & Related papers (2024-01-24T06:50:20Z)
- Harmonizing Code-mixed Conversations: Personality-assisted Code-mixed Response Generation in Dialogues [28.49660948650183]
We introduce a novel approach centered on harnessing the Big Five personality traits acquired in an unsupervised manner from the conversations to bolster the performance of response generation.
This is evident in the increase observed in ROUGE and BLEU scores for the response generation task when the identified personality is seamlessly integrated into the dialogue context.
arXiv Detail & Related papers (2024-01-18T15:21:16Z)
- EM Pre-training for Multi-party Dialogue Response Generation [86.25289241604199]
In multi-party dialogues, the addressee of a response utterance should be specified before it is generated.
We propose an Expectation-Maximization (EM) approach that iteratively performs the expectation steps to generate addressee labels.
arXiv Detail & Related papers (2023-05-21T09:22:41Z)
- Mitigating Negative Style Transfer in Hybrid Dialogue System [42.65754135759929]
Hybrid dialogue systems that accomplish user-specific goals and participate in open-topic chitchat with users are attracting growing attention.
Existing research learns both tasks concurrently utilizing a multi-task fusion technique but ignores the negative transfer phenomenon induced by the unique textual style differences.
We devise supervised and self-supervised positive and negative sample constructions for diverse datasets.
arXiv Detail & Related papers (2022-12-14T12:13:34Z)
- Less is More: Learning to Refine Dialogue History for Personalized Dialogue Generation [57.73547958927826]
We propose to refine the user dialogue history on a large scale, based on which we can handle more dialogue history and obtain more accurate persona information.
Specifically, we design an MSP model which consists of three personal information refiners and a personalized response generator.
arXiv Detail & Related papers (2022-04-18T02:02:56Z)
- Who says like a style of Vitamin: Towards Syntax-Aware Dialogue Summarization using Multi-task Learning [2.251583286448503]
We focus on the association between utterances from individual speakers and unique syntactic structures.
Speakers have unique textual styles that can contain linguistic information, such as voiceprint.
We employ multi-task learning of both syntax-aware information and dialogue summarization.
arXiv Detail & Related papers (2021-09-29T05:30:39Z)
- Commonsense-Focused Dialogues for Response Generation: An Empirical Study [39.49727190159279]
We present an empirical study of commonsense in dialogue response generation.
We first auto-extract commonsensical dialogues from existing dialogue datasets by leveraging ConceptNet.
We then collect a new dialogue dataset with 25K dialogues aimed at exhibiting social commonsense in an interactive setting.
arXiv Detail & Related papers (2021-09-14T04:32:09Z)
- Reasoning in Dialog: Improving Response Generation by Context Reading Comprehension [49.92173751203827]
In multi-turn dialog, utterances do not always take the full form of sentences.
We propose to improve the response generation performance by examining the model's ability to answer a reading comprehension question.
arXiv Detail & Related papers (2020-12-14T10:58:01Z)
- Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
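As a toy illustration of how one such auxiliary signal could be built (the paper's exact sampling scheme may differ), the incoherence detection task can construct coherent/incoherent training pairs by swapping a random utterance for a distractor from another dialogue; `incoherence_sample` and its arguments are hypothetical names.

```python
import random

def incoherence_sample(dialogue, corpus, seed=0):
    """Build one (coherent, 1) / (incoherent, 0) pair for the
    incoherence detection auxiliary task: the negative copy has a
    randomly chosen utterance replaced by a distractor utterance
    drawn from an unrelated dialogue."""
    rng = random.Random(seed)
    neg = list(dialogue)
    i = rng.randrange(len(neg))
    neg[i] = rng.choice(corpus)  # distractor from another dialogue
    return (list(dialogue), 1), (neg, 0)
```

The response selection model is then trained jointly on such pairs alongside the main matching objective.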
arXiv Detail & Related papers (2020-09-14T08:44:46Z)
- Dialogue-Based Relation Extraction [53.2896545819799]
We present the first human-annotated dialogue-based relation extraction (RE) dataset DialogRE.
We argue that speaker-related information plays a critical role in the proposed task, based on an analysis of similarities and differences between dialogue-based and traditional RE tasks.
Experimental results demonstrate that a speaker-aware extension on the best-performing model leads to gains in both the standard and conversational evaluation settings.
arXiv Detail & Related papers (2020-04-17T03:51:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.