FCC: Fusing Conversation History and Candidate Provenance for Contextual
Response Ranking in Dialogue Systems
- URL: http://arxiv.org/abs/2304.00180v1
- Date: Fri, 31 Mar 2023 23:58:28 GMT
- Title: FCC: Fusing Conversation History and Candidate Provenance for Contextual
Response Ranking in Dialogue Systems
- Authors: Zihao Wang, Eugene Agichtein and Jinho Choi
- Abstract summary: We present a flexible neural framework that can integrate contextual information from multiple channels.
We evaluate our model on the MSDialog dataset widely used for evaluating conversational response ranking tasks.
- Score: 53.89014188309486
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Response ranking in dialogues plays a crucial role in retrieval-based
conversational systems. In a multi-turn dialogue, contextual information serves as
essential knowledge for capturing the gist of the conversation. In this paper, we
present a flexible neural framework that can integrate contextual information from
multiple channels. Specifically, for the current task, our approach provides two
information channels in parallel, Fusing Conversation history and domain knowledge
extracted from Candidate provenance (FCC), the source where candidate responses are
curated, as contextual information to improve the performance of multi-turn dialogue
response ranking.
The proposed approach can be generalized as a module to incorporate
miscellaneous contextual features for other context-oriented tasks. We evaluate
our model on the MSDialog dataset widely used for evaluating conversational
response ranking tasks. Our experimental results show that our framework
significantly outperforms the previous state-of-the-art models, improving
Recall@1 by 7% and MAP by 4%. Furthermore, we conduct ablation studies to
evaluate the contributions of each information channel, and of the framework
components, to the overall ranking performance, providing additional insights
and directions for further improvements.
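The listing above only summarizes the approach, so the following is a minimal sketch of the two-channel fusion idea, assuming GRU encoders, concatenation fusion, and illustrative hyperparameters; it is not the authors' released architecture, and all names here are hypothetical.

    # Minimal sketch: score a candidate response against two context channels,
    # (1) conversation history and (2) candidate provenance / domain knowledge.
    import torch
    import torch.nn as nn

    class TwoChannelRanker(nn.Module):
        def __init__(self, vocab_size=30000, dim=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim, padding_idx=0)
            self.history_enc = nn.GRU(dim, dim, batch_first=True)     # channel 1: conversation history
            self.provenance_enc = nn.GRU(dim, dim, batch_first=True)  # channel 2: candidate provenance
            self.response_enc = nn.GRU(dim, dim, batch_first=True)    # candidate response
            self.scorer = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

        def _encode(self, rnn, token_ids):
            _, h = rnn(self.embed(token_ids))   # final hidden state as a pooled encoding
            return h.squeeze(0)

        def forward(self, history_ids, provenance_ids, response_ids):
            h = self._encode(self.history_enc, history_ids)
            p = self._encode(self.provenance_enc, provenance_ids)
            r = self._encode(self.response_enc, response_ids)
            fused = torch.cat([h, p, r], dim=-1)   # fuse both context channels with the candidate
            return self.scorer(fused).squeeze(-1)  # one relevance score per candidate

    # Usage: rank a batch of candidates for the same conversation by their scores.
    model = TwoChannelRanker()
    scores = model(torch.randint(1, 30000, (4, 50)),   # history tokens
                   torch.randint(1, 30000, (4, 30)),   # provenance tokens
                   torch.randint(1, 30000, (4, 20)))   # candidate tokens
    ranking = scores.argsort(descending=True)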
Related papers
- Joint Learning of Context and Feedback Embeddings in Spoken Dialogue [3.8673630752805446]
We investigate the possibility of embedding short dialogue contexts and feedback responses in the same representation space using a contrastive learning objective.
Our results show that the model outperforms humans given the same ranking task and that the learned embeddings carry information about the conversational function of feedback responses.
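A bare-bones view of such a shared-space contrastive objective, assuming an in-batch formulation that may differ from the paper's exact loss:

    # Illustrative in-batch contrastive loss over (dialogue context, feedback response) pairs;
    # the temperature and pairing scheme are assumptions.
    import torch
    import torch.nn.functional as F

    def contrastive_loss(context_emb, response_emb, temperature=0.07):
        # context_emb, response_emb: (batch, dim) embeddings in a shared space
        c = F.normalize(context_emb, dim=-1)
        r = F.normalize(response_emb, dim=-1)
        logits = c @ r.t() / temperature       # similarity of every context to every response
        targets = torch.arange(c.size(0))      # the matching response sits on the diagonal
        return F.cross_entropy(logits, targets)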
arXiv Detail & Related papers (2024-06-11T14:22:37Z)
- PICK: Polished & Informed Candidate Scoring for Knowledge-Grounded Dialogue Systems [59.1250765143521]
Current knowledge-grounded dialogue systems often fail to align the generated responses with human-preferred qualities.
We propose Polished & Informed Candidate Scoring (PICK), a generation re-scoring framework.
We demonstrate the effectiveness of PICK in generating responses that are more faithful while keeping them relevant to the dialogue history.
arXiv Detail & Related papers (2023-09-19T08:27:09Z)
- Topic-Aware Response Generation in Task-Oriented Dialogue with Unstructured Knowledge Access [20.881612071473118]
We propose Topic-Aware Response Generation (TARG) to better integrate topical information in task-oriented dialogue.
TARG incorporates multiple topic-aware attention mechanisms to derive the importance weighting scheme over dialogue utterances and external knowledge sources.
arXiv Detail & Related papers (2022-12-10T22:32:28Z)
- End-to-end Spoken Conversational Question Answering: Task, Dataset and Model [92.18621726802726]
In spoken question answering, systems are designed to answer questions from contiguous text spans within the related speech transcripts.
We propose a new Spoken Conversational Question Answering task (SCQA), aiming to enable systems to model complex dialogue flows.
Our main objective is to build a system that handles conversational questions grounded in audio recordings, and to explore the plausibility of providing additional cues from different modalities to support information gathering.
arXiv Detail & Related papers (2022-04-29T17:56:59Z)
- Question rewriting? Assessing its importance for conversational question answering [0.6449761153631166]
This work presents a conversational question answering system designed specifically for the Search-Oriented Conversational AI (SCAI) shared task.
In particular, we considered different variations of the question rewriting module to evaluate the influence on the subsequent components.
Our system achieved the best performance in the shared task and our analysis emphasizes the importance of the conversation context representation for the overall system performance.
arXiv Detail & Related papers (2022-01-22T23:31:25Z)
- Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvements to multi-turn response selection.
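As a rough sketch of how such joint training is typically wired up (the task names in the dictionary follow the summary above, but the weighting and structure are assumptions, not the paper's code):

    # Hypothetical multi-task objective: the main response-selection loss plus the
    # four auxiliary self-supervised losses, each with an assumed weight.
    def multitask_loss(losses, weights=None):
        # losses: dict such as {"response_selection": ..., "next_session_prediction": ...,
        #   "utterance_restoration": ..., "incoherence_detection": ..., "consistency_discrimination": ...}
        weights = weights or {name: 1.0 for name in losses}
        return sum(weights[name] * value for name, value in losses.items())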
arXiv Detail & Related papers (2020-09-14T08:44:46Z)
- Multi-Stage Conversational Passage Retrieval: An Approach to Fusing Term Importance Estimation and Neural Query Rewriting [56.268862325167575]
We tackle conversational passage retrieval (ConvPR) with query reformulation integrated into a multi-stage ad-hoc IR system.
We propose two conversational query reformulation (CQR) methods: (1) term importance estimation and (2) neural query rewriting.
For the former, we expand conversational queries using important terms extracted from the conversational context with frequency-based signals.
For the latter, we reformulate conversational queries into natural, standalone, human-understandable queries with a pretrained sequence-to-sequence model.
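A hedged sketch of the two reformulation routes just described; the salience heuristic, stopword list, and function names are illustrative assumptions, and the neural rewriter is left as a stub rather than a specific model API:

    # Route 1: expand the query with frequent, non-stopword terms from the conversation context.
    # Route 2: hand the context and query to a fine-tuned sequence-to-sequence rewriter (stubbed).
    from collections import Counter

    STOPWORDS = {"the", "a", "an", "of", "to", "is", "it", "and", "what", "how"}

    def expand_with_important_terms(query, context_turns, top_k=3):
        query_terms = set(query.lower().split())
        counts = Counter(
            tok for turn in context_turns
            for tok in turn.lower().split()
            if tok not in STOPWORDS and tok not in query_terms
        )
        return query + " " + " ".join(term for term, _ in counts.most_common(top_k))

    def rewrite_with_seq2seq(query, context_turns, model=None):
        # A fine-tuned seq2seq model would emit a standalone, human-readable query here.
        raise NotImplementedError("plug in a fine-tuned sequence-to-sequence rewriter")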
arXiv Detail & Related papers (2020-05-05T14:30:20Z)
- Dialogue-Based Relation Extraction [53.2896545819799]
We present the first human-annotated dialogue-based relation extraction (RE) dataset DialogRE.
We argue that speaker-related information plays a critical role in the proposed task, based on an analysis of similarities and differences between dialogue-based and traditional RE tasks.
Experimental results demonstrate that a speaker-aware extension on the best-performing model leads to gains in both the standard and conversational evaluation settings.
arXiv Detail & Related papers (2020-04-17T03:51:57Z)