Is this Dialogue Coherent? Learning from Dialogue Acts and Entities
- URL: http://arxiv.org/abs/2006.10157v1
- Date: Wed, 17 Jun 2020 21:02:40 GMT
- Title: Is this Dialogue Coherent? Learning from Dialogue Acts and Entities
- Authors: Alessandra Cervone, Giuseppe Riccardi
- Abstract summary: We create the Switchboard Coherence (SWBD-Coh) corpus, a dataset of human-human spoken dialogues annotated with turn coherence ratings.
Our statistical analysis of the corpus indicates how turn coherence perception is affected by patterns of distribution of entities.
We find that models combining both DA and entity information yield the best performance for both response selection and turn coherence rating.
- Score: 82.44143808977209
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we investigate the human perception of coherence in open-domain
dialogues. In particular, we address the problem of annotating and modeling the
coherence of next-turn candidates while considering the entire history of the
dialogue. First, we create the Switchboard Coherence (SWBD-Coh) corpus, a
dataset of human-human spoken dialogues annotated with turn coherence ratings,
where next-turn candidate utterance ratings are provided considering the full
dialogue context. Our statistical analysis of the corpus indicates how turn
coherence perception is affected by patterns of distribution of entities
previously introduced and the Dialogue Acts used. Second, we experiment with
different architectures to model entities, Dialogue Acts and their combination
and evaluate their performance in predicting human coherence ratings on
SWBD-Coh. We find that models combining both DA and entity information yield
the best performance for both response selection and turn coherence rating.
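The abstract's core idea of combining Dialogue Act (DA) and entity information to rate next-turn candidates can be illustrated with a minimal sketch. This is not the paper's architecture: the DA bigram table, the entity-overlap feature, and the linear weighting below are all toy assumptions chosen for illustration.

```python
# Hedged sketch: score next-turn candidates by combining a toy DA-transition
# feature with an entity-overlap feature, in the spirit of (but not taken
# from) the SWBD-Coh experiments. All weights and tables are illustrative.

def entity_overlap(candidate_entities, context_entities):
    """Fraction of the candidate's entities already introduced in context."""
    if not candidate_entities:
        return 0.0
    return len(candidate_entities & context_entities) / len(candidate_entities)

# Toy DA bigram scores: how plausible is DA `b` immediately after DA `a`?
DA_TRANSITION = {
    ("question", "answer"): 1.0,
    ("statement", "statement"): 0.6,
    ("statement", "question"): 0.7,
    ("question", "question"): 0.2,
}

def coherence_score(prev_da, cand_da, cand_entities, context_entities,
                    w_da=0.5, w_ent=0.5):
    """Linear combination of the DA and entity features (toy weights)."""
    da_score = DA_TRANSITION.get((prev_da, cand_da), 0.1)
    ent_score = entity_overlap(cand_entities, context_entities)
    return w_da * da_score + w_ent * ent_score

def rank_candidates(prev_da, context_entities, candidates):
    """candidates: list of (text, da, entities); returns best-first order."""
    return sorted(
        candidates,
        key=lambda c: coherence_score(prev_da, c[1], c[2], context_entities),
        reverse=True,
    )
```

With context entities `{"dog", "park"}` and a preceding question, an answer that reuses introduced entities ranks above an off-topic counter-question, mirroring the paper's finding that both signals help.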
Related papers
- Dialogue Agents 101: A Beginner's Guide to Critical Ingredients for Designing Effective Conversational Systems [29.394466123216258]
This study provides a comprehensive overview of the primary characteristics of a dialogue agent, their corresponding open-domain datasets, and the methods used to benchmark these datasets.
We propose UNIT, a UNified dIalogue dataseT constructed from conversations of existing datasets for different dialogue tasks capturing the nuances for each of them.
arXiv Detail & Related papers (2023-07-14T10:05:47Z)
- Unsupervised Dialogue Topic Segmentation with Topic-aware Utterance Representation [51.22712675266523]
Dialogue Topic Segmentation (DTS) plays an essential role in a variety of dialogue modeling tasks.
We propose a novel unsupervised DTS framework, which learns topic-aware utterance representations from unlabeled dialogue data.
arXiv Detail & Related papers (2023-05-04T11:35:23Z)
- FCC: Fusing Conversation History and Candidate Provenance for Contextual Response Ranking in Dialogue Systems [53.89014188309486]
We present a flexible neural framework that can integrate contextual information from multiple channels.
We evaluate our model on the MSDialog dataset widely used for evaluating conversational response ranking tasks.
arXiv Detail & Related papers (2023-03-31T23:58:28Z)
- Commonsense-Focused Dialogues for Response Generation: An Empirical Study [39.49727190159279]
We present an empirical study of commonsense in dialogue response generation.
We first auto-extract commonsensical dialogues from existing dialogue datasets by leveraging ConceptNet.
We then collect a new dialogue dataset with 25K dialogues aimed at exhibiting social commonsense in an interactive setting.
arXiv Detail & Related papers (2021-09-14T04:32:09Z)
- Graph Based Network with Contextualized Representations of Turns in Dialogue [0.0]
Dialogue-based relation extraction (RE) aims to extract relation(s) between two arguments that appear in a dialogue.
We propose the TUrn COntext awaRE Graph Convolutional Network (TUCORE-GCN) modeled by paying attention to the way people understand dialogues.
arXiv Detail & Related papers (2021-09-09T03:09:08Z)
- DynaEval: Unifying Turn and Dialogue Level Evaluation [60.66883575106898]
We propose DynaEval, a unified automatic evaluation framework.
It is capable of not only performing turn-level evaluation but also holistically considering the quality of the entire dialogue.
Experiments show that DynaEval significantly outperforms the state-of-the-art dialogue coherence model.
arXiv Detail & Related papers (2021-06-02T12:23:18Z)
- Contextual Dialogue Act Classification for Open-Domain Conversational Agents [10.576497782941697]
Classifying the general intent of the user utterance in a conversation, also known as Dialogue Act (DA), is a key step in Natural Language Understanding (NLU) for conversational agents.
We propose CDAC (Contextual Dialogue Act), a simple yet effective deep learning approach for contextual dialogue act classification.
We use transfer learning to adapt models trained on human-human conversations to predict dialogue acts in human-machine dialogues.
arXiv Detail & Related papers (2020-05-28T06:48:10Z)
- Rethinking Dialogue State Tracking with Reasoning [76.0991910623001]
This paper proposes to track dialogue states gradually with reasoning over dialogue turns with the help of the back-end data.
Empirical results demonstrate that our method significantly outperforms the state-of-the-art methods by 38.6% in terms of joint belief accuracy for MultiWOZ 2.1.
arXiv Detail & Related papers (2020-05-27T02:05:33Z)
- Dialogue-Based Relation Extraction [53.2896545819799]
We present the first human-annotated dialogue-based relation extraction (RE) dataset DialogRE.
We argue that speaker-related information plays a critical role in the proposed task, based on an analysis of similarities and differences between dialogue-based and traditional RE tasks.
Experimental results demonstrate that a speaker-aware extension on the best-performing model leads to gains in both the standard and conversational evaluation settings.
arXiv Detail & Related papers (2020-04-17T03:51:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.