Grounding Conversations with Improvised Dialogues
- URL: http://arxiv.org/abs/2004.09544v2
- Date: Tue, 19 May 2020 05:34:13 GMT
- Title: Grounding Conversations with Improvised Dialogues
- Authors: Hyundong Cho, Jonathan May
- Abstract summary: We collect a corpus of more than 26,000 yes-and turns, transcribing them from improv dialogues and extracting them from larger, but more sparsely populated movie script dialogue corpora.
We fine-tune chit-chat dialogue systems with our corpus to encourage more grounded, relevant conversation and confirm these findings with human evaluations.
- Score: 25.486608189901705
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Effective dialogue involves grounding, the process of establishing mutual
knowledge that is essential for communication between people. Modern dialogue
systems are not explicitly trained to build common ground, and therefore
overlook this important aspect of communication. Improvisational theater
(improv) intrinsically contains a high proportion of dialogue focused on
building common ground, and makes use of the yes-and principle, a strong
grounding speech act, to establish coherence and an actionable objective
reality. We collect a corpus of more than 26,000 yes-and turns, transcribing
them from improv dialogues and extracting them from larger, but more sparsely
populated movie script dialogue corpora, via a bootstrapped classifier. We
fine-tune chit-chat dialogue systems with our corpus to encourage more
grounded, relevant conversation and confirm these findings with human
evaluations.
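The corpus-building step described above — growing a small seed set of yes-and turns into a larger mined corpus via a bootstrapped classifier — can be sketched as a simple self-training loop. This is a minimal illustration only: the function names, the keyword-overlap scorer, and the confidence threshold are all hypothetical stand-ins, not the paper's actual trained classifier.

```python
# Hypothetical sketch of a bootstrapped mining loop for "yes-and" turns.
# The paper trains a real classifier; here a toy word-overlap score stands
# in for the classifier's confidence, purely to show the loop structure.

def score_yes_and(turn, cue_words):
    """Toy confidence: fraction of the turn's words seen in known yes-ands."""
    words = turn.lower().split()
    return sum(w in cue_words for w in words) / len(words)

def bootstrap_mine(seed_positive, unlabeled, rounds=3, threshold=0.5):
    """Iteratively grow a yes-and corpus from a small seed set."""
    corpus = list(seed_positive)
    pool = list(unlabeled)
    for _ in range(rounds):
        # "Retrain" on the current corpus: collect its vocabulary as cues.
        cue_words = {w for turn in corpus for w in turn.lower().split()}
        # Label the pool and keep only high-confidence candidates.
        mined = [t for t in pool if score_yes_and(t, cue_words) >= threshold]
        if not mined:
            break
        corpus.extend(mined)
        pool = [t for t in pool if t not in mined]
    return corpus
```

In this sketch, each round enlarges the training set with confidently labeled turns from the sparser pool (analogous to the movie-script corpora), so the classifier's coverage grows without further manual transcription.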
Related papers
- Interactive Dialogue Agents via Reinforcement Learning on Hindsight Regenerations [58.65755268815283]
Many real dialogues are interactive, meaning an agent's utterances will influence their conversational partner, elicit information, or change their opinion.
We use this fact to rewrite and augment existing suboptimal data, and train via offline reinforcement learning (RL) an agent that outperforms both prompting and learning from unaltered human demonstrations.
Our results in a user study with real humans show that our approach greatly outperforms existing state-of-the-art dialogue agents.
arXiv Detail & Related papers (2024-11-07T21:37:51Z)
- Conversational Grounding: Annotation and Analysis of Grounding Acts and Grounding Units [3.805394793605586]
We present the annotation of two dialog corpora employing Grounding Acts, Grounding Units, and a measure of their degree of grounding.
Our work aims to make conversations with machines better understood and more reliable in natural day-to-day collaborative dialogs.
arXiv Detail & Related papers (2024-03-25T10:39:18Z)
- HybriDialogue: An Information-Seeking Dialogue Dataset Grounded on Tabular and Textual Data [87.67278915655712]
We present a new dialogue dataset, HybriDialogue, which consists of crowdsourced natural conversations grounded on both Wikipedia text and tables.
The conversations are created through the decomposition of complex multihop questions into simple, realistic multiturn dialogue interactions.
arXiv Detail & Related papers (2022-04-28T00:52:16Z)
- Learning-by-Narrating: Narrative Pre-Training for Zero-Shot Dialogue Comprehension [48.483910831143724]
Comprehending a dialogue requires a model to capture diverse kinds of key information in the utterances.
We develop a novel narrative-guided pre-training strategy that learns by narrating the key information from a dialogue input.
arXiv Detail & Related papers (2022-03-19T05:20:25Z)
- A Review of Dialogue Systems: From Trained Monkeys to Stochastic Parrots [0.0]
We aim to deploy artificial intelligence to build automated dialogue agents that can converse with humans.
We present a broad overview of methods developed to build dialogue systems over the years.
arXiv Detail & Related papers (2021-11-02T08:07:55Z)
- Structural Modeling for Dialogue Disentanglement [43.352833140317486]
Tangled multi-party dialogue context leads to challenges for dialogue reading comprehension.
This work designs a novel model to disentangle multi-party history into threads, by taking dialogue structure features into account.
arXiv Detail & Related papers (2021-10-15T11:28:43Z)
- "How Robust r u?": Evaluating Task-Oriented Dialogue Systems on Spoken Conversations [87.95711406978157]
This work presents a new benchmark on spoken task-oriented conversations.
We study multi-domain dialogue state tracking and knowledge-grounded dialogue modeling.
Our data set enables speech-based benchmarking of task-oriented dialogue systems.
arXiv Detail & Related papers (2021-09-28T04:51:04Z)
- DialogueBERT: A Self-Supervised Learning based Dialogue Pre-training Encoder [19.51263716065853]
We propose a novel contextual dialogue encoder (i.e. DialogueBERT) based on the popular pre-trained language model BERT.
Five self-supervised learning pre-training tasks are devised for learning the particularities of dialogue utterances.
DialogueBERT was pre-trained on 70 million real-scenario dialogues, and then fine-tuned on three different downstream dialogue understanding tasks.
arXiv Detail & Related papers (2021-09-22T01:41:28Z) - DialogLM: Pre-trained Model for Long Dialogue Understanding and
Summarization [19.918194137007653]
We present a pre-training framework for long dialogue understanding and summarization.
Considering the nature of long conversations, we propose a window-based denoising approach for generative pre-training.
We conduct extensive experiments on five datasets of long dialogues, covering tasks of dialogue summarization, abstractive question answering and topic segmentation.
arXiv Detail & Related papers (2021-09-06T13:55:03Z) - Rethinking Dialogue State Tracking with Reasoning [76.0991910623001]
This paper proposes to track dialogue states gradually with reasoning over dialogue turns with the help of the back-end data.
Empirical results demonstrate that our method significantly outperforms the state-of-the-art methods by 38.6% in terms of joint belief accuracy for MultiWOZ 2.1.
arXiv Detail & Related papers (2020-05-27T02:05:33Z)
- Will I Sound Like Me? Improving Persona Consistency in Dialogues through Pragmatic Self-Consciousness [62.55060760615656]
Recent models tackling consistency often train with additional Natural Language Inference (NLI) labels or attach trained extra modules to the generative agent for maintaining consistency.
Inspired by social cognition and pragmatics, we endow existing dialogue agents with public self-consciousness on the fly through an imaginary listener.
Our approach, based on the Rational Speech Acts framework, can make dialogue agents refrain from uttering contradictions.
arXiv Detail & Related papers (2020-04-13T08:16:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.