Utterance-level Dialogue Understanding: An Empirical Study
- URL: http://arxiv.org/abs/2009.13902v5
- Date: Thu, 22 Oct 2020 11:16:56 GMT
- Title: Utterance-level Dialogue Understanding: An Empirical Study
- Authors: Deepanway Ghosal, Navonil Majumder, Rada Mihalcea, Soujanya Poria
- Abstract summary: This paper explores and quantifies the role of context for different aspects of a dialogue.
Specifically, we employ various perturbations to distort the context of a given utterance.
This provides us with insights into the fundamental contextual controlling factors of different aspects of a dialogue.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The recent abundance of conversational data on the Web and elsewhere calls
for effective NLP systems for dialog understanding. Complete utterance-level
understanding often requires context understanding, defined by nearby
utterances. In recent years, a number of approaches have been proposed for
various utterance-level dialogue understanding tasks. Most of these approaches
account for the context for effective understanding. In this paper, we explore
and quantify the role of context for different aspects of a dialogue, namely
emotion, intent, and dialogue act identification, using state-of-the-art dialog
understanding methods as baselines. Specifically, we employ various
perturbations to distort the context of a given utterance and study its impact
on the different tasks and baselines. This provides us with insights into the
fundamental contextual controlling factors of different aspects of a dialogue.
Such insights can inspire more effective dialogue understanding models, and
provide support for future text generation approaches. The implementation
pertaining to this work is available at
https://github.com/declare-lab/dialogue-understanding.
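The abstract's core idea, distorting the context around a target utterance and measuring the effect on downstream predictions, can be illustrated with a few toy perturbation functions. The specific perturbations below (context shuffling, context removal, speaker masking) are illustrative assumptions, not necessarily the exact set used in the paper; see the linked repository for the authors' implementation.

```python
import random

def shuffle_context(dialogue, target_idx, rng=None):
    """Shuffle all utterances except the target one, keeping the target in place."""
    rng = rng or random.Random(0)
    context = [u for i, u in enumerate(dialogue) if i != target_idx]
    rng.shuffle(context)
    context.insert(target_idx, dialogue[target_idx])
    return context

def drop_context(dialogue, target_idx):
    """Remove all context, leaving only the target utterance."""
    return [dialogue[target_idx]]

def mask_speaker_turns(dialogue, target_idx, speaker_of, speaker):
    """Replace utterances by a given speaker (other than the target) with a mask token."""
    return [
        "<mask>" if i != target_idx and speaker_of(i) == speaker else u
        for i, u in enumerate(dialogue)
    ]
```

Running a fixed classifier on the original and perturbed dialogues, and comparing per-task accuracy drops, is one way to quantify how much each task (emotion, intent, dialogue act) depends on context.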
Related papers
- Revisiting Conversation Discourse for Dialogue Disentanglement [88.3386821205896]
We propose enhancing dialogue disentanglement by taking full advantage of the dialogue discourse characteristics.
We develop a structure-aware framework to integrate the rich structural features for better modeling the conversational semantic context.
Our work has great potential to facilitate broader multi-party multi-thread dialogue applications.
arXiv Detail & Related papers (2023-06-06T19:17:47Z)
- Advances in Multi-turn Dialogue Comprehension: A Survey [51.215629336320305]
Training machines to understand natural language and interact with humans is an elusive and essential task of artificial intelligence.
This paper reviews the previous methods from the technical perspective of dialogue modeling for the dialogue comprehension task.
In addition, we categorize dialogue-related pre-training techniques which are employed to enhance pre-trained language models (PrLMs) in dialogue scenarios.
arXiv Detail & Related papers (2021-10-11T03:52:37Z)
- Who says like a style of Vitamin: Towards Syntax-Aware Dialogue Summarization using Multi-task Learning [2.251583286448503]
We focus on the association between utterances from individual speakers and unique syntactic structures.
Speakers have unique textual styles that can contain linguistic information, such as voiceprint.
We employ multi-task learning of both syntax-aware information and dialogue summarization.
arXiv Detail & Related papers (2021-09-29T05:30:39Z)
- Advances in Multi-turn Dialogue Comprehension: A Survey [51.215629336320305]
We review the previous methods from the perspective of dialogue modeling.
We discuss three typical patterns of dialogue modeling that are widely used in dialogue comprehension tasks.
arXiv Detail & Related papers (2021-03-04T15:50:17Z)
- Rethinking Dialogue State Tracking with Reasoning [76.0991910623001]
This paper proposes to track dialogue states gradually, reasoning over dialogue turns with the help of back-end data.
Empirical results demonstrate that our method significantly outperforms the state-of-the-art methods by 38.6% in terms of joint belief accuracy for MultiWOZ 2.1.
arXiv Detail & Related papers (2020-05-27T02:05:33Z)
- Masking Orchestration: Multi-task Pretraining for Multi-role Dialogue Representation Learning [50.5572111079898]
Multi-role dialogue understanding comprises a wide range of diverse tasks such as question answering, act classification, dialogue summarization etc.
While dialogue corpora are abundantly available, labeled data for specific learning tasks can be highly scarce and expensive.
In this work, we investigate dialogue context representation learning with various types of unsupervised pretraining tasks.
arXiv Detail & Related papers (2020-02-27T04:36:52Z)