Local Contextual Attention with Hierarchical Structure for Dialogue Act
Recognition
- URL: http://arxiv.org/abs/2003.06044v1
- Date: Thu, 12 Mar 2020 22:26:11 GMT
- Title: Local Contextual Attention with Hierarchical Structure for Dialogue Act
Recognition
- Authors: Zhigang Dai, Jinhua Fu, Qile Zhu, Hengbin Cui, Xiaolong Li, Yuan Qi
- Abstract summary: We design a hierarchical model based on self-attention to capture intra-sentence and inter-sentence information.
Based on the finding that dialog length affects performance, we introduce a new dialog segmentation mechanism.
- Score: 14.81680798372891
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dialogue act recognition is a fundamental task for an intelligent dialogue
system. Previous work models the whole dialog to predict dialog acts, which may
introduce noise from unrelated sentences. In this work, we design a
hierarchical model based on self-attention to capture intra-sentence and
inter-sentence information. We revise the attention distribution to focus on
the local and contextual semantic information by incorporating the relative
position information between utterances. Based on the finding that the length of
the dialog affects performance, we introduce a new dialog segmentation
mechanism to analyze the effect of dialog length and context padding length
under online and offline settings. Experiments show that our method
achieves promising performance on two datasets, Switchboard Dialogue Act and
DailyDialog, with accuracies of 80.34% and 85.81%, respectively.
Visualization of the attention weights shows that our method can learn the
context dependency between utterances explicitly.
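Concretely, the revised attention described in the abstract can be read as adding a distance-dependent bias to the inter-utterance attention logits before the softmax, so that nearby utterances receive most of the weight. The snippet below is a minimal sketch of that idea under assumed choices (a Gaussian penalty and a `window` width); it is not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def local_contextual_attention(utt_states, window=3.0):
    """Inter-utterance self-attention with a relative-position bias.

    utt_states: (num_utts, dim) tensor of utterance representations.
    window: decay width of the locality bias (an assumed hyperparameter;
            the paper's exact weighting function may differ).
    """
    num_utts, dim = utt_states.shape
    # Scaled dot-product scores between every pair of utterances.
    scores = utt_states @ utt_states.T / dim ** 0.5               # (N, N)
    # Relative distance |i - j| between utterance positions.
    pos = torch.arange(num_utts, dtype=torch.float32)
    dist = (pos.unsqueeze(0) - pos.unsqueeze(1)).abs()            # (N, N)
    # Gaussian-shaped penalty: far-away utterances are down-weighted,
    # so the softmax concentrates on local and nearby context.
    bias = -(dist ** 2) / (2 * window ** 2)
    weights = F.softmax(scores + bias, dim=-1)
    return weights @ utt_states, weights

# Example: 12 utterances with 256-dimensional representations.
ctx, attn = local_contextual_attention(torch.randn(12, 256))
```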
Related papers
- A Static and Dynamic Attention Framework for Multi Turn Dialogue Generation [37.79563028123686]
In open-domain multi-turn dialogue generation, it is essential to model the contextual semantics of the dialogue history.
Previous research has verified the effectiveness of the hierarchical recurrent encoder-decoder framework for open-domain multi-turn dialogue generation.
We propose a static and dynamic attention-based approach to model the dialogue history and then generate open-domain multi-turn dialogue responses.
arXiv Detail & Related papers (2024-10-28T06:05:34Z) - Multi-turn Dialogue Comprehension from a Topic-aware Perspective [70.37126956655985]
This paper proposes to model multi-turn dialogues from a topic-aware perspective.
We use a dialogue segmentation algorithm to split a dialogue passage into topic-concentrated fragments in an unsupervised way.
We also present a novel model, Topic-Aware Dual-Attention Matching (TADAM) Network, which takes topic segments as processing elements.
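As a rough illustration of what unsupervised topic segmentation can look like (not the algorithm used in the paper above), the sketch below opens a new fragment whenever consecutive utterance embeddings become dissimilar; the cosine criterion and `threshold` are assumptions.

```python
import numpy as np

def topic_segments(utt_embeddings, threshold=0.5):
    """Split a dialogue into topic-concentrated fragments, unsupervised.

    A minimal TextTiling-style heuristic, purely illustrative: start a new
    fragment whenever the cosine similarity between consecutive utterance
    embeddings falls below `threshold`.
    """
    segments, current = [], [0]
    for i in range(1, len(utt_embeddings)):
        a, b = utt_embeddings[i - 1], utt_embeddings[i]
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
        if sim < threshold:          # likely topic shift
            segments.append(current)
            current = []
        current.append(i)
    segments.append(current)
    return segments                  # list of lists of utterance indices
```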
arXiv Detail & Related papers (2023-09-18T11:03:55Z) - Revisiting Conversation Discourse for Dialogue Disentanglement [88.3386821205896]
We propose enhancing dialogue disentanglement by taking full advantage of the dialogue discourse characteristics.
We develop a structure-aware framework to integrate the rich structural features for better modeling the conversational semantic context.
Our work has great potential to facilitate broader multi-party multi-thread dialogue applications.
arXiv Detail & Related papers (2023-06-06T19:17:47Z) - Hierarchical Dialogue Understanding with Special Tokens and Turn-level
Attention [19.03781524017955]
We propose a simple but effective Hierarchical Dialogue Understanding model, HiDialog.
We first insert multiple special tokens into a dialogue and propose turn-level attention to learn turn embeddings hierarchically.
We evaluate our model on various dialogue understanding tasks including dialogue relation extraction, dialogue emotion recognition, and dialogue act classification.
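To make the two-level idea concrete, here is a minimal sketch (assumed token positions, dimensions, and attention layer; not the HiDialog implementation): one special token is inserted per turn, its hidden state serves as the turn embedding, and the turn embeddings then attend to one another.

```python
import torch
import torch.nn as nn

class TurnLevelAttention(nn.Module):
    """Turn-level stage on top of a token-level encoder (sketch)."""

    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, token_states, turn_token_positions):
        # token_states: (1, seq_len, dim) hidden states from a token encoder
        #               into which one special token per turn was inserted.
        # turn_token_positions: indices of those inserted special tokens.
        turn_embs = token_states[:, turn_token_positions, :]     # (1, turns, dim)
        # Turn embeddings attend to one another, forming the hierarchy's
        # turn-level layer.
        refined, _ = self.attn(turn_embs, turn_embs, turn_embs)
        return refined

# Example with dummy encoder output: three turn markers at positions 0, 10, 25.
layer = TurnLevelAttention()
turns = layer(torch.randn(1, 40, 768), torch.tensor([0, 10, 25]))
```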
arXiv Detail & Related papers (2023-04-29T13:53:48Z) - CTRLStruct: Dialogue Structure Learning for Open-Domain Response
Generation [38.60073402817218]
Well-structured topic flow can leverage background information and predict future topics to help generate controllable and explainable responses.
We present a new framework for dialogue structure learning to effectively explore topic-level dialogue clusters as well as their transitions with unlabelled information.
Experiments on two popular open-domain dialogue datasets show our model can generate more coherent responses compared to some excellent dialogue models.
arXiv Detail & Related papers (2023-03-02T09:27:11Z) - SPACE-2: Tree-Structured Semi-Supervised Contrastive Pre-training for
Task-Oriented Dialog Understanding [68.94808536012371]
We propose a tree-structured pre-trained conversation model, which learns dialog representations from limited labeled dialogs and large-scale unlabeled dialog corpora.
Our method can achieve new state-of-the-art results on the DialoGLUE benchmark consisting of seven datasets and four popular dialog understanding tasks.
arXiv Detail & Related papers (2022-09-14T13:42:50Z) - Structure Extraction in Task-Oriented Dialogues with Slot Clustering [94.27806592467537]
In task-oriented dialogues, dialogue structure has often been considered as transition graphs among dialogue states.
We propose a simple yet effective approach for structure extraction in task-oriented dialogues.
arXiv Detail & Related papers (2022-02-28T20:18:12Z) - What Helps Transformers Recognize Conversational Structure? Importance
of Context, Punctuation, and Labels in Dialog Act Recognition [41.1669799542627]
We apply two pre-trained transformer models to structure a conversational transcript as a sequence of dialog acts.
We find that the inclusion of a broader conversational context helps disambiguate many dialog act classes.
A detailed analysis reveals specific segmentation patterns observed in its absence.
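For a sense of what including broader conversational context can look like in practice, a common recipe (an assumption here, not necessarily the exact setup of the paper above) is to prepend the preceding utterances to the target utterance before encoding it for classification:

```python
from transformers import AutoTokenizer

# Model name, separator scheme, and context size are illustrative assumptions.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def build_classifier_input(utterances, target_idx, context_size=3):
    """Encode the target utterance together with its preceding context turns."""
    context = utterances[max(0, target_idx - context_size):target_idx]
    text = " [SEP] ".join(context + [utterances[target_idx]])
    return tokenizer(text, truncation=True, max_length=512, return_tensors="pt")

# Example: classify the fifth utterance with the three previous turns as context.
batch = build_classifier_input(
    ["hi", "hello", "how are you?", "fine", "and you?", "good"], 4)
```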
arXiv Detail & Related papers (2021-07-05T21:56:00Z) - Dialogue History Matters! Personalized Response Selectionin Multi-turn
Retrieval-based Chatbots [62.295373408415365]
We propose a personalized hybrid matching network (PHMN) for context-response matching.
Our contributions are two-fold: 1) our model extracts personalized wording behaviors from user-specific dialogue history as extra matching information.
We evaluate our model on two large datasets with user identification, i.e., the personalized Ubuntu dialogue corpus (P-Ubuntu) and the personalized Weibo dataset (P-Weibo).
arXiv Detail & Related papers (2021-03-17T09:42:11Z) - Learning Reasoning Paths over Semantic Graphs for Video-grounded
Dialogues [73.04906599884868]
We propose a novel framework of Reasoning Paths in Dialogue Context (PDC).
The PDC model discovers information flows among dialogue turns through a semantic graph constructed from lexical components in each question and answer.
Our model sequentially processes both visual and textual information through this reasoning path and the propagated features are used to generate the answer.
arXiv Detail & Related papers (2021-03-01T07:39:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.