Co-GAT: A Co-Interactive Graph Attention Network for Joint Dialog Act
Recognition and Sentiment Classification
- URL: http://arxiv.org/abs/2012.13260v1
- Date: Thu, 24 Dec 2020 14:10:24 GMT
- Title: Co-GAT: A Co-Interactive Graph Attention Network for Joint Dialog Act
Recognition and Sentiment Classification
- Authors: Libo Qin, Zhouyang Li, Wanxiang Che, Minheng Ni, Ting Liu
- Abstract summary: In a dialog system, dialog act recognition and sentiment classification are two correlative tasks.
We propose a Co-Interactive Graph Attention Network (Co-GAT) to jointly perform the two tasks.
Experimental results on two public datasets show that our model successfully captures the two sources of information.
- Score: 34.711179589196355
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In a dialog system, dialog act recognition and sentiment classification are
two correlative tasks for capturing speakers' intentions, where dialog act and
sentiment indicate the explicit and the implicit intentions, respectively. The
dialog context information (contextual information) and the mutual interaction
information are two key factors that contribute to the two related tasks.
Unfortunately, none of the existing approaches consider the two important
sources of information simultaneously. In this paper, we propose a
Co-Interactive Graph Attention Network (Co-GAT) to jointly perform the two
tasks. The core module is a proposed co-interactive graph interaction layer
where a cross-utterances connection and a cross-tasks connection are
constructed and iteratively updated with each other, so that the two types of
information are considered simultaneously. Experimental results on two public
datasets show that our model successfully captures the two sources of
information and achieves state-of-the-art performance.
In addition, we find that the contributions from the contextual and mutual
interaction information do not fully overlap with contextualized word
representations (BERT, RoBERTa, XLNet).
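The abstract's co-interactive graph interaction layer can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names (`gat_layer`, `co_interactive_step`), the dense double loop, and the edge layout are illustrative assumptions. The idea shown is the one the abstract names: stack the dialog-act and sentiment views of each utterance into one graph, add cross-utterance edges within each task view and cross-task edges between corresponding utterances, and update all nodes with a single graph-attention step.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gat_layer(H, adj, W, a):
    """One graph-attention update: each node attends over its neighbors.

    H   : (n, d)  node features
    adj : (n, n)  adjacency (1 = edge, 0 = no edge)
    W   : (d, d') projection;  a : (2*d',) attention vector
    """
    Z = H @ W
    n = Z.shape[0]
    logits = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # e_ij = LeakyReLU(a . [z_i ; z_j]), as in standard GAT
            e = a @ np.concatenate([Z[i], Z[j]])
            logits[i, j] = e if e > 0 else 0.2 * e
    logits = np.where(adj > 0, logits, -1e9)   # mask non-edges
    alpha = softmax(logits, axis=1)            # rows sum to 1 over neighbors
    return alpha @ Z

def co_interactive_step(H_act, H_sent, adj_utt, W, a):
    """Joint update over cross-utterance and cross-task connections (sketch).

    H_act, H_sent : (n, d) per-utterance features for each task
    adj_utt       : (n, n) cross-utterance connectivity
    """
    n = H_act.shape[0]
    H = np.vstack([H_act, H_sent])             # 2n nodes: act view + sentiment view
    adj = np.zeros((2 * n, 2 * n))
    adj[:n, :n] = adj_utt                      # cross-utterance edges, act view
    adj[n:, n:] = adj_utt                      # cross-utterance edges, sentiment view
    for i in range(n):                         # cross-task edges between the two
        adj[i, n + i] = adj[n + i, i] = 1      # views of the same utterance
    H_new = gat_layer(H, adj, W, a)
    return H_new[:n], H_new[n:]
```

Because both connection types live in one adjacency matrix, a single attention step mixes contextual and cross-task information; stacking several such steps gives the iterative mutual update the abstract describes.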
Related papers
- Dial2vec: Self-Guided Contrastive Learning of Unsupervised Dialogue
Embeddings [41.79937481022846]
We introduce the task of learning unsupervised dialogue embeddings.
Trivial approaches such as combining pre-trained word or sentence embeddings and encoding through pre-trained language models have been shown to be feasible.
We propose a self-guided contrastive learning approach named dial2vec.
arXiv Detail & Related papers (2022-10-27T11:14:06Z)
- A Hierarchical Interactive Network for Joint Span-based Aspect-Sentiment Analysis [34.1489054082536]
We propose a hierarchical interactive network (HI-ASA) to model two-way interactions between two tasks appropriately.
We use a cross-stitch mechanism to selectively combine the different task-specific features as the input, ensuring proper two-way interactions.
Experiments on three real-world datasets demonstrate HI-ASA's superiority over baselines.
arXiv Detail & Related papers (2022-08-24T03:03:49Z)
- Context-Aware Interaction Network for Question Matching [51.76812857301819]
We propose a context-aware interaction network (COIN) to align two sequences and infer their semantic relationship.
Specifically, each interaction block includes (1) a context-aware cross-attention mechanism to effectively integrate contextual information, and (2) a gate fusion layer to flexibly interpolate aligned representations.
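The gate-fusion idea in this entry, flexibly interpolating a representation with its aligned counterpart, can be sketched as follows. This is a generic gated interpolation, not COIN's actual code; `gate_fusion` and the parameter shapes are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gate_fusion(x, aligned, Wg, bg):
    """Gated interpolation of a representation with its aligned counterpart.

    x, aligned : (..., d)   original and cross-attended representations
    Wg         : (2*d, d)   gate projection;  bg : (d,) gate bias
    Gate g is in (0, 1) per dimension, so the output is an elementwise
    convex combination: g * x + (1 - g) * aligned.
    """
    g = sigmoid(np.concatenate([x, aligned], axis=-1) @ Wg + bg)
    return g * x + (1 - g) * aligned
```

With zero gate parameters the gate is 0.5 everywhere and the layer reduces to simple averaging; learned parameters let the model decide, per token and dimension, how much aligned context to admit.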
arXiv Detail & Related papers (2021-04-17T05:03:56Z)
- Dialogue History Matters! Personalized Response Selection in Multi-turn Retrieval-based Chatbots [62.295373408415365]
We propose a personalized hybrid matching network (PHMN) for context-response matching.
Our contributions are two-fold: 1) our model extracts personalized wording behaviors from user-specific dialogue history as extra matching information.
We evaluate our model on two large datasets with user identification, i.e., the personalized Ubuntu dialogue corpus (P-Ubuntu) and the personalized Weibo dataset (P-Weibo).
arXiv Detail & Related papers (2021-03-17T09:42:11Z)
- Interactive Fusion of Multi-level Features for Compositional Activity Recognition [100.75045558068874]
We present a novel framework that accomplishes this goal by interactive fusion.
We implement the framework in three steps, namely, positional-to-appearance feature extraction, semantic feature interaction, and semantic-to-positional prediction.
We evaluate our approach on two action recognition datasets, Something-Something and Charades.
arXiv Detail & Related papers (2020-12-10T14:17:18Z)
- A Co-Interactive Transformer for Joint Slot Filling and Intent Detection [61.109486326954205]
Intent detection and slot filling are two main tasks for building a spoken language understanding (SLU) system.
Previous studies either model the two tasks separately or only consider the single information flow from intent to slot.
We propose a Co-Interactive Transformer to consider the cross-impact between the two tasks simultaneously.
arXiv Detail & Related papers (2020-10-08T10:16:52Z)
- GraphDialog: Integrating Graph Knowledge into End-to-End Task-Oriented Dialogue Systems [9.560436630775762]
End-to-end task-oriented dialogue systems aim to generate system responses directly from plain text inputs.
Two challenges arise: one is how to effectively incorporate external knowledge bases (KBs) into the learning framework; the other is how to accurately capture the semantics of dialogue history.
We address these two challenges by exploiting the graph structural information in the knowledge base and in the dependency parsing tree of the dialogue.
arXiv Detail & Related papers (2020-10-04T00:04:40Z)
- DCR-Net: A Deep Co-Interactive Relation Network for Joint Dialog Act Recognition and Sentiment Classification [77.59549450705384]
In a dialog system, dialog act recognition and sentiment classification are two correlative tasks.
Most of the existing systems either treat them as separate tasks or just jointly model the two tasks.
We propose a Deep Co-Interactive Relation Network (DCR-Net) to explicitly consider the cross-impact and model the interaction between the two tasks.
arXiv Detail & Related papers (2020-08-16T14:13:32Z)
- Cascaded Human-Object Interaction Recognition [175.60439054047043]
We introduce a cascade architecture for a multi-stage, coarse-to-fine HOI understanding.
At each stage, an instance localization network progressively refines HOI proposals and feeds them into an interaction recognition network.
With our carefully-designed human-centric relation features, these two modules work collaboratively towards effective interaction understanding.
arXiv Detail & Related papers (2020-03-09T17:05:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.