A Universality-Individuality Integration Model for Dialog Act
Classification
- URL: http://arxiv.org/abs/2204.06185v1
- Date: Wed, 13 Apr 2022 06:05:34 GMT
- Title: A Universality-Individuality Integration Model for Dialog Act
Classification
- Authors: Gao Pengfei and Ma Yinglong
- Abstract summary: Dialog Act (DA) reveals the general intent of a speaker's utterance in a conversation.
This paper suggests that word cues, part-of-speech cues and statistical cues can complement each other to improve the basis for recognition.
We propose a novel model based on universality and individuality strategies, called the Universality-Individuality Integration Model (UIIM).
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dialog Act (DA) reveals the general intent of a speaker's utterance
in a conversation. Accurately predicting DAs can greatly facilitate the
development of dialog agents. Although dialog act classification has been
studied extensively, the feature information available for classification has
not been fully exploited. This paper suggests that word cues, part-of-speech
cues, and statistical cues can complement each other to improve the basis for
recognition. However, because the three types of cues differ in nature, their
distributions take diverse forms, which hinders the mining of feature
information. To solve this problem, we propose a novel model based on
universality and individuality strategies, called Universality-Individuality
Integration Model (UIIM). UIIM not only deepens the connections among the cues
by learning their universality, but also learns their individuality to capture
the characteristics of each cue type. Experiments were conducted on the two
most popular benchmark datasets for dialogue act classification, SwDA and MRDA,
and the results show that extracting the universalities and individualities
among cues more fully exploits the hidden information in each utterance and
improves the accuracy of automatic dialogue act recognition.
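The listing gives only a high-level description of UIIM, so the following is a minimal, hypothetical PyTorch sketch (not the authors' code) of the universality/individuality idea: word, part-of-speech, and statistical cues each pass through a private encoder ("individuality") and also through one shared encoder ("universality") before the fused representation is classified into dialog acts. All layer names, sizes, and the 43-class output are illustrative assumptions, not the paper's architecture.

```python
# Hypothetical sketch of a universality/individuality feature-fusion model
# for dialog act classification. Layer sizes and names are assumptions.
import torch
import torch.nn as nn


class UIIMSketch(nn.Module):
    def __init__(self, word_dim, pos_dim, stat_dim, hidden_dim, num_acts):
        super().__init__()
        # Project each cue type into a common hidden space.
        self.word_proj = nn.Linear(word_dim, hidden_dim)
        self.pos_proj = nn.Linear(pos_dim, hidden_dim)
        self.stat_proj = nn.Linear(stat_dim, hidden_dim)
        # "Individuality": a private encoder per cue type, so each cue
        # keeps its own characteristics.
        self.individual = nn.ModuleList(
            [nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
             for _ in range(3)]
        )
        # "Universality": one shared encoder applied to every cue type,
        # capturing what the cues have in common.
        self.universal = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU()
        )
        # Dialog act classifier over the concatenated representations
        # (3 private + 3 shared encodings).
        self.classifier = nn.Linear(hidden_dim * 6, num_acts)

    def forward(self, word_feat, pos_feat, stat_feat):
        cues = [
            self.word_proj(word_feat),
            self.pos_proj(pos_feat),
            self.stat_proj(stat_feat),
        ]
        private = [enc(c) for enc, c in zip(self.individual, cues)]
        shared = [self.universal(c) for c in cues]
        fused = torch.cat(private + shared, dim=-1)
        return self.classifier(fused)


# Toy usage with random utterance-level features for a batch of 4 utterances;
# 43 output classes roughly matches the SwDA tag set but is illustrative here.
model = UIIMSketch(word_dim=300, pos_dim=50, stat_dim=10,
                   hidden_dim=128, num_acts=43)
logits = model(torch.randn(4, 300), torch.randn(4, 50), torch.randn(4, 10))
print(logits.shape)  # torch.Size([4, 43])
```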
Related papers
- A Bi-directional Multi-hop Inference Model for Joint Dialog Sentiment
Classification and Act Recognition [25.426172735931463]
The joint task of Dialog Sentiment Classification (DSC) and Act Recognition (DAR) aims to predict the sentiment label and act label for each utterance in a dialog simultaneously.
We propose a Bi-directional Multi-hop Inference Model (BMIM) that iteratively extracts and integrates rich sentiment and act clues in a bi-directional manner.
BMIM outperforms state-of-the-art baselines by at least 2.6% on F1 score in DAR and 1.4% on F1 score in DSC.
arXiv Detail & Related papers (2023-08-08T17:53:24Z) - SHINE: Syntax-augmented Hierarchical Interactive Encoder for Zero-shot
Cross-lingual Information Extraction [47.88887327545667]
In this study, a syntax-augmented hierarchical interactive encoder (SHINE) is proposed to transfer cross-lingual IE knowledge.
SHINE is capable of interactively capturing complementary information between features and contextual information.
Experiments across seven languages on three IE tasks and four benchmarks verify the effectiveness and generalization ability of the proposed method.
arXiv Detail & Related papers (2023-05-21T08:02:06Z) - CGoDial: A Large-Scale Benchmark for Chinese Goal-oriented Dialog
Evaluation [75.60156479374416]
CGoDial is a new challenging and comprehensive Chinese benchmark for Goal-oriented Dialog evaluation.
It contains 96,763 dialog sessions and 574,949 dialog turns in total, covering three datasets with different knowledge sources.
To bridge the gap between academic benchmarks and spoken dialog scenarios, we either collect data from real conversations or add spoken features to existing datasets via crowd-sourcing.
arXiv Detail & Related papers (2022-11-21T16:21:41Z) - SPACE-2: Tree-Structured Semi-Supervised Contrastive Pre-training for
Task-Oriented Dialog Understanding [68.94808536012371]
We propose a tree-structured pre-trained conversation model, which learns dialog representations from limited labeled dialogs and large-scale unlabeled dialog corpora.
Our method can achieve new state-of-the-art results on the DialoGLUE benchmark consisting of seven datasets and four popular dialog understanding tasks.
arXiv Detail & Related papers (2022-09-14T13:42:50Z) - Back to the Future: Bidirectional Information Decoupling Network for
Multi-turn Dialogue Modeling [80.51094098799736]
We propose Bidirectional Information Decoupling Network (BiDeN) as a universal dialogue encoder.
BiDeN explicitly incorporates both the past and future contexts and can be generalized to a wide range of dialogue-related tasks.
Experimental results on datasets of different downstream tasks demonstrate the universality and effectiveness of our BiDeN.
arXiv Detail & Related papers (2022-04-18T03:51:46Z) - Commonsense-Focused Dialogues for Response Generation: An Empirical
Study [39.49727190159279]
We present an empirical study of commonsense in dialogue response generation.
We first auto-extract commonsensical dialogues from existing dialogue datasets by leveraging ConceptNet.
We then collect a new dialogue dataset with 25K dialogues aimed at exhibiting social commonsense in an interactive setting.
arXiv Detail & Related papers (2021-09-14T04:32:09Z) - Language Model as an Annotator: Exploring DialoGPT for Dialogue
Summarization [29.887562761942114]
We show how DialoGPT, a pre-trained model for conversational response generation, can be developed as an unsupervised dialogue annotator.
We apply DialoGPT to label three types of features on two dialogue summarization datasets, SAMSum and AMI, and employ pre-trained and non-pre-trained models as our summarizers.
arXiv Detail & Related papers (2021-05-26T13:50:13Z) - Dialogue History Matters! Personalized Response Selectionin Multi-turn
Retrieval-based Chatbots [62.295373408415365]
We propose a personalized hybrid matching network (PHMN) for context-response matching.
Our contributions are two-fold: 1) our model extracts personalized wording behaviors from user-specific dialogue history as extra matching information.
We evaluate our model on two large datasets with user identification, i.e., the personalized Ubuntu dialogue corpus (P-Ubuntu) and the personalized Weibo dataset (P-Weibo).
arXiv Detail & Related papers (2021-03-17T09:42:11Z) - Incorporating Commonsense Knowledge into Abstractive Dialogue
Summarization via Heterogeneous Graph Networks [34.958271247099]
We present a novel multi-speaker dialogue summarizer to demonstrate how large-scale commonsense knowledge can facilitate dialogue understanding and summary generation.
We consider utterance and commonsense knowledge as two different types of data and design a Dialogue Heterogeneous Graph Network (D-HGN) for modeling both information.
arXiv Detail & Related papers (2020-10-20T05:44:55Z) - Detecting and Classifying Malevolent Dialogue Responses: Taxonomy, Data
and Methodology [68.8836704199096]
Corpus-based conversational interfaces are able to generate more diverse and natural responses than template-based or retrieval-based agents.
With the increased generative capacity of corpus-based conversational agents comes the need to classify and filter out malevolent responses.
Previous studies on the topic of recognizing and classifying inappropriate content are mostly focused on a certain category of malevolence.
arXiv Detail & Related papers (2020-08-21T22:43:27Z)