Contextual Dialogue Act Classification for Open-Domain Conversational
Agents
- URL: http://arxiv.org/abs/2005.13804v1
- Date: Thu, 28 May 2020 06:48:10 GMT
- Title: Contextual Dialogue Act Classification for Open-Domain Conversational
Agents
- Authors: Ali Ahmadvand, Jason Ingyu Choi, Eugene Agichtein
- Abstract summary: Classifying the general intent of the user utterance in a conversation, also known as Dialogue Act (DA), is a key step in Natural Language Understanding (NLU) for conversational agents.
We propose CDAC (Contextual Dialogue Act Classifier), a simple yet effective deep learning approach for contextual dialogue act classification.
We use transfer learning to adapt models trained on human-human conversations to predict dialogue acts in human-machine dialogues.
- Score: 10.576497782941697
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Classifying the general intent of the user utterance in a conversation, also
known as Dialogue Act (DA), e.g., open-ended question, statement of opinion, or
request for an opinion, is a key step in Natural Language Understanding (NLU)
for conversational agents. While DA classification has been extensively studied
in human-human conversations, it has not been sufficiently explored for the
emerging open-domain automated conversational agents. Moreover, despite
significant advances in utterance-level DA classification, full understanding
of dialogue utterances requires conversational context. Another challenge is
the lack of available labeled data for open-domain human-machine conversations.
To address these problems, we propose a novel method, CDAC (Contextual Dialogue
Act Classifier), a simple yet effective deep learning approach for contextual
dialogue act classification. Specifically, we use transfer learning to adapt
models trained on human-human conversations to predict dialogue acts in
human-machine dialogues. To investigate the effectiveness of our method, we
train our model on the well-known Switchboard human-human dialogue dataset, and
fine-tune it for predicting dialogue acts in human-machine conversation data,
collected as part of the Amazon Alexa Prize 2018 competition. The results show
that the CDAC model outperforms an utterance-level state-of-the-art baseline by
8.0% on the Switchboard dataset, and is comparable to the latest reported
state-of-the-art contextual DA classification results. Furthermore, our results
show that fine-tuning the CDAC model on a small sample of manually labeled
human-machine conversations allows CDAC to more accurately predict dialogue
acts in real users' conversations, suggesting a promising direction for future
improvements.
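
To make the setup above concrete, here is a minimal sketch of a contextual DA classifier with the two-stage transfer-learning recipe the abstract describes. The BERT encoder, the 42-tag Switchboard label set, and the two-utterance context window are illustrative assumptions, not the authors' exact configuration.

```python
# A minimal sketch of contextual DA classification with two-stage
# transfer learning: pretrain on Switchboard DA labels, then fine-tune
# the same model on a small labeled human-machine sample. The encoder,
# tag count, and context window are assumptions for illustration.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

NUM_DA_TAGS = 42  # assumed size of the Switchboard DA tag set

class ContextualDAClassifier(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, NUM_DA_TAGS)

    def forward(self, input_ids, attention_mask, token_type_ids=None):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask,
                           token_type_ids=token_type_ids)
        # Classify from the [CLS] vector of "context [SEP] utterance".
        return self.head(out.last_hidden_state[:, 0])

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = ContextualDAClassifier()

def encode(context_utts, utterance):
    # Keep the last two utterances as context (an assumption).
    context = " ".join(context_utts[-2:])
    return tokenizer(context, utterance, return_tensors="pt",
                     truncation=True, max_length=128)

batch = encode(["Do you like jazz?"], "Yes, I listen to it every day.")
logits = model(**batch)  # shape (1, NUM_DA_TAGS)
```

Pretraining would minimize cross-entropy over these logits on Switchboard, and fine-tuning would repeat the same loop on the labeled human-machine sample.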
Related papers
- Controllable Mixed-Initiative Dialogue Generation through Prompting [50.03458333265885]
Mixed-initiative dialogue tasks involve repeated exchanges of information and conversational control.
Agents gain control by generating responses that follow particular dialogue intents or strategies, prescribed by a policy planner.
The standard approach has been to fine-tune pre-trained language models to perform generation conditioned on these intents.
We instead prompt large language models as a drop-in replacement for fine-tuning on conditional generation.
arXiv Detail & Related papers (2023-05-06T23:11:25Z)
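
As a rough illustration of the prompting approach just summarized, the sketch below builds a prompt that asks an LLM to realize a planner-chosen dialogue intent. The intent labels and the template are hypothetical, not the paper's actual prompts.

```python
# A minimal sketch of prompting an LLM to follow a planner-prescribed
# dialogue intent instead of fine-tuning. The intent label and the
# prompt template are hypothetical illustrations.
def build_prompt(history, intent):
    turns = "\n".join(f"{speaker}: {utt}" for speaker, utt in history)
    return (
        f"The following is a conversation.\n{turns}\n"
        f"Respond with a '{intent}' dialogue act.\nSystem:"
    )

prompt = build_prompt(
    [("User", "I can't decide where to travel this summer.")],
    intent="open-ended question",
)
# `prompt` can be sent to any instruction-following LLM; the policy
# planner chooses `intent` at each turn.
print(prompt)
```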
- GODEL: Large-Scale Pre-Training for Goal-Directed Dialog [119.1397031992088]
We introduce GODEL, a large pre-trained language model for dialog.
We show that GODEL outperforms state-of-the-art pre-trained dialog models in few-shot fine-tuning setups.
A novel feature of our evaluation methodology is the introduction of a notion of utility that assesses the usefulness of responses.
arXiv Detail & Related papers (2022-06-22T18:19:32Z)
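
A hedged sketch of generating with a GODEL checkpoint, assuming the microsoft/GODEL-v1_1-base-seq2seq model published on the Hugging Face Hub and its documented "[CONTEXT]" input serialization; treat the checkpoint name and format as assumptions rather than the paper's exact evaluation setup.

```python
# Sketch of goal-directed response generation with a GODEL checkpoint
# (assumed name and input format; see lead-in above).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "microsoft/GODEL-v1_1-base-seq2seq"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

instruction = "Instruction: given a dialog context, respond helpfully."
dialog = ["Does money buy happiness?",
          "It depends on how you spend it, I think."]
query = f"{instruction} [CONTEXT] {' EOS '.join(dialog)}"

input_ids = tokenizer(query, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```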
- Response Generation with Context-Aware Prompt Learning [19.340498579331555]
We present a novel approach for pre-trained dialogue modeling that casts the dialogue generation problem as a prompt-learning task.
Instead of fine-tuning on limited dialogue data, our approach, DialogPrompt, learns continuous prompt embeddings optimized for dialogue contexts.
Our approach significantly outperforms the fine-tuning baseline and the generic prompt-learning methods.
arXiv Detail & Related papers (2021-11-04T05:40:13Z)
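
The sketch below illustrates the general continuous-prompt mechanism: learnable prompt vectors are prepended to a frozen LM's input embeddings and trained on dialogue data. It shows the idea, not DialogPrompt's exact architecture; the prompt length and GPT-2 backbone are assumptions.

```python
# A minimal sketch of continuous prompt learning for dialogue response
# generation: only the prompt vectors are trainable, the LM is frozen.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")
for p in lm.parameters():          # keep the LM frozen
    p.requires_grad = False

N_PROMPT = 10                      # assumed prompt length
prompt = nn.Parameter(torch.randn(N_PROMPT, lm.config.n_embd) * 0.02)

def forward_with_prompt(input_ids):
    tok_emb = lm.transformer.wte(input_ids)              # (B, T, D)
    batch_prompt = prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
    inputs_embeds = torch.cat([batch_prompt, tok_emb], dim=1)
    return lm(inputs_embeds=inputs_embeds)

ids = tokenizer("User: any movie tips? System:", return_tensors="pt").input_ids
logits = forward_with_prompt(ids).logits  # optimize only `prompt`
```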
- We've had this conversation before: A Novel Approach to Measuring Dialog Similarity [9.218829323265371]
We propose a novel adaptation of the edit distance metric to the scenario of dialog similarity.
Our approach takes into account various conversation aspects such as utterance semantics, conversation flow, and the participants.
arXiv Detail & Related papers (2021-10-12T07:24:12Z)
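
A minimal sketch of the underlying idea: Levenshtein-style dynamic programming over turns, with the substitution cost replaced by a semantic distance between utterances. The character-overlap distance below stands in for real sentence embeddings and is not the paper's metric.

```python
# Edit distance over dialog turns with a soft substitution cost.
from difflib import SequenceMatcher

def utt_dist(a, b):
    # Stand-in semantic distance; a real system would use sentence
    # embeddings (e.g., cosine distance) instead of character overlap.
    return 1.0 - SequenceMatcher(None, a, b).ratio()

def dialog_distance(d1, d2):
    m, n = len(d1), len(d2)
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = float(i)
    for j in range(n + 1):
        dp[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = min(dp[i - 1][j] + 1,      # delete a turn
                           dp[i][j - 1] + 1,      # insert a turn
                           dp[i - 1][j - 1] + utt_dist(d1[i - 1], d2[j - 1]))
    return dp[m][n]

print(dialog_distance(["hi", "I need help with my bill"],
                      ["hello", "my bill looks wrong"]))
```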
- "How Robust r u?": Evaluating Task-Oriented Dialogue Systems on Spoken Conversations [87.95711406978157]
This work presents a new benchmark on spoken task-oriented conversations.
We study multi-domain dialogue state tracking and knowledge-grounded dialogue modeling.
Our data set enables speech-based benchmarking of task-oriented dialogue systems.
arXiv Detail & Related papers (2021-09-28T04:51:04Z)
- Commonsense-Focused Dialogues for Response Generation: An Empirical Study [39.49727190159279]
We present an empirical study of commonsense in dialogue response generation.
We first auto-extract commonsensical dialogues from existing dialogue datasets by leveraging ConceptNet.
We then collect a new dialogue dataset with 25K dialogues aimed at exhibiting social commonsense in an interactive setting.
arXiv Detail & Related papers (2021-09-14T04:32:09Z)
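
As a toy illustration of the auto-extraction step just described, the sketch below keeps a turn pair when concepts in one turn connect to concepts in the next via a ConceptNet-style triple. The tiny in-memory triple set is a stand-in for the real ConceptNet graph used in the paper.

```python
# Keep dialogues whose adjacent turns are linked by commonsense triples.
TRIPLES = {("rain", "CausesDesire", "umbrella"),
           ("coffee", "UsedFor", "wake_up")}  # stand-in for ConceptNet

def concepts(text):
    return {w.strip(".,!?").lower() for w in text.split()}

def is_commonsensical(turn_a, turn_b):
    ca, cb = concepts(turn_a), concepts(turn_b)
    return any(h in ca and t in cb for h, _, t in TRIPLES)

dialog = ["Looks like rain again today.", "Better grab an umbrella."]
print(is_commonsensical(dialog[0], dialog[1]))  # True
```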
- Speaker Turn Modeling for Dialogue Act Classification [9.124489616470001]
We propose to integrate the turn changes in conversations among speakers when modeling Dialogue Act (DA) classification.
We learn conversation-invariant speaker turn embeddings to represent the speaker turns in a conversation.
Our model is able to capture the semantics from the dialogue content while accounting for different speaker turns in a conversation.
arXiv Detail & Related papers (2021-09-10T18:36:35Z)
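
A minimal sketch of the turn-modeling idea: learned speaker-turn embeddings are added to utterance vectors before a recurrent DA classifier. The dimensions, the two-speaker setup, and the 42-tag output are illustrative assumptions.

```python
# Speaker-turn embeddings added to utterance encodings for DA tagging.
import torch
import torch.nn as nn

HIDDEN = 256
turn_embed = nn.Embedding(2, HIDDEN)   # speaker A = 0, speaker B = 1
gru = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
head = nn.Linear(HIDDEN, 42)           # assumed DA tag count

def classify(utt_encodings, speaker_ids):
    # utt_encodings: (B, T, HIDDEN) utterance vectors from any encoder
    # speaker_ids:   (B, T) 0/1 ids marking who produced each turn
    x = utt_encodings + turn_embed(speaker_ids)
    out, _ = gru(x)
    return head(out)                   # one DA logit vector per turn

enc = torch.randn(1, 3, HIDDEN)
logits = classify(enc, torch.tensor([[0, 1, 0]]))
print(logits.shape)  # torch.Size([1, 3, 42])
```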
- Detecting and Classifying Malevolent Dialogue Responses: Taxonomy, Data and Methodology [68.8836704199096]
Corpus-based conversational interfaces are able to generate more diverse and natural responses than template-based or retrieval-based agents.
With the increased generative capacity of corpus-based conversational agents comes the need to classify and filter out malevolent responses.
Previous studies on recognizing and classifying inappropriate content have mostly focused on a single category of malevolence.
arXiv Detail & Related papers (2020-08-21T22:43:27Z)
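
One way to picture the filtering step such a taxonomy enables: score each candidate response with a malevolence classifier and keep only benign ones. The keyword scorer below is a placeholder, not the paper's models or label set.

```python
# Filter generated responses by a (placeholder) malevolence score.
from typing import Callable, List

def filter_responses(candidates: List[str],
                     malevolence_score: Callable[[str], float],
                     threshold: float = 0.5) -> List[str]:
    # Keep candidates whose predicted malevolence stays below threshold.
    return [c for c in candidates if malevolence_score(c) < threshold]

def toy_score(text: str) -> float:
    # Illustration only; a real scorer would be a trained classifier.
    return 1.0 if any(w in text.lower() for w in ("hate", "stupid")) else 0.0

print(filter_responses(["Glad to help!", "You are stupid."], toy_score))
```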
- Is this Dialogue Coherent? Learning from Dialogue Acts and Entities [82.44143808977209]
We create the Switchboard Coherence (SWBD-Coh) corpus, a dataset of human-human spoken dialogues annotated with turn coherence ratings.
Our statistical analysis of the corpus indicates how the perception of turn coherence is affected by entity distribution patterns.
We find that models combining both DA and entity information yield the best performances both for response selection and turn coherence rating.
arXiv Detail & Related papers (2020-06-17T21:02:40Z)
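
Echoing that finding, the sketch below combines one-hot DA features with an entity-overlap feature in a linear coherence scorer. The feature design and scorer are illustrative assumptions, not the SWBD-Coh models.

```python
# Combine DA and entity information in a simple coherence scorer.
import torch
import torch.nn as nn

NUM_DA = 42  # assumed DA tag count

scorer = nn.Linear(2 * NUM_DA + 1, 1)  # [prev DA; next DA; entity overlap]

def coherence_score(prev_da, next_da, prev_entities, next_entities):
    da_feats = torch.zeros(2 * NUM_DA)
    da_feats[prev_da] = 1.0
    da_feats[NUM_DA + next_da] = 1.0
    overlap = len(prev_entities & next_entities) / max(len(next_entities), 1)
    feats = torch.cat([da_feats, torch.tensor([overlap])])
    return scorer(feats)

s = coherence_score(3, 7, {"jazz", "miles_davis"}, {"jazz"})
print(s.item())
```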
- TOD-BERT: Pre-trained Natural Language Understanding for Task-Oriented Dialogue [113.45485470103762]
In this work, we unify nine human-human and multi-turn task-oriented dialogue datasets for language modeling.
To better model dialogue behavior during pre-training, we incorporate user and system tokens into the masked language modeling.
arXiv Detail & Related papers (2020-04-15T04:09:05Z)
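
A minimal sketch of that preprocessing idea: each turn is marked with a user or system token before masked language modeling. The literal "[USR]"/"[SYS]" strings follow TOD-BERT's description, but treat this serialization as an approximation of the paper's pipeline.

```python
# Serialize a dialogue with speaker tokens for masked LM pre-training.
def serialize(dialog):
    # dialog: list of (speaker, utterance), speaker in {"user", "system"}
    parts = []
    for speaker, utt in dialog:
        tok = "[USR]" if speaker == "user" else "[SYS]"
        parts.append(f"{tok} {utt}")
    return " ".join(parts)

text = serialize([("user", "Book a table for two."),
                  ("system", "For what time?")])
print(text)  # "[USR] Book a table for two. [SYS] For what time?"
# This string then feeds a BERT-style masked language modeling objective.
```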