Persuasive Dialogue Understanding: the Baselines and Negative Results
- URL: http://arxiv.org/abs/2011.09954v2
- Date: Sun, 22 Nov 2020 18:27:51 GMT
- Title: Persuasive Dialogue Understanding: the Baselines and Negative Results
- Authors: Hui Chen, Deepanway Ghosal, Navonil Majumder, Amir Hussain, Soujanya Poria
- Abstract summary: We demonstrate the limitations of a Transformer-based approach coupled with a Conditional Random Field (CRF) for the task of persuasive strategy recognition.
We leverage inter- and intra-speaker contextual semantic features, as well as label dependencies, to improve recognition.
- Score: 27.162062321321805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Persuasion aims at shaping one's opinions and actions via a series of persuasive messages that carry the persuader's strategies. Owing to its potential application in persuasive dialogue systems, the task of persuasive strategy recognition has gained much attention lately. Previous methods for user intent recognition in dialogue systems adopt recurrent neural networks (RNNs) or convolutional neural networks (CNNs) to model the context in the conversational history, neglecting the tactic history and intra-speaker relations. In this paper, we demonstrate the limitations of a Transformer-based approach coupled with a Conditional Random Field (CRF) for the task of persuasive strategy recognition. In this model, we leverage inter- and intra-speaker contextual semantic features, as well as label dependencies, to improve recognition. Despite extensive hyper-parameter optimization, this architecture fails to outperform the baseline methods. We observe two negative results. First, the CRF cannot capture persuasive label dependencies, possibly because strategies in persuasive dialogues do not follow the kind of strict grammar or ordering rules found in Named Entity Recognition (NER) or part-of-speech (POS) tagging. Second, a Transformer encoder trained from scratch is less capable of capturing sequential information in persuasive dialogues than a Long Short-Term Memory (LSTM) network. We attribute this to the fact that the vanilla Transformer encoder does not efficiently model the relative positions of sequence elements.
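To make the architecture under study concrete, here is a minimal sketch (not the authors' released code) of an utterance-level Transformer encoder feeding per-utterance emission scores into a CRF tagger. It assumes the third-party pytorch-crf package; dimensions, head counts, and the label count are illustrative.

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class TransformerCRFTagger(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_layers=2, n_labels=11):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.emit = nn.Linear(d_model, n_labels)    # per-utterance label scores
        self.crf = CRF(n_labels, batch_first=True)  # models label transitions

    def forward(self, utt_embs, labels=None):
        # utt_embs: (batch, n_utterances, d_model) utterance embeddings
        emissions = self.emit(self.encoder(utt_embs))
        if labels is not None:                 # training: negative log-likelihood
            return -self.crf(emissions, labels)
        return self.crf.decode(emissions)      # inference: Viterbi label paths

model = TransformerCRFTagger()
x = torch.randn(2, 5, 256)                     # 2 dialogues, 5 utterances each
y = torch.randint(0, 11, (2, 5))
print(model(x, y).item(), model(x))
```

Note that nn.TransformerEncoder applies no relative position encoding by default, which is precisely the deficiency the abstract blames for the gap to the LSTM baseline.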
Related papers
- Improved Contextual Recognition In Automatic Speech Recognition Systems By Semantic Lattice Rescoring [4.819085609772069]
We propose a novel approach for enhancing contextual recognition within ASR systems via semantic lattice processing.
Our solution uses Hidden Markov Model-Gaussian Mixture Model (HMM-GMM) systems along with deep neural network (DNN) models for better accuracy.
We demonstrate the effectiveness of our proposed framework on the LibriSpeech dataset with empirical analyses.
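As an illustration of the general rescoring idea (a toy sketch, not the paper's lattice-based system): combine each ASR hypothesis's decoder score with a semantic score derived from the context. The scoring function below is a hypothetical stand-in.

```python
# Toy n-best rescoring: blend the ASR log-score with a semantic score.
def rescore(hypotheses, semantic_score, alpha=0.7):
    """hypotheses: list of (text, asr_log_score); returns the best text."""
    return max(
        hypotheses,
        key=lambda h: alpha * h[1] + (1 - alpha) * semantic_score(h[0]),
    )[0]

# Hypothetical semantic scorer: favors hypotheses containing in-domain terms.
domain_terms = {"lattice", "rescoring"}
score = lambda text: sum(w in domain_terms for w in text.lower().split())

nbest = [("the ladders were scoring", -12.0), ("the lattice rescoring", -12.5)]
print(rescore(nbest, score))  # -> "the lattice rescoring"
```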
arXiv Detail & Related papers (2023-10-14T23:16:05Z)
- Improving Factual Consistency for Knowledge-Grounded Dialogue Systems via Knowledge Enhancement and Alignment [77.56326872997407]
Knowledge-grounded dialogue systems based on pretrained language models (PLMs) are prone to generating responses that are factually inconsistent with the provided knowledge source.
Inspired by previous work which identified that feed-forward networks (FFNs) within Transformers are responsible for factual knowledge expressions, we investigate two methods to efficiently improve the factual expression capability.
arXiv Detail & Related papers (2023-10-12T14:44:05Z)
- Channel-aware Decoupling Network for Multi-turn Dialogue Comprehension [81.47133615169203]
We propose compositional learning for holistic interaction across utterances beyond the sequential contextualization from PrLMs.
We employ domain-adaptive training strategies to help the model adapt to the dialogue domains.
Experimental results show that our method substantially boosts strong PrLM baselines on four public benchmark datasets.
arXiv Detail & Related papers (2023-01-10T13:18:25Z)
- Target-Guided Dialogue Response Generation Using Commonsense and Data Augmentation [32.764356638437214]
We introduce a new technique for target-guided response generation.
We also propose techniques to re-purpose existing dialogue datasets for target-guided generation.
Our work generally enables dialogue system designers to exercise more control over the conversations that their systems produce.
arXiv Detail & Related papers (2022-05-19T04:01:40Z)
- Graph Based Network with Contextualized Representations of Turns in Dialogue [0.0]
Dialogue-based relation extraction (RE) aims to extract relation(s) between two arguments that appear in a dialogue.
We propose the TUrn COntext awaRE Graph Convolutional Network (TUCORE-GCN), which is designed to mirror the way people understand dialogues.
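A minimal sketch of the underlying turn-graph idea (ours, not TUCORE-GCN itself): treat dialogue turns as graph nodes, connect sequential and same-speaker turns, and propagate turn representations through one graph-convolution layer.

```python
import torch

def gcn_layer(H, A, W):
    # Add self-loops, symmetrically normalize, propagate: D^-1/2 (A+I) D^-1/2 H W
    A = A + torch.eye(A.size(0))
    d = A.sum(1).rsqrt()                       # D^-1/2 as a vector
    A_hat = d.unsqueeze(1) * A * d.unsqueeze(0)
    return torch.relu(A_hat @ H @ W)

speakers = [0, 1, 0, 1, 0]                     # speaker id for each turn
n = len(speakers)
A = torch.zeros(n, n)
for i in range(n):
    for j in range(n):
        if abs(i - j) == 1 or (i != j and speakers[i] == speakers[j]):
            A[i, j] = 1.0                      # sequential- or same-speaker edge

H = torch.randn(n, 16)                         # turn representations (e.g., from a PLM)
W = torch.randn(16, 16)
print(gcn_layer(H, A, W).shape)                # torch.Size([5, 16])
```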
arXiv Detail & Related papers (2021-09-09T03:09:08Z)
- Smoothing Dialogue States for Open Conversational Machine Reading [70.83783364292438]
We propose an effective gating strategy that smooths the two dialogue states in a single decoder and bridges decision making and question generation.
Experiments on the OR-ShARC dataset show the effectiveness of our method, which achieves new state-of-the-art results.
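A minimal sketch of the gating idea, assuming the two dialogue states arrive as fixed-size vectors; all names and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class StateGate(nn.Module):
    """Learned gate that smooths/blends two dialogue-state vectors into one."""
    def __init__(self, d=128):
        super().__init__()
        self.gate = nn.Linear(2 * d, d)

    def forward(self, s_a, s_b):
        g = torch.sigmoid(self.gate(torch.cat([s_a, s_b], dim=-1)))
        return g * s_a + (1 - g) * s_b  # element-wise convex blend

s1, s2 = torch.randn(4, 128), torch.randn(4, 128)
print(StateGate()(s1, s2).shape)  # -> torch.Size([4, 128])
```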
arXiv Detail & Related papers (2021-08-28T08:04:28Z)
- Improving Response Quality with Backward Reasoning in Open-domain Dialogue Systems [53.160025961101354]
We propose to train the generation model in a bidirectional manner by adding a backward reasoning step to the vanilla encoder-decoder training.
The proposed backward reasoning step pushes the model to produce more informative and coherent content.
Our method can improve response quality without introducing side information.
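A schematic sketch of the bidirectional objective: alongside the usual forward loss (query to response), add a backward term that asks the model to reconstruct the query from the response. The exact backward target and the loss weighting here are assumptions; Seq2SeqModel.loss is a hypothetical stand-in for any encoder-decoder loss.

```python
from typing import Protocol

class Seq2SeqModel(Protocol):
    def loss(self, src: str, tgt: str) -> float: ...

def train_objective(model: Seq2SeqModel, query: str, response: str,
                    lam: float = 0.5) -> float:
    forward = model.loss(src=query, tgt=response)    # standard generation loss
    backward = model.loss(src=response, tgt=query)   # backward reasoning step
    return forward + lam * backward                  # assumed weighted sum
```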
arXiv Detail & Related papers (2021-04-30T20:38:27Z)
- Saying No is An Art: Contextualized Fallback Responses for Unanswerable Dialogue Queries [3.593955557310285]
Most dialogue systems rely on hybrid approaches for generating a set of ranked responses.
We design a neural approach that generates responses contextually aware of the user query.
Our simple approach makes use of rules over dependency parses and a text-to-text transformer fine-tuned on synthetic data of question-response pairs.
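An illustrative sketch of the rule component only (the fine-tuned text-to-text transformer is omitted): pull the main predicate and its object from the query's dependency parse and echo them in a contextual fallback. It assumes spaCy with the en_core_web_sm model installed.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def fallback(query: str) -> str:
    doc = nlp(query)
    root = next(t for t in doc if t.dep_ == "ROOT")           # main predicate
    obj = next((t for t in root.subtree if t.dep_ in ("dobj", "pobj")), None)
    topic = obj.text if obj is not None else root.text
    # In the paper, rule outputs pair with a fine-tuned text-to-text
    # transformer; here we return only the rule-based fallback.
    return f"Sorry, I don't have an answer about {topic} yet."

print(fallback("Can you book a table at an Italian restaurant?"))
```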
arXiv Detail & Related papers (2020-12-03T12:34:22Z)
- DialogBERT: Discourse-Aware Response Generation via Learning to Recover and Rank Utterances [18.199473005335093]
This paper presents DialogBERT, a novel conversational response generation model that enhances previous PLM-based dialogue models.
To efficiently capture the discourse-level coherence among utterances, we propose two training objectives, including masked utterance regression.
Experiments on three multi-turn conversation datasets show that our approach remarkably outperforms the baselines.
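A minimal sketch of a masked-utterance-regression objective: hide one utterance's embedding, re-encode the dialogue, and regress the contextual output back onto the original embedding. Dimensions are illustrative, and this simplifies DialogBERT's actual hierarchical setup.

```python
import torch
import torch.nn as nn

d, n_utts = 64, 6
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d, nhead=4, batch_first=True), num_layers=2
)
utts = torch.randn(1, n_utts, d)   # utterance embeddings (e.g., from BERT)
mask_pos = 2
masked = utts.clone()
masked[:, mask_pos] = 0.0          # replace one utterance with a mask vector

ctx = encoder(masked)              # contextual re-encoding of the dialogue
loss = nn.functional.mse_loss(ctx[:, mask_pos], utts[:, mask_pos])
print(loss.item())
```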
arXiv Detail & Related papers (2020-12-03T09:06:23Z)
- Multi-Stage Conversational Passage Retrieval: An Approach to Fusing Term Importance Estimation and Neural Query Rewriting [56.268862325167575]
We tackle conversational passage retrieval (ConvPR) with query reformulation integrated into a multi-stage ad-hoc IR system.
We propose two conversational query reformulation (CQR) methods: (1) term importance estimation and (2) neural query rewriting.
For the former, we expand conversational queries using important terms extracted from the conversational context with frequency-based signals.
For the latter, we reformulate conversational queries into natural, standalone, human-understandable queries with a pretrained sequence-to-sequence model.
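A toy sketch of frequency-based term importance: rank context terms by frequency (minus stopwords) and append the top ones to the current query. The stopword list and cutoff are illustrative choices, not the paper's.

```python
from collections import Counter

STOP = {"the", "a", "is", "it", "what", "about", "tell", "me", "more"}

def expand_query(query: str, context_turns: list[str], k: int = 2) -> str:
    counts = Counter(
        w for turn in context_turns for w in turn.lower().split()
        if w not in STOP
    )
    expansions = [w for w, _ in counts.most_common(k) if w not in query.lower()]
    return query + " " + " ".join(expansions)

history = ["tell me about throat cancer", "what causes throat cancer"]
print(expand_query("is it treatable", history))  # -> "is it treatable throat cancer"
```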
arXiv Detail & Related papers (2020-05-05T14:30:20Z)
- A Controllable Model of Grounded Response Generation [122.7121624884747]
Current end-to-end neural conversation models inherently lack the flexibility to impose semantic control in the response generation process.
We propose a framework that we call controllable grounded response generation (CGRG).
We show that, using this framework, a transformer-based model with a novel inductive attention mechanism, trained on a conversation-like Reddit dataset, outperforms strong generation baselines.
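A schematic sketch of a controllable-grounding input format: concatenate the dialogue context with control phrases and grounding snippets before feeding a sequence-to-sequence generator. The separator tokens and helper are hypothetical, and the inductive attention mechanism itself is not reproduced here.

```python
def build_controllable_input(context: list[str], control_phrases: list[str],
                             grounding: list[str]) -> str:
    # Hypothetical flat input layout: context <sep> controls <sep> grounding.
    return " <sep> ".join([
        " <turn> ".join(context),
        " | ".join(control_phrases),   # lexical control over the response
        " | ".join(grounding),         # grounding passages
    ])

print(build_controllable_input(
    ["any good sci-fi movies?"],
    ["Blade Runner"],
    ["Blade Runner is a 1982 science fiction film."],
))
```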
arXiv Detail & Related papers (2020-05-01T21:22:08Z)