FCM: A Fine-grained Comparison Model for Multi-turn Dialogue Reasoning
- URL: http://arxiv.org/abs/2109.10510v2
- Date: Thu, 23 Sep 2021 13:54:13 GMT
- Title: FCM: A Fine-grained Comparison Model for Multi-turn Dialogue Reasoning
- Authors: Xu Wang, Hainan Zhang, Shuai Zhao, Yanyan Zou, Hongshen Chen, Zhuoye Ding, Bo Cheng, Yanyan Lan
- Abstract summary: This paper proposes a Fine-grained Comparison Model (FCM) for multi-turn dialogue reasoning.
Inspired by human behavior in reading comprehension, a comparison mechanism is proposed to focus on the fine-grained differences in the representation of each response candidate.
- Score: 44.24589471800725
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the success of neural dialogue systems in achieving high performance
on the leader-board, they cannot meet users' requirements in practice, due to
their poor reasoning skills. The underlying reason is that most neural dialogue
models only capture syntactic and semantic information, but fail to model
the logical consistency between the dialogue history and the generated
response. Recently, a new multi-turn dialogue reasoning task has been proposed,
to facilitate dialogue reasoning research. However, this task is challenging,
because there are only slight differences between the illogical response and
the dialogue history. How to effectively solve this challenge is still worth
exploring. This paper proposes a Fine-grained Comparison Model (FCM) to tackle
this problem. Inspired by human behavior in reading comprehension, a
comparison mechanism is proposed to focus on the fine-grained differences in
the representation of each response candidate. Specifically, each candidate
representation is compared with the whole history to obtain a history
consistency representation. Furthermore, consistency signals between each
candidate and the speaker's own history are used to drive the model toward
candidates that are logically consistent with the speaker's own history.
Finally, these consistency representations are combined to output a
ranking list of the candidate responses for multi-turn dialogue reasoning.
Experimental results on two public dialogue datasets show that our method
obtains higher ranking scores than the baseline models.
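To make the comparison idea concrete, below is a minimal, hypothetical PyTorch sketch of how a pooled history representation could be matched against each candidate representation and scored for ranking. The element-wise matching features, pooling assumption, and two-layer scorer are illustrative conventions, not the paper's actual architecture or released code.

```python
# Hypothetical sketch of a fine-grained comparison scorer for response ranking.
# NOT the authors' implementation; matching features and scorer are assumptions.
import torch
import torch.nn as nn


class ComparisonScorer(nn.Module):
    """Scores each response candidate against a dialogue-history vector."""

    def __init__(self, hidden: int = 768):
        super().__init__()
        # Fine-grained comparison features: [c, h, c - h, c * h]
        self.scorer = nn.Sequential(
            nn.Linear(4 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, history: torch.Tensor, candidates: torch.Tensor) -> torch.Tensor:
        # history:    (batch, hidden)            pooled dialogue-history representation
        # candidates: (batch, num_cand, hidden)  one vector per response candidate
        h = history.unsqueeze(1).expand_as(candidates)
        features = torch.cat([candidates, h, candidates - h, candidates * h], dim=-1)
        return self.scorer(features).squeeze(-1)  # (batch, num_cand) consistency scores


# Usage: rank candidates by their history-consistency score.
scorer = ComparisonScorer(hidden=768)
history = torch.randn(2, 768)
candidates = torch.randn(2, 4, 768)
scores = scorer(history, candidates)
ranking = scores.argsort(dim=-1, descending=True)  # ranked candidate list per dialogue
```

In this sketch a single history vector is reused for every candidate; the paper's speaker-specific consistency signals would require a second, speaker-restricted history representation scored the same way.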
Related papers
- Pre-training Multi-party Dialogue Models with Latent Discourse Inference [85.9683181507206]
We pre-train a model that understands the discourse structure of multi-party dialogues, namely, to whom each utterance is replying.
To fully utilize the unlabeled data, we propose to treat the discourse structures as latent variables, then jointly infer them and pre-train the discourse-aware model.
arXiv Detail & Related papers (2023-05-24T14:06:27Z)
- Learning from Perturbations: Diverse and Informative Dialogue Generation with Inverse Adversarial Training [10.17868476063421]
We propose Inverse Adversarial Training (IAT) algorithm for training neural dialogue systems.
IAT encourages the model to be sensitive to perturbations in the dialogue history and therefore to learn from them.
We show that our approach can better model dialogue history and generate more diverse and consistent responses.
arXiv Detail & Related papers (2021-05-31T17:28:37Z)
- Refine and Imitate: Reducing Repetition and Inconsistency in Persuasion Dialogues via Reinforcement Learning and Human Demonstration [45.14559188965439]
We propose to apply reinforcement learning to refine an MLE-based language model without user simulators.
We distill sentence-level information about repetition, inconsistency and task relevance through rewards.
Experiments show that our model outperforms previous state-of-the-art dialogue models on both automatic metrics and human evaluation results.
arXiv Detail & Related papers (2020-12-31T00:02:51Z)
- Ranking Enhanced Dialogue Generation [77.8321855074999]
How to effectively utilize the dialogue history is a crucial problem in multi-turn dialogue generation.
Previous works usually employ various neural network architectures to model the history.
This paper proposes a Ranking Enhanced Dialogue generation framework.
arXiv Detail & Related papers (2020-08-13T01:49:56Z)
- Is this Dialogue Coherent? Learning from Dialogue Acts and Entities [82.44143808977209]
We create the Switchboard Coherence (SWBD-Coh) corpus, a dataset of human-human spoken dialogues annotated with turn coherence ratings.
Our statistical analysis of the corpus indicates how turn coherence perception is affected by patterns of entity distribution.
We find that models combining both DA and entity information yield the best performances both for response selection and turn coherence rating.
arXiv Detail & Related papers (2020-06-17T21:02:40Z)
- Rethinking Dialogue State Tracking with Reasoning [76.0991910623001]
This paper proposes to track dialogue states gradually by reasoning over dialogue turns with the help of back-end data.
Empirical results demonstrate that our method significantly outperforms the state-of-the-art methods by 38.6% in terms of joint belief accuracy for MultiWOZ 2.1.
arXiv Detail & Related papers (2020-05-27T02:05:33Z)
- Dialogue-Based Relation Extraction [53.2896545819799]
We present the first human-annotated dialogue-based relation extraction (RE) dataset DialogRE.
We argue that speaker-related information plays a critical role in the proposed task, based on an analysis of similarities and differences between dialogue-based and traditional RE tasks.
Experimental results demonstrate that a speaker-aware extension on the best-performing model leads to gains in both the standard and conversational evaluation settings.
arXiv Detail & Related papers (2020-04-17T03:51:57Z)