Multi-turn Response Selection using Dialogue Dependency Relations
- URL: http://arxiv.org/abs/2010.01502v3
- Date: Thu, 30 Nov 2023 06:44:57 GMT
- Title: Multi-turn Response Selection using Dialogue Dependency Relations
- Authors: Qi Jia, Yizhu Liu, Siyu Ren, Kenny Q. Zhu, Haifeng Tang
- Abstract summary: Multi-turn response selection is a task designed for developing dialogue agents.
We propose a dialogue extraction algorithm to transform a dialogue history into threads based on their dependency relations.
Our model outperforms the state-of-the-art baselines on both DSTC7 and DSTC8*, with competitive results on UbuntuV2.
- Score: 39.99448321736736
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-turn response selection is a task designed for developing dialogue
agents. Performance on this task has improved remarkably with
pre-trained language models. However, these models simply concatenate the turns
in dialogue history as the input and largely ignore the dependencies between
the turns. In this paper, we propose a dialogue extraction algorithm to
transform a dialogue history into threads based on their dependency relations.
Each thread can be regarded as a self-contained sub-dialogue. We also propose
Thread-Encoder model to encode threads and candidates into compact
representations by pre-trained Transformers and finally get the matching score
through an attention layer. The experiments show that dependency relations are
helpful for dialogue context understanding, and our model outperforms the
state-of-the-art baselines on both DSTC7 and DSTC8*, with competitive results
on UbuntuV2.
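The abstract describes two pieces: a dialogue extraction step that splits the history into self-contained threads along turn-level dependency links, and a Thread-Encoder that encodes each thread and the candidate and scores the candidate through an attention layer. Below is a minimal, hedged sketch of that pipeline, assuming the dependency links between turns are already given (in the paper they come from predicted dialogue dependency relations) and substituting a toy bag-of-words encoder for the pre-trained Transformer encoders; all names are illustrative, not the authors' code.

```python
# Hedged sketch (not the authors' code): split a dialogue into threads
# along turn-level dependency links, then score a candidate response by
# attending over thread representations. A bag-of-words vector stands in
# for the pre-trained Transformer encoders used in the paper.
from collections import defaultdict
import numpy as np

def extract_threads(turns, parent):
    """parent[i] is the index of the earlier turn that turn i depends on,
    or None if turn i starts a new thread (the links are assumed to come
    from predicted dialogue dependency relations). Each thread is the
    chain from a leaf turn back to its root, i.e. a self-contained
    sub-dialogue."""
    children = defaultdict(list)
    for i, p in enumerate(parent):
        if p is not None:
            children[p].append(i)
    threads = []
    for leaf in (i for i in range(len(turns)) if i not in children):
        chain, node = [], leaf
        while node is not None:
            chain.append(turns[node])
            node = parent[node]
        threads.append(chain[::-1])          # root -> leaf order
    return threads

def bow(texts, vocab):
    """Toy encoder: L2-normalized bag-of-words over a shared vocabulary."""
    v = np.zeros(len(vocab))
    for tok in " ".join(texts).lower().split():
        v[vocab[tok]] += 1.0
    return v / (np.linalg.norm(v) + 1e-8)

def match_score(threads, candidate):
    """Encode threads and the candidate, attend over the thread
    representations with the candidate as query, and return a score."""
    tokens = {t for th in threads for u in th for t in u.lower().split()}
    vocab = {t: i for i, t in enumerate(tokens | set(candidate.lower().split()))}
    T = np.stack([bow(th, vocab) for th in threads])   # (num_threads, dim)
    c = bow([candidate], vocab)                        # (dim,)
    attn = np.exp(T @ c); attn /= attn.sum()           # attention weights
    context = attn @ T                                 # candidate-aware context
    return float(context @ c)

turns = ["how do I mount a usb drive?", "try sudo mount /dev/sdb1 /mnt",
         "also, is vim preinstalled?", "no, install it with apt"]
parent = [None, 0, None, 2]                            # dependency links
threads = extract_threads(turns, parent)
print(threads)                                         # two threads
print(match_score(threads, "thanks, sudo mount worked"))
```

In this toy run the mount question and the vim question land in separate threads, so most of the attention mass falls on the thread the candidate actually continues.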
Related papers
- Contextual Data Augmentation for Task-Oriented Dialog Systems [8.085645180329417]
We develop a novel dialog augmentation model that generates a user turn, conditioning on full dialog context.
With a new prompt design for the language model and output re-ranking, the dialogs generated by our model can be directly used to train downstream dialog systems.
arXiv Detail & Related papers (2023-10-16T13:22:34Z)
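As a rough illustration of the recipe summarized above (prompt the language model with the full dialog context, sample several candidate user turns, then re-rank), here is a hedged sketch; `lm_generate` and `scorer` are hypothetical callables standing in for the generation model and re-ranker, and the prompt layout is an assumption rather than the paper's actual prompt design.

```python
# Hedged sketch of prompt-then-rerank dialog augmentation; the prompt
# format and both callables are assumptions, not the paper's API.
from typing import Callable, List

def build_prompt(context: List[str]) -> str:
    """Lay out the full dialog context, then cue the model for a user turn."""
    roles = ["User", "System"]
    lines = [f"{roles[i % 2]}: {u}" for i, u in enumerate(context)]
    return "\n".join(lines) + "\nUser:"

def augment_user_turn(context: List[str],
                      lm_generate: Callable[[str, int], List[str]],
                      scorer: Callable[[List[str], str], float],
                      n_samples: int = 5) -> str:
    """Sample candidate user turns from the LM and keep the best-scored one."""
    prompt = build_prompt(context)
    candidates = lm_generate(prompt, n_samples)   # hypothetical LM call
    return max(candidates, key=lambda c: scorer(context, c))
```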
- Multi-turn Dialogue Comprehension from a Topic-aware Perspective [70.37126956655985]
This paper proposes to model multi-turn dialogues from a topic-aware perspective.
We use a dialogue segmentation algorithm to split a dialogue passage into topic-concentrated fragments in an unsupervised way.
We also present a novel model, Topic-Aware Dual-Attention Matching (TADAM) Network, which takes topic segments as processing elements.
arXiv Detail & Related papers (2023-09-18T11:03:55Z)
- Unsupervised Dialogue Topic Segmentation with Topic-aware Utterance Representation [51.22712675266523]
Dialogue Topic Segmentation (DTS) plays an essential role in a variety of dialogue modeling tasks.
We propose a novel unsupervised DTS framework, which learns topic-aware utterance representations from unlabeled dialogue data.
arXiv Detail & Related papers (2023-05-04T11:35:23Z)
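The two entries above both revolve around splitting a dialogue into topic-concentrated fragments without labels. For intuition only, here is a hedged sketch of a generic similarity-drop heuristic; the bag-of-words embedding is a stand-in for the learned topic-aware utterance representations, and neither paper's actual segmentation algorithm is reproduced here.

```python
# Hedged sketch of a generic unsupervised segmentation heuristic:
# start a new fragment wherever consecutive utterances are least similar.
import numpy as np

def bow(utterance, vocab):
    """Toy L2-normalized bag-of-words vector over a shared vocabulary."""
    v = np.zeros(len(vocab))
    for tok in utterance.lower().split():
        v[vocab[tok]] += 1.0
    return v / (np.linalg.norm(v) + 1e-8)

def segment(utterances, threshold=0.1):
    """Cut the dialogue whenever the cosine similarity between two
    consecutive utterances falls below `threshold`."""
    vocab = {t: i for i, t in enumerate({tok for u in utterances
                                         for tok in u.lower().split()})}
    vecs = [bow(u, vocab) for u in utterances]
    fragments, current = [], [utterances[0]]
    for prev, vec, utt in zip(vecs, vecs[1:], utterances[1:]):
        if float(prev @ vec) < threshold:
            fragments.append(current)
            current = []
        current.append(utt)
    fragments.append(current)
    return fragments

print(segment(["how do I mount a usb drive?",
               "use sudo mount with the device path",
               "what is a good pizza place nearby?"]))
```

In this example the third utterance shares no words with the second, so it starts a new topic fragment.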
- CTRLStruct: Dialogue Structure Learning for Open-Domain Response Generation [38.60073402817218]
Well-structured topic flow can leverage background information and predict future topics to help generate controllable and explainable responses.
We present a new framework for dialogue structure learning to effectively explore topic-level dialogue clusters as well as their transitions with unlabelled information.
Experiments on two popular open-domain dialogue datasets show that our model generates more coherent responses than strong baseline dialogue models.
arXiv Detail & Related papers (2023-03-02T09:27:11Z)
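To make the cluster-and-transition idea concrete, here is a hedged sketch that clusters utterance vectors with k-means and counts transitions between clusters; CTRLStruct's actual structure learning and training objective are more involved, and the function below is only illustrative.

```python
# Hedged sketch of the cluster-plus-transition idea only, not CTRLStruct's
# training procedure.
import numpy as np
from sklearn.cluster import KMeans

def topic_structure(utterance_vectors, dialogue_lengths, n_clusters=8):
    """utterance_vectors: all utterances of all dialogues stacked row-wise
    (any sentence encoder could produce them); dialogue_lengths: number of
    utterances per dialogue, in the same order. Returns cluster ids per
    utterance and a row-normalized cluster-to-cluster transition matrix."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(utterance_vectors)
    trans = np.zeros((n_clusters, n_clusters))
    start = 0
    for length in dialogue_lengths:
        ids = labels[start:start + length]
        for a, b in zip(ids, ids[1:]):       # consecutive turns in one dialogue
            trans[a, b] += 1.0
        start += length
    trans /= np.maximum(trans.sum(axis=1, keepdims=True), 1.0)
    return labels, trans
```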
- DIONYSUS: A Pre-trained Model for Low-Resource Dialogue Summarization [127.714919036388]
DIONYSUS is a pre-trained encoder-decoder model for summarizing dialogues in any new domain.
Our experiments show that DIONYSUS outperforms existing methods on six datasets.
arXiv Detail & Related papers (2022-12-20T06:21:21Z)
- Dialogue History Matters! Personalized Response Selection in Multi-turn Retrieval-based Chatbots [62.295373408415365]
We propose a personalized hybrid matching network (PHMN) for context-response matching.
Among other contributions, our model extracts personalized wording behaviors from user-specific dialogue history as extra matching information.
We evaluate our model on two large datasets with user identification, i.e., the personalized Ubuntu dialogue corpus (P-Ubuntu) and the personalized Weibo dataset (P-Weibo).
arXiv Detail & Related papers (2021-03-17T09:42:11Z)
- Reasoning in Dialog: Improving Response Generation by Context Reading Comprehension [49.92173751203827]
In multi-turn dialog, utterances do not always take the full form of sentences.
We propose to improve the response generation performance by examining the model's ability to answer a reading comprehension question.
arXiv Detail & Related papers (2020-12-14T10:58:01Z)
- Multi-Domain Dialogue State Tracking based on State Graph [23.828348485513043]
We investigate the problem of multi-domain Dialogue State Tracking (DST) with open vocabulary.
Existing approaches usually concatenate the previous dialogue state with the dialogue history as the input to a bi-directional Transformer encoder.
We propose to construct a dialogue state graph in which domains, slots and values from the previous dialogue state are connected properly.
arXiv Detail & Related papers (2020-10-21T16:55:18Z)
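For the state-graph entry above, here is a hedged sketch of one simple way such a graph could be assembled, connecting domain, slot and value nodes from the previous dialogue state; the node and edge scheme is an assumption, not the paper's exact construction.

```python
# Hedged sketch: one simple way to turn a previous dialogue state into a
# graph over domain, slot and value nodes; the paper's exact construction
# and how the graph is encoded may differ.
def build_state_graph(prev_state):
    """prev_state maps (domain, slot) pairs to values, e.g.
    {("hotel", "area"): "north", ("hotel", "stars"): "4"}.
    Returns an undirected adjacency list over string-typed nodes."""
    graph = {}

    def add_edge(a, b):
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)

    for (domain, slot), value in prev_state.items():
        d = f"domain:{domain}"
        s = f"slot:{domain}-{slot}"
        v = f"value:{value}"
        add_edge(d, s)   # domain <-> one of its slots
        add_edge(s, v)   # slot <-> its current value
    return graph

print(build_state_graph({("hotel", "area"): "north", ("hotel", "stars"): "4"}))
```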
This list is automatically generated from the titles and abstracts of the papers on this site.