X-ReCoSa: Multi-Scale Context Aggregation For Multi-Turn Dialogue Generation
- URL: http://arxiv.org/abs/2303.07833v1
- Date: Tue, 14 Mar 2023 12:15:52 GMT
- Title: X-ReCoSa: Multi-Scale Context Aggregation For Multi-Turn Dialogue Generation
- Authors: Danqin Wu
- Abstract summary: In multi-turn dialogue generation, responses are not only related to the topic and background of the context but also related to words and phrases in the sentences of the context.
Currently, widely used hierarchical dialog models rely solely on context representations from the utterance-level encoder, ignoring the sentence representations output by the word-level encoder.
We propose X-ReCoSa, a new dialog model that tackles this problem by aggregating multi-scale context information for hierarchical dialog models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In multi-turn dialogue generation, responses are not only related to the
topic and background of the context but also related to words and phrases in
the sentences of the context. However, widely used hierarchical dialog models
rely solely on context representations from the utterance-level encoder,
ignoring the sentence representations output by the word-level encoder. This
inevitably results in a loss of information during decoding and generation. In
this paper, we propose X-ReCoSa, a new dialog model that tackles this problem
by aggregating multi-scale context information for hierarchical dialog models.
Specifically, we divide the generation decoder into an upper and a lower part,
namely the intention part and the generation part. First, the intention part
takes the context representations as input to generate the intention of the
response. Then the generation part generates words conditioned on the sentence
representations. In this way, hierarchical information is fused into response
generation. We conduct experiments on the English dataset DailyDialog.
Experimental results show that our method outperforms baseline models on both
automatic metric-based and human-based evaluations.
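As a reading aid, the split decoder described in the abstract can be sketched roughly as below. This is a minimal sketch, not the authors' implementation: it assumes Transformer-style building blocks (as in ReCoSa), and the class names, argument names, dimensions, and use of PyTorch's TransformerDecoderLayer are illustrative assumptions.

```python
# Sketch of an X-ReCoSa-style split decoder as described in the abstract.
# Assumptions (not from the paper): Transformer building blocks, PyTorch,
# and all names/dimensions below are illustrative only.
import torch
import torch.nn as nn


class XReCoSaDecoderSketch(nn.Module):
    def __init__(self, d_model: int = 512, nhead: int = 8, vocab_size: int = 32000):
        super().__init__()
        # Upper ("intention") part: cross-attends to utterance-level context representations.
        self.intention_part = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        # Lower ("generation") part: cross-attends to word-level sentence representations.
        self.generation_part = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, response_emb, context_reprs, sentence_reprs, tgt_mask=None):
        # response_emb:   (batch, tgt_len, d_model)  embedded response prefix
        # context_reprs:  (batch, n_turns, d_model)  utterance-level encoder output
        # sentence_reprs: (batch, src_len, d_model)  word-level encoder output
        intention = self.intention_part(response_emb, context_reprs, tgt_mask=tgt_mask)
        hidden = self.generation_part(intention, sentence_reprs, tgt_mask=tgt_mask)
        return self.lm_head(hidden)  # (batch, tgt_len, vocab_size)


# Usage sketch with random tensors standing in for real encoder outputs.
if __name__ == "__main__":
    dec = XReCoSaDecoderSketch()
    resp = torch.randn(2, 10, 512)   # embedded response tokens
    ctx = torch.randn(2, 5, 512)     # 5 context turns, one vector per utterance
    sent = torch.randn(2, 60, 512)   # word-level representations of the context
    logits = dec(resp, ctx, sent)
    print(logits.shape)              # torch.Size([2, 10, 32000])
```

The point of the split is that the upper block conditions only on utterance-level context vectors (the response intention), while the lower block grounds word generation in word-level sentence representations, so both scales of context reach the decoder.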
Related papers
- A Stack-Propagation Framework for Low-Resource Personalized Dialogue Generation [29.348053519918928]
We propose a novel stack-propagation framework for learning a dialogue generation and understanding pipeline.
The proposed framework can benefit from the stacked encoder and decoders to learn from much smaller personalized dialogue data.
arXiv Detail & Related papers (2024-10-26T13:09:21Z)
- DialoGen: Generalized Long-Range Context Representation for Dialogue Systems [36.23733762476647]
We propose DialoGen, a novel framework for dialogue generation with a generalized context representation.
We study the effectiveness of our proposed method on both dialogue generation (open-domain) and understanding (DST).
arXiv Detail & Related papers (2022-10-12T15:05:28Z)
- The Whole Truth and Nothing But the Truth: Faithful and Controllable Dialogue Response Generation with Dataflow Transduction and Constrained Decoding [65.34601470417967]
We describe a hybrid architecture for dialogue response generation that combines the strengths of neural language modeling and rule-based generation.
Our experiments show that this system outperforms both rule-based and learned approaches in human evaluations of fluency, relevance, and truthfulness.
arXiv Detail & Related papers (2022-09-16T09:00:49Z)
- A Speaker-aware Parallel Hierarchical Attentive Encoder-Decoder Model for Multi-turn Dialogue Generation [13.820298189734686]
This paper presents a novel open-domain dialogue generation model emphasizing the differentiation of speakers in multi-turn conversations.
Our empirical results show that PHAED, the proposed parallel hierarchical attentive encoder-decoder, outperforms the state-of-the-art in both automatic and human evaluations.
arXiv Detail & Related papers (2021-10-13T16:08:29Z)
- Reasoning in Dialog: Improving Response Generation by Context Reading Comprehension [49.92173751203827]
In multi-turn dialog, utterances do not always take the full form of sentences.
We propose to improve the response generation performance by examining the model's ability to answer a reading comprehension question.
arXiv Detail & Related papers (2020-12-14T10:58:01Z)
- Generating Dialogue Responses from a Semantic Latent Space [75.18449428414736]
We propose an alternative to end-to-end classification over the vocabulary.
We learn the pair relationship between prompts and responses as a regression task on a latent space.
Human evaluation showed that learning the task in a continuous space yields responses that are both relevant and informative.
arXiv Detail & Related papers (2020-10-04T19:06:16Z)
- Controlling Dialogue Generation with Semantic Exemplars [55.460082747572734]
We present an Exemplar-based Dialogue Generation model, EDGE, that uses the semantic frames present in exemplar responses to guide generation.
We show that controlling dialogue generation based on the semantic frames of exemplars, rather than words in the exemplar itself, improves the coherence of generated responses.
arXiv Detail & Related papers (2020-08-20T17:02:37Z)
- Ranking Enhanced Dialogue Generation [77.8321855074999]
How to effectively utilize the dialogue history is a crucial problem in multi-turn dialogue generation.
Previous works usually employ various neural network architectures to model the history.
This paper proposes a Ranking Enhanced Dialogue Generation framework.
arXiv Detail & Related papers (2020-08-13T01:49:56Z)
- Diversifying Dialogue Generation with Non-Conversational Text [38.03510529185192]
We propose a new perspective to diversify dialogue generation by leveraging non-conversational text.
We collect a large-scale non-conversational corpus from multiple sources, including forum comments, idioms, and book snippets.
The resulting model is tested on two conversational datasets and is shown to produce significantly more diverse responses without sacrificing the relevance with context.
arXiv Detail & Related papers (2020-05-09T02:16:05Z)
- Paraphrase Augmented Task-Oriented Dialog Generation [68.1790912977053]
We propose a paraphrase augmented response generation (PARG) framework that jointly trains a paraphrase model and a response generation model.
We also design a method to automatically construct a paraphrase training dataset based on dialog state and dialog act labels.
arXiv Detail & Related papers (2020-04-16T05:12:36Z)