Promoting Open-domain Dialogue Generation through Learning Pattern
Information between Contexts and Responses
- URL: http://arxiv.org/abs/2309.02823v1
- Date: Wed, 6 Sep 2023 08:11:39 GMT
- Title: Promoting Open-domain Dialogue Generation through Learning Pattern
Information between Contexts and Responses
- Authors: Mengjuan Liu, Chenyang Liu, Yunfan Yang, Jiang Liu, Mohan Jing
- Abstract summary: This paper improves the quality of generated responses by learning the implicit pattern information between contexts and responses in the training samples.
We also design a response-aware mechanism for mining the implicit pattern information between contexts and responses so that the generated replies are more diverse and approximate to human replies.
- Score: 5.936682548344234
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, utilizing deep neural networks to build open-domain dialogue
models has become a hot topic. However, the responses generated by these models
suffer from many problems: they are often not contextualized and tend to be
generic and uninformative, which seriously damages the user experience.
Therefore, many studies have tried introducing additional information into
dialogue models to make the generated responses more vivid and informative.
Unlike these approaches, this paper improves the quality of generated responses
by learning the implicit pattern information between contexts and responses in
the training samples. We first build an open-domain dialogue model based on a
pre-trained language model (i.e., GPT-2). We then propose an improved scheduled
sampling method for pre-trained models, by which the ground-truth responses can
guide response generation during training while avoiding the exposure-bias
problem. More importantly, we design a response-aware mechanism for mining the
implicit pattern information between contexts and responses so that the
generated replies are more diverse and closer to human replies. Finally, we
evaluate the proposed model (RAD) on the Persona-Chat and DailyDialog datasets;
the experimental results show that our model outperforms the baselines on most
automatic and manual metrics.
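The scheduled sampling component of the abstract is concrete enough to sketch. Below is a minimal two-pass scheduled-sampling training step for a causal LM such as GPT-2, using PyTorch and Hugging Face transformers. It illustrates the generic technique of mixing the model's own predictions into the training inputs to reduce exposure bias; the function name, the fixed `sample_prob`, and the two-pass scheme itself are illustrative assumptions, not the paper's exact improved method.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

def scheduled_sampling_step(model, input_ids, sample_prob):
    """One scheduled-sampling training step (generic sketch).

    sample_prob: probability of feeding the model its own prediction
    instead of the gold token (typically annealed from 0 upward).
    """
    # Pass 1 (no gradients): teacher-forced predictions at every position.
    with torch.no_grad():
        logits = model(input_ids).logits
    preds = logits.argmax(dim=-1)

    # Mix gold tokens with model predictions: the prediction made at
    # position t-1 is the candidate input token at position t.
    mixed = input_ids.clone()
    coin = torch.rand(input_ids[:, 1:].shape, device=input_ids.device) < sample_prob
    mixed[:, 1:] = torch.where(coin, preds[:, :-1], input_ids[:, 1:])

    # Pass 2 (with gradients): condition on the mixed inputs but compute
    # the LM loss against the gold tokens (GPT-2 shifts labels internally).
    return model(mixed, labels=input_ids).loss

# Usage sketch on a toy batch.
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.train()
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
batch = tokenizer(["hello , how are you ? i am fine ."], return_tensors="pt")
loss = scheduled_sampling_step(model, batch["input_ids"], sample_prob=0.25)
loss.backward()
```

In practice, sample_prob is usually annealed from 0 toward a cap over the course of training, so early updates are fully teacher-forced and later ones increasingly condition on the model's own outputs.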
Related papers
- PICK: Polished & Informed Candidate Scoring for Knowledge-Grounded
Dialogue Systems [59.1250765143521]
Current knowledge-grounded dialogue systems often fail to align the generated responses with human-preferred qualities.
We propose Polished & Informed Candidate Scoring (PICK), a generation re-scoring framework.
We demonstrate the effectiveness of PICK in generating responses that are more faithful while keeping them relevant to the dialogue history.
arXiv Detail & Related papers (2023-09-19T08:27:09Z) - Learning from Perturbations: Diverse and Informative Dialogue Generation
with Inverse Adversarial Training [10.17868476063421]
We propose an Inverse Adversarial Training (IAT) algorithm for training neural dialogue systems.
IAT encourages the model to be sensitive to perturbations in the dialogue history and therefore to learn from them.
We show that our approach can better model dialogue history and generate more diverse and consistent responses.
arXiv Detail & Related papers (2021-05-31T17:28:37Z) - Enhancing Dialogue Generation via Multi-Level Contrastive Learning [57.005432249952406]
We propose a multi-level contrastive learning paradigm to model the fine-grained quality of the responses with respect to the query.
A Rank-aware Calibration (RC) network is designed to construct the multi-level contrastive optimization objectives.
We build a Knowledge Inference (KI) component to capture the keyword knowledge from the reference during training and exploit such information to encourage the generation of informative words.
arXiv Detail & Related papers (2020-09-19T02:41:04Z) - Learning an Effective Context-Response Matching Model with
Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
arXiv Detail & Related papers (2020-09-14T08:44:46Z) - Ranking Enhanced Dialogue Generation [77.8321855074999]
How to effectively utilize the dialogue history is a crucial problem in multi-turn dialogue generation.
Previous works usually employ various neural network architectures to model the history.
This paper proposes a Ranking Enhanced Dialogue generation framework.
arXiv Detail & Related papers (2020-08-13T01:49:56Z) - A Controllable Model of Grounded Response Generation [122.7121624884747]
Current end-to-end neural conversation models inherently lack the flexibility to impose semantic control in the response generation process.
We propose a framework that we call controllable grounded response generation (CGRG).
We show that using this framework, a transformer-based model with a novel inductive attention mechanism, trained on a conversation-like Reddit dataset, outperforms strong generation baselines.
arXiv Detail & Related papers (2020-05-01T21:22:08Z) - Counterfactual Off-Policy Training for Neural Response Generation [94.76649147381232]
We propose to explore potential responses by counterfactual reasoning.
Training on the counterfactual responses under the adversarial learning framework helps to explore the high-reward area of the potential response space.
An empirical study on the DailyDialog dataset shows that our approach significantly outperforms the HRED model.
arXiv Detail & Related papers (2020-04-29T22:46:28Z) - An Empirical Investigation of Pre-Trained Transformer Language Models
for Open-Domain Dialogue Generation [23.343006562849126]
We present an empirical investigation of pre-trained Transformer-based auto-regressive language models for the task of open-domain dialogue generation.
The pre-training and fine-tuning paradigm is employed for learning.
Experiments are conducted on the typical single-turn and multi-turn dialogue corpora such as Weibo, Douban, Reddit, DailyDialog, and Persona-Chat.
arXiv Detail & Related papers (2020-03-09T15:20:21Z) - Posterior-GAN: Towards Informative and Coherent Response Generation with
Posterior Generative Adversarial Network [38.576579498740244]
We propose a novel encoder-decoder-based generative adversarial learning framework, the Posterior Generative Adversarial Network (Posterior-GAN).
Experimental results demonstrate that our method effectively boosts the informativeness and coherence of the generated response on both automatic and human evaluation.
arXiv Detail & Related papers (2020-03-04T11:57:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.