Stylized Knowledge-Grounded Dialogue Generation via Disentangled
Template Rewriting
- URL: http://arxiv.org/abs/2204.05610v1
- Date: Tue, 12 Apr 2022 08:17:21 GMT
- Title: Stylized Knowledge-Grounded Dialogue Generation via Disentangled
Template Rewriting
- Authors: Qingfeng Sun, Can Xu, Huang Hu, Yujing Wang, Jian Miao, Xiubo Geng,
Yining Chen, Fei Xu, Daxin Jiang
- Abstract summary: We study a new problem: Stylized Knowledge-Grounded Dialogue Generation.
It presents two challenges: how to train an SKDG model when no <context, knowledge, stylized response> triples are available, and how to cohere with context and preserve knowledge when generating a stylized response.
We propose a novel disentangled template rewriting (DTR) method which generates responses by combining disentangled style templates and content templates.
- Score: 55.10977824136768
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current Knowledge-Grounded Dialogue Generation (KDG) models specialize in
producing rational and factual responses. However, to establish long-term
relationships with users, the KDG model needs the capability to generate
responses in a desired style or attribute. Thus, we study a new problem:
Stylized Knowledge-Grounded Dialogue Generation (SKDG). It presents two
challenges: (1) How to train an SKDG model when no <context, knowledge,
stylized response> triples are available. (2) How to cohere with context and
preserve the knowledge when generating a stylized response. In this paper, we
propose a novel disentangled template rewriting (DTR) method which generates
responses by combining disentangled style templates (from a monolingual stylized
corpus) and content templates (from a KDG corpus). The entire framework is
end-to-end differentiable and learned without supervision. Extensive
experiments on two benchmarks indicate that DTR achieves a significant
improvement on all evaluation metrics compared with previous state-of-the-art
stylized dialogue generation methods. In addition, DTR achieves performance
comparable to state-of-the-art KDG methods in the standard KDG evaluation
setting.
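To make the template-rewriting idea concrete, here is a minimal sketch, not the authors' actual model: a toy style lexicon stands in for DTR's learned style detector, style-bearing tokens are masked out of a knowledge-grounded response to form a content template, and the slots are then filled with fragments mined from a stylized corpus. The lexicon, function names, and example strings are all illustrative assumptions.

```python
# Illustrative sketch of disentangled template rewriting; the real DTR model
# learns fragment disentanglement and rewriting end-to-end without supervision.

STYLE_LEXICON = {"delightful", "marvelous", "dreadful"}  # assumed toy lexicon

def to_content_template(response: str) -> list[str]:
    """Mask style-bearing tokens with a [STYLE] slot, keeping content words."""
    return ["[STYLE]" if tok.lower() in STYLE_LEXICON else tok
            for tok in response.split()]

def rewrite_with_style(template: list[str], style_fragments: list[str]) -> str:
    """Fill [STYLE] slots with fragments from a monolingual stylized corpus."""
    out, i = [], 0
    for tok in template:
        if tok == "[STYLE]" and i < len(style_fragments):
            out.append(style_fragments[i])
            i += 1
        else:
            out.append(tok)
    return " ".join(out)

template = to_content_template("The museum's delightful exhibit opened in 1889")
print(rewrite_with_style(template, ["most wondrous"]))
# -> The museum's most wondrous exhibit opened in 1889
```

In the paper, both the choice of which fragments to replace and the rewriting itself are learned jointly and differentiably; the dictionary lookup above only illustrates the disentangle-then-rewrite pipeline.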
Related papers
- Raw Text is All you Need: Knowledge-intensive Multi-turn Instruction Tuning for Large Language Model [25.459787361454353]
We present a novel framework named R2S that leverages CoD (Chain of Dialogue) logic to guide large language models (LLMs) in generating knowledge-intensive multi-turn dialogues for instruction tuning.
By integrating raw documents from both open-source datasets and domain-specific web-crawled documents into a benchmark, K-BENCH, we cover diverse areas such as Wikipedia (English), Science (Chinese), and Artifacts (Chinese).
arXiv Detail & Related papers (2024-07-03T12:04:10Z)
- Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments; a prompt sketch appears after this list.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
- The Whole Truth and Nothing But the Truth: Faithful and Controllable Dialogue Response Generation with Dataflow Transduction and Constrained Decoding [65.34601470417967]
We describe a hybrid architecture for dialogue response generation that combines the strengths of neural language modeling and rule-based generation; a sketch of the constrained-decoding idea appears after this list.
Our experiments show that this system outperforms both rule-based and learned approaches in human evaluations of fluency, relevance, and truthfulness.
arXiv Detail & Related papers (2022-09-16T09:00:49Z)
- RSTGen: Imbuing Fine-Grained Interpretable Control into Long-Form Text Generators [26.27412809287025]
RSTGen is a framework that controls the discourse structure, semantics and topics of generated text.
We demonstrate our model's ability to control structural discourse and semantic features of generated text in open generation evaluation.
arXiv Detail & Related papers (2022-05-25T09:06:04Z)
- Learning to Express in Knowledge-Grounded Conversation [62.338124154016825]
We consider two aspects of knowledge expression, namely the structure of the response and the style of the content in each part.
We propose a segmentation-based generation model and optimize the model by a variational approach to discover the underlying pattern of knowledge expression in a response.
arXiv Detail & Related papers (2022-04-12T13:43:47Z)
- Language Model as an Annotator: Exploring DialoGPT for Dialogue Summarization [29.887562761942114]
We show how DialoGPT, a pre-trained model for conversational response generation, can be developed as an unsupervised dialogue annotator.
We apply DialoGPT to label three types of features on two dialogue summarization datasets, SAMSum and AMI, and employ pre-trained and non-pre-trained models as our summarizers; a scoring sketch in this spirit appears after this list.
arXiv Detail & Related papers (2021-05-26T13:50:13Z)
- Knowledge-based Review Generation by Coherence Enhanced Text Planning [45.473253542837995]
We propose a novel Coherence Enhanced Text Planning model (CETP) based on knowledge graphs (KGs) to improve both global and local coherence for review generation.
For global coherence, we design a hierarchical self-attentive architecture with both subgraph- and node-level attention to enhance the correlations between subgraphs.
Experiments on three datasets confirm the effectiveness of our model on improving the content coherence of generated texts.
arXiv Detail & Related papers (2021-05-09T02:12:05Z)
- Modelling Hierarchical Structure between Dialogue Policy and Natural Language Generator with Option Framework for Task-oriented Dialogue System [49.39150449455407]
HDNO is an option framework for learning latent dialogue acts, which avoids hand-designing specific dialogue act representations.
We test HDNO on MultiWoz 2.0 and MultiWoz 2.1, datasets of multi-domain dialogues, comparing it with a word-level E2E model trained with RL, LaRL, and HDSA.
arXiv Detail & Related papers (2020-06-11T20:55:28Z)
- Prototype-to-Style: Dialogue Generation with Style-Aware Editing on Retrieval Memory [65.98002918470543]
We introduce a new prototype-to-style framework to tackle the challenge of stylistic dialogue generation.
The framework uses an Information Retrieval (IR) system to extract a response prototype from the retrieved response.
A stylistic response generator then takes the prototype and the desired language style as model input to obtain a high-quality and stylistic response.
arXiv Detail & Related papers (2020-04-05T14:36:15Z)
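The Contextualization Distillation entry above describes prompting an LLM to expand knowledge-graph triplets into context-rich text. A minimal sketch of that prompt construction, with the instruction wording assumed rather than taken from the paper:

```python
def contextualization_prompt(head: str, relation: str, tail: str) -> str:
    """Build an instruction asking an LLM to verbalize a compact KG triplet."""
    # Wording is an assumed example, not the paper's exact prompt.
    return (
        "Transform the following knowledge-graph triplet into a short, "
        "context-rich paragraph that makes the relation explicit.\n"
        f"Triplet: ({head}, {relation}, {tail})"
    )

print(contextualization_prompt("Marie Curie", "award_received",
                               "Nobel Prize in Physics"))
# The resulting paragraph would then serve as auxiliary training context for a
# discriminative or generative KGC model.
```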
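As a concrete illustration of the constrained-decoding component referenced in the Dataflow Transduction entry, here is a sketch under assumptions, not that paper's implementation: at each decoding step, logits for tokens outside the set permitted by the rule-based layer are masked to negative infinity, so the neural model can only emit outputs the rules allow. The toy vocabulary and logit values are fabricated for illustration.

```python
import numpy as np

def constrained_step(logits: np.ndarray, allowed_ids: set[int]) -> int:
    """Greedy decoding step restricted to tokens the grammar permits."""
    masked = np.full_like(logits, -np.inf)
    for tok in allowed_ids:
        masked[tok] = logits[tok]
    return int(np.argmax(masked))

# Toy vocabulary and a fake logit vector standing in for a language model.
vocab = ["yes", "no", "maybe", "<eos>"]
logits = np.array([1.2, 0.4, 2.9, 0.1])

# Suppose the rule-based layer only permits faithful answers "yes"/"no" here:
# "maybe" has the highest raw logit but is masked out.
allowed = {0, 1}
print(vocab[constrained_step(logits, allowed)])  # -> yes
```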
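The "Language Model as an Annotator" entry also suggests a simple unsupervised recipe that can be sketched with the public DialoGPT checkpoint: score utterances by their negative log-likelihood under the pretrained model and derive labels from the scores. The scoring-then-thresholding logic below is an assumption for illustration, not the paper's exact feature-labeling procedure.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Public DialoGPT checkpoint on the Hugging Face Hub.
tok = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")
model.eval()

@torch.no_grad()
def nll(text: str) -> float:
    """Average negative log-likelihood of `text` under DialoGPT."""
    ids = tok(text, return_tensors="pt").input_ids
    return model(ids, labels=ids).loss.item()

# Lower NLL = more dialogue-like under the model; a threshold on such scores
# (assumed here) could be used to label utterances as annotation features.
for utt in ["how are you doing today?", "colorless green ideas sleep furiously"]:
    print(f"{nll(utt):6.2f}  {utt}")
```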
This list is automatically generated from the titles and abstracts of the papers on this site.