RT-KGD: Relation Transition Aware Knowledge-Grounded Dialogue Generation
- URL: http://arxiv.org/abs/2207.08212v1
- Date: Sun, 17 Jul 2022 16:07:38 GMT
- Title: RT-KGD: Relation Transition Aware Knowledge-Grounded Dialogue Generation
- Authors: Kexin Wang, Zhixu Li, Jiaan Wang, Jianfeng Qu, Ying He, An Liu, Lei
Zhao
- Abstract summary: We propose a Relation Transition aware Knowledge-Grounded Dialogue Generation model (RT-KGD)
Specifically, inspired by the latent logic of human conversation, our model integrates dialogue-level relation transition regularities with turn-level entity semantic information.
In this manner, interactions among knowledge pieces are exploited to provide abundant clues for predicting the appropriate knowledge and generating coherent responses.
- Score: 20.37399983466163
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Grounding dialogue system with external knowledge is a promising way to
improve the quality of responses. Most existing works adopt knowledge graphs
(KGs) as the external resources, paying attention to the contribution of
entities in the last utterance of the dialogue for context understanding and
response generation. Nevertheless, the correlations between knowledge implied
in the multi-turn context and the transition regularities between relations in
KGs are under-explored. To this end, we propose a Relation Transition aware
Knowledge-Grounded Dialogue Generation model (RT-KGD). Specifically, inspired
by the latent logic of human conversation, our model integrates dialogue-level
relation transition regularities with turn-level entity semantic information.
In this manner, interactions among knowledge pieces are exploited to produce
abundant clues for predicting the appropriate knowledge and generating coherent
responses. Experimental results on both automatic and manual evaluation
indicate that our model outperforms state-of-the-art baselines.
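The core fusion idea described in the abstract can be sketched minimally: summarize the dialogue-level relation transitions seen so far, summarize the entities mentioned in the current turn, and combine the two into one clue vector for knowledge prediction. Everything below (embedding tables, dimensions, mean pooling, concatenation) is an illustrative assumption, not the paper's actual learned architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 16

# Hypothetical embedding tables; names and sizes are illustrative only.
relation_embed = {"directed_by": rng.normal(size=EMBED_DIM),
                  "starred_in": rng.normal(size=EMBED_DIM)}
entity_embed = {"Inception": rng.normal(size=EMBED_DIM),
                "Christopher Nolan": rng.normal(size=EMBED_DIM)}

def fuse_turn(relations, entities):
    """Fuse dialogue-level relation-transition information (mean of the
    relation embeddings observed across turns) with turn-level entity
    semantics (mean of the entity embeddings in the current turn) by
    simple concatenation."""
    rel_vec = np.mean([relation_embed[r] for r in relations], axis=0)
    ent_vec = np.mean([entity_embed[e] for e in entities], axis=0)
    return np.concatenate([rel_vec, ent_vec])

clue = fuse_turn(["directed_by"], ["Inception", "Christopher Nolan"])
print(clue.shape)  # (32,)
```

In the paper the two signals are produced by learned encoders; the concatenation here only illustrates that both levels of information feed the same prediction step.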
Related papers
- Bridging Information Gaps in Dialogues With Grounded Exchanges Using Knowledge Graphs [4.449835214520727]
We study the potential of large language models for conversational grounding.
Our approach involves annotating human conversations across five knowledge domains to create a new dialogue corpus called BridgeKG.
Our findings offer insights into how these models use in-context learning for conversational grounding tasks and common prediction errors.
arXiv Detail & Related papers (2024-08-02T08:07:15Z) - Improving Factual Consistency for Knowledge-Grounded Dialogue Systems
via Knowledge Enhancement and Alignment [77.56326872997407]
Knowledge-grounded dialogue systems based on pretrained language models (PLMs) are prone to generating responses that are factually inconsistent with the provided knowledge source.
Inspired by previous work which identified that feed-forward networks (FFNs) within Transformers are responsible for factual knowledge expressions, we investigate two methods to efficiently improve the factual expression capability.
arXiv Detail & Related papers (2023-10-12T14:44:05Z) - PICK: Polished & Informed Candidate Scoring for Knowledge-Grounded
Dialogue Systems [59.1250765143521]
Current knowledge-grounded dialogue systems often fail to align the generated responses with human-preferred qualities.
We propose Polished & Informed Candidate Scoring (PICK), a generation re-scoring framework.
We demonstrate the effectiveness of PICK in generating responses that are more faithful while keeping them relevant to the dialogue history.
arXiv Detail & Related papers (2023-09-19T08:27:09Z) - Knowledge Graph-Augmented Language Models for Knowledge-Grounded
Dialogue Generation [58.65698688443091]
We propose SUbgraph Retrieval-augmented GEneration (SURGE), a framework for generating context-relevant and knowledge-grounded dialogues with Knowledge Graphs (KGs).
Our framework first retrieves the relevant subgraph from the KG, and then enforces consistency across facts by perturbing their word embeddings conditioned on the retrieved subgraph.
We validate our SURGE framework on OpendialKG and KOMODIS datasets, showing that it generates high-quality dialogues that faithfully reflect the knowledge from KG.
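SURGE's first step, retrieving a context-relevant subgraph, can be approximated with a simple similarity-based retriever: score candidate KG-triple embeddings against the encoded dialogue context and keep the top-k. The vectors and cosine scoring below are hypothetical stand-ins; the actual framework uses a learned retriever.

```python
import numpy as np

def retrieve_subgraph(context_vec, triple_vecs, k=2):
    """Return the indices of the k KG-triple embeddings most similar
    (by cosine similarity) to the dialogue-context embedding."""
    sims = (triple_vecs @ context_vec) / (
        np.linalg.norm(triple_vecs, axis=1) * np.linalg.norm(context_vec))
    return np.argsort(sims)[::-1][:k]

rng = np.random.default_rng(0)
context = rng.normal(size=8)        # encoded dialogue history (assumed)
triples = rng.normal(size=(6, 8))   # encoded candidate KG triples (assumed)
subgraph = retrieve_subgraph(context, triples, k=2)
print(subgraph)
```

The retrieved indices would then select the facts that condition response generation.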
arXiv Detail & Related papers (2023-05-30T08:36:45Z) - Contextual Knowledge Learning For Dialogue Generation [13.671946960656467]
We present a novel approach to context and knowledge weighting as an integral part of model training.
We guide the model training through a Contextual Knowledge Learning process which involves Latent Vectors for context and knowledge.
arXiv Detail & Related papers (2023-05-29T16:54:10Z) - PK-Chat: Pointer Network Guided Knowledge Driven Generative Dialogue
Model [79.64376762489164]
PK-Chat is a Pointer network guided generative dialogue model, incorporating a unified pretrained language model and a pointer network over knowledge graphs.
The words generated by PK-Chat are derived either from the predicted word list or directly from entities predicted over the external knowledge graph.
Based on PK-Chat, a dialogue system is built for academic scenarios in the geosciences domain.
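The pointer-network mechanism described here mixes a distribution over the word list with a distribution over KG entities via a generation gate. This is a generic pointer-style sketch under assumed shapes, not PK-Chat's exact formulation.

```python
import numpy as np

def pointer_mix(p_vocab, p_kg, p_gen):
    """Mix a vocabulary distribution with a KG-entity distribution using
    a generation gate p_gen in [0, 1] (pointer-network style): with
    probability p_gen generate from the word list, otherwise copy from
    the knowledge graph."""
    return p_gen * p_vocab + (1.0 - p_gen) * p_kg

# Toy distributions over a shared 4-slot index space (assumed layout).
vocab = np.array([0.7, 0.2, 0.1, 0.0])  # mass on word-list tokens
kg    = np.array([0.0, 0.1, 0.0, 0.9])  # mass on KG entities
mixed = pointer_mix(vocab, kg, p_gen=0.6)
print(mixed)  # [0.42 0.16 0.06 0.36]
```

Because both inputs are valid distributions and the gate is a convex weight, the mixture remains a valid distribution.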
arXiv Detail & Related papers (2023-04-02T18:23:13Z) - Variational Reasoning over Incomplete Knowledge Graphs for
Conversational Recommendation [48.70062671767362]
We propose the Variational Reasoning over Incomplete KGs Conversational Recommender (VRICR).
Our key idea is to incorporate the large dialogue corpora that naturally accompany CRSs to enhance the incomplete KGs.
We also denote the dialogue-specific subgraphs of KGs as latent variables with categorical priors for adaptive knowledge graphs.
arXiv Detail & Related papers (2022-12-22T17:02:21Z) - Topic-Aware Response Generation in Task-Oriented Dialogue with
Unstructured Knowledge Access [20.881612071473118]
We propose "Topic-Aware Response Generation" (TARG) to better integrate topical information in task-oriented dialogue.
TARG incorporates multiple topic-aware attention mechanisms to derive the importance weighting scheme over dialogue utterances and external knowledge sources.
arXiv Detail & Related papers (2022-12-10T22:32:28Z) - RHO ($\rho$): Reducing Hallucination in Open-domain Dialogues with
Knowledge Grounding [57.46495388734495]
This paper presents RHO ($\rho$), which utilizes the representations of linked entities and relation predicates from a knowledge graph (KG).
We propose (1) local knowledge grounding to combine textual embeddings with the corresponding KG embeddings; and (2) global knowledge grounding to equip RHO with multi-hop reasoning abilities via the attention mechanism.
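RHO's two grounding steps can be sketched as: (1) locally fuse a token's textual embedding with the KG embedding of its linked entity, and (2) globally attend over a memory of KG embeddings so the decoder can reach multi-hop facts. The additive fusion and dot-product attention below are common choices assumed for illustration; the paper's exact operators may differ.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def local_grounding(token_emb, kg_emb):
    """Local grounding: combine a token's textual embedding with the KG
    embedding of its linked entity/relation (additive fusion assumed)."""
    return token_emb + kg_emb

def global_grounding(query, kg_memory):
    """Global grounding: dot-product attention over a memory of KG
    embeddings, yielding a context vector of multi-hop knowledge."""
    scores = kg_memory @ query          # one score per memory slot
    weights = softmax(scores)           # normalize to attention weights
    return weights @ kg_memory          # weighted sum of KG embeddings

rng = np.random.default_rng(1)
tok, ent = rng.normal(size=8), rng.normal(size=8)
memory = rng.normal(size=(5, 8))        # 5 KG facts, 8-dim each (assumed)
fused = local_grounding(tok, ent)
context = global_grounding(fused, memory)
print(fused.shape, context.shape)  # (8,) (8,)
```

The attention weights make the multi-hop selection soft, so gradients can flow through the choice of which KG facts to use.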
arXiv Detail & Related papers (2022-12-03T10:36:34Z) - Zero-Resource Knowledge-Grounded Dialogue Generation [29.357221039484568]
We propose representing both the knowledge that bridges a context and a response, and the way that knowledge is expressed, as latent variables.
We show that our model can achieve comparable performance with state-of-the-art methods that rely on knowledge-grounded dialogues for training.
arXiv Detail & Related papers (2020-08-29T05:48:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.