PK-Chat: Pointer Network Guided Knowledge Driven Generative Dialogue Model
- URL: http://arxiv.org/abs/2304.00592v1
- Date: Sun, 2 Apr 2023 18:23:13 GMT
- Title: PK-Chat: Pointer Network Guided Knowledge Driven Generative Dialogue Model
- Authors: Cheng Deng, Bo Tong, Luoyi Fu, Jiaxin Ding, Dexing Cao, Xinbing Wang, Chenghu Zhou
- Abstract summary: PK-Chat is a Pointer network guided generative dialogue model, incorporating a unified pretrained language model and a pointer network over knowledge graphs.
The words generated by PK-Chat in the dialogue are derived both from prediction over the word list and from direct prediction of entities in the external knowledge graph.
Based on PK-Chat, a dialogue system is built for academic scenarios in the geosciences domain.
- Score: 79.64376762489164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the research of end-to-end dialogue systems, using real-world knowledge to
generate natural, fluent, and human-like utterances with correct answers is
crucial. However, domain-specific dialogue systems may produce incoherent
responses or introduce erroneous external information when answering questions,
owing to out-of-vocabulary terms or incorrect knowledge encoded in the neural
network's parameters. In this work, we propose PK-Chat, a Pointer network guided
Knowledge-driven generative dialogue model, incorporating a unified pretrained
language model and a pointer network over knowledge graphs. The words generated
by PK-Chat in the dialogue are derived both from prediction over the word list and
from direct prediction of entities in the external knowledge graph. Moreover,
based on PK-Chat, a dialogue system is built for academic scenarios in the
geosciences domain. Finally, an academic dialogue benchmark is constructed to
evaluate the quality of dialogue systems in academic scenarios and the source
code is available online.
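The core mechanism the abstract describes, mixing the language model's word-list prediction with a pointer (copy) distribution over knowledge-graph entities, can be sketched as follows. This is an illustrative pointer-generator-style sketch, not PK-Chat's released implementation; all class names, tensor shapes, and parameters are assumptions.

```python
# Illustrative sketch only (assumed names and shapes), not PK-Chat's released code:
# a pointer-generator-style mixture of the decoder's vocabulary distribution with
# a copy distribution over knowledge-graph entities.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PointerOverKG(nn.Module):
    def __init__(self, hidden_dim: int, vocab_size: int):
        super().__init__()
        self.vocab_proj = nn.Linear(hidden_dim, vocab_size)  # word-list prediction
        self.gate = nn.Linear(hidden_dim, 1)                  # p_gen: generate vs. copy

    def forward(self, dec_state, kg_entity_states, kg_entity_vocab_ids):
        # dec_state:           (batch, hidden)         decoder state from the pretrained LM
        # kg_entity_states:    (batch, n_ent, hidden)  encoded knowledge-graph entities
        # kg_entity_vocab_ids: (batch, n_ent) int64    vocabulary id of each entity
        p_vocab = F.softmax(self.vocab_proj(dec_state), dim=-1)

        # Attention of the decoder state over KG entities acts as the pointer
        # (copy) distribution.
        attn_scores = torch.einsum("bh,beh->be", dec_state, kg_entity_states)
        p_copy = F.softmax(attn_scores, dim=-1)

        # A learned gate decides how much probability mass goes to generating
        # from the word list versus copying an entity from the knowledge graph.
        p_gen = torch.sigmoid(self.gate(dec_state))  # (batch, 1)

        # Scatter the copy distribution into vocabulary space and mix.
        p_final = p_gen * p_vocab
        p_final = p_final.scatter_add(1, kg_entity_vocab_ids, (1.0 - p_gen) * p_copy)
        return p_final
```

Because entities are copied directly from the graph, words outside the decoder vocabulary can still be emitted, which is how a pointer mechanism of this kind targets the out-of-vocabulary problem the abstract mentions.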
Related papers
- Bridging Information Gaps in Dialogues With Grounded Exchanges Using Knowledge Graphs [4.449835214520727]
We study the potential of large language models for conversational grounding.
Our approach involves annotating human conversations across five knowledge domains to create a new dialogue corpus called BridgeKG.
Our findings offer insights into how these models use in-context learning for conversational grounding tasks and common prediction errors.
arXiv Detail & Related papers (2024-08-02T08:07:15Z)
- ChatPLUG: Open-Domain Generative Dialogue System with Internet-Augmented Instruction Tuning for Digital Human [76.62897301298699]
ChatPLUG is a Chinese open-domain dialogue system for digital human applications that instruction finetunes on a wide range of dialogue tasks in a unified internet-augmented format.
We show that ChatPLUG outperforms state-of-the-art Chinese dialogue systems on both automatic and human evaluation.
We deploy ChatPLUG in real-world applications such as smart speakers and instant messaging, with fast inference.
arXiv Detail & Related papers (2023-04-16T18:16:35Z)
- HybriDialogue: An Information-Seeking Dialogue Dataset Grounded on Tabular and Textual Data [87.67278915655712]
We present a new dialogue dataset, HybriDialogue, which consists of crowdsourced natural conversations grounded on both Wikipedia text and tables.
The conversations are created through the decomposition of complex multihop questions into simple, realistic multiturn dialogue interactions.
arXiv Detail & Related papers (2022-04-28T00:52:16Z)
- Every time I fire a conversational designer, the performance of the dialog system goes down [0.07696728525672149]
We investigate how the use of explicit domain knowledge of conversational designers affects the performance of neural-based dialogue systems.
We propose the Conversational-Logic-Injection-in-Neural-Network system (CLINN) where explicit knowledge is coded in semi-logical rules.
arXiv Detail & Related papers (2021-09-27T13:05:31Z)
- Graph Based Network with Contextualized Representations of Turns in Dialogue [0.0]
Dialogue-based relation extraction (RE) aims to extract relation(s) between two arguments that appear in a dialogue.
We propose the TUrn COntext awaRE Graph Convolutional Network (TUCORE-GCN), designed to reflect the way people understand dialogues.
arXiv Detail & Related papers (2021-09-09T03:09:08Z)
- Recent Advances in Deep Learning-based Dialogue Systems [12.798560005546262]
We mainly focus on deep learning-based dialogue systems.
This survey is the most comprehensive and up-to-date one at present in the area of dialogue systems and dialogue-related tasks.
arXiv Detail & Related papers (2021-05-10T14:07:49Z)
- Advances in Multi-turn Dialogue Comprehension: A Survey [51.215629336320305]
We review the previous methods from the perspective of dialogue modeling.
We discuss three typical patterns of dialogue modeling that are widely-used in dialogue comprehension tasks.
arXiv Detail & Related papers (2021-03-04T15:50:17Z)
- Learning Reasoning Paths over Semantic Graphs for Video-grounded Dialogues [73.04906599884868]
We propose a novel framework of Reasoning Paths in Dialogue Context (PDC).
The PDC model discovers information flows among dialogue turns through a semantic graph constructed from lexical components in each question and answer.
Our model sequentially processes both visual and textual information through this reasoning path and the propagated features are used to generate the answer.
arXiv Detail & Related papers (2021-03-01T07:39:26Z)
- Probing Neural Dialog Models for Conversational Understanding [21.76744391202041]
We analyze the internal representations learned by neural open-domain dialog systems.
Our results suggest that standard open-domain dialog systems struggle with answering questions.
We also find that the dyadic, turn-taking nature of dialog is not fully leveraged by these models.
arXiv Detail & Related papers (2020-06-07T17:32:00Z)
- Knowledge Injection into Dialogue Generation via Language Models [85.65843021510521]
InjK is a two-stage approach to inject knowledge into a dialogue generation model.
First, we train a large-scale language model and query it as textual knowledge.
Second, we frame a dialogue generation model to sequentially generate textual knowledge and a corresponding response.
arXiv Detail & Related papers (2020-04-30T07:31:24Z)
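As a rough illustration of the two-stage pattern summarized in the InjK entry above (first elicit textual knowledge from a language model, then generate a response conditioned on it), a minimal sketch follows; the helper names are placeholders and not the paper's interface.

```python
# Rough sketch (placeholder function names, not the paper's interface) of the
# two-stage pattern summarized for InjK above: first elicit textual knowledge
# from a language model, then generate a response conditioned on it.
def query_language_model(prompt: str) -> str:
    """Placeholder for calling a large pretrained language model."""
    raise NotImplementedError


def knowledge_grounded_response(dialogue_history: str) -> str:
    # Stage 1: query the language model for relevant textual knowledge.
    knowledge = query_language_model(
        "List background facts relevant to this dialogue:\n" + dialogue_history
    )
    # Stage 2: generate the response conditioned on the history plus the knowledge.
    return query_language_model(
        dialogue_history + "\nKnowledge: " + knowledge + "\nResponse:"
    )
```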