GKS: Graph-based Knowledge Selector for Task-oriented Dialog System
- URL: http://arxiv.org/abs/2112.03719v1
- Date: Tue, 7 Dec 2021 14:16:26 GMT
- Title: GKS: Graph-based Knowledge Selector for Task-oriented Dialog System
- Authors: Jen-Chieh Yang, Jia-Yan Wu, Sung-Ping Chang, Ya-Chieh Huang
- Abstract summary: Graph-Knowledge Selector (GKS) outperforms SOTA models on the knowledge-selection dataset from the 9th Dialog System Technology Challenge (DSTC9).
GKS makes knowledge-selection decisions in the dialog by simultaneously considering every knowledge embedding generated by the language model, without relying on sequential features.
- Score: 0.688204255655161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In previous research, knowledge-selection tasks mostly rely on language-model-based
methods or knowledge ranking. However, approaches that simply rely on a language
model take all knowledge as sequential input, even though knowledge carries no
sequential information in most circumstances. The knowledge-ranking method, on the
other hand, leverages the dialog history together with each given piece of knowledge,
but ignores the relations between pieces of knowledge. In the 10th Dialog System Technology
Challenge (DSTC 10), we participated in the second track, Knowledge-grounded
Task-oriented Dialogue Modeling on Spoken Conversations. To address the
problems above, we modified training methods based on SOTA models for
the first and third sub-tasks, and for knowledge-selection sub-task two we proposed the
Graph-Knowledge Selector (GKS), a graph-attention-based model combined with a language
model. GKS makes knowledge-selection decisions in
the dialog by simultaneously considering each knowledge embedding generated
by the language model, without sequential features. GKS also brings considerable
knowledge into the decision-making, taking relations across pieces of knowledge
as part of the selection process. GKS outperforms several SOTA models
on the knowledge-selection dataset from the 9th Dialog System
Technology Challenge (DSTC9).
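The abstract describes GKS as attending over all knowledge embeddings at once, with relations across pieces of knowledge feeding the selection decision. A minimal sketch of that idea, not the authors' actual architecture: the class name, dimensions, and the use of standard multi-head self-attention as the "graph attention" over fully connected knowledge nodes are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class GraphKnowledgeSelector(nn.Module):
    """Hypothetical sketch: attend over all knowledge embeddings jointly
    (no sequential order), then score each piece for selection."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        # Self-attention treats each knowledge piece as a graph node fully
        # connected to the others, so relations across knowledge pieces
        # can influence the selection decision.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.scorer = nn.Linear(dim, 1)

    def forward(self, knowledge_emb: torch.Tensor) -> torch.Tensor:
        # knowledge_emb: (batch, num_knowledge, dim), e.g. pooled vectors
        # from a language-model encoding of dialog history + knowledge.
        ctx, _ = self.attn(knowledge_emb, knowledge_emb, knowledge_emb)
        logits = self.scorer(ctx).squeeze(-1)  # (batch, num_knowledge)
        return logits.softmax(dim=-1)          # selection distribution

selector = GraphKnowledgeSelector(dim=768)
probs = selector(torch.randn(2, 5, 768))  # 2 dialogs, 5 candidates each
```

Because every candidate attends to every other one, no sequential ordering of the knowledge pieces is imposed, matching the paper's stated motivation.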
Related papers
- Task Oriented Conversational Modelling With Subjective Knowledge [0.0]
DSTC-11 proposes a three-stage pipeline consisting of knowledge-seeking turn detection, knowledge selection, and response generation.
We propose entity retrieval methods that yield accurate and faster knowledge search.
Preliminary results show a 4% improvement in exact-match score on the knowledge selection task.
arXiv Detail & Related papers (2023-03-30T20:23:49Z)
- Multimodal Dialog Systems with Dual Knowledge-enhanced Generative Pretrained Language Model [63.461030694700014]
We propose a novel dual knowledge-enhanced generative pretrained language model for multimodal task-oriented dialog systems (DKMD)
The proposed DKMD consists of three key components: dual knowledge selection, dual knowledge-enhanced context learning, and knowledge-enhanced response generation.
Experiments on a public dataset verify the superiority of the proposed DKMD over state-of-the-art competitors.
arXiv Detail & Related papers (2022-07-16T13:02:54Z)
- DialoKG: Knowledge-Structure Aware Task-Oriented Dialogue Generation [9.186215038100904]
We propose DialoKG, a novel task-oriented dialogue system that effectively incorporates knowledge into a language model.
Our proposed system views relational knowledge as a knowledge graph and introduces a structure-aware knowledge embedding technique.
An empirical evaluation demonstrates the effectiveness of DialoKG over state-of-the-art methods on several standard benchmark datasets.
arXiv Detail & Related papers (2022-04-19T22:26:18Z)
- TegTok: Augmenting Text Generation via Task-specific and Open-world Knowledge [83.55215993730326]
We propose augmenting TExt Generation via Task-specific and Open-world Knowledge (TegTok) in a unified framework.
Our model selects knowledge entries from two types of knowledge sources through dense retrieval and then injects them into the input encoding and output decoding stages respectively.
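The TegTok summary says knowledge entries are selected by dense retrieval before being injected into encoding and decoding. A toy sketch of dense retrieval under stated assumptions: the function name, the inner-product scoring, and the synthetic vectors are illustrative, not TegTok's actual retriever.

```python
import numpy as np

def dense_retrieve(query_vec, entry_vecs, k=2):
    """Toy dense retrieval: score knowledge entries by inner product
    with the query encoding and keep the top-k entries."""
    scores = entry_vecs @ query_vec      # (num_entries,) similarity scores
    topk = np.argsort(-scores)[:k]       # indices of the best entries
    return topk, scores[topk]

rng = np.random.default_rng(0)
entries = rng.normal(size=(10, 64))              # encoded knowledge entries
query = entries[3] + 0.01 * rng.normal(size=64)  # query near entry 3
idx, _ = dense_retrieve(query, entries)
# entry 3 should rank first, since the query is a small perturbation of it
```

In practice both query and entries would come from a learned dual encoder, and retrieval would use an ANN index rather than a brute-force dot product.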
arXiv Detail & Related papers (2022-03-16T10:37:59Z)
- Knowledge-Grounded Dialogue Generation with a Unified Knowledge Representation [78.85622982191522]
Existing systems perform poorly on unseen topics due to limited topics covered in the training data.
We present PLUG, a language model that homogenizes different knowledge sources to a unified knowledge representation.
It can achieve comparable performance with state-of-the-art methods under a fully-supervised setting.
arXiv Detail & Related papers (2021-12-15T07:11:02Z)
- A Knowledge-Grounded Dialog System Based on Pre-Trained Language Models [0.7699714865575189]
We present a knowledge-grounded dialog system developed for the ninth Dialog System Technology Challenge (DSTC9)
We leverage transfer learning with existing language models to accomplish the tasks in this challenge track.
arXiv Detail & Related papers (2021-06-28T07:56:10Z)
- Contextualized Knowledge-aware Attentive Neural Network: Enhancing Answer Selection with Knowledge [77.77684299758494]
We extensively investigate approaches to enhancing the answer selection model with external knowledge from knowledge graph (KG)
First, we present a context-knowledge interaction learning framework, Knowledge-aware Neural Network (KNN), which learns the QA sentence representations by considering a tight interaction with the external knowledge from KG and the textual information.
To handle the diversity and complexity of KG information, we propose a Contextualized Knowledge-aware Attentive Neural Network (CKANN), which improves knowledge representation learning with structure information via a customized Graph Convolutional Network (GCN) and comprehensively learns context-based and knowledge-based sentence representations.
arXiv Detail & Related papers (2021-04-12T05:52:20Z)
- Learning to Retrieve Entity-Aware Knowledge and Generate Responses with Copy Mechanism for Task-Oriented Dialogue Systems [43.57597820119909]
We address task-oriented conversational modeling with unstructured knowledge access, track 1 of the 9th Dialog System Technology Challenge (DSTC 9).
This challenge can be separated into three subtasks, (1) knowledge-seeking turn detection, (2) knowledge selection, and (3) knowledge-grounded response generation.
We use pre-trained language models, ELECTRA and RoBERTa, as our base encoder for different subtasks.
arXiv Detail & Related papers (2020-12-22T11:36:37Z)
- Difference-aware Knowledge Selection for Knowledge-grounded Conversation Generation [101.48602006200409]
We propose a difference-aware knowledge selection method for multi-turn knowledge-grounded dialogs.
It first computes the difference between the candidate knowledge sentences provided at the current turn and those chosen in the previous turns.
Then, the differential information is fused with or disentangled from the contextual information to facilitate final knowledge selection.
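The two steps described above (compute the difference against previously chosen knowledge, then fuse it with the context) can be sketched as follows. This is a loose illustration of the summarized idea only; the function name, the mean-pooling of past choices, and the additive fusion are illustrative assumptions, not the paper's method.

```python
import numpy as np

def difference_aware_scores(candidates, previous, context):
    """Sketch: represent each candidate by how it differs from knowledge
    chosen in earlier turns, then combine that differential signal with
    the dialog context to score the candidates."""
    prev_mean = previous.mean(axis=0)  # crude summary of past choices
    diff = candidates - prev_mean      # step 1: differential information
    fused = diff + context             # step 2: naive fusion with context
    return fused @ context             # one relevance score per candidate

rng = np.random.default_rng(1)
cands = rng.normal(size=(4, 32))   # candidate knowledge sentence vectors
prev = rng.normal(size=(2, 32))    # knowledge chosen in earlier turns
ctx = rng.normal(size=32)          # dialog context vector
scores = difference_aware_scores(cands, prev, ctx)
```

The paper's fusion/disentanglement would be learned rather than a fixed sum, but the data flow is the same: differential features and contextual features jointly drive the final selection.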
arXiv Detail & Related papers (2020-09-20T07:47:26Z)
- Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue [51.513276162736844]
We propose a sequential latent variable model as the first approach to this matter.
The model named sequential knowledge transformer (SKT) can keep track of the prior and posterior distribution over knowledge.
arXiv Detail & Related papers (2020-02-18T11:59:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.