Graph vs. Sequence: An Empirical Study on Knowledge Forms for
Knowledge-Grounded Dialogue
- URL: http://arxiv.org/abs/2312.07868v1
- Date: Wed, 13 Dec 2023 03:16:33 GMT
- Title: Graph vs. Sequence: An Empirical Study on Knowledge Forms for
Knowledge-Grounded Dialogue
- Authors: Yizhe Yang, Heyan Huang, Yihang Liu, Yang Gao
- Abstract summary: We conduct a thorough experiment and study on the task to answer three essential questions.
The questions involve the choice of appropriate knowledge form, the degree of mutual effects between knowledge and the model selection, and the few-shot performance of knowledge.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge-grounded dialogue is a task of generating an informative response
based on both the dialogue history and external knowledge source. In general,
there are two forms of knowledge: manually annotated knowledge graphs and
knowledge text from websites. From various evaluation viewpoints, each type of
knowledge has advantages and downsides. To further distinguish the principles
and determinants from the intricate factors, we conduct a thorough experiment
and study on the task to answer three essential questions. The questions
involve the choice of appropriate knowledge form, the degree of mutual effects
between knowledge and the model selection, and the few-shot performance of
knowledge. Supported by statistical evidence, we offer conclusive
solutions and sensible suggestions for directions and standards of future
research.
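To make the abstract's contrast between the two knowledge forms concrete, here is a minimal sketch of what each looks like and how graph triples are often flattened so a sequence model can consume them. The tag names (`<h>`, `<r>`, `<t>`) and the linearization recipe are illustrative assumptions, not taken from this paper:

```python
# 1) A knowledge graph: manually annotated (head, relation, tail) triples.
graph = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
]

# 2) Knowledge text: free-form passages, e.g. scraped from a website.
text = "Paris is the capital of France. France is located in Europe."

def linearize_triples(triples):
    """Flatten graph triples into one token sequence so a sequence model
    can consume them alongside the dialogue history (one common recipe;
    the <h>/<r>/<t> markers here are hypothetical special tokens)."""
    return " ".join(f"<h> {h} <r> {r} <t> {t}" for h, r, t in triples)

print(linearize_triples(graph))
```

Knowledge text needs no such preprocessing, which is one reason the choice of form interacts with the choice of model, as the paper's second question asks.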
Related papers
- Knowledge Condensation and Reasoning for Knowledge-based VQA [20.808840633377343]
Recent studies retrieve knowledge passages from external knowledge bases and then use them to answer questions.
We propose two synergistic models: Knowledge Condensation model and Knowledge Reasoning model.
Our method achieves state-of-the-art performance on knowledge-based VQA datasets.
arXiv Detail & Related papers (2024-03-15T06:06:06Z)
- DialoKG: Knowledge-Structure Aware Task-Oriented Dialogue Generation [9.186215038100904]
We propose DialoKG, a novel task-oriented dialogue system that effectively incorporates knowledge into a language model.
Our proposed system views relational knowledge as a knowledge graph and introduces a structure-aware knowledge embedding technique.
An empirical evaluation demonstrates the effectiveness of DialoKG over state-of-the-art methods on several standard benchmark datasets.
arXiv Detail & Related papers (2022-04-19T22:26:18Z)
- Knowledge-Grounded Dialogue Generation with a Unified Knowledge Representation [78.85622982191522]
Existing systems perform poorly on unseen topics due to limited topics covered in the training data.
We present PLUG, a language model that homogenizes different knowledge sources to a unified knowledge representation.
It can achieve comparable performance with state-of-the-art methods under a fully-supervised setting.
arXiv Detail & Related papers (2021-12-15T07:11:02Z)
- Knowledge-Grounded Dialogue with Reward-Driven Knowledge Selection [1.1633929083694388]
Knoformer is a dialogue response generation model based on reinforcement learning.
It can automatically select one or more pieces of related knowledge from the knowledge pool and does not need knowledge labels during training.
arXiv Detail & Related papers (2021-08-31T08:53:08Z)
- Prediction, Selection, and Generation: Exploration of Knowledge-Driven Conversation System [24.537862151735006]
In open-domain conversational systems, it is important but challenging to leverage background knowledge.
We combine the knowledge bases and pre-training model to propose a knowledge-driven conversation system.
We study the performance factors that may affect the generation of knowledge-driven dialogue.
arXiv Detail & Related papers (2021-04-23T07:59:55Z)
- Multi-turn Dialogue Reading Comprehension with Pivot Turns and Knowledge [43.352833140317486]
Multi-turn dialogue reading comprehension aims to teach machines to read dialogue contexts and solve tasks such as response selection and answering questions.
This work makes the first attempt to tackle the above two challenges by extracting substantially important turns as pivot utterances.
We propose a pivot-oriented deep selection model (PoDS) on top of the Transformer-based language models for dialogue comprehension.
arXiv Detail & Related papers (2021-02-10T15:00:12Z) - KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain
Knowledge-Based VQA [107.7091094498848]
One of the most challenging question types in VQA is when answering the question requires outside knowledge not present in the image.
In this work we study open-domain knowledge, the setting when the knowledge required to answer a question is not given/annotated, neither at training nor test time.
We tap into two types of knowledge representation and reasoning. First, implicit knowledge, which can be learned effectively from unsupervised language pre-training and supervised training data with transformer-based models.
arXiv Detail & Related papers (2020-12-20T20:13:02Z) - Difference-aware Knowledge Selection for Knowledge-grounded Conversation
Generation [101.48602006200409]
We propose a difference-aware knowledge selection method for multi-turn knowledge-grounded dialogs.
It first computes the difference between the candidate knowledge sentences provided at the current turn and those chosen in the previous turns.
Then, the differential information is fused with or disentangled from the contextual information to facilitate final knowledge selection.
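The selection step this entry describes can be sketched with a simple bag-of-words heuristic: score each candidate by its relevance to the dialogue context minus its similarity to knowledge already used in earlier turns. This is an illustrative approximation, not the paper's neural method; the function names and the trade-off weight `alpha` are assumptions:

```python
import math
import re
from collections import Counter

def bow(text: str) -> Counter:
    # Lowercased bag-of-words counts; punctuation is stripped.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_knowledge(candidates, previous_chosen, context, alpha=0.5):
    """Pick the candidate most relevant to the context, penalized by its
    maximum similarity to previously chosen knowledge (the "difference")."""
    ctx = bow(context)
    prev = [bow(p) for p in previous_chosen]
    def score(cand):
        relevance = cosine(bow(cand), ctx)
        redundancy = max((cosine(bow(cand), p) for p in prev), default=0.0)
        return relevance - alpha * redundancy
    return max(candidates, key=score)
```

With this scoring, a candidate that repeats knowledge already used in a previous turn is penalized even if it matches the context, which is the intuition behind difference-aware selection.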
arXiv Detail & Related papers (2020-09-20T07:47:26Z)
- Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue [51.513276162736844]
We propose a sequential latent variable model as the first approach to this matter.
The model named sequential knowledge transformer (SKT) can keep track of the prior and posterior distribution over knowledge.
arXiv Detail & Related papers (2020-02-18T11:59:59Z)
- A Survey on Knowledge Graphs: Representation, Acquisition and Applications [89.78089494738002]
We review research topics about 1) knowledge graph representation learning, 2) knowledge acquisition and completion, 3) temporal knowledge graph, and 4) knowledge-aware applications.
For knowledge acquisition, especially knowledge graph completion, we review embedding methods, path inference, and logical rule reasoning.
We explore several emerging topics, including meta learning, commonsense reasoning, and temporal knowledge graphs.
arXiv Detail & Related papers (2020-02-02T13:17:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.