Learning Knowledge Bases with Parameters for Task-Oriented Dialogue
Systems
- URL: http://arxiv.org/abs/2009.13656v1
- Date: Mon, 28 Sep 2020 22:13:54 GMT
- Title: Learning Knowledge Bases with Parameters for Task-Oriented Dialogue
Systems
- Authors: Andrea Madotto, Samuel Cahyawijaya, Genta Indra Winata, Yan Xu, Zihan
Liu, Zhaojiang Lin, Pascale Fung
- Abstract summary: The knowledge base (KB) plays an essential role in fulfilling user requests.
End-to-end systems use the KB directly as input, but they cannot scale when the KB is larger than a few hundred entries.
We propose a method to embed the KB, of any size, directly into the model parameters.
- Score: 79.02430277138801
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Task-oriented dialogue systems are either modularized with separate dialogue
state tracking (DST) and management steps or end-to-end trainable. In either
case, the knowledge base (KB) plays an essential role in fulfilling user
requests. Modularized systems rely on DST to interact with the KB, which is
expensive in terms of annotation and inference time. End-to-end systems use the
KB directly as input, but they cannot scale when the KB is larger than a few
hundred entries. In this paper, we propose a method to embed the KB, of any
size, directly into the model parameters. The resulting model does not require
any DST or template responses, nor the KB as input, and it can dynamically
update its KB via fine-tuning. We evaluate our solution in five task-oriented
dialogue datasets with small, medium, and large KB size. Our experiments show
that end-to-end models can effectively embed knowledge bases in their
parameters and achieve competitive performance in all evaluated datasets.
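The core idea, converting KB records into synthetic training examples so that fine-tuning writes the KB into the model weights, can be sketched as follows. The record schema, question templates, and function names are illustrative assumptions, not the paper's actual data pipeline.

```python
# Sketch: turn KB records into synthetic (user, system) training pairs so
# that fine-tuning a seq2seq dialogue model "memorizes" the KB in its
# parameters. Schema and templates are illustrative assumptions.

def kb_to_training_pairs(kb_records):
    """Convert each KB record into (user utterance, system response) pairs."""
    pairs = []
    for rec in kb_records:
        name = rec["name"]
        for attr, value in rec.items():
            if attr == "name":
                continue
            user = f"What is the {attr} of {name}?"
            system = f"The {attr} of {name} is {value}."
            pairs.append((user, system))
    return pairs

kb = [
    {"name": "Hotel Alpha", "area": "centre", "price": "expensive"},
    {"name": "Cafe Beta", "area": "north", "price": "cheap"},
]

pairs = kb_to_training_pairs(kb)
# Fine-tuning on `pairs` with any seq2seq trainer embeds the KB; updating
# the KB then amounts to regenerating the pairs and fine-tuning again,
# which is how the model can "dynamically update its KB via fine-tuning".
```

Because the KB never appears at inference time, response generation needs no DST module and no KB input, at the cost of a fine-tuning pass whenever the KB changes.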
Related papers
- KBLaM: Knowledge Base augmented Language Model [8.247901935078357]
We propose Knowledge Base augmented Language Model (KBLaM) for augmenting Large Language Models with external knowledge.
KBLaM works with a knowledge base constructed from a corpus of documents, transforming each piece of knowledge in the KB into continuous key-value vector pairs.
Experiments demonstrate KBLaM's effectiveness in various tasks, including question-answering and open-ended reasoning.
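The key-value idea can be illustrated with a toy retrieval step: each fact becomes a (key, value) pair, and a query is matched against keys by vector similarity. The hash-based encoder below is a stand-in for a real sentence encoder and is purely an illustrative assumption, not KBLaM's architecture.

```python
import numpy as np

# Sketch: each KB fact becomes a (key vector, value) pair; at inference
# the model scores keys against the query, attention-style, to retrieve
# values. The toy hash-seeded encoder is an illustrative assumption.

DIM = 64

def encode(text, dim=DIM):
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)  # unit-norm embedding

facts = [("Paris", "capital_of", "France"),
         ("Berlin", "capital_of", "Germany")]
keys = np.stack([encode(f"{h} {r}") for h, r, _ in facts])
values = [t for _, _, t in facts]

def retrieve(query):
    scores = keys @ encode(query)          # similarity against every key
    return values[int(np.argmax(scores))]

print(retrieve("Paris capital_of"))  # France
```

In KBLaM itself the key-value pairs are continuous vectors injected into the language model rather than looked up externally, but the match-key-then-read-value pattern is the same.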
arXiv Detail & Related papers (2024-10-14T12:45:10Z) - DKAF: KB Arbitration for Learning Task-Oriented Dialog Systems with
Dialog-KB Inconsistencies [17.228046533234192]
Task-oriented dialog (TOD) agents often ground their responses on external knowledge bases (KBs).
Existing approaches for learning TOD agents assume that the KB snapshot contemporaneous with each individual dialog is available during training.
We propose a Dialog-KB Arbitration Framework (DKAF) which reduces the dialog-KB inconsistencies by predicting the contemporary KB snapshot for each train dialog.
arXiv Detail & Related papers (2023-05-26T07:36:23Z) - CGoDial: A Large-Scale Benchmark for Chinese Goal-oriented Dialog
Evaluation [75.60156479374416]
CGoDial is a new challenging and comprehensive Chinese benchmark for Goal-oriented Dialog evaluation.
It contains 96,763 dialog sessions and 574,949 dialog turns in total, covering three datasets with different knowledge sources.
To bridge the gap between academic benchmarks and spoken dialog scenarios, we either collect data from real conversations or add spoken features to existing datasets via crowd-sourcing.
arXiv Detail & Related papers (2022-11-21T16:21:41Z) - Prompt Learning for Few-Shot Dialogue State Tracking [75.50701890035154]
This paper focuses on how to learn a dialogue state tracking (DST) model efficiently with limited labeled data.
We design a prompt learning framework for few-shot DST, which consists of two main components: value-based prompt and inverse prompt mechanism.
Experiments show that our model can generate unseen slots and outperforms existing state-of-the-art few-shot methods.
arXiv Detail & Related papers (2022-01-15T07:37:33Z) - SYGMA: System for Generalizable Modular Question Answering Over Knowledge
Bases [57.89642289610301]
We present SYGMA, a modular approach facilitating generalizability across multiple knowledge bases and multiple reasoning types.
We demonstrate the effectiveness of our system by evaluating on datasets belonging to two distinct knowledge bases, DBpedia and Wikidata.
arXiv Detail & Related papers (2021-09-28T01:57:56Z) - Constraint based Knowledge Base Distillation in End-to-End Task Oriented
Dialogs [23.678209058054062]
Task-oriented dialogue systems generate responses based on dialog history and an accompanying knowledge base (KB).
We propose a novel filtering technique built on a pairwise similarity-based filter that identifies relevant information while respecting the n-ary structure of a KB record.
We also propose a new metric -- multiset entity F1 which fixes a correctness issue in the existing entity F1 metric.
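A plausible reading of a multiset entity F1 is that repeated entities count once per occurrence, so a response that repeats one correct entity cannot inflate its score. The formulation below is an assumption based on the abstract, not necessarily the paper's exact definition.

```python
from collections import Counter

# Sketch of a multiset entity F1: precision and recall are computed over
# the multiset intersection of predicted and gold entities, so duplicate
# predictions are not rewarded. Illustrative assumption, not the paper's
# exact metric.

def multiset_f1(pred_entities, gold_entities):
    pred, gold = Counter(pred_entities), Counter(gold_entities)
    overlap = sum((pred & gold).values())   # multiset intersection size
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(gold.values())
    return 2 * precision * recall / (precision + recall)

# A set-based entity F1 would treat the duplicated "hotel_a" as a single
# correct entity; the multiset version penalizes the repetition.
score = multiset_f1(["hotel_a", "hotel_a"], ["hotel_a", "hotel_b"])  # 0.5
```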
arXiv Detail & Related papers (2021-09-15T16:00:10Z) - Probabilistic Case-based Reasoning for Open-World Knowledge Graph
Completion [59.549664231655726]
A case-based reasoning (CBR) system solves a new problem by retrieving cases that are similar to the given problem.
In this paper, we demonstrate that such a system is achievable for reasoning over knowledge bases (KBs).
Our approach predicts attributes for an entity by gathering reasoning paths from similar entities in the KB.
arXiv Detail & Related papers (2020-10-07T17:48:12Z) - Unsupervised Learning of KB Queries in Task-Oriented Dialogs [21.611723342957887]
Task-oriented dialog (TOD) systems often need to formulate knowledge base (KB) queries corresponding to the user intent.
Existing approaches require dialog datasets to explicitly annotate these KB queries.
We define the novel problems of predicting the KB query and training the dialog agent, without explicit KB query annotation.
arXiv Detail & Related papers (2020-04-30T22:10:00Z) - Differentiable Reasoning over a Virtual Knowledge Base [156.94984221342716]
We consider the task of answering complex multi-hop questions using a corpus as a virtual knowledge base (KB).
In particular, we describe a neural module, DrKIT, that traverses textual data like a KB, softly following paths of relations between mentions of entities in the corpus.
DrKIT is very efficient, processing 10-100x more queries per second than existing multi-hop systems.
arXiv Detail & Related papers (2020-02-25T03:13:32Z) - Scalable Neural Methods for Reasoning With a Symbolic Knowledge Base [34.837700505583]
We describe a novel way of representing a symbolic knowledge base (KB) called a sparse-matrix reified KB.
This representation enables neural modules that are fully differentiable, faithful to the original semantics of the KB, expressive enough to model multi-hop inferences, and scalable enough to use with realistically large KBs.
arXiv Detail & Related papers (2020-02-14T16:32:19Z)
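The reified-KB idea reduces relation following to sparse matrix-vector products, which composes naturally into multi-hop inference. The toy below uses a plain dictionary as the sparse matrix; entity and relation names are illustrative assumptions, not the paper's data.

```python
# Sketch: multi-hop KB reasoning as sparse matrix-vector products. Each
# relation r is a sparse 0/1 matrix M_r over entities; following r from a
# weighted entity vector x is just x @ M_r. Names are illustrative.

entities = ["paris", "france", "europe"]
idx = {e: i for i, e in enumerate(entities)}

# Sparse relation matrices stored as {(subject_idx, object_idx): weight}.
relations = {
    "capital_of": {(idx["paris"], idx["france"]): 1.0},
    "located_in": {(idx["france"], idx["europe"]): 1.0},
}

def follow(x, rel):
    """Sparse vector-matrix product: y[j] = sum_i x[i] * M[i, j]."""
    y = [0.0] * len(x)
    for (i, j), w in relations[rel].items():
        y[j] += x[i] * w
    return y

# Two-hop query: which continent is the country Paris is capital of in?
x = [0.0] * len(entities)
x[idx["paris"]] = 1.0
y = follow(follow(x, "capital_of"), "located_in")
answer = entities[max(range(len(y)), key=y.__getitem__)]  # "europe"
```

Because each hop is a single sparse multiply, the operation is fully differentiable and scales with the number of KB facts rather than the number of candidate paths.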
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.