Constraint based Knowledge Base Distillation in End-to-End Task Oriented Dialogs
- URL: http://arxiv.org/abs/2109.07396v1
- Date: Wed, 15 Sep 2021 16:00:10 GMT
- Title: Constraint based Knowledge Base Distillation in End-to-End Task Oriented Dialogs
- Authors: Dinesh Raghu, Atishya Jain, Mausam and Sachindra Joshi
- Abstract summary: Task-oriented dialogue systems generate responses based on dialog history and an accompanying knowledge base (KB).
We propose a novel filtering technique that consists of (1) a pairwise similarity based filter that identifies relevant information by respecting the n-ary structure in a KB record, and (2) an auxiliary loss that helps separate contextually unrelated KB information.
We also propose a new metric -- multiset entity F1 -- which fixes a correctness issue in the existing entity F1 metric.
- Score: 23.678209058054062
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: End-to-End task-oriented dialogue systems generate responses based on dialog
history and an accompanying knowledge base (KB). Inferring those KB entities
that are most relevant for an utterance is crucial for response generation.
Existing state of the art scales to large KBs by softly filtering over
irrelevant KB information. In this paper, we propose a novel filtering
technique that consists of (1) a pairwise similarity based filter that
identifies relevant information by respecting the n-ary structure in a KB
record, and (2) an auxiliary loss that helps in separating contextually
unrelated KB information. We also propose a new metric -- multiset entity F1
which fixes a correctness issue in the existing entity F1 metric. Experimental
results on three publicly available task-oriented dialog datasets show that our
proposed approach outperforms existing state-of-the-art models.
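The correctness issue the abstract alludes to can be illustrated with a sketch: the conventional entity F1 compares the *sets* of KB entities in the gold and generated responses, so repeated entity mentions are collapsed, while a multiset variant counts every occurrence. The following is a minimal illustrative sketch under that reading, not the authors' implementation; the function name `multiset_entity_f1` is hypothetical.

```python
from collections import Counter

def multiset_entity_f1(gold_entities, pred_entities):
    """Entity F1 over multisets: repeated mentions count separately.

    An entity appearing twice in the gold response must be produced
    twice by the model to receive full credit, unlike set-based F1.
    """
    gold, pred = Counter(gold_entities), Counter(pred_entities)
    overlap = sum((gold & pred).values())  # multiset intersection
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(gold.values())
    return 2 * precision * recall / (precision + recall)
```

For gold entities `["a", "a", "b"]` and a prediction `["a", "b"]`, a set-based entity F1 would report a perfect 1.0, masking the missing repeat; the multiset version penalizes it (recall 2/3, F1 0.8).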
Related papers
- FCC: Fusing Conversation History and Candidate Provenance for Contextual Response Ranking in Dialogue Systems [53.89014188309486]
We present a flexible neural framework that can integrate contextual information from multiple channels.
We evaluate our model on the MSDialog dataset, widely used for evaluating conversational response ranking tasks.
arXiv Detail & Related papers (2023-03-31T23:58:28Z)
- TIARA: Multi-grained Retrieval for Robust Question Answering over Large Knowledge Bases [20.751369684593985]
TIARA outperforms previous SOTA, including those using PLMs or oracle entity annotations, by at least 4.1 and 1.1 F1 points on GrailQA and WebQuestionsSP.
arXiv Detail & Related papers (2022-10-24T02:41:10Z)
- QA Is the New KR: Question-Answer Pairs as Knowledge Bases [105.692569000534]
We argue that the proposed type of KB has many of the key advantages of a traditional symbolic KB.
Unlike a traditional KB, this information store is well-aligned with common user information needs.
arXiv Detail & Related papers (2022-07-01T19:09:08Z)
- SYGMA: System for Generalizable Modular Question Answering Over Knowledge Bases [57.89642289610301]
We present SYGMA, a modular approach facilitating generalizability across multiple knowledge bases and multiple reasoning types.
We demonstrate the effectiveness of our system by evaluating on datasets belonging to two distinct knowledge bases, DBpedia and Wikidata.
arXiv Detail & Related papers (2021-09-28T01:57:56Z)
- Efficient Contextualization using Top-k Operators for Question Answering over Knowledge Graphs [24.520002698010856]
This work presents ECQA, an efficient method that prunes irrelevant parts of the search space using KB-aware signals.
Experiments with two recent QA benchmarks demonstrate the superiority of ECQA over state-of-the-art baselines with respect to answer presence, size of the search space, and runtimes.
arXiv Detail & Related papers (2021-08-19T10:06:14Z)
- Reasoning Over Virtual Knowledge Bases With Open Predicate Relations [85.19305347984515]
We present the Open Predicate Query Language (OPQL), a method for constructing a virtual Knowledge Base (VKB) trained entirely from text.
We demonstrate that OPQL outperforms prior VKB methods on two different KB reasoning tasks.
arXiv Detail & Related papers (2021-02-14T01:29:54Z)
- Contextualize Knowledge Bases with Transformer for End-to-end Task-Oriented Dialogue Systems [28.347325247064944]
We propose a COntext-aware Memory Enhanced Transformer framework (COMET), which treats the KB as a sequence.
Through extensive experiments, we show that our COMET framework achieves superior performance over the state of the art.
arXiv Detail & Related papers (2020-10-12T14:34:07Z)
- Learning Knowledge Bases with Parameters for Task-Oriented Dialogue Systems [79.02430277138801]
The knowledge base (KB) plays an essential role in fulfilling user requests.
End-to-end systems use the KB directly as input, but they cannot scale when the KB is larger than a few hundred entries.
We propose a method to embed the KB, of any size, directly into the model parameters.
arXiv Detail & Related papers (2020-09-28T22:13:54Z)
- Unsupervised Learning of KB Queries in Task-Oriented Dialogs [21.611723342957887]
Task-oriented dialog (TOD) systems often need to formulate knowledge base (KB) queries corresponding to the user intent.
Existing approaches require dialog datasets to explicitly annotate these KB queries.
We define the novel problems of predicting the KB query and training the dialog agent, without explicit KB query annotation.
arXiv Detail & Related papers (2020-04-30T22:10:00Z)
- Differentiable Reasoning over a Virtual Knowledge Base [156.94984221342716]
We consider the task of answering complex multi-hop questions using a corpus as a virtual knowledge base (KB).
In particular, we describe a neural module, DrKIT, that traverses textual data like a KB, softly following paths of relations between mentions of entities in the corpus.
DrKIT is very efficient, processing 10-100x more queries per second than existing multi-hop systems.
arXiv Detail & Related papers (2020-02-25T03:13:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.