Unsupervised Learning of KB Queries in Task-Oriented Dialogs
- URL: http://arxiv.org/abs/2005.00123v2
- Date: Thu, 3 Jun 2021 04:27:47 GMT
- Title: Unsupervised Learning of KB Queries in Task-Oriented Dialogs
- Authors: Dinesh Raghu, Nikhil Gupta, Mausam
- Abstract summary: Task-oriented dialog (TOD) systems often need to formulate knowledge base (KB) queries corresponding to the user intent.
Existing approaches require dialog datasets to explicitly annotate these KB queries.
We define the novel problems of predicting the KB query and training the dialog agent, without explicit KB query annotation.
- Score: 21.611723342957887
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Task-oriented dialog (TOD) systems often need to formulate knowledge base
(KB) queries corresponding to the user intent and use the query results to
generate system responses. Existing approaches require dialog datasets to
explicitly annotate these KB queries -- these annotations can be time
consuming and expensive. In response, we define the novel problems of
predicting the KB query and training the dialog agent, without explicit KB
query annotation. For query prediction, we propose a reinforcement learning
(RL) baseline, which rewards the generation of those queries whose KB results
cover the entities mentioned in subsequent dialog. Further analysis reveals
that correlation among query attributes in KB can significantly confuse memory
augmented policy optimization (MAPO), an existing state of the art RL agent. To
address this, we improve the MAPO baseline with simple but important
modifications suited to our task. To train the full TOD system for our setting,
we propose a pipelined approach: it independently predicts when to make a KB
query (query position predictor), then predicts a KB query at the predicted
position (query predictor), and uses the results of the predicted query in
subsequent dialog (next response predictor). Overall, our work proposes the first
solutions to our novel problem, and our analysis highlights the research
challenges in training TOD systems without query annotation.
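To make the entity-coverage reward concrete, below is a minimal Python sketch; the dict-based KB format, the query representation, and all function names are illustrative assumptions rather than the authors' implementation.

```python
# Sketch only: reward a predicted KB query by how well its results cover the
# entities mentioned in the subsequent dialog turns (illustrative, not the
# paper's actual code).

def execute_query(kb, query):
    """Return KB rows whose attributes satisfy every constraint in the query."""
    return [row for row in kb
            if all(row.get(attr) == val for attr, val in query.items())]

def coverage_reward(kb, query, future_entities):
    """Fraction of entities from later dialog turns that appear in the query results."""
    results = execute_query(kb, query)
    returned = {value for row in results for value in row.values()}
    if not future_entities:
        return 0.0
    return len(set(future_entities) & returned) / len(future_entities)

# Example: a query whose results mention "Taj Mahal" covers the entity the
# user later refers to, so it earns the full reward.
kb = [{"name": "Taj Mahal", "cuisine": "indian", "area": "centre"},
      {"name": "Pizza Hut", "cuisine": "italian", "area": "south"}]
print(coverage_reward(kb, {"cuisine": "indian"}, {"Taj Mahal"}))  # 1.0
```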
Related papers
- Selecting Query-bag as Pseudo Relevance Feedback for Information-seeking Conversations [76.70349332096693]
Information-seeking dialogue systems are widely used in e-commerce systems.
We propose a Query-bag based Pseudo Relevance Feedback framework (QB-PRF).
It constructs a query-bag with related queries to serve as pseudo signals to guide information-seeking conversations.
arXiv Detail & Related papers (2024-03-22T08:10:32Z)
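As a hedged illustration of the query-bag idea in QB-PRF, the sketch below selects related queries from a pool by TF-IDF cosine similarity; the similarity measure, the top-k cutoff, and the scikit-learn usage are assumptions for illustration, not the framework's actual selection and fusion modules.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_query_bag(current_query, query_pool, k=3):
    """Toy selector: pick the k pool queries most similar to the current query."""
    vectorizer = TfidfVectorizer().fit([current_query] + query_pool)
    sims = cosine_similarity(vectorizer.transform([current_query]),
                             vectorizer.transform(query_pool))[0]
    ranked = sorted(zip(query_pool, sims), key=lambda pair: -pair[1])
    return [query for query, _ in ranked[:k]]

pool = ["return policy for shoes", "how to return a damaged item", "track my order"]
print(build_query_bag("can I return these sneakers?", pool, k=2))
```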
- Bridging the KB-Text Gap: Leveraging Structured Knowledge-aware Pre-training for KBQA [28.642711264323786]
We propose a Structured Knowledge-aware Pre-training method (SKP) to bridge the gap between texts and structured KBs.
In the pre-training stage, we introduce two novel structured knowledge-aware tasks, guiding the model to effectively learn implicit relationships and better representations of complex subgraphs.
In the downstream KBQA task, we further design an efficient linearization strategy and an interval attention mechanism, which assist the model to better encode complex subgraphs.
arXiv Detail & Related papers (2023-08-28T09:22:02Z)
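A rough sketch of the linearization step SKP relies on: flattening a KB subgraph of (head, relation, tail) triples into a single sequence that a pre-trained encoder can consume. The separator tokens and triple format here are assumptions, not the paper's exact strategy.

```python
def linearize_subgraph(triples):
    """Flatten (head, relation, tail) triples into one string for an encoder."""
    return " [SEP] ".join(f"{head} [REL] {relation} [TAIL] {tail}"
                          for head, relation, tail in triples)

subgraph = [("Barack Obama", "born_in", "Honolulu"),
            ("Honolulu", "located_in", "Hawaii")]
print(linearize_subgraph(subgraph))
# Barack Obama [REL] born_in [TAIL] Honolulu [SEP] Honolulu [REL] located_in [TAIL] Hawaii
```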
- Q-TOD: A Query-driven Task-oriented Dialogue System [33.18698942938547]
We introduce a novel query-driven task-oriented dialogue system, namely Q-TOD.
The essential information from the dialogue context is extracted into a query, which is further employed to retrieve relevant knowledge records for response generation.
To evaluate the effectiveness of the proposed Q-TOD, we collect query annotations for three publicly available task-oriented dialogue datasets.
arXiv Detail & Related papers (2022-10-14T06:38:19Z)
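The Q-TOD pipeline above can be sketched as three stages: distil the dialogue context into a query, retrieve matching knowledge records, then ground the response on them. The keyword-overlap retriever and templated response below are toy stand-ins for its neural query generator, retriever, and response generator.

```python
def retrieve(query, kb, k=1):
    """Toy retriever: rank KB records by word overlap with the query."""
    def score(record):
        return len(set(query.lower().split()) &
                   set(" ".join(record.values()).lower().split()))
    return sorted(kb, key=score, reverse=True)[:k]

def q_tod_turn(dialogue_context, kb):
    query = dialogue_context[-1]        # toy query generator: reuse the last user turn
    top = retrieve(query, kb)[0]        # retrieve the best-matching knowledge record
    return f"{top['name']} serves {top['cuisine']} food in the {top['area']}."

kb = [{"name": "Prezzo", "cuisine": "italian", "area": "centre"},
      {"name": "Nandos", "cuisine": "portuguese", "area": "south"}]
print(q_tod_turn(["I want italian food in the centre"], kb))
```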
- SYGMA: System for Generalizable Modular Question Answering Over Knowledge Bases [57.89642289610301]
We present SYGMA, a modular approach facilitating generalizability across multiple knowledge bases and multiple reasoning types.
We demonstrate the effectiveness of our system by evaluating on datasets belonging to two distinct knowledge bases, DBpedia and Wikidata.
arXiv Detail & Related papers (2021-09-28T01:57:56Z)
- Constraint based Knowledge Base Distillation in End-to-End Task Oriented Dialogs [23.678209058054062]
Task-oriented dialogue systems generate responses based on dialog history and an accompanying knowledge base (KB).
We propose a novel filtering technique that includes a pairwise similarity based filter, which identifies relevant information while respecting the n-ary structure of a KB record.
We also propose a new metric, multiset entity F1, which fixes a correctness issue in the existing entity F1 metric.
arXiv Detail & Related papers (2021-09-15T16:00:10Z)
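A small sketch of what a multiset entity F1 can look like: entities are counted with multiplicity instead of being collapsed into a set, so a repeated gold entity must be predicted each time it occurs. Details of the paper's exact definition may differ.

```python
from collections import Counter

def multiset_entity_f1(predicted_entities, gold_entities):
    """F1 over entity multisets: repeated entities count with their multiplicity."""
    pred, gold = Counter(predicted_entities), Counter(gold_entities)
    overlap = sum((pred & gold).values())          # size of the multiset intersection
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(gold.values())
    return 2 * precision * recall / (precision + recall)

print(multiset_entity_f1(["7pm", "7pm", "rome"], ["7pm", "7pm", "rome"]))  # 1.0
print(multiset_entity_f1(["7pm", "rome"], ["7pm", "7pm", "rome"]))         # ~0.8
```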
- Conversational Query Rewriting with Self-supervised Learning [36.392717968127016]
Conversational Query Rewriting (CQR) aims to simplify multi-turn dialogue modeling into a single-turn problem by explicitly rewriting the conversational query into a self-contained utterance.
Existing approaches rely on massive supervised training data, which is labor-intensive to annotate.
We propose to construct a large-scale CQR dataset automatically via self-supervised learning, which does not need human annotation.
arXiv Detail & Related papers (2021-02-09T08:57:53Z)
- Learning Knowledge Bases with Parameters for Task-Oriented Dialogue Systems [79.02430277138801]
The knowledge base (KB) plays an essential role in fulfilling user requests.
End-to-end systems use the KB directly as input, but they cannot scale when the KB is larger than a few hundred entries.
We propose a method to embed the KB, of any size, directly into the model parameters.
arXiv Detail & Related papers (2020-09-28T22:13:54Z)
- Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
arXiv Detail & Related papers (2020-09-14T08:44:46Z)
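For the multi-task setup sketched above, one common pattern is to add the auxiliary self-supervised losses to the main response-selection loss; the equal weighting below is an assumption for illustration, not the paper's tuned configuration.

```python
def multi_task_loss(selection_loss, auxiliary_losses, aux_weight=1.0):
    """Combine the response-selection loss with the four auxiliary task losses."""
    return selection_loss + aux_weight * sum(auxiliary_losses.values())

auxiliary = {"next_session_prediction": 0.3,
             "utterance_restoration": 0.5,
             "incoherence_detection": 0.2,
             "consistency_discrimination": 0.4}
print(multi_task_loss(0.7, auxiliary))  # ~2.1
```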
- A Survey on Complex Question Answering over Knowledge Base: Recent Advances and Challenges [71.4531144086568]
Question Answering (QA) over Knowledge Base (KB) aims to automatically answer natural language questions.
Researchers have shifted their attention from simple questions to complex questions, which require more KB triples and constraint inference.
arXiv Detail & Related papers (2020-07-26T07:13:32Z)
- Faithful Embeddings for Knowledge Base Queries [97.5904298152163]
The deductive closure of an ideal knowledge base (KB) contains exactly the logical queries that the KB can answer.
In practice, KBs are both incomplete and over-specified, failing to answer some queries that have real-world answers.
We show that inserting this new QE module into a neural question-answering system leads to substantial improvements over the state-of-the-art.
arXiv Detail & Related papers (2020-04-07T19:25:16Z)