DKAF: KB Arbitration for Learning Task-Oriented Dialog Systems with
Dialog-KB Inconsistencies
- URL: http://arxiv.org/abs/2305.16697v1
- Date: Fri, 26 May 2023 07:36:23 GMT
- Title: DKAF: KB Arbitration for Learning Task-Oriented Dialog Systems with
Dialog-KB Inconsistencies
- Authors: Vishal Vivek Saley, Rocktim Jyoti Das, Dinesh Raghu, Mausam
- Abstract summary: Task-oriented dialog (TOD) agents often ground their responses on external knowledge bases (KBs).
Existing approaches for learning TOD agents assume the KB snapshot contemporary to each individual dialog is available during training.
We propose a Dialog-KB Arbitration Framework (DKAF) which reduces the dialog-KB inconsistencies by predicting the contemporary KB snapshot for each train dialog.
- Score: 17.228046533234192
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Task-oriented dialog (TOD) agents often ground their responses on external
knowledge bases (KBs). These KBs can be dynamic and may be updated frequently.
Existing approaches for learning TOD agents assume the KB snapshot contemporary
to each individual dialog is available during training. However, in real-world
scenarios, only the latest KB snapshot is available during training and as a
result, the train dialogs may contain facts conflicting with the latest KB.
These dialog-KB inconsistencies in the training data may potentially confuse
the TOD agent learning algorithm.
In this work, we define the novel problem of learning a TOD agent with
dialog-KB inconsistencies in the training data. We propose a Dialog-KB
Arbitration Framework (DKAF) which reduces the dialog-KB inconsistencies by
predicting the contemporary KB snapshot for each train dialog. These predicted
KB snapshots are then used for training downstream TOD agents. As there are no
existing datasets with dialog-KB inconsistencies, we systematically introduce
inconsistencies in two publicly available dialog datasets. We show that TOD
agents trained with DKAF perform better than existing baselines on both these
datasets.
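To make the arbitration idea concrete, below is a minimal sketch, not the authors' implementation. It assumes a toy KB represented as (entity, relation, value) triples and assumes the facts a dialog grounded on have already been extracted upstream; it shows only the core reconciliation step of predicting a per-dialog KB snapshot from the latest KB.

```python
# Illustrative sketch of dialog-KB arbitration; not the DKAF implementation.
# Assumption: the KB is a list of (entity, relation, value) triples, and the
# facts grounded in a training dialog were already extracted upstream.

from typing import Dict, List, Tuple

Triple = Tuple[str, str, str]  # (entity, relation, value)

def predict_contemporary_kb(latest_kb: List[Triple],
                            dialog_facts: List[Triple]) -> List[Triple]:
    """Reconcile the latest KB with the facts grounded in one train dialog.

    Any (entity, relation) pair the dialog asserts overrides the value in
    the latest KB, since the dialog reflects the KB as it was at collection
    time; all other entries are carried over unchanged.
    """
    snapshot: Dict[Tuple[str, str], str] = {(e, r): v for e, r, v in latest_kb}
    for e, r, v in dialog_facts:
        snapshot[(e, r)] = v  # dialog-grounded fact wins over the updated KB
    return [(e, r, v) for (e, r), v in snapshot.items()]

if __name__ == "__main__":
    latest_kb = [("taj_hotel", "price", "expensive"),
                 ("taj_hotel", "rating", "4")]
    # This dialog predates a KB update, so it mentions the old rating;
    # training against the latest KB alone would conflict with it.
    dialog_facts = [("taj_hotel", "rating", "5")]
    print(predict_contemporary_kb(latest_kb, dialog_facts))
    # -> [('taj_hotel', 'price', 'expensive'), ('taj_hotel', 'rating', '5')]
```

DKAF learns this prediction from the training data rather than applying a fixed override rule; the sketch only illustrates the input-output contract of producing a per-dialog snapshot for downstream TOD training.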
Related papers
- Improving the Robustness of Knowledge-Grounded Dialogue via Contrastive
Learning [71.8876256714229]
We propose an entity-based contrastive learning framework for improving the robustness of knowledge-grounded dialogue systems.
Our method achieves new state-of-the-art performance in terms of automatic evaluation scores.
arXiv Detail & Related papers (2024-01-09T05:16:52Z)
- CGoDial: A Large-Scale Benchmark for Chinese Goal-oriented Dialog
Evaluation [75.60156479374416]
CGoDial is a new, challenging, and comprehensive Chinese benchmark for Goal-oriented Dialog evaluation.
It contains 96,763 dialog sessions and 574,949 dialog turns in total, covering three datasets with different knowledge sources.
To bridge the gap between academic benchmarks and spoken dialog scenarios, we either collect data from real conversations or add spoken features to existing datasets via crowd-sourcing.
arXiv Detail & Related papers (2022-11-21T16:21:41Z)
- SPACE-2: Tree-Structured Semi-Supervised Contrastive Pre-training for
Task-Oriented Dialog Understanding [68.94808536012371]
We propose a tree-structured pre-trained conversation model, which learns dialog representations from limited labeled dialogs and large-scale unlabeled dialog corpora.
Our method can achieve new state-of-the-art results on the DialoGLUE benchmark consisting of seven datasets and four popular dialog understanding tasks.
arXiv Detail & Related papers (2022-09-14T13:42:50Z)
- GODEL: Large-Scale Pre-Training for Goal-Directed Dialog [119.1397031992088]
We introduce GODEL, a large pre-trained language model for dialog.
We show that GODEL outperforms state-of-the-art pre-trained dialog models in few-shot fine-tuning setups.
A novel feature of our evaluation methodology is the introduction of a notion of utility that assesses the usefulness of responses.
arXiv Detail & Related papers (2022-06-22T18:19:32Z)
- In-Context Learning for Few-Shot Dialogue State Tracking [55.91832381893181]
We propose an in-context (IC) learning framework for few-shot dialogue state tracking (DST).
A large pre-trained language model (LM) takes a test instance and a few annotated examples as input, and directly decodes the dialogue states without any parameter updates (see the prompt sketch after this list).
This makes the LM more flexible and scalable than prior few-shot DST work when adapting to new domains and scenarios.
arXiv Detail & Related papers (2022-03-16T11:58:24Z)
- Improving Dialogue Breakdown Detection with Semi-Supervised Learning [7.7914806980889875]
We investigate the use of semi-supervised learning methods to improve dialogue breakdown detection.
We demonstrate the effectiveness of these methods on the Dialogue Breakdown Detection Challenge (DBDC) English shared task.
arXiv Detail & Related papers (2020-10-30T23:04:56Z)
- Learning Knowledge Bases with Parameters for Task-Oriented Dialogue
Systems [79.02430277138801]
The knowledge base (KB) plays an essential role in fulfilling user requests.
End-to-end systems use the KB directly as input, but they cannot scale when the KB is larger than a few hundred entries.
We propose a method to embed the KB, of any size, directly into the model parameters.
arXiv Detail & Related papers (2020-09-28T22:13:54Z)
- Unsupervised Learning of KB Queries in Task-Oriented Dialogs [21.611723342957887]
Task-oriented dialog (TOD) systems often need to formulate knowledge base (KB) queries corresponding to the user intent.
Existing approaches require dialog datasets to explicitly annotate these KB queries.
We define the novel problems of predicting the KB query and training the dialog agent, without explicit KB query annotation.
arXiv Detail & Related papers (2020-04-30T22:10:00Z)
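As referenced in the in-context DST entry above, here is a minimal prompt-assembly sketch. The prompt format, example fields, and `lm_complete` callback are hypothetical placeholders rather than the paper's actual design; the sketch only illustrates decoding dialogue states from a frozen LM with no parameter updates.

```python
# Illustrative sketch of in-context learning for dialogue state tracking.
# The prompt format and the `lm_complete` callback are hypothetical; the
# point is that the LM adapts through in-prompt examples, with no
# parameter updates.

from typing import Callable, Dict, List

def build_dst_prompt(examples: List[Dict[str, str]], test_dialogue: str) -> str:
    """Concatenate annotated (dialogue, state) pairs, then the test instance."""
    parts = ["Track the dialogue state as slot=value pairs."]
    for ex in examples:
        parts.append(f"Dialogue: {ex['dialogue']}\nState: {ex['state']}")
    parts.append(f"Dialogue: {test_dialogue}\nState:")
    return "\n\n".join(parts)

def track_state(lm_complete: Callable[[str], str],
                examples: List[Dict[str, str]],
                test_dialogue: str) -> str:
    # The frozen LM decodes the state directly as text.
    return lm_complete(build_dst_prompt(examples, test_dialogue)).strip()

if __name__ == "__main__":
    examples = [{"dialogue": "User: I need a cheap hotel in the north.",
                 "state": "hotel-price=cheap; hotel-area=north"}]
    # Stand-in for a real LM call, so the sketch runs end to end.
    fake_lm = lambda prompt: " hotel-price=expensive; hotel-area=centre"
    print(track_state(fake_lm, examples,
                      "User: Find an expensive hotel in the centre."))
    # -> hotel-price=expensive; hotel-area=centre
```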