Knowledge-grounded Dialog State Tracking
- URL: http://arxiv.org/abs/2210.06656v1
- Date: Thu, 13 Oct 2022 01:34:08 GMT
- Title: Knowledge-grounded Dialog State Tracking
- Authors: Dian Yu, Mingqiu Wang, Yuan Cao, Izhak Shafran, Laurent El Shafey,
Hagen Soltau
- Abstract summary: We propose to perform dialog state tracking grounded on knowledge encoded externally.
We query relevant knowledge of various forms based on the dialog context.
We demonstrate superior performance of our proposed method over strong baselines.
- Score: 12.585986197627477
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge (including structured knowledge such as schema and ontology, and
unstructured knowledge such as web corpus) is a critical part of dialog
understanding, especially for unseen tasks and domains. Traditionally, such
domain-specific knowledge is encoded implicitly into model parameters for the
execution of downstream tasks, which makes training inefficient. In addition,
such models are not easily transferable to new tasks with different schemas. In
this work, we propose to perform dialog state tracking grounded on knowledge
encoded externally. We query relevant knowledge of various forms based on the
dialog context where such information can ground the prediction of dialog
states. We demonstrate superior performance of our proposed method over strong
baselines, especially in the few-shot learning setting.
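As a rough, hypothetical sketch of the grounding idea described in the abstract: relevant knowledge (here, schema slot descriptions) is queried based on the dialog context and prepended to the model input before the dialog state is decoded. The token-overlap scorer, special tokens, and slot names below are illustrative assumptions, not the authors' actual retriever or input format.

```python
from collections import Counter

def overlap_score(query_tokens, snippet_tokens):
    """Token-overlap relevance score (a toy stand-in for a learned retriever)."""
    q, s = Counter(query_tokens), Counter(snippet_tokens)
    return sum((q & s).values())

def build_grounded_dst_input(dialog_history, knowledge_snippets, top_k=3):
    """Query the most relevant knowledge for the dialog context and prepend it,
    so that dialog state prediction is grounded on externally encoded knowledge."""
    query = " ".join(dialog_history).lower().split()
    ranked = sorted(
        knowledge_snippets,
        key=lambda sn: overlap_score(query, sn.lower().split()),
        reverse=True,
    )
    grounded = " | ".join(ranked[:top_k])
    return f"[knowledge] {grounded} [dialog] " + " [turn] ".join(dialog_history)

history = [
    "user: I need a cheap hotel in the centre",
    "system: Alpha Guesthouse is a budget option in the centre",
    "user: book it for two nights please",
]
schema = [
    "hotel-pricerange: preferred cost of the hotel (cheap, moderate, expensive)",
    "hotel-area: the part of town where the hotel is located",
    "hotel-bookstay: number of nights to stay at the hotel",
    "restaurant-food: the cuisine served at the restaurant",
]
# The resulting string would be fed to a sequence-to-sequence model that decodes
# the dialog state, e.g. "hotel-pricerange=cheap; hotel-area=centre; hotel-bookstay=2".
print(build_grounded_dst_input(history, schema))
```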
Related papers
- Bridging Information Gaps in Dialogues With Grounded Exchanges Using Knowledge Graphs [4.449835214520727]
We study the potential of large language models for conversational grounding.
Our approach involves annotating human conversations across five knowledge domains to create a new dialogue corpus called BridgeKG.
Our findings offer insights into how these models use in-context learning for conversational grounding tasks and common prediction errors.
arXiv Detail & Related papers (2024-08-02T08:07:15Z) - UNTER: A Unified Knowledge Interface for Enhancing Pre-trained Language
Models [100.4659557650775]
We propose a UNified knowledge inTERface, UNTER, to provide a unified perspective to exploit both structured knowledge and unstructured knowledge.
With both forms of knowledge injected, UNTER gains continuous improvements on a series of knowledge-driven NLP tasks.
arXiv Detail & Related papers (2023-05-02T17:33:28Z) - PK-Chat: Pointer Network Guided Knowledge Driven Generative Dialogue
Model [79.64376762489164]
PK-Chat is a Pointer network guided generative dialogue model, incorporating a unified pretrained language model and a pointer network over knowledge graphs.
The words generated by PK-Chat are drawn both from predicted word lists and directly from the external knowledge graph.
Based on the PK-Chat, a dialogue system is built for academic scenarios in the case of geosciences.
arXiv Detail & Related papers (2023-04-02T18:23:13Z) - Position Matters! Empirical Study of Order Effect in Knowledge-grounded
- Position Matters! Empirical Study of Order Effect in Knowledge-grounded Dialogue [54.98184262897166]
We investigate how the order of the knowledge set can influence autoregressive dialogue systems' responses.
We propose a simple and novel technique to alleviate the order effect by modifying the position embeddings of knowledge input.
arXiv Detail & Related papers (2023-02-12T10:13:00Z) - DiSTRICT: Dialogue State Tracking with Retriever Driven In-Context
- DiSTRICT: Dialogue State Tracking with Retriever Driven In-Context Tuning [7.5700317050237365]
We propose DiSTRICT, a generalizable in-context tuning approach for Dialogue State Tracking (DST).
DiSTRICT retrieves highly relevant training examples for a given dialogue to fine-tune the model without any hand-crafted templates.
Experiments with the MultiWOZ benchmark datasets show that DiSTRICT outperforms existing approaches in various zero-shot and few-shot settings.
arXiv Detail & Related papers (2022-12-06T09:40:15Z) - CGoDial: A Large-Scale Benchmark for Chinese Goal-oriented Dialog
- CGoDial: A Large-Scale Benchmark for Chinese Goal-oriented Dialog Evaluation [75.60156479374416]
CGoDial is a new challenging and comprehensive Chinese benchmark for Goal-oriented Dialog evaluation.
It contains 96,763 dialog sessions and 574,949 dialog turns in total, covering three datasets with different knowledge sources.
To bridge the gap between academic benchmarks and spoken dialog scenarios, we either collect data from real conversations or add spoken features to existing datasets via crowd-sourcing.
arXiv Detail & Related papers (2022-11-21T16:21:41Z) - Lexical Knowledge Internalization for Neural Dialog Generation [36.27946635687281]
We propose knowledge internalization (KI), which aims to integrate lexical knowledge into neural dialog models.
To tackle the challenge due to the large scale of lexical knowledge, we adopt the contrastive learning approach and create an effective token-level lexical knowledge retriever.
arXiv Detail & Related papers (2022-05-04T08:23:44Z) - DialoKG: Knowledge-Structure Aware Task-Oriented Dialogue Generation [9.186215038100904]
- DialoKG: Knowledge-Structure Aware Task-Oriented Dialogue Generation [9.186215038100904]
We propose DialoKG, a novel task-oriented dialogue system that effectively incorporates knowledge into a language model.
Our proposed system views relational knowledge as a knowledge graph and introduces a structure-aware knowledge embedding technique.
An empirical evaluation demonstrates the effectiveness of DialoKG over state-of-the-art methods on several standard benchmark datasets.
arXiv Detail & Related papers (2022-04-19T22:26:18Z) - Knowledge-Grounded Dialogue Generation with a Unified Knowledge
- Knowledge-Grounded Dialogue Generation with a Unified Knowledge Representation [78.85622982191522]
Existing systems perform poorly on unseen topics due to limited topics covered in the training data.
We present PLUG, a language model that homogenizes different knowledge sources to a unified knowledge representation.
It can achieve comparable performance with state-of-the-art methods under a fully-supervised setting.
arXiv Detail & Related papers (2021-12-15T07:11:02Z) - A Three-Stage Learning Framework for Low-Resource Knowledge-Grounded
Dialogue Generation [0.9926500244448218]
We propose a novel three-stage learning framework based on weakly supervised learning, which benefits from large-scale ungrounded dialogues and an unstructured knowledge base.
Our approach outperforms other state-of-the-art methods with less training data, and it still performs well even in the zero-resource scenario.
arXiv Detail & Related papers (2021-09-09T08:32:02Z) - Low-Resource Knowledge-Grounded Dialogue Generation [74.09352261943913]
We consider knowledge-grounded dialogue generation under a natural assumption that only limited training examples are available.
We devise a disentangled response decoder in order to isolate parameters that depend on knowledge-grounded dialogues from the entire generation model.
With only 1/8 of the training data, our model achieves state-of-the-art performance and generalizes well to out-of-domain knowledge.
arXiv Detail & Related papers (2020-02-24T16:20:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.