Chain of Thought Explanation for Dialogue State Tracking
- URL: http://arxiv.org/abs/2403.04656v2
- Date: Sat, 9 Mar 2024 15:37:36 GMT
- Title: Chain of Thought Explanation for Dialogue State Tracking
- Authors: Lin Xu, Ningxin Peng, Daquan Zhou, See-Kiong Ng, Jinlan Fu
- Abstract summary: Dialogue state tracking (DST) aims to record user queries and goals during a conversational interaction.
We propose a model named Chain-of-Thought-Explanation (CoTE) for the DST task.
CoTE is designed to create detailed explanations step by step after determining the slot values.
- Score: 52.015771676340016
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dialogue state tracking (DST) aims to record user queries and goals during a
conversational interaction, achieved by maintaining a predefined set of slots
and their corresponding values. Current approaches decide slot values opaquely,
while humans usually adopt a more deliberate approach by collecting information
from relevant dialogue turns and then reasoning out the appropriate values. In this
work, we focus on the steps needed to figure out slot values by proposing a
model named Chain-of-Thought-Explanation (CoTE) for the DST task. CoTE, which
is built on the generative DST framework, is designed to create detailed
explanations step by step after determining the slot values. This process leads
to more accurate and reliable slot values. Moreover, to improve the reasoning
ability of CoTE, we further construct more fluent and high-quality
explanations with automatic paraphrasing, yielding the CoTE-refined method.
Experimental results on three widely recognized DST benchmarks (MultiWOZ 2.2,
WoZ 2.0, and M2M) demonstrate the remarkable effectiveness of CoTE.
Furthermore, through a meticulous fine-grained analysis, we observe significant
benefits of our CoTE on samples characterized by longer dialogue turns, user
responses, and reasoning steps.
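The abstract describes CoTE as a generative DST model that first emits a slot value and then produces a step-by-step explanation for it. The paper's exact serialization is not reproduced here; the snippet below is a minimal sketch of that input/target format, in which the special tokens (`[slot]`, `[value]`, `[explanation]`), the turn markup, and the explanation template are all illustrative assumptions:

```python
# Sketch of a CoTE-style input/target serialization for generative DST.
# Separators and the explanation template are assumptions, not the
# paper's actual format.

def build_dst_input(dialogue_turns, slot):
    """Serialize the dialogue history and the queried slot into one string."""
    history = " ".join(f"[{speaker}] {utt}" for speaker, utt in dialogue_turns)
    return f"{history} [slot] {slot}"

def build_cote_target(value, explanation_steps):
    """Target sequence: the slot value first, then a step-by-step explanation."""
    steps = " ".join(f"Step {i}: {s}" for i, s in enumerate(explanation_steps, 1))
    return f"[value] {value} [explanation] {steps}"

turns = [
    ("user", "I need a cheap hotel in the north."),
    ("system", "Okay, any other requirements?"),
    ("user", "It should have free parking."),
]
src = build_dst_input(turns, "hotel-pricerange")
tgt = build_cote_target("cheap", [
    "The user asks for a hotel in the first turn.",
    "They describe it as cheap, so hotel-pricerange is cheap.",
])
print(src)
print(tgt)
```

A seq2seq model trained on such pairs would decode the value before the explanation, matching the paper's order of "determining the slot values" first and explaining afterwards.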
Related papers
- Ladder-of-Thought: Using Knowledge as Steps to Elevate Stance Detection [73.31406286956535]
We introduce the Ladder-of-Thought (LoT) for the stance detection task.
LoT directs the small LMs to assimilate high-quality external knowledge, refining the intermediate rationales produced.
Our empirical evaluations underscore LoT's efficacy, marking a 16% improvement over GPT-3.5 and a 10% enhancement compared to GPT-3.5 with CoT on stance detection task.
arXiv Detail & Related papers (2023-08-31T14:31:48Z)
- Choice Fusion as Knowledge for Zero-Shot Dialogue State Tracking [5.691339955497443]
Zero-shot dialogue state tracking (DST) tracks users' requirements in task-oriented dialogues without training on the desired domains.
We propose CoFunDST, which is trained on domain-agnostic QA datasets and directly uses candidate slot-value choices as knowledge for zero-shot dialogue-state generation.
Our proposed model achieves higher joint goal accuracy than existing zero-shot DST approaches in most domains on MultiWOZ 2.1.
arXiv Detail & Related papers (2023-02-25T07:32:04Z)
- Prompt Learning for Few-Shot Dialogue State Tracking [75.50701890035154]
This paper focuses on how to learn a dialogue state tracking (DST) model efficiently with limited labeled data.
We design a prompt learning framework for few-shot DST, which consists of two main components: value-based prompt and inverse prompt mechanism.
Experiments show that our model can generate unseen slots and outperforms existing state-of-the-art few-shot methods.
arXiv Detail & Related papers (2022-01-15T07:37:33Z)
- Zero-Shot Dialogue State Tracking via Cross-Task Transfer [69.70718906395182]
We propose to transfer cross-task knowledge from general question answering (QA) corpora to the zero-shot dialogue state tracking task.
Specifically, we propose TransferQA, a transferable generative QA model that seamlessly combines extractive QA and multi-choice QA.
In addition, we introduce two effective ways to construct unanswerable questions, namely, negative question sampling and context truncation.
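The two unanswerable-question constructions mentioned above can be sketched directly. In this illustrative snippet, the function names and the exact span-removal strategy are assumptions; the idea is that negative question sampling pairs a question with a mismatched context, while context truncation deletes the answer span from the original context:

```python
import random

# Sketch of the two unanswerable-question constructions:
# (1) negative question sampling, (2) context truncation.

def negative_question_sampling(qa_pairs, rng):
    """Pair each question with a context from a *different* QA pair,
    so the answer is absent and the label becomes 'unanswerable'."""
    negatives = []
    for i, (_context, question, _answer) in enumerate(qa_pairs):
        j = rng.choice([k for k in range(len(qa_pairs)) if k != i])
        negatives.append((qa_pairs[j][0], question, "unanswerable"))
    return negatives

def context_truncation(context, question, answer):
    """Delete the answer span from the context, making the question unanswerable."""
    return (context.replace(answer, ""), question, "unanswerable")

pairs = [
    ("The Eiffel Tower is in Paris.", "Where is the Eiffel Tower?", "Paris"),
    ("Mount Fuji is in Japan.", "Where is Mount Fuji?", "Japan"),
]
rng = random.Random(0)
neg = negative_question_sampling(pairs, rng)
trunc = context_truncation(*pairs[0])
print(neg[0])
print(trunc)
```

Training a QA model on such synthetic negatives teaches it to abstain, which maps naturally onto DST slots whose values are absent from the dialogue.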
arXiv Detail & Related papers (2021-09-10T03:57:56Z)
- Improving Longer-range Dialogue State Tracking [22.606650177804966]
Dialogue state tracking (DST) is a pivotal component in task-oriented dialogue systems.
In this paper, we aim to improve the overall performance of DST with a special focus on handling longer dialogues.
arXiv Detail & Related papers (2021-02-27T02:44:28Z)
- Improving Limited Labeled Dialogue State Tracking with Self-Supervision [91.68515201803986]
Existing dialogue state tracking (DST) models require plenty of labeled data.
We present and investigate two self-supervised objectives: preserving latent consistency and modeling conversational behavior.
Our proposed self-supervised signals can improve joint goal accuracy by 8.95% when only 1% labeled data is used.
arXiv Detail & Related papers (2020-10-26T21:57:42Z)
- CoCo: Controllable Counterfactuals for Evaluating Dialogue State Trackers [92.5628632009802]
We propose controllable counterfactuals (CoCo) to bridge the gap and evaluate dialogue state tracking (DST) models on novel scenarios.
CoCo generates novel conversation scenarios in two steps: (i) counterfactual goal generation at turn-level by dropping and adding slots followed by replacing slot values, and (ii) counterfactual conversation generation that is conditioned on (i) and consistent with the dialogue flow.
Human evaluations show that CoCo-generated conversations reflect the underlying user goal with more than 95% accuracy and are as human-like as the original conversations.
arXiv Detail & Related papers (2020-10-24T09:39:35Z)
- STN4DST: A Scalable Dialogue State Tracking based on Slot Tagging Navigation [43.796556782050075]
We propose a novel scalable dialogue state tracking method based on slot tagging navigation.
The proposed model substantially outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2020-10-21T08:09:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.