In-Context Learning for Few-Shot Dialogue State Tracking
- URL: http://arxiv.org/abs/2203.08568v1
- Date: Wed, 16 Mar 2022 11:58:24 GMT
- Title: In-Context Learning for Few-Shot Dialogue State Tracking
- Authors: Yushi Hu, Chia-Hsuan Lee, Tianbao Xie, Tao Yu, Noah A. Smith, Mari
Ostendorf
- Abstract summary: We propose an in-context (IC) learning framework for few-shot dialogue state tracking (DST).
A large pre-trained language model (LM) takes a test instance and a few annotated examples as input, and directly decodes the dialogue states without any parameter updates.
This makes the LM more flexible and scalable compared to prior few-shot DST work when adapting to new domains and scenarios.
- Score: 55.91832381893181
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Collecting and annotating task-oriented dialogues is time-consuming and
costly. Thus, few-shot learning for dialogue tasks presents an exciting
opportunity. In this work, we propose an in-context (IC) learning framework for
few-shot dialogue state tracking (DST), where a large pre-trained language
model (LM) takes a test instance and a few annotated examples as input, and
directly decodes the dialogue states without any parameter updates. This makes
the LM more flexible and scalable compared to prior few-shot DST work when
adapting to new domains and scenarios. We study ways to formulate dialogue
context into prompts for LMs and propose an efficient approach to retrieve
dialogues as exemplars given a test instance and a selection pool of few-shot
examples. To better leverage the pre-trained LMs, we also reformulate DST into
a text-to-SQL problem. Empirical results on MultiWOZ 2.1 and 2.4 show that our
method IC-DST outperforms previous fine-tuned state-of-the-art models in
few-shot settings.
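To make the recipe concrete, below is a minimal sketch of the IC-DST pipeline as described above: retrieve a few similar annotated dialogues, pack them into a prompt together with a SQL schema, and let the LM complete the SQL-encoded state. The toy schema, the prompt layout, and the bag-of-words retriever are illustrative assumptions; the paper fine-tunes a retriever and uses the full MultiWOZ ontology.

```python
from collections import Counter
from math import sqrt

# Assumed toy ontology; the paper uses the full MultiWOZ schema.
SCHEMA = "CREATE TABLE hotel(name text, area text, pricerange text)"

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; the paper fine-tunes a retriever instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_exemplars(test_turn: str, pool: list[dict], k: int = 2) -> list[dict]:
    """Pick the k pool examples most similar to the test turn."""
    q = embed(test_turn)
    return sorted(pool, key=lambda ex: cosine(q, embed(ex["turn"])), reverse=True)[:k]

def build_prompt(test_turn: str, exemplars: list[dict]) -> str:
    """Schema first, then (turn, SQL state) exemplars, then the test turn."""
    parts = [SCHEMA]
    for ex in exemplars:
        parts.append(f"-- Dialogue: {ex['turn']}\n{ex['sql']}")
    parts.append(f"-- Dialogue: {test_turn}\nSELECT")  # the LM completes the SQL
    return "\n\n".join(parts)

pool = [
    {"turn": "I need a cheap hotel in the north.",
     "sql": "SELECT * FROM hotel WHERE pricerange = 'cheap' AND area = 'north'"},
    {"turn": "Find an expensive place to stay in the centre.",
     "sql": "SELECT * FROM hotel WHERE pricerange = 'expensive' AND area = 'centre'"},
]
query = "Any moderate hotels in the east?"
print(build_prompt(query, retrieve_exemplars(query, pool)))
```

Running the script prints the assembled prompt, ending mid-SQL so that a completion model can continue it.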
Related papers
- Diverse Retrieval-Augmented In-Context Learning for Dialogue State
Tracking [3.8073142980733]
We propose RefPyDST, which advances the state of the art in in-context learning for dialogue state tracking through three contributions.
First, we formulate DST as a Python programming task, explicitly modeling language coreference as variable reference in Python.
Second, since in-context learning depends heavily on the context examples, we propose a method to retrieve a diverse set of relevant examples to improve performance (a sketch follows this entry).
arXiv Detail & Related papers (2023-07-04T03:15:52Z)
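As a rough illustration of the Python framing (the classes, slot names, and turns below are invented, not RefPyDST's actual prompt), the dialogue state becomes ordinary Python objects and cross-turn coreference becomes a variable reference:

```python
from dataclasses import dataclass

@dataclass
class Hotel:
    area: str | None = None
    pricerange: str | None = None

@dataclass
class Restaurant:
    area: str | None = None
    food: str | None = None

# "I want a cheap hotel in the north."
hotel = Hotel(area="north", pricerange="cheap")

# "And a Chinese restaurant in the same area."
# The LM resolves "the same area" by emitting a variable reference
# instead of copying the surface string.
restaurant = Restaurant(area=hotel.area, food="chinese")

print(restaurant)  # Restaurant(area='north', food='chinese')
```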
- Cue-CoT: Chain-of-thought Prompting for Responding to In-depth Dialogue Questions with LLMs [59.74002011562726]
We propose a novel linguistic cue-based chain-of-thoughts approach (Cue-CoT) to provide a more personalized and engaging response (a schematic sketch follows this entry).
We build a benchmark with in-depth dialogue questions, consisting of 6 datasets in both Chinese and English.
Empirical results demonstrate our proposed Cue-CoT method outperforms standard prompting methods in terms of both helpfulness and acceptability on all datasets.
arXiv Detail & Related papers (2023-05-19T16:27:43Z)
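A schematic two-stage rendering of this idea is sketched below; the prompt templates are invented, and the lm() call is a stub to be wired to any model:

```python
def lm(prompt: str) -> str:
    """Stub: wire this to any chat or completion model."""
    raise NotImplementedError

def cue_cot_respond(dialogue: str) -> str:
    # Stage 1: ask the model to reason about linguistic cues in the dialogue.
    cue_prompt = (
        f"Dialogue:\n{dialogue}\n\n"
        "Describe the user's current status, emotion, and personality "
        "as revealed by this dialogue."
    )
    cues = lm(cue_prompt)  # intermediate chain-of-thought step
    # Stage 2: condition the final response on the inferred cues.
    answer_prompt = (
        f"Dialogue:\n{dialogue}\n\n"
        f"Inferred user cues:\n{cues}\n\n"
        "Given these cues, write a helpful, personalized response."
    )
    return lm(answer_prompt)
```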
- Stabilized In-Context Learning with Pre-trained Language Models for Few-Shot Dialogue State Tracking [57.92608483099916]
Large pre-trained language models (PLMs) have shown impressive unaided performance across many NLP tasks.
For more complex tasks such as dialogue state tracking (DST), designing prompts that reliably convey the desired intent is nontrivial.
We introduce a saliency model to limit dialogue text length, allowing us to include more exemplars per query (a stand-in sketch follows this entry).
arXiv Detail & Related papers (2023-02-12T15:05:10Z)
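The sketch below is a stand-in for the saliency idea: where the paper learns a saliency model, this version scores turns by word overlap with the latest user query and drops low-salience turns until the dialogue fits a word budget, freeing prompt space for extra exemplars.

```python
def saliency(turn: str, query: str) -> float:
    """Stand-in scorer: fraction of the turn's words shared with the query."""
    t, q = set(turn.lower().split()), set(query.lower().split())
    return len(t & q) / (len(t) or 1)

def compress_dialogue(turns: list[str], query: str, budget: int) -> list[str]:
    """Keep the most query-relevant turns, in original order, within a word budget."""
    ranked = sorted(range(len(turns)),
                    key=lambda i: saliency(turns[i], query), reverse=True)
    kept, used = set(), 0
    for i in ranked:
        cost = len(turns[i].split())
        if used + cost <= budget:
            kept.add(i)
            used += cost
    return [turns[i] for i in sorted(kept)]

history = [
    "Hello, how can I help?",
    "I am looking for a cheap hotel in the north.",
    "Sure, the Alpha Lodge is cheap and in the north.",
    "Great. Does it have free parking?",
]
print(compress_dialogue(history, "Does it have free parking?", budget=20))
```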
- DIONYSUS: A Pre-trained Model for Low-Resource Dialogue Summarization [127.714919036388]
DIONYSUS is a pre-trained encoder-decoder model for summarizing dialogues in any new domain.
Our experiments show that DIONYSUS outperforms existing methods on six datasets.
arXiv Detail & Related papers (2022-12-20T06:21:21Z)
- Controllable Dialogue Simulation with In-Context Learning [39.04491297557292]
Dialogic is a dialogue simulation method based on in-context learning with large language models.
Our method can rapidly expand a small set of dialogue data with minimal or zero human involvement (a bare-bones sketch follows this entry).
Our simulated dialogues have near-human fluency and annotation accuracy.
arXiv Detail & Related papers (2022-10-09T06:32:58Z)
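A bare-bones version of such an expansion loop might look as follows; the prompt format is invented, and a real pipeline would verify the generated annotations, which is where the paper's controllability machinery comes in:

```python
import random

def lm(prompt: str) -> str:
    """Stub: wire this to any large language model."""
    raise NotImplementedError

def expand(seed_dialogues: list[str], n_new: int, k: int = 3) -> list[str]:
    """Grow a seed set by prompting the LM with sampled exemplars."""
    data = list(seed_dialogues)
    for _ in range(n_new):
        exemplars = random.sample(data, min(k, len(data)))
        prompt = ("\n\n".join(exemplars)
                  + "\n\nWrite one more annotated dialogue in the same format:")
        data.append(lm(prompt))  # a real pipeline would verify annotations first
    return data
```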
- SPACE-2: Tree-Structured Semi-Supervised Contrastive Pre-training for Task-Oriented Dialog Understanding [68.94808536012371]
We propose a tree-structured pre-trained conversation model, which learns dialog representations from limited labeled dialogs and large-scale unlabeled dialog corpora.
Our method can achieve new state-of-the-art results on the DialoGLUE benchmark consisting of seven datasets and four popular dialog understanding tasks.
arXiv Detail & Related papers (2022-09-14T13:42:50Z)
- OPAL: Ontology-Aware Pretrained Language Model for End-to-End Task-Oriented Dialogue [40.62090743056549]
This paper presents an ontology-aware pretrained language model (OPAL) for end-to-end task-oriented dialogue (TOD).
Unlike chit-chat dialogue models, task-oriented dialogue models include at least two task-specific modules: a dialogue state tracker (DST) and a response generator (RG).
arXiv Detail & Related papers (2022-09-10T04:38:27Z)
- GODEL: Large-Scale Pre-Training for Goal-Directed Dialog [119.1397031992088]
We introduce GODEL, a large pre-trained language model for dialog.
We show that GODEL outperforms state-of-the-art pre-trained dialog models in few-shot fine-tuning setups.
A novel feature of our evaluation methodology is the introduction of a notion of utility that assesses the usefulness of responses.
arXiv Detail & Related papers (2022-06-22T18:19:32Z)
- Dialogue Summaries as Dialogue States (DS2), Template-Guided Summarization for Few-shot Dialogue State Tracking [16.07100713414678]
Few-shot dialogue state tracking (DST) is a realistic solution to the expensive annotation of task-oriented dialogues.
We propose to reformulate dialogue state tracking as a dialogue summarization problem (a sketch follows this entry).
arXiv Detail & Related papers (2022-03-03T07:54:09Z)
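To illustrate the reformulation, the sketch below round-trips a dialogue state through a fixed summary template; both the template and the regex parser are invented for illustration, whereas the paper fine-tunes a summarization model to emit such templated summaries and converts them back to states:

```python
import re

# Invented template; the paper derives templates from the slot ontology.
TEMPLATE = "The user wants a {pricerange} hotel in the {area}."

def summary_to_state(summary: str) -> dict:
    """Invert the fixed template back into slot-value pairs."""
    m = re.match(r"The user wants a (\w+) hotel in the (\w+)\.", summary)
    return {"hotel-pricerange": m.group(1), "hotel-area": m.group(2)} if m else {}

# A fine-tuned LM would map the dialogue to this templated summary:
summary = TEMPLATE.format(pricerange="cheap", area="north")
print(summary_to_state(summary))  # {'hotel-pricerange': 'cheap', 'hotel-area': 'north'}
```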
This list is automatically generated from the titles and abstracts of the papers on this site.