A Study on Prompt-based Few-Shot Learning Methods for Belief State
Tracking in Task-oriented Dialog Systems
- URL: http://arxiv.org/abs/2204.08167v1
- Date: Mon, 18 Apr 2022 05:29:54 GMT
- Title: A Study on Prompt-based Few-Shot Learning Methods for Belief State
Tracking in Task-oriented Dialog Systems
- Authors: Debjoy Saha, Bishal Santra, Pawan Goyal
- Abstract summary: We tackle the Dialogue Belief State Tracking problem of task-oriented conversational systems.
Recent approaches to this problem leveraging Transformer-based models have yielded great results.
We explore prompt-based few-shot learning for Dialogue Belief State Tracking.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We tackle the Dialogue Belief State Tracking (DST) problem of task-oriented
conversational systems. Recent approaches to this problem leveraging
Transformer-based models have yielded great results. However, training these
models is expensive, both in terms of computational resources and time.
Additionally, collecting high-quality annotated dialogue datasets remains a
challenge for researchers because of the extensive annotation required for
training these models. Driven by the recent success of pre-trained language
models and prompt-based learning, we explore prompt-based few-shot learning for
Dialogue Belief State Tracking. We formulate the DST problem as a two-stage
prompt-based language modelling task, train language models for both stages,
and present a comprehensive empirical analysis of their separate and joint
performance. We demonstrate the potential of prompt-based methods in few-shot
learning for DST and provide directions for future improvement.
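The abstract does not describe the two stages in detail, so the following is only a minimal sketch of what a two-stage prompt-based DST formulation could look like; the slot inventory, prompt templates, and the `complete` callable standing in for a prompted language model are illustrative assumptions rather than the authors' actual design.
```python
# Hypothetical two-stage prompt-based DST sketch (not the paper's exact templates).
# Stage 1 asks the LM which slots the dialogue mentions; stage 2 asks for each value.
# `complete` is any function mapping a prompt string to the LM's completion.

from typing import Callable, Dict, List

DOMAIN_SLOTS = ["hotel-area", "hotel-pricerange", "hotel-stars"]  # assumed slot inventory


def stage1_active_slots(dialogue: str, complete: Callable[[str], str]) -> List[str]:
    """Stage 1: prompt the LM to list the slots that appear in the dialogue."""
    prompt = (
        f"Dialogue:\n{dialogue}\n"
        f"Which of these slots are mentioned? {', '.join(DOMAIN_SLOTS)}\n"
        "Mentioned slots:"
    )
    answer = complete(prompt)
    return [slot for slot in DOMAIN_SLOTS if slot in answer]


def stage2_slot_value(dialogue: str, slot: str, complete: Callable[[str], str]) -> str:
    """Stage 2: prompt the LM for the value of one slot (cloze-style prompt)."""
    prompt = f"Dialogue:\n{dialogue}\nThe value of {slot} is"
    return complete(prompt).strip()


def belief_state(dialogue: str, complete: Callable[[str], str]) -> Dict[str, str]:
    """Combine both stages into a full belief state prediction."""
    return {
        slot: stage2_slot_value(dialogue, slot, complete)
        for slot in stage1_active_slots(dialogue, complete)
    }
```
Any fine-tuned or frozen language model can be plugged in as `complete`; the paper's actual templates and training setup are given in the full text.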
Related papers
- Injecting linguistic knowledge into BERT for Dialogue State Tracking (2023-11-27)
This paper proposes a method that extracts linguistic knowledge via an unsupervised framework.
We then utilize this knowledge to augment BERT's performance and interpretability in Dialogue State Tracking (DST) tasks.
We benchmark this framework on various DST tasks and observe a notable improvement in accuracy.
arXiv Detail & Related papers (2023-11-27T08:38:42Z) - Stabilized In-Context Learning with Pre-trained Language Models for Few
Shot Dialogue State Tracking [57.92608483099916]
Large pre-trained language models (PLMs) have shown impressive unaided performance across many NLP tasks.
For more complex tasks such as dialogue state tracking (DST), designing prompts that reliably convey the desired intent is nontrivial.
We introduce a saliency model to limit dialogue text length, allowing us to include more exemplars per query.
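The entry above only states that a saliency model shortens the dialogue so that more exemplars fit in each query. As a rough illustration (the word-count budget, the external saliency scores, and the prompt layout below are assumptions, not the paper's method), truncation plus exemplar packing could look like this:
```python
# Rough sketch: keep only the most salient dialogue turns under a length budget,
# then fill the rest of the prompt with as many annotated exemplars as fit.
# Saliency scores are assumed to come from some external scoring model.

from typing import List, Tuple


def truncate_by_saliency(turns: List[str], saliency: List[float], budget: int) -> List[str]:
    """Keep the highest-saliency turns (in original order) whose total word count fits the budget."""
    ranked = sorted(range(len(turns)), key=lambda i: saliency[i], reverse=True)
    kept, used = set(), 0
    for i in ranked:
        cost = len(turns[i].split())  # crude stand-in for a tokenizer
        if used + cost <= budget:
            kept.add(i)
            used += cost
    return [turns[i] for i in sorted(kept)]


def pack_prompt(
    exemplars: List[Tuple[str, str]],  # (dialogue text, annotated state) pairs
    test_turns: List[str],
    saliency: List[float],
    total_budget: int = 1024,
    dialogue_budget: int = 128,
) -> str:
    """Build a prompt: truncated test dialogue plus as many exemplars as the budget allows."""
    test_text = "\n".join(truncate_by_saliency(test_turns, saliency, dialogue_budget))
    pieces, used = [], len(test_text.split())
    for dialogue, state in exemplars:
        piece = f"Dialogue:\n{dialogue}\nState: {state}\n\n"
        cost = len(piece.split())
        if used + cost > total_budget:
            break
        pieces.append(piece)
        used += cost
    return "".join(pieces) + f"Dialogue:\n{test_text}\nState:"
```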
arXiv Detail & Related papers (2023-02-12T15:05:10Z) - Few-shot Prompting Towards Controllable Response Generation [49.479958672988566]
We first explored the combination of prompting and reinforcement learning (RL) to steer models' generation without accessing any of the models' parameters.
We apply multi-task learning to make the model learn to generalize to new tasks better.
Experiment results show that our proposed method can successfully control several state-of-the-art (SOTA) dialogue models without accessing their parameters.
arXiv Detail & Related papers (2022-06-08T14:48:06Z) - In-Context Learning for Few-Shot Dialogue State Tracking [55.91832381893181]
We propose an in-context (IC) learning framework for few-shot dialogue state tracking (DST).
A large pre-trained language model (LM) takes a test instance and a few annotated examples as input, and directly decodes the dialogue states without any parameter updates.
This makes the LM more flexible and scalable compared to prior few-shot DST work when adapting to new domains and scenarios.
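Based only on the description above, the in-context setup amounts to concatenating a few annotated examples with the test instance and letting a frozen LM decode the state; the template below and the `generate` callable standing in for the pre-trained LM are assumptions for illustration, not the authors' exact format.
```python
# Sketch of in-context DST: no parameter updates, just a prompt built from exemplars.

from typing import Callable, List, Tuple


def build_icl_prompt(exemplars: List[Tuple[str, str]], test_dialogue: str) -> str:
    """Concatenate (dialogue, belief state) exemplars followed by the unlabeled test dialogue."""
    parts = [f"Dialogue:\n{d}\nBelief state: {s}\n" for d, s in exemplars]
    parts.append(f"Dialogue:\n{test_dialogue}\nBelief state:")
    return "\n".join(parts)


def icl_dst(exemplars: List[Tuple[str, str]], test_dialogue: str,
            generate: Callable[[str], str]) -> str:
    """Decode the belief state with a frozen pre-trained LM supplied as `generate`."""
    return generate(build_icl_prompt(exemplars, test_dialogue)).strip()
```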
arXiv Detail & Related papers (2022-03-16T11:58:24Z) - Prompt Learning for Few-Shot Dialogue State Tracking [75.50701890035154]
This paper focuses on how to learn a dialogue state tracking (DST) model efficiently with limited labeled data.
We design a prompt learning framework for few-shot DST, which consists of two main components: a value-based prompt and an inverse prompt mechanism.
Experiments show that our model can generate unseen slots and outperforms existing state-of-the-art few-shot methods.
arXiv Detail & Related papers (2022-01-15T07:37:33Z) - Multi-Task Pre-Training for Plug-and-Play Task-Oriented Dialogue System [26.837972034630003]
PPTOD is a unified plug-and-play model for task-oriented dialogue.
We extensively test our model on three benchmark TOD tasks, including end-to-end dialogue modelling, dialogue state tracking, and intent classification.
arXiv Detail & Related papers (2021-09-29T22:02:18Z) - An Empirical Study of Cross-Lingual Transferability in Generative
Dialogue State Tracker [33.2309643963072]
We study the transferability of a cross-lingual generative dialogue state tracking system using a multilingual pre-trained seq2seq model.
We also find that our approaches have low cross-lingual transferability, and we investigate and discuss this result.
arXiv Detail & Related papers (2021-01-27T12:45:55Z) - Language Models as Few-Shot Learner for Task-Oriented Dialogue Systems [74.8759568242933]
Task-oriented dialogue systems use four connected modules: Natural Language Understanding (NLU), Dialogue State Tracking (DST), Dialogue Policy (DP), and Natural Language Generation (NLG).
A research challenge is to learn each module from as few samples as possible, given the high cost of data collection.
We evaluate the priming few-shot ability of language models in the NLU, DP and NLG tasks.
arXiv Detail & Related papers (2020-08-14T08:23:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.