Attention Guided Dialogue State Tracking with Sparse Supervision
- URL: http://arxiv.org/abs/2101.11958v1
- Date: Thu, 28 Jan 2021 12:18:39 GMT
- Title: Attention Guided Dialogue State Tracking with Sparse Supervision
- Authors: Shuailong Liang, Lahari Poddar, Gyuri Szarvas
- Abstract summary: In call centers, for tasks like managing bookings or subscriptions, the user goal can be associated with actions issued by customer service agents.
These action logs are available in large volumes and can be utilized for learning dialogue states.
We extend a state-of-the-art encoder-decoder model to efficiently learn Dialogue State Tracking (DST) with sparse labels.
- Score: 5.758073912084366
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing approaches to Dialogue State Tracking (DST) rely on turn-level dialogue state annotations, which are expensive to acquire at large scale. In call centers, for tasks like managing bookings or subscriptions, the user goal can be associated with actions (e.g., API calls) issued by customer service agents. These action logs are available in large volumes and can be utilized for learning dialogue states. However, unlike turn-level annotations, such logged actions are only available sparsely across the dialogue, providing only a form of weak supervision for DST models. To learn DST efficiently with sparse labels, we extend a state-of-the-art encoder-decoder model. The model learns a slot-aware representation of the dialogue history, which focuses on relevant turns to guide the decoder. We present results on two public multi-domain DST datasets (MultiWOZ and Schema Guided Dialogue) in two settings: training with turn-level annotations and training with sparse supervision. The proposed approach improves over the baseline in both settings. More importantly, our model trained with sparse supervision is competitive with fully supervised baselines, while being more data- and cost-efficient.
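The core mechanism the abstract describes, a slot-aware representation of dialogue history that focuses on relevant turns, can be sketched as a scaled dot-product attention where the slot embedding acts as the query over per-turn encodings. This is a minimal illustration under stated assumptions, not the authors' implementation; the function and variable names are hypothetical.

```python
import numpy as np

def slot_aware_context(turn_encodings: np.ndarray,
                       slot_embedding: np.ndarray) -> np.ndarray:
    """Attend over dialogue turns with a slot-specific query and return a
    slot-conditioned summary of the dialogue history.

    turn_encodings: (num_turns, dim) encoder outputs, one vector per turn
    slot_embedding: (dim,) learned embedding of the slot being tracked
    """
    dim = slot_embedding.shape[0]
    # Scaled dot-product scores between the slot query and each turn.
    scores = turn_encodings @ slot_embedding / np.sqrt(dim)
    # Softmax over turns, so turns relevant to this slot get higher weight.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Weighted sum of turn encodings: the slot-aware history representation
    # that would guide the decoder when predicting this slot's value.
    return weights @ turn_encodings
```

In a full model this context vector would be recomputed per slot and fed to the decoder, which is what lets sparse action-log labels at a few turns still supervise attention over the whole dialogue.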
Related papers
- Unsupervised End-to-End Task-Oriented Dialogue with LLMs: The Power of the Noisy Channel [9.082443585886127]
Training task-oriented dialogue systems typically requires turn-level annotations for interacting with their APIs.
Unlabeled data and a schema definition are sufficient for building a working task-oriented dialogue system, completely unsupervised.
We propose an innovative approach using expectation-maximization (EM) that infers turn-level annotations as latent variables.
arXiv Detail & Related papers (2024-04-23T16:51:26Z)
- SuperDialseg: A Large-scale Dataset for Supervised Dialogue Segmentation [55.82577086422923]
We provide a feasible definition of dialogue segmentation points with the help of document-grounded dialogues.
We release a large-scale supervised dataset called SuperDialseg, containing 9,478 dialogues.
We also provide a benchmark including 18 models across five categories for the dialogue segmentation task.
arXiv Detail & Related papers (2023-05-15T06:08:01Z)
- In-Context Learning for Few-Shot Dialogue State Tracking [55.91832381893181]
We propose an in-context (IC) learning framework for few-shot dialogue state tracking (DST)
A large pre-trained language model (LM) takes a test instance and a few annotated examples as input, and directly decodes the dialogue states without any parameter updates.
This makes the LM more flexible and scalable compared to prior few-shot DST work when adapting to new domains and scenarios.
arXiv Detail & Related papers (2022-03-16T11:58:24Z)
- Structure Extraction in Task-Oriented Dialogues with Slot Clustering [94.27806592467537]
In task-oriented dialogues, dialogue structure has often been considered as transition graphs among dialogue states.
We propose a simple yet effective approach for structure extraction in task-oriented dialogues.
arXiv Detail & Related papers (2022-02-28T20:18:12Z)
- Prompt Learning for Few-Shot Dialogue State Tracking [75.50701890035154]
This paper focuses on how to learn a dialogue state tracking (DST) model efficiently with limited labeled data.
We design a prompt learning framework for few-shot DST, which consists of two main components: value-based prompt and inverse prompt mechanism.
Experiments show that our model can generate unseen slots and outperforms existing state-of-the-art few-shot methods.
arXiv Detail & Related papers (2022-01-15T07:37:33Z)
- SGD-QA: Fast Schema-Guided Dialogue State Tracking for Unseen Services [15.21976869687864]
We propose SGD-QA, a model for schema-guided dialogue state tracking based on a question answering approach.
The proposed multi-pass model shares a single encoder between the domain information and dialogue utterance.
The model improves performance on unseen services by at least 1.6x compared to single-pass baseline models.
arXiv Detail & Related papers (2021-05-17T17:54:32Z)
- Improving Limited Labeled Dialogue State Tracking with Self-Supervision [91.68515201803986]
Existing dialogue state tracking (DST) models require plenty of labeled data.
We present and investigate two self-supervised objectives: preserving latent consistency and modeling conversational behavior.
Our proposed self-supervised signals can improve joint goal accuracy by 8.95% when only 1% labeled data is used.
arXiv Detail & Related papers (2020-10-26T21:57:42Z)
- Non-Autoregressive Dialog State Tracking [122.2328875457225]
We propose a novel framework of Non-Autoregressive Dialog State Tracking (NADST)
NADST can factor in potential dependencies among domains and slots to optimize the models towards better prediction of dialogue states as a complete set rather than separate slots.
Our results show that our model achieves the state-of-the-art joint accuracy across all domains on the MultiWOZ 2.1 corpus.
arXiv Detail & Related papers (2020-02-19T06:39:26Z)
- Goal-Oriented Multi-Task BERT-Based Dialogue State Tracker [0.1864131501304829]
Dialogue State Tracking (DST) is a core component of virtual assistants such as Alexa or Siri.
In this work, we propose a GOaL-Oriented Multi-task BERT-based dialogue state tracker (GOLOMB).
arXiv Detail & Related papers (2020-02-05T22:56:12Z)
- Domain-Aware Dialogue State Tracker for Multi-Domain Dialogue Systems [2.3859169601259347]
In task-oriented dialogue systems the dialogue state tracker (DST) component is responsible for predicting the state of the dialogue based on the dialogue history.
We propose a domain-aware dialogue state tracker that is completely data-driven and it is modeled to predict for dynamic service schemas.
arXiv Detail & Related papers (2020-01-21T13:41:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.