Zero-Shot Learning for Joint Intent and Slot Labeling
- URL: http://arxiv.org/abs/2212.07922v1
- Date: Tue, 29 Nov 2022 01:58:25 GMT
- Title: Zero-Shot Learning for Joint Intent and Slot Labeling
- Authors: Rashmi Gangadharaiah and Balakrishnan Narayanaswamy
- Abstract summary: We show that one can profitably perform joint zero-shot intent classification and slot labeling.
We describe NN architectures that translate between word and sentence embedding spaces.
- Score: 11.82805641934772
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is expensive and difficult to obtain the large number of sentence-level
intent and token-level slot label annotations required to train neural network
(NN)-based Natural Language Understanding (NLU) components of task-oriented
dialog systems, especially for the many real world tasks that have a large and
growing number of intents and slot types. While zero shot learning approaches
that require no labeled examples -- only features and auxiliary information --
have been proposed only for slot labeling, we show that one can profitably
perform joint zero-shot intent classification and slot labeling. We demonstrate
the value of capturing dependencies between intents and slots, and between
different slots in an utterance in the zero shot setting. We describe NN
architectures that translate between word and sentence embedding spaces, and
demonstrate that these modifications are required to enable zero shot learning
for this task. We show a substantial improvement over strong baselines and
explain the intuition behind each architectural modification through
visualizations and ablation studies.
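The core zero-shot idea in the abstract — classifying intents never seen in training by matching utterances against label semantics in a shared embedding space — can be illustrated with a minimal sketch. This is not the paper's architecture; the toy vectors below stand in for outputs of a real sentence encoder, and the intent names and descriptions are hypothetical.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def zero_shot_intent(utterance_vec, intent_desc_vecs):
    """Predict the intent whose description embedding is closest
    to the utterance embedding -- no labeled examples required."""
    scores = {name: cosine(utterance_vec, vec)
              for name, vec in intent_desc_vecs.items()}
    return max(scores, key=scores.get)

# Toy embeddings: an unseen intent like "BookFlight" can still be
# predicted because only its *description* embedding is needed.
intent_descriptions = {
    "BookFlight": np.array([0.9, 0.1, 0.0]),
    "PlayMusic":  np.array([0.0, 0.2, 0.9]),
}
utterance = np.array([0.8, 0.2, 0.1])  # e.g. "get me a ticket to Boston"
print(zero_shot_intent(utterance, intent_descriptions))  # → BookFlight
```

In the paper's setting this matching is done jointly with slot labeling, which is where the word-to-sentence embedding translation architectures come in.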
Related papers
- Generalized Label-Efficient 3D Scene Parsing via Hierarchical Feature
Aligned Pre-Training and Region-Aware Fine-tuning [55.517000360348725]
This work presents a framework for dealing with 3D scene understanding when the labeled scenes are quite limited.
To extract knowledge for novel categories from the pre-trained vision-language models, we propose a hierarchical feature-aligned pre-training and knowledge distillation strategy.
Experiments with both indoor and outdoor scenes demonstrated the effectiveness of our approach in both data-efficient learning and open-world few-shot learning.
arXiv Detail & Related papers (2023-12-01T15:47:04Z) - HierarchicalContrast: A Coarse-to-Fine Contrastive Learning Framework
for Cross-Domain Zero-Shot Slot Filling [4.1940152307593515]
Cross-domain zero-shot slot filling plays a vital role in leveraging source-domain knowledge to learn a model that transfers to target domains without labeled data.
Existing state-of-the-art zero-shot slot filling methods have limited generalization ability in the target domain.
We present a novel Hierarchical Contrastive Learning Framework (HiCL) for zero-shot slot filling.
arXiv Detail & Related papers (2023-10-13T14:23:33Z) - Slot Induction via Pre-trained Language Model Probing and Multi-level
Contrastive Learning [62.839109775887025]
Slot Induction (SI) is the task of inducing slot boundaries without explicit knowledge of token-level slot annotations.
We propose leveraging Unsupervised Pre-trained Language Model (PLM) Probing and Contrastive Learning mechanism to exploit unsupervised semantic knowledge extracted from PLM.
Our approach is shown to be effective in SI task and capable of bridging the gaps with token-level supervised models on two NLU benchmark datasets.
arXiv Detail & Related papers (2023-08-09T05:08:57Z) - Recognizing Unseen Objects via Multimodal Intensive Knowledge Graph
Propagation [68.13453771001522]
We propose a multimodal intensive ZSL framework that matches regions of images with corresponding semantic embeddings.
We conduct extensive experiments and evaluate our model on large-scale real-world data.
arXiv Detail & Related papers (2023-06-14T13:07:48Z) - Vocabulary-informed Zero-shot and Open-set Learning [128.83517181045815]
We propose vocabulary-informed learning to address problems of supervised, zero-shot, generalized zero-shot and open set recognition.
Specifically, we propose a weighted maximum margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms.
We illustrate that the resulting model shows improvements in supervised, zero-shot, generalized zero-shot, and large open-set recognition, with up to a 310K-class vocabulary on the Animals with Attributes and ImageNet datasets.
arXiv Detail & Related papers (2023-01-03T08:19:22Z) - Cross-modal Representation Learning for Zero-shot Action Recognition [67.57406812235767]
We present a cross-modal Transformer-based framework, which jointly encodes video data and text labels for zero-shot action recognition (ZSAR).
Our model employs a conceptually new pipeline by which visual representations are learned in conjunction with visual-semantic associations in an end-to-end manner.
Experiment results show our model considerably improves upon the state of the art in ZSAR, reaching encouraging top-1 accuracy on the UCF101, HMDB51, and ActivityNet benchmark datasets.
arXiv Detail & Related papers (2022-05-03T17:39:27Z) - An Explicit-Joint and Supervised-Contrastive Learning Framework for
Few-Shot Intent Classification and Slot Filling [12.85364483952161]
Intent classification (IC) and slot filling (SF) are critical building blocks in task-oriented dialogue systems.
However, few IC/SF models perform well when the number of training samples per class is small.
We propose a novel explicit-joint and supervised-contrastive learning framework for few-shot intent classification and slot filling.
arXiv Detail & Related papers (2021-10-26T13:28:28Z) - Linguistically-Enriched and Context-Aware Zero-shot Slot Filling [6.06746295810681]
Slot filling is one of the most important challenges in modern task-oriented dialog systems.
New domains (i.e., unseen in training) may emerge after deployment.
It is imperative that models seamlessly adapt and fill slots from both seen and unseen domains.
arXiv Detail & Related papers (2021-01-16T20:18:16Z) - Learning Disentangled Intent Representations for Zero-shot Intent
Detection [13.19024497857648]
We propose a class-transductive framework that utilizes unseen class labels to learn Disentangled Intent Representations (DIR).
Under this framework, we introduce a multi-task learning objective, which encourages the model to learn the distinctions among intents.
Experiments on two real-world datasets show that the proposed framework brings consistent improvement to the baseline systems.
arXiv Detail & Related papers (2020-12-03T06:41:09Z) - AGIF: An Adaptive Graph-Interactive Framework for Joint Multiple Intent
Detection and Slot Filling [69.59096090788125]
In this paper, we propose an Adaptive Graph-Interactive Framework (AGIF) for joint multiple intent detection and slot filling.
We introduce an intent-slot graph interaction layer to model the strong correlation between slots and intents.
Such an interaction layer is applied to each token adaptively, which has the advantage of automatically extracting the relevant intent information.
arXiv Detail & Related papers (2020-04-21T15:07:34Z)
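The intent-slot interaction described for AGIF can be sketched as attention-style message passing from intent embeddings into each token's slot representation. This is an illustrative simplification inspired by the idea, not AGIF's actual implementation; all array values are made up.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - x.max())
    return e / e.sum()

def intent_slot_interaction(token_vec, intent_vecs):
    """Fuse multiple predicted-intent embeddings into one token's slot
    representation, so slot decisions can condition on the intents."""
    # Attention weights: similarity between the token and each intent.
    weights = softmax(intent_vecs @ token_vec)
    context = weights @ intent_vecs  # weighted intent context vector
    return token_vec + context       # residual update of the token

token = np.array([1.0, 0.0])                  # one token's representation
intents = np.array([[0.5, 0.5], [0.0, 1.0]])  # two predicted intents
updated = intent_slot_interaction(token, intents)
print(updated.shape)  # (2,)
```

Applying this update per token, with weights computed adaptively for each one, captures the "applied to each token adaptively" behavior the summary mentions.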
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.