Template-based Approach to Zero-shot Intent Recognition
- URL: http://arxiv.org/abs/2206.10914v1
- Date: Wed, 22 Jun 2022 08:44:59 GMT
- Title: Template-based Approach to Zero-shot Intent Recognition
- Authors: Dmitry Lamanov, Pavel Burnyshev, Ekaterina Artemova, Valentin Malykh, Andrey Bout, and Irina Piontkovskaya
- Abstract summary: In this paper, we explore the generalized zero-shot setup for intent recognition.
Following best practices for zero-shot text classification, we treat the task with a sentence pair modeling approach.
We outperform the previous state-of-the-art F1 score by up to 16% for unseen intents.
- Score: 7.330908962006392
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The recent advances in transfer learning techniques and pre-training of large
contextualized encoders foster innovation in real-life applications, including
dialog assistants. Practical intent recognition requires effective data usage
and the ability to constantly update the set of supported intents, adding new
ones and retiring outdated ones. In particular, the generalized zero-shot
paradigm, in which the model is trained on the seen intents and tested on both
seen and unseen intents, is taking on new importance. In this paper, we explore
the generalized zero-shot setup for intent recognition. Following best
practices for zero-shot text classification, we treat the task with a sentence
pair modeling approach. We outperform the previous state-of-the-art F1 score by
up to 16% for unseen intents, using only intent labels and user utterances,
without accessing external sources such as knowledge bases. A further
enhancement, lexicalization of intent labels, improves performance by up to 7%.
Task transfer from other sentence-pair tasks, such as Natural Language
Inference, yields additional improvements.
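The approach the abstract describes, scoring an (utterance, lexicalized intent label) pair with an NLI-style model, can be sketched with off-the-shelf tools. The snippet below is a minimal illustration, not the authors' implementation: the roberta-large-mnli checkpoint, the example utterance, the candidate intents, and the hypothesis template are all assumptions made for demonstration.

```python
# Minimal sketch of sentence-pair zero-shot intent recognition.
# Assumptions (not from the paper): the roberta-large-mnli checkpoint,
# the example utterance, the intent labels, and the hypothesis template.
from transformers import pipeline

# The zero-shot pipeline scores each (premise, hypothesis) pair with an
# NLI model, mirroring the paper's sentence-pair framing of the task.
classifier = pipeline("zero-shot-classification", model="roberta-large-mnli")

utterance = "Can you book me a table for two at an Italian place tonight?"
intents = ["restaurant reservation", "flight booking", "weather query"]

result = classifier(
    utterance,
    candidate_labels=intents,
    # Lexicalization: a bare label becomes a natural-language hypothesis.
    hypothesis_template="The user wants to make a {}.",
)
print(result["labels"][0], result["scores"][0])  # top-ranked intent and score
```

Here lexicalization is approximated by the hypothesis template, and the transfer from NLI the abstract mentions corresponds to starting from an NLI-trained checkpoint, as the MNLI model above is; the paper's reported gains come from its own trained sentence-pair model.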
Related papers
- Continual Learning Improves Zero-Shot Action Recognition [12.719578035745744]
We propose a novel method based on continual learning to address zero-shot action recognition.
The memory is used to train a classification model, ensuring a balanced exposure to both old and new classes.
Experiments demonstrate that GIL improves generalization to unseen classes, achieving a new state of the art in zero-shot recognition.
arXiv Detail & Related papers (2024-10-14T13:42:44Z)
- Enhancing Visual Continual Learning with Language-Guided Supervision [76.38481740848434]
Continual learning aims to empower models to learn new tasks without forgetting previously acquired knowledge.
We argue that the scarce semantic information conveyed by the one-hot labels hampers the effective knowledge transfer across tasks.
Specifically, we use PLMs to generate semantic targets for each class, which are frozen and serve as supervision signals.
arXiv Detail & Related papers (2024-03-24T12:41:58Z)
- Less is More: A Closer Look at Semantic-based Few-Shot Learning [11.724194320966959]
Few-shot Learning aims to learn and distinguish new categories with a very limited number of available images.
We propose a simple but effective framework for few-shot learning tasks, specifically designed to exploit textual information and a language model.
Our experiments conducted across four widely used few-shot datasets demonstrate that our simple framework achieves impressive results.
arXiv Detail & Related papers (2024-01-10T08:56:02Z)
- Continuously Learning New Words in Automatic Speech Recognition [56.972851337263755]
We propose a self-supervised continual learning approach to recognize new words.
We use a memory-enhanced Automatic Speech Recognition model from previous work.
We show that with this approach, performance on new words improves as they occur more frequently.
arXiv Detail & Related papers (2024-01-09T10:39:17Z)
- POUF: Prompt-oriented unsupervised fine-tuning for large pre-trained models [62.23255433487586]
We propose an unsupervised fine-tuning framework to fine-tune the model or prompt on the unlabeled target data.
We demonstrate how to apply our method to both language-augmented vision and masked-language models by aligning the discrete distributions extracted from the prompts and target data.
arXiv Detail & Related papers (2023-04-29T22:05:22Z)
- Selective In-Context Data Augmentation for Intent Detection using Pointwise V-Information [100.03188187735624]
We introduce a novel approach based on PLMs and pointwise V-information (PVI), a metric that can measure the usefulness of a datapoint for training a model.
Our method first fine-tunes a PLM on a small seed of training data and then synthesizes new datapoints - utterances that correspond to given intents.
Our method is thus able to leverage the expressive power of large language models to produce diverse training data.
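For context, PVI quantifies how much an individual input helps a model predict its gold label. The following is a paraphrase of the standard definition from the V-usable-information literature (Ethayarajh et al., 2022), given here as background rather than taken from the abstract above:

```latex
% PVI of an instance (x, y): g' is finetuned on (input, label) pairs and
% g on (null input, label) pairs, so PVI is the log-likelihood gain
% obtained by actually seeing the input x.
\mathrm{PVI}(x \rightarrow y) = -\log_2 g[\varnothing](y) + \log_2 g'[x](y)
```

In the augmentation setting above, PVI can then rank synthesized utterances by how informative they are for their intended intent label.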
arXiv Detail & Related papers (2023-02-10T07:37:49Z)
- New Intent Discovery with Pre-training and Contrastive Learning [21.25371293641141]
New intent discovery aims to uncover novel intent categories from user utterances to expand the set of supported intent classes.
Existing approaches typically rely on a large amount of labeled utterances.
We propose a new contrastive loss to exploit self-supervisory signals in unlabeled data for clustering.
arXiv Detail & Related papers (2022-05-25T17:07:25Z)
- Learning to Prompt for Vision-Language Models [82.25005817904027]
Vision-language pre-training has emerged as a promising alternative for representation learning.
It shifts from the tradition of using images and discrete labels to learn a fixed set of weights, seen as visual concepts, to aligning images and raw text with two separate encoders.
Such a paradigm benefits from a broader source of supervision and allows zero-shot transfer to downstream tasks.
arXiv Detail & Related papers (2021-09-02T17:57:31Z)
- Continuous representations of intents for dialogue systems [10.031004070657122]
Until recently, the focus has been on detecting a fixed, discrete number of seen intents.
Recent years have seen work on unseen intent detection in the context of zero-shot learning.
This paper proposes a novel model where intents are continuous points placed in a specialist Intent Space.
arXiv Detail & Related papers (2021-05-08T15:08:20Z)
- Self-training Improves Pre-training for Natural Language Understanding [63.78927366363178]
We study self-training as another way to leverage unlabeled data through semi-supervised learning.
We introduce SentAugment, a data augmentation method which computes task-specific query embeddings from labeled data.
Our approach leads to scalable and effective self-training with improvements of up to 2.6% on standard text classification benchmarks.
arXiv Detail & Related papers (2020-10-05T17:52:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.