Fuzzy Classification of Multi-intent Utterances
- URL: http://arxiv.org/abs/2104.10830v1
- Date: Thu, 22 Apr 2021 02:15:56 GMT
- Title: Fuzzy Classification of Multi-intent Utterances
- Authors: Geetanjali Bihani and Julia Taylor Rayz
- Abstract summary: Current intent classification approaches assign binary intent class memberships to natural language utterances.
We propose a scheme to address the ambiguity in single-intent as well as multi-intent natural language utterances.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current intent classification approaches assign binary intent class
memberships to natural language utterances while disregarding the inherent
vagueness in language and the corresponding vagueness in intent class
boundaries. In this work, we propose a scheme to address the ambiguity in
single-intent as well as multi-intent natural language utterances by creating
degree memberships over fuzzified intent classes. To our knowledge, this is the
first work to address and quantify the impact of the fuzzy nature of natural
language utterances over intent category memberships. Additionally, our
approach overcomes the sparsity of multi-intent utterance data to train
classification models by using a small database of single intent utterances to
generate class memberships over multi-intent utterances. We evaluate our
approach over two task-oriented dialog datasets, across different fuzzy
membership generation techniques and approximate string similarity measures.
Our results reveal the impact of lexical overlap between utterances of
different intents, and the underlying data distributions, on the fuzzification
of intent memberships. Moreover, we evaluate the accuracy of our approach by
comparing the defuzzified memberships to their binary counterparts, across
different combinations of membership functions and string similarity measures.
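The approach described above can be illustrated with a minimal sketch. This is not the paper's implementation: the intent names, exemplar utterances, and the use of `difflib.SequenceMatcher` as a stand-in for the paper's approximate string similarity measures are all assumptions made for illustration, as is the simple max-over-exemplars membership function and the threshold defuzzification step.

```python
from difflib import SequenceMatcher

# Toy single-intent utterance database (illustrative only, not from the paper).
SINGLE_INTENT_DB = {
    "book_flight": ["book a flight to boston", "i need a plane ticket"],
    "check_weather": ["what is the weather today", "will it rain tomorrow"],
}

def similarity(a: str, b: str) -> float:
    """Approximate string similarity in [0, 1].

    difflib's ratio is used here as a stand-in for the string similarity
    measures evaluated in the paper.
    """
    return SequenceMatcher(None, a, b).ratio()

def fuzzy_memberships(utterance: str) -> dict[str, float]:
    """Degree membership of an utterance in each fuzzified intent class.

    Here the membership is simply the maximum similarity to any
    single-intent exemplar of that class -- one possible membership
    function, chosen for simplicity.
    """
    return {
        intent: max(similarity(utterance, ex) for ex in exemplars)
        for intent, exemplars in SINGLE_INTENT_DB.items()
    }

def defuzzify(memberships: dict[str, float], threshold: float = 0.5) -> set[str]:
    """Map degree memberships back to binary intent labels.

    Thresholding is one simple defuzzification scheme; it allows the
    resulting binary labels to be compared against binary ground truth,
    including multiple labels for multi-intent utterances.
    """
    return {intent for intent, deg in memberships.items() if deg >= threshold}

# A multi-intent utterance receives a nonzero degree in every class,
# rather than a single hard label.
m = fuzzy_memberships("book a flight and tell me the weather")
print(m)
```

Because memberships are graded, a multi-intent utterance can exceed the defuzzification threshold for more than one class at once, which is how a database of single-intent utterances can yield labels over multi-intent inputs.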
Related papers
- A Simple Meta-learning Paradigm for Zero-shot Intent Classification with
Mixture Attention Mechanism [17.228616743739412]
We propose a simple yet effective meta-learning paradigm for zero-shot intent classification.
To learn better semantic representations for utterances, we introduce a new mixture attention mechanism.
To strengthen the transfer ability of the model from seen classes to unseen classes, we reformulate zero-shot intent classification with a meta-learning strategy.
arXiv Detail & Related papers (2022-06-05T13:37:51Z)
- A Multi-level Supervised Contrastive Learning Framework for Low-Resource Natural Language Inference [54.678516076366506]
Natural Language Inference (NLI) is an increasingly essential task in natural language understanding.
Here we propose a multi-level supervised contrastive learning framework named MultiSCL for low-resource natural language inference.
arXiv Detail & Related papers (2022-05-31T05:54:18Z)
- Leveraging Acoustic and Linguistic Embeddings from Pretrained Speech and Language Models for Intent Classification [81.80311855996584]
We propose a novel intent classification framework that employs acoustic features extracted from a pretrained speech recognition system and linguistic features learned from a pretrained language model.
We achieve 90.86% and 99.07% accuracy on ATIS and Fluent speech corpus, respectively.
arXiv Detail & Related papers (2021-02-15T07:20:06Z)
- Generalized Zero-shot Intent Detection via Commonsense Knowledge [5.398580049917152]
We propose RIDE: an intent detection model that leverages commonsense knowledge in an unsupervised fashion to overcome the issue of training data scarcity.
RIDE computes robust and generalizable relationship meta-features that capture deep semantic relationships between utterances and intent labels.
Our extensive experimental analysis on three widely-used intent detection benchmarks shows that relationship meta-features significantly increase the accuracy of detecting both seen and unseen intents.
arXiv Detail & Related papers (2021-02-04T23:36:41Z)
- General-Purpose Speech Representation Learning through a Self-Supervised Multi-Granularity Framework [114.63823178097402]
This paper presents a self-supervised learning framework, named MGF, for general-purpose speech representation learning.
Specifically, we propose to use generative learning approaches to capture fine-grained information at small time scales and use discriminative learning approaches to distill coarse-grained or semantic information at large time scales.
arXiv Detail & Related papers (2021-02-03T08:13:21Z)
- A survey of joint intent detection and slot-filling models in natural language understanding [0.0]
This article is a compilation of past work in natural language understanding, especially joint intent classification and slot filling.
In this article, we describe trends, approaches, issues, data sets, evaluation metrics in intent classification and slot filling.
arXiv Detail & Related papers (2021-01-20T12:15:11Z)
- Dynamic Semantic Matching and Aggregation Network for Few-shot Intent Detection [69.2370349274216]
Few-shot Intent Detection is challenging due to the scarcity of available annotated utterances.
Semantic components are distilled from utterances via multi-head self-attention.
Our method provides a comprehensive matching measure to enhance representations of both labeled and unlabeled instances.
arXiv Detail & Related papers (2020-10-06T05:16:38Z)
- Composed Variational Natural Language Generation for Few-shot Intents [118.37774762596123]
We generate training examples for few-shot intents in the realistic imbalanced scenario.
To evaluate the quality of the generated utterances, experiments are conducted on the generalized few-shot intent detection task.
Our proposed model achieves state-of-the-art performances on two real-world intent detection datasets.
arXiv Detail & Related papers (2020-09-21T17:48:43Z)
- Leveraging Adversarial Training in Self-Learning for Cross-Lingual Text Classification [52.69730591919885]
We present a semi-supervised adversarial training process that minimizes the maximal loss for label-preserving input perturbations.
We observe significant gains in effectiveness on document and intent classification for a diverse set of languages.
arXiv Detail & Related papers (2020-07-29T19:38:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.