A Framework to Generate High-Quality Datapoints for Multiple Novel Intent Detection
- URL: http://arxiv.org/abs/2205.02005v1
- Date: Wed, 4 May 2022 11:32:15 GMT
- Title: A Framework to Generate High-Quality Datapoints for Multiple Novel Intent Detection
- Authors: Ankan Mullick, Sukannya Purkayastha, Pawan Goyal and Niloy Ganguly
- Abstract summary: MNID is a framework to detect multiple novel intents with budgeted human annotation cost.
It outperforms the baseline methods in terms of accuracy and F1-score.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Systems like voice-command-based conversational agents are characterized by a
pre-defined set of skills or intents for performing user-specified tasks. Over
time, newer intents may emerge, requiring retraining. However, the
newer intents may not be explicitly announced and need to be inferred
dynamically. Thus, there are two important tasks at hand: (a) identifying
emerging new intents, and (b) annotating data of the new intents so that the
underlying classifier can be retrained efficiently. These tasks become especially
challenging when a large number of new intents emerge simultaneously and there
is a limited budget for manual annotation. In this paper, we propose MNID
(Multiple Novel Intent Detection), a cluster-based framework to detect
multiple novel intents with a budgeted human annotation cost. Empirical results
on various benchmark datasets (of different sizes) demonstrate that MNID, by
intelligently using the budget for annotation, outperforms the baseline methods
in terms of accuracy and F1-score.
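The cluster-then-annotate idea in the abstract can be sketched as follows. This is an illustrative toy, not the authors' MNID implementation: the plain k-means, the even per-cluster budget split, and the centroid-proximity selection rule are all simplifying assumptions.

```python
# Sketch: cluster utterance embeddings, then spend a fixed human-annotation
# budget on the points closest to each cluster centroid, so each emerging
# intent cluster gets a few representative labeled examples.
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means over the rows (embeddings) of X."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = X[assign == j].mean(axis=0)
    return assign, centroids

def pick_for_annotation(X, assign, centroids, budget):
    """Spread the annotation budget evenly across clusters, choosing the
    points nearest each centroid as the most representative ones."""
    k = len(centroids)
    per_cluster = max(1, budget // k)
    chosen = []
    for j in range(k):
        idx = np.flatnonzero(assign == j)
        if len(idx) == 0:
            continue
        dist = np.linalg.norm(X[idx] - centroids[j], axis=1)
        chosen.extend(idx[np.argsort(dist)[:per_cluster]].tolist())
    return chosen[:budget]

# Toy data: two well-separated "novel intent" clusters in embedding space.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 8)), rng.normal(3, 0.1, (20, 8))])
assign, centroids = kmeans(X, k=2)
to_label = pick_for_annotation(X, assign, centroids, budget=6)
print(len(to_label))  # 6 points sent to human annotators
```

In a real pipeline the rows of `X` would come from a pretrained sentence encoder, and the returned indices would be sent to annotators; the resulting labels then seed the retraining of the intent classifier.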
Related papers
- Exploiting Unlabeled Data with Multiple Expert Teachers for Open Vocabulary Aerial Object Detection and Its Orientation Adaptation [58.37525311718006]
We put forth a novel formulation of the aerial object detection problem, namely open-vocabulary aerial object detection (OVAD)
We propose CastDet, a CLIP-activated student-teacher detection framework that serves as the first OVAD detector specifically designed for the challenging aerial scenario.
Our framework integrates a robust localization teacher along with several box selection strategies to generate high-quality proposals for novel objects.
arXiv Detail & Related papers (2024-11-04T12:59:13Z)
- Open-Vocabulary Object Detection with Meta Prompt Representation and Instance Contrastive Optimization [63.66349334291372]
We propose a framework with Meta prompt and Instance Contrastive learning (MIC) schemes.
Firstly, we simulate a novel-class-emerging scenario to help the learned class and background prompts generalize to novel classes.
Secondly, we design an instance-level contrastive strategy to promote intra-class compactness and inter-class separation, which benefits generalization of the detector to novel class objects.
arXiv Detail & Related papers (2024-03-14T14:25:10Z)
- IntenDD: A Unified Contrastive Learning Approach for Intent Detection and Discovery [12.905097743551774]
We propose IntenDD, a unified approach leveraging a shared utterance encoding backbone.
IntenDD uses an entirely unsupervised contrastive learning strategy for representation learning.
We find that our approach consistently outperforms competitive baselines across all three tasks.
arXiv Detail & Related papers (2023-10-25T16:50:24Z)
- Visual Recognition by Request [111.94887516317735]
We present a novel protocol of annotation and evaluation for visual recognition.
It does not require the labeler/algorithm to annotate/recognize all targets (objects, parts, etc.) at once; instead, it issues a number of recognition instructions, and the algorithm recognizes targets on request.
We evaluate the recognition system on two mixed-annotated datasets, CPP and ADE20K, and demonstrate its promising ability of learning from partially labeled data.
arXiv Detail & Related papers (2022-07-28T16:55:11Z)
- New Intent Discovery with Pre-training and Contrastive Learning [21.25371293641141]
New intent discovery aims to uncover novel intent categories from user utterances to expand the set of supported intent classes.
Existing approaches typically rely on a large amount of labeled utterances.
We propose a new contrastive loss to exploit self-supervisory signals in unlabeled data for clustering.
arXiv Detail & Related papers (2022-05-25T17:07:25Z)
- Incremental-DETR: Incremental Few-Shot Object Detection via Self-Supervised Learning [60.64535309016623]
We propose the Incremental-DETR that does incremental few-shot object detection via fine-tuning and self-supervised learning on the DETR object detector.
To alleviate severe over-fitting with few novel class data, we first fine-tune the class-specific components of DETR with self-supervision.
We further introduce an incremental few-shot fine-tuning strategy with knowledge distillation on the class-specific components of DETR to encourage the network to detect novel classes without catastrophic forgetting.
arXiv Detail & Related papers (2022-05-09T05:08:08Z)
- Detection, Disambiguation, Re-ranking: Autoregressive Entity Linking as a Multi-Task Problem [46.028180604304985]
We propose an autoregressive entity linking model that is trained with two auxiliary tasks and learns to re-rank generated samples at inference time.
We show through ablation studies that each of the two auxiliary tasks increases performance, and that re-ranking is an important factor in the increase.
arXiv Detail & Related papers (2022-04-12T17:55:22Z)
- Continuous representations of intents for dialogue systems [10.031004070657122]
Until recently, the focus has been on detecting a fixed, discrete number of seen intents.
Recent years have seen some work done on unseen intent detection in the context of zero-shot learning.
This paper proposes a novel model where intents are continuous points placed in a specialist Intent Space.
arXiv Detail & Related papers (2021-05-08T15:08:20Z)
- Query Understanding via Intent Description Generation [75.64800976586771]
We propose a novel Query-to-Intent-Description (Q2ID) task for query understanding.
Unlike existing ranking tasks which leverage the query and its description to compute the relevance of documents, Q2ID is a reverse task which aims to generate a natural language intent description.
We demonstrate the effectiveness of our model by comparing with several state-of-the-art generation models on the Q2ID task.
arXiv Detail & Related papers (2020-08-25T08:56:40Z)
- XtarNet: Learning to Extract Task-Adaptive Representation for Incremental Few-Shot Learning [24.144499302568565]
We propose XtarNet, which learns to extract task-adaptive representation (TAR) for facilitating incremental few-shot learning.
The TAR contains effective information for classifying both novel and base categories.
XtarNet achieves state-of-the-art incremental few-shot learning performance.
arXiv Detail & Related papers (2020-03-19T04:02:44Z)
- Efficient Intent Detection with Dual Sentence Encoders [53.16532285820849]
We introduce intent detection methods backed by pretrained dual sentence encoders such as USE and ConveRT.
We demonstrate the usefulness and wide applicability of the proposed intent detectors, showing that they outperform intent detectors based on fine-tuning the full BERT-Large model.
We release our code, as well as a new challenging single-domain intent detection dataset.
arXiv Detail & Related papers (2020-03-10T15:33:54Z)
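The intent-detection step behind encoder-based methods like the one above can be sketched as a nearest-centroid classifier over sentence embeddings. This is an illustrative toy, not the paper's system: the pretrained dual encoders (USE, ConveRT) are replaced here by a hypothetical bag-of-words embedding, and only the classification step is shown.

```python
# Sketch: embed utterances, average each intent's training embeddings into a
# centroid, and classify a new utterance by its most similar centroid.
import numpy as np

def embed(utterance, vocab):
    """Toy stand-in for a pretrained sentence encoder: a normalized
    bag-of-words vector over a fixed vocabulary."""
    v = np.zeros(len(vocab))
    for w in utterance.lower().split():
        if w in vocab:
            v[vocab[w]] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

train = [
    ("play some jazz music", "play_music"),
    ("play the next song", "play_music"),
    ("what is the weather today", "get_weather"),
    ("will it rain tomorrow", "get_weather"),
]
vocab = {w: i for i, w in enumerate(
    sorted({w for u, _ in train for w in u.lower().split()}))}

# One centroid per intent: the mean of its training-utterance embeddings.
centroids = {}
for intent in {y for _, y in train}:
    vecs = [embed(u, vocab) for u, y in train if y == intent]
    centroids[intent] = np.mean(vecs, axis=0)

def detect_intent(utterance):
    """Return the intent whose centroid is most similar (dot product)."""
    e = embed(utterance, vocab)
    return max(centroids, key=lambda c: float(e @ centroids[c]))

print(detect_intent("play a song"))  # play_music
```

Swapping the toy `embed` for a real sentence encoder leaves the rest of the pipeline unchanged, which is what makes fixed pretrained encoders attractive for low-data intent detection.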
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.