QAID: Question Answering Inspired Few-shot Intent Detection
- URL: http://arxiv.org/abs/2303.01593v2
- Date: Tue, 21 Mar 2023 14:22:00 GMT
- Title: QAID: Question Answering Inspired Few-shot Intent Detection
- Authors: Asaf Yehudai, Matan Vetzler, Yosi Mass, Koren Lazar, Doron Cohen, Boaz Carmeli
- Abstract summary: We reformulate intent detection as a question-answering retrieval task by treating utterances and intent names as questions and answers.
Our results on three few-shot intent detection benchmarks achieve state-of-the-art performance.
- Score: 5.516275800944541
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intent detection with semantically similar fine-grained intents is a
challenging task. To address it, we reformulate intent detection as a
question-answering retrieval task by treating utterances and intent names as
questions and answers. To that end, we utilize a question-answering retrieval
architecture and adopt a two-stage training schema with batch contrastive
loss. In the pre-training stage, we improve query representations through
self-supervised training. Then, in the fine-tuning stage, we increase
contextualized token-level similarity scores between queries and answers from
the same intent. Our results on three few-shot intent detection benchmarks
achieve state-of-the-art performance.
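Below is a minimal sketch of the batch contrastive objective this setup suggests, assuming a bi-encoder that embeds utterances as queries and intent names as answers; the loss form, temperature, and toy tensors are illustrative and do not reproduce the paper's exact architecture or token-level scoring.

```python
# Hedged sketch: in-batch supervised contrastive loss over utterance ("question")
# and intent-name ("answer") embeddings. Encoder, pooling, and temperature are
# illustrative assumptions, not QAID's exact design.
import torch
import torch.nn.functional as F


def batch_contrastive_loss(query_emb, answer_emb, intent_ids, temperature=0.05):
    """Pull each utterance toward in-batch answers of its own intent and push it
    away from answers of other intents."""
    q = F.normalize(query_emb, dim=-1)                               # (B, d)
    a = F.normalize(answer_emb, dim=-1)                              # (B, d)
    logits = q @ a.T / temperature                                   # (B, B) similarities
    positives = intent_ids.unsqueeze(0) == intent_ids.unsqueeze(1)   # (B, B) same-intent mask
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # mean log-likelihood over each query's positive answers
    loss = -(log_prob * positives).sum(1) / positives.sum(1).clamp(min=1)
    return loss.mean()


# toy usage: 4 utterances, 3 intents, 8-dim embeddings
queries = torch.randn(4, 8)
answers = torch.randn(4, 8)
intents = torch.tensor([0, 1, 0, 2])
print(batch_contrastive_loss(queries, answers, intents))
```

In the paper's fine-tuning stage the similarity is computed at the contextualized token level rather than from a single pooled vector per sequence, which this single-vector sketch does not capture.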
Related papers
- QUIDS: Query Intent Generation via Dual Space Modeling [12.572815037915348]
We propose a dual-space model that uses semantic relevance and irrelevance information in the returned documents to explain how the query intent is understood.
Our methods produce high-quality query intent descriptions, outperforming existing methods for this task, as well as state-of-the-art query-based summarization methods.
arXiv Detail & Related papers (2024-10-16T09:28:58Z)
- Clarify When Necessary: Resolving Ambiguity Through Interaction with LMs [58.620269228776294]
We propose a task-agnostic framework for resolving ambiguity by asking users clarifying questions.
We evaluate systems across three NLP applications: question answering, machine translation and natural language inference.
We find that intent-sim is robust, demonstrating improvements across a wide range of NLP tasks and LMs.
arXiv Detail & Related papers (2023-11-16T00:18:50Z)
- I3: Intent-Introspective Retrieval Conditioned on Instructions [83.91776238599824]
I3 is a unified retrieval system that performs Intent-Introspective retrieval across various tasks conditioned on Instructions without task-specific training.
I3 incorporates a pluggable introspector in a parameter-isolated manner to comprehend specific retrieval intents.
It utilizes extensive LLM-generated data to train I3 phase-by-phase, embodying two key designs: progressive structure pruning and drawback-based data refinement.
arXiv Detail & Related papers (2023-08-19T14:17:57Z)
- ReAct: Temporal Action Detection with Relational Queries [84.76646044604055]
This work aims at advancing temporal action detection (TAD) using an encoder-decoder framework with action queries.
We first propose a relational attention mechanism in the decoder, which guides the attention among queries based on their relations.
We further propose to predict the localization quality of each action query at inference in order to distinguish high-quality queries.
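A hedged sketch of the central idea, self-attention among action queries restricted by a relation mask, is given below; the relation definition and decoder wiring are illustrative assumptions rather than ReAct's exact mechanism.

```python
# Hedged sketch: attention among decoder queries guided by a relation mask.
# The relation mask construction is a toy assumption, not ReAct's design.
import torch
import torch.nn.functional as F


def relation_guided_attention(queries, relation_mask):
    """queries: (N, d) action-query embeddings; relation_mask: (N, N) bool,
    True where two queries are related and may attend to each other."""
    d = queries.size(-1)
    scores = queries @ queries.T / d ** 0.5              # (N, N) attention logits
    scores = scores.masked_fill(~relation_mask, float("-inf"))
    attn = F.softmax(scores, dim=-1)
    return attn @ queries                                # relation-aware query update


# toy usage: 5 queries, each always related to itself plus random others
q = torch.randn(5, 16)
mask = torch.eye(5, dtype=torch.bool) | (torch.rand(5, 5) > 0.5)
print(relation_guided_attention(q, mask).shape)          # torch.Size([5, 16])
```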
arXiv Detail & Related papers (2022-07-14T17:46:37Z)
- Few-Shot Stance Detection via Target-Aware Prompt Distillation [48.40269795901453]
This paper is inspired by the potential capability of pre-trained language models (PLMs) serving as knowledge bases and few-shot learners.
PLMs can provide essential contextual information for the targets and enable few-shot learning via prompts.
Considering the crucial role of the target in the stance detection task, we design target-aware prompts and propose a novel verbalizer.
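As a concrete illustration of the verbalizer idea, the sketch below maps masked-LM logits at the [MASK] position to stance classes through small sets of label words; the label words and token ids are hypothetical placeholders, not the paper's target-aware design.

```python
# Hedged sketch of a verbalizer: class scores are read off the masked-LM logits
# of a few label words. Token ids below are made-up placeholders.
import torch

verbalizer = {
    "favor":   [2190, 3893],   # hypothetical ids for words like "support", "agree"
    "against": [2114, 4119],   # hypothetical ids for words like "oppose", "disagree"
    "neutral": [8699],         # hypothetical id for a word like "neutral"
}


def verbalize(mask_logits):
    """mask_logits: (vocab_size,) logits at the [MASK] position of the prompt."""
    class_scores = [mask_logits[ids].mean() for ids in verbalizer.values()]
    return torch.softmax(torch.stack(class_scores), dim=0)  # class probabilities


print(verbalize(torch.randn(30522)))   # 30522 = BERT-base vocabulary size
```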
arXiv Detail & Related papers (2022-06-27T12:04:14Z)
- Few-Shot Intent Detection via Contrastive Pre-Training and Fine-Tuning [27.154414939086426]
We present a simple yet effective few-shot intent detection schema via contrastive pre-training and fine-tuning.
We first conduct self-supervised contrastive pre-training on collected intent datasets, which implicitly learns to discriminate semantically similar utterances.
We then perform few-shot intent detection together with supervised contrastive learning, which explicitly pulls utterances from the same intent closer.
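To make the first stage concrete, here is a rough sketch that treats two dropout-perturbed encodings of the same utterance as a positive pair for self-supervised contrastive learning; the stand-in encoder and temperature are assumptions, not the paper's setup.

```python
# Hedged sketch: self-supervised contrastive pre-training where two forward
# passes of the same utterance (different dropout masks) form a positive pair.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyEncoder(nn.Module):
    """Stand-in encoder; in practice a pre-trained transformer would be used."""
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab, dim)
        self.drop = nn.Dropout(0.1)

    def forward(self, token_ids):
        return self.drop(self.emb(token_ids))      # dropout makes two passes differ


def self_supervised_contrastive(encoder, token_ids, temperature=0.05):
    z1 = F.normalize(encoder(token_ids), dim=-1)   # view 1
    z2 = F.normalize(encoder(token_ids), dim=-1)   # view 2 (new dropout mask)
    logits = z1 @ z2.T / temperature               # (B, B) similarities
    targets = torch.arange(token_ids.size(0))      # positive of row i is column i
    return F.cross_entropy(logits, targets)


encoder = TinyEncoder()
batch = torch.randint(0, 1000, (8, 12))            # 8 utterances, 12 token ids each
print(self_supervised_contrastive(encoder, batch))
```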
arXiv Detail & Related papers (2021-09-13T22:28:58Z)
- ConQX: Semantic Expansion of Spoken Queries for Intent Detection based on Conditioned Text Generation [4.264192013842096]
We propose a method for semantic expansion of spoken queries, called ConQX.
To avoid off-topic text generation, we condition the input query on a structured context using prompt mining.
We then apply zero-shot, one-shot, and few-shot learning to fine-tune BERT and RoBERTa for intent detection.
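As a rough sketch of the last step, the snippet below fine-tunes a BERT classifier on query text for intent detection; the example texts, label ids, and the omission of ConQX's prompt-mined expansion are simplifications.

```python
# Hedged sketch: one fine-tuning step of a BERT intent classifier on toy data.
# Real ConQX inputs would be the spoken query concatenated with its generated
# expansion; labels and num_labels here are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)              # 3 toy intent classes

texts = ["play some jazz", "what is the weather like tomorrow"]
labels = torch.tensor([0, 1])                       # toy intent ids

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
out = model(**batch, labels=labels)                 # returns loss and logits
out.loss.backward()                                 # gradients for one step (optimizer omitted)
print(out.logits.shape)                             # torch.Size([2, 3])
```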
arXiv Detail & Related papers (2021-09-02T05:57:07Z)
- Dynamic Semantic Matching and Aggregation Network for Few-shot Intent Detection [69.2370349274216]
Few-shot Intent Detection is challenging due to the scarcity of available annotated utterances.
Semantic components are distilled from utterances via multi-head self-attention.
Our method provides a comprehensive matching measure to enhance representations of both labeled and unlabeled instances.
arXiv Detail & Related papers (2020-10-06T05:16:38Z)
- Improving Weakly Supervised Visual Grounding by Contrastive Knowledge Distillation [55.198596946371126]
We propose a contrastive learning framework that accounts for both region-phrase and image-sentence matching.
Our core innovation is the learning of a region-phrase score function, based on which an image-sentence score function is further constructed.
The design of such score functions removes the need of object detection at test time, thereby significantly reducing the inference cost.
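A minimal sketch of turning region-phrase scores into an image-sentence score by aggregation follows; max-over-regions then mean-over-phrases is one common choice and not necessarily the paper's exact construction.

```python
# Hedged sketch: aggregate a (phrases x regions) score matrix into a single
# image-sentence score. The aggregation scheme is an illustrative assumption.
import torch


def image_sentence_score(region_phrase_scores):
    """region_phrase_scores: (num_phrases, num_regions) similarity matrix."""
    best_region_per_phrase = region_phrase_scores.max(dim=1).values  # (num_phrases,)
    return best_region_per_phrase.mean()                             # scalar score


scores = torch.rand(3, 20)        # 3 phrases, 20 candidate regions
print(image_sentence_score(scores))
```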
arXiv Detail & Related papers (2020-07-03T22:02:00Z)