ConQX: Semantic Expansion of Spoken Queries for Intent Detection based
on Conditioned Text Generation
- URL: http://arxiv.org/abs/2109.00729v1
- Date: Thu, 2 Sep 2021 05:57:07 GMT
- Title: ConQX: Semantic Expansion of Spoken Queries for Intent Detection based
on Conditioned Text Generation
- Authors: Eyup Halit Yilmaz and Cagri Toraman
- Abstract summary: We propose a method for semantic expansion of spoken queries, called ConQX.
To avoid off-topic text generation, we condition the input query on a structured context with prompt mining.
We then apply zero-shot, one-shot, and few-shot learning for expansion, and fine-tune BERT and RoBERTa on the expanded queries for intent detection.
- Score: 4.264192013842096
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intent detection of spoken queries is a challenging task due to their noisy
structure and short length. To provide additional information regarding the
query and enhance the performance of intent detection, we propose a method for
semantic expansion of spoken queries, called ConQX, which utilizes the text
generation ability of an auto-regressive language model, GPT-2. To avoid
off-topic text generation, we condition the input query on a structured context
obtained with prompt mining. We then apply zero-shot, one-shot, and few-shot learning for the expansion.
We lastly use the expanded queries to fine-tune BERT and RoBERTa for intent
detection. The experimental results show that the performance of intent
detection can be improved by our semantic expansion method.
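The pipeline above can be sketched in a few lines. This is an illustrative sketch only: the prompt template and the `[SEP]` concatenation scheme are assumptions for illustration (the paper mines its prompts automatically), and `generate` stands in for an auto-regressive language model such as GPT-2 (e.g. a Hugging Face `pipeline("text-generation")` wrapper).

```python
# Hedged sketch of the ConQX expansion pipeline.
# The prompt template ("Query:/Meaning:") and the "[SEP]" joining scheme
# are hypothetical; the paper obtains its prompts via prompt mining.

def build_prompt(query, shots=()):
    """Condition the query on a structured context.

    `shots` holds (query, expansion) demonstration pairs: an empty tuple
    gives a zero-shot prompt, one pair a one-shot prompt, and so on.
    """
    parts = []
    for demo_query, demo_expansion in shots:
        parts.append(f"Query: {demo_query}\nMeaning: {demo_expansion}")
    parts.append(f"Query: {query}\nMeaning:")
    return "\n\n".join(parts)

def expand_query(query, generate, shots=()):
    """Append model-generated text to the query.

    `generate` stands in for an auto-regressive LM such as GPT-2; the
    expanded query is what would then be fed to BERT or RoBERTa for
    intent detection.
    """
    expansion = generate(build_prompt(query, shots)).strip()
    return f"{query} [SEP] {expansion}"

# Example with a stubbed generator (a real setup would call GPT-2):
fake_lm = lambda prompt: "play a song by this artist"
expanded = expand_query(
    "play madonna", fake_lm,
    shots=[("weather tomorrow", "forecast for the next day")],
)
```

With the stub, `expanded` becomes `"play madonna [SEP] play a song by this artist"`; in the actual method, the generated continuation comes from GPT-2 conditioned on the mined prompt.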
Related papers
- Attacking Misinformation Detection Using Adversarial Examples Generated by Language Models [0.0]
We investigate the challenge of generating adversarial examples to test the robustness of text classification algorithms.
We focus on simulation of content moderation by setting realistic limits on the number of queries an attacker is allowed to attempt.
arXiv Detail & Related papers (2024-10-28T11:46:30Z) - QAEA-DR: A Unified Text Augmentation Framework for Dense Retrieval [12.225881591629815]
In dense retrieval, embedding long texts into dense vectors can result in information loss, leading to inaccurate query-text matching.
Recent studies mainly focus on improving the sentence embedding model or retrieval process.
We introduce a novel text augmentation framework for dense retrieval, which transforms raw documents into information-dense text formats.
arXiv Detail & Related papers (2024-07-29T17:39:08Z) - Dense X Retrieval: What Retrieval Granularity Should We Use? [56.90827473115201]
An often-overlooked design choice is the retrieval unit in which the corpus is indexed, e.g. document, passage, or sentence.
We introduce a novel retrieval unit, proposition, for dense retrieval.
Experiments reveal that indexing a corpus by fine-grained units such as propositions significantly outperforms passage-level units in retrieval tasks.
arXiv Detail & Related papers (2023-12-11T18:57:35Z) - Walking Down the Memory Maze: Beyond Context Limit through Interactive Reading [63.93888816206071]
We introduce MemWalker, a method that processes the long context into a tree of summary nodes. Upon receiving a query, the model navigates this tree in search of relevant information, and responds once it gathers sufficient information.
We show that, beyond effective reading, MemWalker enhances explainability by highlighting the reasoning steps as it interactively reads the text; pinpointing the relevant text segments related to the query.
arXiv Detail & Related papers (2023-10-08T06:18:14Z) - QAID: Question Answering Inspired Few-shot Intent Detection [5.516275800944541]
We reformulate intent detection as a question-answering retrieval task by treating utterances and intent names as questions and answers.
Our results on three few-shot intent detection benchmarks achieve state-of-the-art performance.
arXiv Detail & Related papers (2023-03-02T21:35:15Z) - Semantic Parsing for Conversational Question Answering over Knowledge Graphs [63.939700311269156]
We develop a dataset where user questions are annotated with SPARQL parses and system answers correspond to their execution results.
We present two different semantic parsing approaches and highlight the challenges of the task.
Our dataset and models are released at https://github.com/Edinburgh/SPICE.
arXiv Detail & Related papers (2023-01-28T14:45:11Z) - Dense Paraphrasing for Textual Enrichment [7.6233489924270765]
We define Dense Paraphrasing (DP) as the process of rewriting a textual expression (lexeme or phrase) so that it reduces ambiguity while also making explicit the underlying semantics that is not (necessarily) expressed in the economy of sentence structure.
We build the first complete DP dataset, provide the scope and design of the annotation task, and present results demonstrating how this DP process can enrich a source text to improve inferencing and QA task performance.
arXiv Detail & Related papers (2022-10-20T19:58:31Z) - Graph Enhanced BERT for Query Understanding [55.90334539898102]
Query understanding plays a key role in exploring users' search intents and helping users locate their most desired information.
In recent years, pre-trained language models (PLMs) have advanced various natural language processing tasks.
We propose a novel graph-enhanced pre-training framework, GE-BERT, which can leverage both query content and the query graph.
arXiv Detail & Related papers (2022-04-03T16:50:30Z) - Generation-Augmented Retrieval for Open-domain Question Answering [134.27768711201202]
We propose Generation-Augmented Retrieval (GAR) for answering open-domain questions.
We show that generating diverse contexts for a query is beneficial as fusing their results consistently yields better retrieval accuracy.
GAR achieves state-of-the-art performance on Natural Questions and TriviaQA datasets under the extractive QA setup when equipped with an extractive reader.
arXiv Detail & Related papers (2020-09-17T23:08:01Z) - Query Understanding via Intent Description Generation [75.64800976586771]
We propose a novel Query-to-Intent-Description (Q2ID) task for query understanding.
Unlike existing ranking tasks which leverage the query and its description to compute the relevance of documents, Q2ID is a reverse task which aims to generate a natural language intent description.
We demonstrate the effectiveness of our model by comparing with several state-of-the-art generation models on the Q2ID task.
arXiv Detail & Related papers (2020-08-25T08:56:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.