Query Understanding in the Age of Large Language Models
- URL: http://arxiv.org/abs/2306.16004v1
- Date: Wed, 28 Jun 2023 08:24:14 GMT
- Title: Query Understanding in the Age of Large Language Models
- Authors: Avishek Anand, Venktesh V, Abhijit Anand, Vinay Setty
- Abstract summary: We describe a generic framework for interactive query rewriting using large language models (LLMs).
A key aspect of our framework is the rewriter's ability to fully specify, in natural language, the machine intent inferred by the search engine.
We detail the concept, backed by initial experiments, along with open questions for this interactive query understanding framework.
- Score: 6.630482733703617
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Querying, conversing, and controlling search and information-seeking
interfaces using natural language are fast becoming ubiquitous with the rise
and adoption of large language models (LLMs). In this position paper, we
describe a generic framework for interactive query-rewriting using LLMs. Our
proposal aims to unfold new opportunities for improved and transparent intent
understanding while building high-performance retrieval systems using LLMs. A
key aspect of our framework is the rewriter's ability to fully specify, in
natural language, the machine intent inferred by the search engine, which can
be further refined, controlled, and edited before the final retrieval phase.
The ability
to present, interact with, and reason over the underlying machine intent in
natural language has profound implications for transparency and ranking
performance, and marks a departure from the traditional way in which
supervised signals were collected
for understanding intents. We detail the concept, backed by initial
experiments, along with open questions for this interactive query understanding
framework.
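The interactive loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: `rewrite_intent`, `edit_intent`, and the toy term-overlap retriever are all invented stand-ins for an LLM rewriter, a user-editing step, and a real retrieval backend.

```python
def rewrite_intent(query):
    """Stand-in for the LLM rewriter: expand the query into an explicit,
    natural-language statement of the machine intent."""
    return f"Find documents that answer: {query}"

def edit_intent(intent, user_edit=None):
    """The exposed intent can be refined or overridden by the user
    before retrieval; None means the user accepts it as-is."""
    return user_edit if user_edit else intent

def retrieve(intent, corpus):
    """Toy lexical retriever: rank documents by term overlap with the
    natural-language intent, dropping documents with no overlap."""
    terms = set(intent.lower().split())
    scored = [(len(terms & set(doc.lower().split())), doc) for doc in corpus]
    return [doc for score, doc in sorted(scored, reverse=True) if score > 0]

corpus = ["LLMs can rewrite queries", "Cats sleep a lot"]
intent = rewrite_intent("query rewriting with LLMs")
intent = edit_intent(intent)  # user accepts the machine intent unchanged
print(retrieve(intent, corpus)[0])  # LLMs can rewrite queries
```

The point of the sketch is the extra, inspectable hop: the intent is a plain-language artifact that exists between the query and the retriever, so it can be shown to and edited by the user.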
Related papers
- RuAG: Learned-rule-augmented Generation for Large Language Models [62.64389390179651]
We propose a novel framework, RuAG, to automatically distill large volumes of offline data into interpretable first-order logic rules.
We evaluate our framework on public and private benchmarks spanning natural language processing, time-series, decision-making, and industrial tasks.
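As a toy illustration of distilling data into interpretable logic rules, the sketch below mines single-literal rules that perfectly predict a label. The data, rule format, and search are invented for illustration; RuAG's actual distillation procedure is not shown.

```python
# (features, label) pairs standing in for offline data.
data = [
    ({"rainy": True,  "weekend": False}, "stay_in"),
    ({"rainy": True,  "weekend": True},  "stay_in"),
    ({"rainy": False, "weekend": True},  "go_out"),
]

def mine_single_literal_rules(data):
    """Return (feature, value, label) rules where feature=value
    determines the label on every matching example."""
    rules = []
    for key in data[0][0]:
        for value in (True, False):
            labels = {y for x, y in data if x[key] == value}
            if len(labels) == 1:
                rules.append((key, value, labels.pop()))
    return rules

for key, value, label in mine_single_literal_rules(data):
    print(f"IF {key}={value} THEN {label}")
```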
arXiv Detail & Related papers (2024-11-04T00:01:34Z)
- Large Language Models are Good Multi-lingual Learners: When LLMs Meet Cross-lingual Prompts [5.520335305387487]
We propose a novel prompting strategy Multi-Lingual Prompt, namely MLPrompt.
MLPrompt translates the error-prone rule that an LLM struggles to follow into another language, thus drawing greater attention to it.
We introduce a framework integrating MLPrompt with an auto-checking mechanism for structured data generation, with a specific case study in text-to-MIP instances.
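The core MLPrompt move, restating the rule an LLM tends to violate in a second language so it draws more attention, can be sketched as a prompt builder. The function name, template, and hand-supplied German translation are illustrative assumptions, not the paper's code.

```python
def build_ml_prompt(task, fragile_rule_en, fragile_rule_de):
    """Compose a prompt that states the error-prone rule twice:
    once in English and once in a second language (here German)."""
    return (
        f"{task}\n"
        f"Rule: {fragile_rule_en}\n"
        f"Regel (auf Deutsch, bitte strikt befolgen): {fragile_rule_de}\n"
    )

prompt = build_ml_prompt(
    task="Extract all dates from the text as JSON.",
    fragile_rule_en="Output must be a JSON list and nothing else.",
    fragile_rule_de="Die Ausgabe muss eine JSON-Liste sein und sonst nichts.",
)
print(prompt)
```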
arXiv Detail & Related papers (2024-09-17T10:33:27Z)
- Language Representations Can be What Recommenders Need: Findings and Potentials [57.90679739598295]
We show that item representations, when linearly mapped from advanced LM representations, yield superior recommendation performance.
This outcome suggests the possible homomorphism between the advanced language representation space and an effective item representation space for recommendation.
Our findings highlight the connection between language modeling and behavior modeling, which can inspire both natural language processing and recommender system communities.
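The finding above, that a single linear map of LM embeddings can serve as item representations, amounts to one matrix multiply. The vectors and weights below are made-up numbers purely to show the shape of the operation, not the paper's learned map.

```python
def linear_map(vec, weights):
    """Multiply a row vector by a weight matrix (list of rows):
    the whole 'mapping' is just this one linear transform."""
    return [sum(v * w for v, w in zip(vec, col)) for col in zip(*weights)]

lm_embedding = [0.5, -1.0, 2.0]  # stand-in LM text representation (dim 3)
W = [[1.0, 0.0],                 # hypothetical learned 3x2 map
     [0.0, 1.0],
     [0.5, 0.5]]
item_embedding = linear_map(lm_embedding, W)
print(item_embedding)  # [1.5, 0.0]
```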
arXiv Detail & Related papers (2024-07-07T17:05:24Z)
- Redefining Information Retrieval of Structured Database via Large Language Models [10.117751707641416]
This paper introduces a novel retrieval augmentation framework called ChatLR.
It primarily employs the powerful semantic understanding ability of Large Language Models (LLMs) as retrievers to achieve precise and concise information retrieval.
Experimental results demonstrate the effectiveness of ChatLR in addressing user queries, achieving an overall information retrieval accuracy exceeding 98.8%.
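In this spirit, an LLM turns a natural-language question into a precise structured query that is then executed against the database. In the sketch below, `llm_to_sql` is a hard-coded stand-in for the LLM call, and the schema and question are invented; this is not ChatLR's pipeline.

```python
import sqlite3

def llm_to_sql(question):
    """Stand-in for the LLM retriever; a real system would prompt an LLM
    with the schema and the question."""
    if "over 30" in question:
        return "SELECT name FROM users WHERE age > 30"
    raise ValueError("question not covered by this toy sketch")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("Ada", 36), ("Ben", 25)])
rows = conn.execute(llm_to_sql("Which users are over 30?")).fetchall()
print(rows)  # [('Ada',)]
```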
arXiv Detail & Related papers (2024-05-09T02:37:53Z)
- Navigating the Knowledge Sea: Planet-scale answer retrieval using LLMs [0.0]
Information retrieval is characterized by a continuous refinement of techniques and technologies.
This paper focuses on the role of Large Language Models (LLMs) in bridging the gap between traditional search methods and the emerging paradigm of answer retrieval.
arXiv Detail & Related papers (2024-02-07T23:39:40Z)
- Large Language User Interfaces: Voice Interactive User Interfaces powered by LLMs [5.06113628525842]
We present a framework that can serve as an intermediary between a user and their user interface (UI).
We employ a system that stands upon textual semantic mappings of UI components, in the form of annotations.
Our engine can classify the most appropriate application, extract relevant parameters, and subsequently execute precise predictions of the user's expected actions.
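Matching a command against textual annotations of UI components can be sketched with simple word-overlap scoring, standing in for the LLM-based classification above. The component names and annotations are invented for illustration.

```python
# Hypothetical textual semantic mappings (annotations) of UI components.
components = {
    "play_button":  "start playing the current song",
    "search_field": "search the music library for a song or artist",
}

def best_component(command):
    """Pick the component whose annotation shares the most words with
    the user's command (toy stand-in for LLM classification)."""
    words = set(command.lower().split())
    return max(components,
               key=lambda name: len(words & set(components[name].split())))

print(best_component("search for jazz"))  # search_field
```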
arXiv Detail & Related papers (2024-02-07T21:08:49Z)
- Generative Context-aware Fine-tuning of Self-supervised Speech Models [54.389711404209415]
We study the use of context information generated by large language models (LLMs).
We propose an approach to distill the generated information during fine-tuning of self-supervised speech models.
We evaluate the proposed approach using the SLUE and Libri-light benchmarks for several downstream tasks: automatic speech recognition, named entity recognition, and sentiment analysis.
arXiv Detail & Related papers (2023-12-15T15:46:02Z)
- Large Search Model: Redefining Search Stack in the Era of LLMs [63.503320030117145]
We introduce a novel conceptual framework called large search model, which redefines the conventional search stack by unifying search tasks with one large language model (LLM).
All tasks are formulated as autoregressive text generation problems, allowing for the customization of tasks through the use of natural language prompts.
This proposed framework capitalizes on the strong language understanding and reasoning capabilities of LLMs, offering the potential to enhance search result quality while simultaneously simplifying the existing cumbersome search stack.
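The unification above means every stage of the stack becomes a prompt to the same generator. In the sketch below, `generate` returns canned strings in place of real autoregressive generation, and the prompt templates are illustrative assumptions.

```python
def generate(prompt):
    """Stand-in for the single LLM behind the whole search stack."""
    canned = {
        "Rewrite the query: cheap flights nyc":
            "inexpensive flights to New York City",
        "Summarize the top result for: inexpensive flights to New York City":
            "Several airlines offer low fares to NYC in the off-season.",
    }
    return canned[prompt]

# Two different search tasks, both expressed as text generation.
query = "cheap flights nyc"
rewritten = generate(f"Rewrite the query: {query}")
answer = generate(f"Summarize the top result for: {rewritten}")
print(answer)
```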
arXiv Detail & Related papers (2023-10-23T05:52:09Z)
- Large Language Models for Information Retrieval: A Survey [58.30439850203101]
Information retrieval has evolved from term-based methods to its integration with advanced neural models.
Recent research has sought to leverage large language models (LLMs) to improve IR systems.
We delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers.
arXiv Detail & Related papers (2023-08-14T12:47:22Z)
- RET-LLM: Towards a General Read-Write Memory for Large Language Models [53.288356721954514]
RET-LLM is a novel framework that equips large language models with a general write-read memory unit.
Inspired by Davidsonian semantics theory, we extract and save knowledge in the form of triplets.
Our framework exhibits robust performance in handling temporal-based question answering tasks.
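A write-read memory over (subject, relation, object) triplets, in the spirit of RET-LLM, can be sketched as below. The class and its partial-match `read` API are invented for illustration and do not reproduce the paper's memory unit.

```python
class TripletMemory:
    """Toy knowledge store: write triplets, read back by partial match."""

    def __init__(self):
        self.store = []

    def write(self, subj, rel, obj):
        self.store.append((subj, rel, obj))

    def read(self, subj=None, rel=None, obj=None):
        """Return triplets matching every field that is not None."""
        return [t for t in self.store
                if (subj is None or t[0] == subj)
                and (rel is None or t[1] == rel)
                and (obj is None or t[2] == obj)]

mem = TripletMemory()
mem.write("Paris", "capital_of", "France")
mem.write("Paris", "located_in", "Europe")
print(mem.read(subj="Paris", rel="capital_of"))  # [('Paris', 'capital_of', 'France')]
```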
arXiv Detail & Related papers (2023-05-23T17:53:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.