Knowledge-Augmented Large Language Models for Personalized Contextual
Query Suggestion
- URL: http://arxiv.org/abs/2311.06318v2
- Date: Mon, 19 Feb 2024 12:05:28 GMT
- Title: Knowledge-Augmented Large Language Models for Personalized Contextual
Query Suggestion
- Authors: Jinheon Baek, Nirupama Chandrasekaran, Silviu Cucerzan, Allen Herring,
Sujay Kumar Jauhar
- Abstract summary: We construct an entity-centric knowledge store for each user based on their search and browsing activities on the web.
This knowledge store is lightweight, since it only produces user-specific aggregate projections of interests and knowledge onto public knowledge graphs.
- Score: 16.563311988191636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) excel at tackling various natural language
tasks. However, due to the significant costs involved in re-training or
fine-tuning them, they remain largely static and difficult to personalize.
Nevertheless, a variety of applications could benefit from generations that are
tailored to users' preferences, goals, and knowledge. Among them is web search,
where knowing what a user is trying to accomplish, what they care about, and
what they know can lead to improved search experiences. In this work, we
propose a novel and general approach that augments an LLM with relevant context
from users' interaction histories with a search engine in order to personalize
its outputs. Specifically, we construct an entity-centric knowledge store for
each user based on their search and browsing activities on the web, which is
then leveraged to provide contextually relevant LLM prompt augmentations. This
knowledge store is lightweight, since it only produces user-specific aggregate
projections of interests and knowledge onto public knowledge graphs, and
leverages existing search log infrastructure, thereby mitigating the privacy,
compliance, and scalability concerns associated with building deep user
profiles for personalization. We validate our approach on the task of
contextual query suggestion, which requires understanding not only the user's
current search context but also what they historically know and care about.
Through a number of experiments based on human evaluation, we show that our
approach is significantly better than several other LLM-powered baselines,
generating query suggestions that are contextually more relevant, personalized,
and useful.
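The pipeline the abstract describes can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the paper's actual implementation): it aggregates entities linked from a user's search history into a lightweight count-based store, then surfaces the top entities as prompt context for query suggestion. The entity linker here is a toy keyword matcher standing in for real entity linking against a public knowledge graph.

```python
from collections import Counter

def build_knowledge_store(history, entity_linker):
    """Aggregate linked entities from logged queries/pages into a count store.

    This is the "user-specific aggregate projection" idea: only entity
    counts are kept, not raw logs or a deep user profile.
    """
    store = Counter()
    for text in history:
        for entity in entity_linker(text):
            store[entity] += 1
    return store

def augment_prompt(current_query, store, top_k=3):
    """Prepend the user's top-k historical entities to a suggestion prompt."""
    top_entities = [entity for entity, _ in store.most_common(top_k)]
    context = ", ".join(top_entities)
    return (
        f"User is interested in: {context}.\n"
        f"Current query: {current_query}\n"
        f"Suggest a contextually relevant next query:"
    )

# Toy entity linker: matches against a tiny fixed entity set (a stand-in
# for linking text spans to a real public knowledge graph).
ENTITIES = {"marathon", "nutrition", "running shoes"}

def toy_linker(text):
    return [entity for entity in ENTITIES if entity in text.lower()]

history = [
    "best running shoes 2024",
    "marathon training plan",
    "marathon nutrition tips",
]
store = build_knowledge_store(history, toy_linker)
prompt = augment_prompt("knee pain after long runs", store)
print(prompt)
```

In this sketch, "marathon" appears twice in the history, so it ranks first in the store and leads the prompt context; the augmented prompt would then be sent to an LLM to generate the personalized suggestion.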
Related papers
- Remember, Retrieve and Generate: Understanding Infinite Visual Concepts as Your Personalized Assistant [53.304699445700926]
We introduce the Retrieval Augmented Personalization (RAP) framework for MLLMs' personalization.
RAP allows real-time concept editing via updating the external database.
RAP-MLLMs can generalize to infinite visual concepts without additional finetuning.
arXiv Detail & Related papers (2024-10-17T09:10:26Z)
- PersonalLLM: Tailoring LLMs to Individual Preferences [11.717169516971856]
We present a public benchmark, PersonalLLM, focusing on adapting LLMs to provide maximal benefits for a particular user.
We curate open-ended prompts paired with many high-quality answers over which users would be expected to display heterogeneous latent preferences.
Our dataset and generated personalities offer an innovative testbed for developing personalization algorithms.
arXiv Detail & Related papers (2024-09-30T13:55:42Z)
- Towards Boosting LLMs-driven Relevance Modeling with Progressive Retrieved Behavior-augmented Prompting [23.61061000692023]
This study proposes leveraging user interactions recorded in search logs to yield insights into users' implicit search intentions.
We propose ProRBP, a novel Progressive Retrieved Behavior-augmented Prompting framework for integrating search scenario-oriented knowledge with Large Language Models.
arXiv Detail & Related papers (2024-08-18T11:07:38Z)
- Persona-DB: Efficient Large Language Model Personalization for Response Prediction with Collaborative Data Refinement [79.2400720115588]
We introduce Persona-DB, a simple yet effective framework consisting of a hierarchical construction process to improve generalization across task contexts.
In the evaluation of response prediction, Persona-DB demonstrates superior context efficiency in maintaining accuracy with a significantly reduced retrieval size.
Our experiments also indicate a marked improvement of over 10% under cold-start scenarios, when users have extremely sparse data.
arXiv Detail & Related papers (2024-02-16T20:20:43Z)
- Generative Multi-Modal Knowledge Retrieval with Large Language Models [75.70313858231833]
We propose an innovative end-to-end generative framework for multi-modal knowledge retrieval.
Our framework takes advantage of the fact that large language models (LLMs) can effectively serve as virtual knowledge bases.
We demonstrate significant improvements ranging from 3.0% to 14.6% across all evaluation metrics when compared to strong baselines.
arXiv Detail & Related papers (2024-01-16T08:44:29Z)
- Beyond ChatBots: ExploreLLM for Structured Thoughts and Personalized Model Responses [35.74453152447319]
ExploreLLM allows users to structure thoughts, help explore different options, navigate through the choices and recommendations, and to more easily steer models to generate more personalized responses.
We conduct a user study and show that users find it helpful to use ExploreLLM for exploratory or planning tasks, because it provides a useful schema-like structure to the task, and guides users in planning.
The study also suggests that users can more easily personalize responses with high-level preferences with ExploreLLM.
arXiv Detail & Related papers (2023-12-01T18:31:28Z)
- DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain Question Answering over Knowledge Base and Text [73.68051228972024]
Large Language Models (LLMs) have exhibited impressive generation capabilities, but they suffer from hallucinations when relying on their internal knowledge.
Retrieval-augmented LLMs have emerged as a potential solution to ground LLMs in external knowledge.
arXiv Detail & Related papers (2023-10-31T04:37:57Z)
- Integrating Summarization and Retrieval for Enhanced Personalization via Large Language Models [11.950478880423733]
Personalization is an essential factor in user experience with natural language processing (NLP) systems.
With the emergence of Large Language Models (LLMs), a key question is how to leverage these models to better personalize user experiences.
We propose a novel summary-augmented personalization with task-aware user summaries generated by LLMs.
arXiv Detail & Related papers (2023-10-30T23:40:41Z)
- Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations [63.19448893196642]
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z)
- Graph Enhanced BERT for Query Understanding [55.90334539898102]
Query understanding plays a key role in uncovering users' search intents and helping users locate the information they seek.
In recent years, pre-trained language models (PLMs) have advanced various natural language processing tasks.
We propose a novel graph-enhanced pre-training framework, GE-BERT, which can leverage both query content and the query graph.
arXiv Detail & Related papers (2022-04-03T16:50:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.