HYDRA: Model Factorization Framework for Black-Box LLM Personalization
- URL: http://arxiv.org/abs/2406.02888v3
- Date: Fri, 25 Oct 2024 21:01:05 GMT
- Title: HYDRA: Model Factorization Framework for Black-Box LLM Personalization
- Authors: Yuchen Zhuang, Haotian Sun, Yue Yu, Rushi Qiang, Qifan Wang, Chao Zhang, Bo Dai
- Abstract summary: Personalization has emerged as a critical research area in modern intelligent systems.
Despite the remarkable few-shot capabilities exhibited by black-box large language models (LLMs), the inherent opacity of their model parameters presents significant challenges in aligning the generated output with individual expectations.
We propose HYDRA, a model factorization framework that captures both user-specific behavior patterns from historical data and shared general knowledge among all users to deliver personalized generation.
- Score: 36.21602686505842
- Abstract: Personalization has emerged as a critical research area in modern intelligent systems, focusing on mining users' behavioral history and adapting to their preferences for delivering tailored experiences. Despite the remarkable few-shot capabilities exhibited by black-box large language models (LLMs), the inherent opacity of their model parameters presents significant challenges in aligning the generated output with individual expectations. Existing solutions have primarily focused on prompt design to incorporate user-specific profiles and behaviors; however, such approaches often struggle to generalize effectively due to their inability to capture shared knowledge among all users. To address these challenges, we propose HYDRA, a model factorization framework that captures both user-specific behavior patterns from historical data and shared general knowledge among all users to deliver personalized generation. In order to capture user-specific behavior patterns, we first train a reranker to prioritize the most useful information from top-retrieved relevant historical records. By combining the prioritized history with the corresponding query, we train an adapter to align the output with individual user-specific preferences, eliminating the reliance on access to inherent model parameters of black-box LLMs. Both the reranker and the adapter can be decomposed into a base model with multiple user-specific heads, resembling a hydra. The base model maintains shared knowledge across users, while the multiple personal heads capture user-specific preferences. Experimental results demonstrate that HYDRA outperforms existing state-of-the-art prompt-based methods by an average relative improvement of 9.01% across five diverse personalization tasks in the LaMP benchmark. Our implementation is available at https://github.com/night-chen/HYDRA.
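The factorization the abstract describes, a shared base model carrying general knowledge plus lightweight user-specific heads carrying individual preferences, lends itself to a short sketch. The PyTorch class below is a minimal, hypothetical rendering of that idea for the reranker (the adapter would be factorized analogously); class names, dimensions, and layer choices are illustrative assumptions, not HYDRA's actual implementation, which lives in the linked repository.

```python
import torch
import torch.nn as nn

class HydraStyleReranker(nn.Module):
    """Shared base + per-user heads, mirroring the abstract's factorization.

    The base encodes a (query, history-record) pair feature vector into a
    shared representation; each user-specific head maps it to a relevance
    score. All dimensions and layers here are illustrative assumptions.
    """

    def __init__(self, input_dim: int, hidden_dim: int, user_ids: list[str]):
        super().__init__()
        # Base model: shared general knowledge across all users.
        self.base = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        # One lightweight head per user: user-specific preferences.
        self.heads = nn.ModuleDict(
            {uid: nn.Linear(hidden_dim, 1) for uid in user_ids}
        )

    def forward(self, features: torch.Tensor, user_id: str) -> torch.Tensor:
        shared = self.base(features)        # shared knowledge
        return self.heads[user_id](shared)  # personal preference score


# Usage: prioritize three retrieved history records for user "u1".
model = HydraStyleReranker(input_dim=64, hidden_dim=32, user_ids=["u1", "u2"])
records = torch.randn(3, 64)               # pre-encoded (query, record) pairs
scores = model(records, user_id="u1").squeeze(-1)
ranked = records[scores.argsort(descending=True)]  # prioritized history
```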
Related papers
- GaVaMoE: Gaussian-Variational Gated Mixture of Experts for Explainable Recommendation [55.769720670731516]
GaVaMoE is a novel framework for explainable recommendation.
It generates tailored explanations for specific user types and preferences.
It exhibits robust performance in scenarios with sparse user-item interactions.
arXiv Detail & Related papers (2024-10-15T17:59:30Z)
- PersonalLLM: Tailoring LLMs to Individual Preferences [11.717169516971856]
We present a public benchmark, PersonalLLM, focusing on adapting LLMs to provide maximal benefits for a particular user.
We curate open-ended prompts paired with many high-quality answers over which users would be expected to display heterogeneous latent preferences.
Our dataset and generated personalities offer an innovative testbed for developing personalization algorithms.
arXiv Detail & Related papers (2024-09-30T13:55:42Z)
- LLMs + Persona-Plug = Personalized LLMs [41.60364110693824]
Personalization plays a critical role in numerous language tasks and applications, since users with the same requirements may prefer diverse outputs based on their individual interests.
This has led to the development of various personalized approaches aimed at adapting large language models (LLMs) to generate customized outputs aligned with user preferences.
We propose a novel personalized LLM model that constructs a user-specific embedding for each individual by modeling all of their historical contexts through a lightweight plug-in user embedder module.
arXiv Detail & Related papers (2024-09-18T11:54:45Z)
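To make the plug-in user embedder from the Persona-Plug entry above concrete, here is a minimal sketch: attention-pool pre-encoded history items into a single user embedding that could be prepended to a frozen LLM's input as a soft prompt. The architecture, pooling scheme, and dimensions are assumptions for illustration, not the paper's actual module.

```python
import torch
import torch.nn as nn

class PlugInUserEmbedder(nn.Module):
    """Compress a user's full history into one embedding (hypothetical sketch).

    Each history item is assumed to be pre-encoded as a vector; attention
    pooling aggregates all of them into a single user embedding projected
    into the (frozen) LLM's input space.
    """

    def __init__(self, item_dim: int, llm_dim: int):
        super().__init__()
        self.attn_score = nn.Linear(item_dim, 1)     # attention over history
        self.project = nn.Linear(item_dim, llm_dim)  # map into LLM space

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (num_items, item_dim), all of the user's past contexts
        weights = torch.softmax(self.attn_score(history), dim=0)
        pooled = (weights * history).sum(dim=0)      # weighted average
        return self.project(pooled)                  # (llm_dim,) user embedding


embedder = PlugInUserEmbedder(item_dim=128, llm_dim=256)
user_embedding = embedder(torch.randn(20, 128))      # 20 historical contexts
print(user_embedding.shape)                          # torch.Size([256])
```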
- PEFT-U: Parameter-Efficient Fine-Tuning for User Personalization [9.594958534074074]
We introduce the PEFT-U Benchmark: a new dataset for building and evaluating NLP models for user personalization.
We explore the challenge of efficiently personalizing LLMs to accommodate user-specific preferences in the context of diverse user-centered tasks.
arXiv Detail & Related papers (2024-07-25T14:36:18Z)
- Towards Unified Multi-Modal Personalization: Large Vision-Language Models for Generative Recommendation and Beyond [87.1712108247199]
Our goal is to establish a Unified paradigm for Multi-modal Personalization systems (UniMP).
We develop a generic and personalized generative framework that can handle a wide range of personalized needs.
Our methodology enhances the capabilities of foundational language models for personalized tasks.
arXiv Detail & Related papers (2024-03-15T20:21:31Z)
- Democratizing Large Language Models via Personalized Parameter-Efficient Fine-tuning [36.88126051792774]
Personalization in large language models (LLMs) is increasingly important.
One PEFT Per User (OPPU) employs personalized parameter-efficient fine-tuning (PEFT) modules to store user-specific behavior patterns and preferences.
OPPU significantly outperforms existing prompt-based methods across seven diverse tasks in the LaMP benchmark.
arXiv Detail & Related papers (2024-02-06T21:03:52Z)
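A minimal sketch of the "one PEFT module per user" idea from the OPPU entry above, using per-user LoRA adapters attached to a single frozen linear layer. The rank, initialization, and scope are illustrative assumptions rather than OPPU's actual configuration.

```python
import torch
import torch.nn as nn

class PerUserLoRALinear(nn.Module):
    """One frozen shared linear layer plus one low-rank adapter per user.

    Mirrors per-user parameter-efficient fine-tuning at the level of a
    single layer: only each user's A/B pair is trainable.
    """

    def __init__(self, dim: int, rank: int, user_ids: list[str]):
        super().__init__()
        self.shared = nn.Linear(dim, dim)
        self.shared.weight.requires_grad_(False)  # base model stays frozen
        self.shared.bias.requires_grad_(False)
        # Low-rank A/B pair per user; B starts at zero so the adapter
        # initially leaves the frozen layer's output unchanged.
        self.lora_a = nn.ParameterDict(
            {u: nn.Parameter(torch.randn(rank, dim) * 0.01) for u in user_ids}
        )
        self.lora_b = nn.ParameterDict(
            {u: nn.Parameter(torch.zeros(dim, rank)) for u in user_ids}
        )

    def forward(self, x: torch.Tensor, user_id: str) -> torch.Tensor:
        delta = x @ self.lora_a[user_id].T @ self.lora_b[user_id].T
        return self.shared(x) + delta


layer = PerUserLoRALinear(dim=64, rank=4, user_ids=["u1", "u2"])
out = layer(torch.randn(2, 64), user_id="u1")  # user-specific forward pass
```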
- Attention Weighted Mixture of Experts with Contrastive Learning for Personalized Ranking in E-commerce [21.7796124109]
We propose Attention Weighted Mixture of Experts (AW-MoE) with contrastive learning for personalized ranking.
AW-MoE has been successfully deployed in the JD e-commerce search engine.
arXiv Detail & Related papers (2023-06-08T07:59:08Z)
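The AW-MoE entry above names an attention-weighted gate over experts. The sketch below shows one plausible reading: a gate scores the experts from the user representation and mixes expert outputs by the resulting softmax weights. It omits the contrastive-learning component, and all details are assumptions rather than the deployed model.

```python
import torch
import torch.nn as nn

class AttentionWeightedMoE(nn.Module):
    """Mixture of experts whose gate attends to user behavior features.

    Expert weights come from the user representation, so different users
    mix the experts differently for the same item (a simplified reading).
    """

    def __init__(self, dim: int, num_experts: int):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
             for _ in range(num_experts)]
        )
        self.gate = nn.Linear(dim, num_experts)  # attention-style scores

    def forward(self, user_repr: torch.Tensor,
                item_repr: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(user_repr), dim=-1)        # (E,)
        outputs = torch.stack([e(item_repr) for e in self.experts])  # (E, dim)
        return (weights.unsqueeze(-1) * outputs).sum(dim=0)          # (dim,)


moe = AttentionWeightedMoE(dim=32, num_experts=4)
fused = moe(torch.randn(32), torch.randn(32))  # personalized ranking feature
```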
- Combining Diverse Feature Priors [90.74601233745047]
We show that models trained with diverse sets of feature priors have less overlapping failure modes.
We also demonstrate that jointly training such models on additional (unlabeled) data allows them to correct each other's mistakes.
arXiv Detail & Related papers (2021-10-15T17:31:10Z)
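The "correct each other's mistakes" mechanism in the entry above is essentially co-training. Below is a small sketch under simplified assumptions: two classifiers with disjoint feature views exchange confident pseudo-labels on unlabeled data. The synthetic data, disjoint-view stand-in for feature priors, and confidence threshold are all illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Two "feature priors" simulated as disjoint feature views of the same data.
# In the paper the priors come from training choices; the view split here
# is only an illustrative stand-in.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 10))
y = (X[:, 0] + X[:, 5] > 0).astype(int)
X_lab, y_lab, X_unlab = X[:100], y[:100], X[100:]

view_a, view_b = slice(0, 5), slice(5, 10)   # each model's feature prior
model_a = LogisticRegression().fit(X_lab[:, view_a], y_lab)
model_b = LogisticRegression().fit(X_lab[:, view_b], y_lab)

# Co-training round: each model labels the unlabeled pool for the other,
# keeping only its most confident predictions.
for model, view, other, other_view in [
    (model_a, view_a, model_b, view_b),
    (model_b, view_b, model_a, view_a),
]:
    probs = model.predict_proba(X_unlab[:, view]).max(axis=1)
    confident = probs > 0.9
    pseudo_y = model.predict(X_unlab[confident][:, view])
    other.fit(
        np.vstack([X_lab[:, other_view], X_unlab[confident][:, other_view]]),
        np.concatenate([y_lab, pseudo_y]),
    )
```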
- Unsupervised Model Personalization while Preserving Privacy and Scalability: An Open Problem [55.21502268698577]
This work investigates the task of unsupervised model personalization, adapted to continually evolving, unlabeled local user images.
We provide a novel Dual User-Adaptation framework (DUA) to explore the problem.
This framework flexibly disentangles user-adaptation into model personalization on the server and local data regularization on the user device.
arXiv Detail & Related papers (2020-03-30T09:35:12Z)
- MetaSelector: Meta-Learning for Recommendation with User-Level Adaptive Model Selection [110.87712780017819]
We propose a meta-learning framework to facilitate user-level adaptive model selection in recommender systems.
We conduct experiments on two public datasets and a real-world production dataset.
arXiv Detail & Related papers (2020-01-22T16:05:01Z)