Instant Personalized Large Language Model Adaptation via Hypernetwork
- URL: http://arxiv.org/abs/2510.16282v1
- Date: Sat, 18 Oct 2025 00:41:25 GMT
- Title: Instant Personalized Large Language Model Adaptation via Hypernetwork
- Authors: Zhaoxuan Tan, Zixuan Zhang, Haoyang Wen, Zheng Li, Rongzhi Zhang, Pei Chen, Fengran Mo, Zheyuan Liu, Qingkai Zeng, Qingyu Yin, Meng Jiang
- Abstract summary: Profile-to-PEFT is a scalable framework that employs a hypernetwork, trained end-to-end, to map a user's encoded profile directly to a full set of adapter parameters. We show that our method outperforms both prompt-based personalization and OPPU while using substantially fewer computational resources at deployment.
- Score: 56.512539596908745
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personalized large language models (LLMs) tailor content to individual preferences using user profiles or histories. However, existing parameter-efficient fine-tuning (PEFT) methods, such as the "One-PEFT-Per-User" (OPPU) paradigm, require training a separate adapter for each user, making them computationally expensive and impractical for real-time updates. We introduce Profile-to-PEFT, a scalable framework that employs a hypernetwork, trained end-to-end, to map a user's encoded profile directly to a full set of adapter parameters (e.g., LoRA), eliminating per-user training at deployment. This design enables instant adaptation, generalization to unseen users, and privacy-preserving local deployment. Experimental results demonstrate that our method outperforms both prompt-based personalization and OPPU while using substantially fewer computational resources at deployment. The framework exhibits strong generalization to out-of-distribution users and maintains robustness across varying user activity levels and different embedding backbones. The proposed Profile-to-PEFT framework enables efficient, scalable, and adaptive LLM personalization suitable for large-scale applications.
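To make the core mechanism concrete, below is a minimal PyTorch sketch of a hypernetwork that maps an encoded user profile to the A and B matrices of a LoRA adapter for a single target layer. The class name, the two-layer MLP trunk, and all dimensions are illustrative assumptions; the paper's exact architecture and training objective may differ.

```python
# Minimal sketch of the Profile-to-PEFT idea: a hypernetwork that maps a
# user's profile embedding to LoRA adapter weights. Module names, dimensions,
# and the MLP design are assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

class LoRAHypernetwork(nn.Module):
    def __init__(self, profile_dim: int, hidden_dim: int,
                 in_features: int, out_features: int, rank: int = 8):
        super().__init__()
        self.rank = rank
        self.in_features = in_features
        self.out_features = out_features
        # Shared trunk over the encoded user profile.
        self.trunk = nn.Sequential(
            nn.Linear(profile_dim, hidden_dim),
            nn.GELU(),
        )
        # Separate heads emit the flattened LoRA A and B matrices.
        self.head_a = nn.Linear(hidden_dim, rank * in_features)
        self.head_b = nn.Linear(hidden_dim, out_features * rank)

    def forward(self, profile_emb: torch.Tensor):
        h = self.trunk(profile_emb)                # (batch, hidden_dim)
        lora_a = self.head_a(h).view(-1, self.rank, self.in_features)
        lora_b = self.head_b(h).view(-1, self.out_features, self.rank)
        return lora_a, lora_b

# Usage: one forward pass replaces per-user fine-tuning at deployment.
hyper = LoRAHypernetwork(profile_dim=768, hidden_dim=512,
                         in_features=4096, out_features=4096)
profile = torch.randn(1, 768)                      # encoded user profile
A, B = hyper(profile)
delta_w = B @ A                                    # (1, 4096, 4096) low-rank update
```

Because the adapter weights come from a single forward pass of the hypernetwork, adapting to a new or unseen user costs one inference step rather than a gradient-based fine-tuning run, which is what enables the instant, per-user adaptation the abstract describes.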
Related papers
- Synthetic Interaction Data for Scalable Personalization in Large Language Models [67.31884245564086]
We introduce a high-fidelity synthetic data generation framework called PersonaGym. Unlike prior work that treats personalization as static persona-preference pairs, PersonaGym models a dynamic preference process. We release PersonaAtlas, a large-scale, high-quality, and diverse synthetic dataset of multi-turn personalized interaction trajectories.
arXiv Detail & Related papers (2026-02-12T20:41:22Z) - Don't Start Over: A Cost-Effective Framework for Migrating Personalized Prompts Between LLMs [51.79252689855809]
Personalization in Large Language Models (LLMs) often relies on user-specific soft prompts. We propose the Prompt-level User Migration Adapter (PUMA), a framework to efficiently migrate personalized prompts across incompatible models. Experiments on three large-scale datasets show our method matches or even surpasses the performance of retraining from scratch, reducing computational cost by up to 98%.
arXiv Detail & Related papers (2026-01-17T12:30:31Z) - PersonaAgent: When Large Language Model Agents Meet Personalization at Test Time [87.99027488664282]
PersonaAgent is a framework designed to address versatile personalization tasks. It integrates a personalized memory module and a personalized action module. A test-time alignment strategy keeps the agent's behavior aligned with user preferences in real time.
arXiv Detail & Related papers (2025-06-06T17:29:49Z) - Flextron: Many-in-One Flexible Large Language Model [85.93260172698398]
We introduce Flextron, a network architecture and post-training model optimization framework supporting flexible model deployment.
We present a sample-efficient training method and associated routing algorithms for transforming an existing trained LLM into a Flextron model.
We demonstrate superior performance over multiple end-to-end trained variants and other state-of-the-art elastic networks, all with a single pretraining run that consumes a mere 7.63% of the tokens used in the original pretraining.
arXiv Detail & Related papers (2024-06-11T01:16:10Z) - Dual-Personalizing Adapter for Federated Foundation Models [35.863585349109385]
We propose a Federated Dual-Personalizing Adapter architecture to handle personalization and test-time distribution shifts simultaneously. The effectiveness of the proposed method has been evaluated on benchmark datasets across different NLP tasks.
arXiv Detail & Related papers (2024-03-28T08:19:33Z) - Democratizing Large Language Models via Personalized Parameter-Efficient Fine-tuning [36.88126051792774]
Personalization in large language models (LLMs) is increasingly important. One PEFT Per User (OPPU) employs personalized parameter-efficient fine-tuning (PEFT) modules to store user-specific behavior patterns and preferences. OPPU significantly outperforms existing prompt-based methods across seven diverse tasks in the LaMP benchmark.
arXiv Detail & Related papers (2024-02-06T21:03:52Z) - Profit: Benchmarking Personalization and Robustness Trade-off in Federated Prompt Tuning [40.16581292336117]
In many applications of federated learning (FL), clients desire models that are personalized using their local data, yet are also robust in the sense that they retain general global knowledge.
It is critical to understand how to navigate this personalization vs. robustness trade-off when designing federated systems.
arXiv Detail & Related papers (2023-10-06T23:46:33Z) - Efficient Model Personalization in Federated Learning via Client-Specific Prompt Generation [38.42808389088285]
Federated learning (FL) is a decentralized learning framework that trains models across multiple distributed clients without sharing their data, preserving privacy.
We propose pFedPG, a novel personalized FL framework based on client-specific Prompt Generation. pFedPG learns to deploy a personalized prompt generator at the server that produces client-specific visual prompts, efficiently adapting frozen backbones to local data distributions.
arXiv Detail & Related papers (2023-08-29T15:03:05Z) - Visual Prompt Based Personalized Federated Learning [83.04104655903846]
We propose pFedPT, a novel PFL framework for image classification tasks that leverages personalized visual prompts to implicitly represent clients' local data distributions.
Experiments on the CIFAR10 and CIFAR100 datasets show that pFedPT outperforms several state-of-the-art (SOTA) PFL algorithms by a large margin in various settings.
arXiv Detail & Related papers (2023-03-15T15:02:15Z) - Unsupervised Model Personalization while Preserving Privacy and Scalability: An Open Problem [55.21502268698577]
This work investigates the task of unsupervised model personalization, adapted to continually evolving, unlabeled local user images.
We provide a novel Dual User-Adaptation framework (DUA) to explore the problem.
This framework flexibly disentangles user-adaptation into model personalization on the server and local data regularization on the user device.
arXiv Detail & Related papers (2020-03-30T09:35:12Z)