PRISP: Privacy-Safe Few-Shot Personalization via Lightweight Adaptation
- URL: http://arxiv.org/abs/2601.06471v1
- Date: Sat, 10 Jan 2026 07:34:28 GMT
- Title: PRISP: Privacy-Safe Few-Shot Personalization via Lightweight Adaptation
- Authors: Junho Park, Dohoon Kim, Taesup Moon
- Abstract summary: PRISP is a lightweight and privacy-safe personalization framework. It exploits a Text-to-LoRA hypernetwork to generate task-aware LoRA parameters from task descriptions. Experiments on a few-shot variant of the LaMP benchmark demonstrate that PRISP achieves strong overall performance.
- Score: 21.467360472787593
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language model (LLM) personalization aims to adapt general-purpose models to individual users. Most existing methods, however, are developed under data-rich and resource-abundant settings, often incurring privacy risks. In contrast, realistic personalization typically occurs after deployment under (i) extremely limited user data, (ii) constrained computational resources, and (iii) strict privacy requirements. We propose PRISP, a lightweight and privacy-safe personalization framework tailored to these constraints. PRISP leverages a Text-to-LoRA hypernetwork to generate task-aware LoRA parameters from task descriptions, and enables efficient user personalization by optimizing a small subset of task-aware LoRA parameters together with minimal additional modules using few-shot user data. Experiments on a few-shot variant of the LaMP benchmark demonstrate that PRISP achieves strong overall performance compared to prior approaches, while reducing computational overhead and eliminating privacy risks.
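As a rough illustration of the mechanism described in the abstract, the sketch below shows a toy hypernetwork that maps a task-description embedding to LoRA factors, followed by few-shot tuning of one generated factor on user data. Everything here (the `TaskToLoRA` module, its dimensions, and the choice to tune only the `A` factor) is an assumption for illustration, not the authors' implementation.

```python
# Minimal sketch of the Text-to-LoRA idea behind PRISP (assumed API, not the
# authors' code): a hypernetwork maps a task-description embedding to the
# low-rank LoRA factors A and B for one target weight matrix.
import torch
import torch.nn as nn

class TaskToLoRA(nn.Module):
    def __init__(self, emb_dim=384, d_model=768, rank=8):
        super().__init__()
        self.rank, self.d_model = rank, d_model
        # One shared trunk, two heads: one per LoRA factor.
        self.trunk = nn.Sequential(nn.Linear(emb_dim, 256), nn.ReLU())
        self.head_a = nn.Linear(256, rank * d_model)   # produces A: (rank, d_model)
        self.head_b = nn.Linear(256, d_model * rank)   # produces B: (d_model, rank)

    def forward(self, task_emb):
        h = self.trunk(task_emb)
        A = self.head_a(h).view(self.rank, self.d_model)
        B = self.head_b(h).view(self.d_model, self.rank)
        return A, B

hyper = TaskToLoRA()
task_emb = torch.randn(384)          # stand-in for an encoded task description
A, B = hyper(task_emb)

# Few-shot personalization: freeze the hypernetwork output and tune only a
# small subset of the generated parameters (here: the A factor) on user data.
A = A.detach().requires_grad_(True)
opt = torch.optim.Adam([A], lr=1e-3)
x, target = torch.randn(4, 768), torch.randn(4, 768)
for _ in range(10):
    delta = x @ (B.detach() @ A).T   # LoRA update: W_delta = B @ A
    loss = ((delta - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

Tuning only one generated factor keeps the trainable parameter count small, which is what makes the few-shot, on-device setting plausible.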
Related papers
- Synthetic Interaction Data for Scalable Personalization in Large Language Models [67.31884245564086]
We introduce a high-fidelity synthetic data generation framework called PersonaGym. Unlike prior work that treats personalization as static persona-preference pairs, PersonaGym models a dynamic preference process. We release PersonaAtlas, a large-scale, high-quality, and diverse synthetic dataset of multi-turn personalized interaction trajectories.
arXiv Detail & Related papers (2026-02-12T20:41:22Z) - Operationalizing Data Minimization for Privacy-Preserving LLM Prompting [10.031739933859622]
The use of large language models (LLMs) in consumer applications has led to frequent exchanges of personal information. We present a framework to formally define and operationalize data minimization. We evaluate the framework on four datasets spanning open-ended conversations and knowledge-intensive tasks.
arXiv Detail & Related papers (2025-10-04T04:20:18Z) - CBP-Tuning: Efficient Local Customization for Black-box Large Language Models [23.249724558362136]
We propose CBP-Tuning, a novel framework that facilitates efficient local customization while preserving bidirectional privacy. Specifically, we design a two-stage framework: (1) a prompt generator trained on the server side to capture domain-specific and task-agnostic capabilities, and (2) user-side gradient-free optimization that tailors soft prompts for individual tasks. This approach eliminates the need for users to access model weights or upload private data, requiring only a single customized vector per task while achieving effective adaptation.
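The user-side stage can be pictured as black-box optimization of a soft prompt. Below is a minimal (1+1)-evolution-strategy sketch; the `score` placeholder and the mutation scheme are assumptions standing in for whatever objective and optimizer CBP-Tuning actually uses.

```python
# Gradient-free soft-prompt tuning against a black-box objective.
import numpy as np

def score(soft_prompt: np.ndarray) -> float:
    # Placeholder black-box objective; in practice this would query the
    # remote model with the soft prompt prepended and return a loss.
    return float(np.sum((soft_prompt - 0.5) ** 2))

rng = np.random.default_rng(0)
prompt = rng.normal(size=(8, 64))       # 8 virtual tokens, 64-dim embeddings
best = score(prompt)
sigma = 0.1
for step in range(200):
    candidate = prompt + sigma * rng.normal(size=prompt.shape)
    s = score(candidate)
    if s < best:                        # keep the mutation only if it helps
        prompt, best = candidate, s
print(f"final loss: {best:.4f}")
```

Because only loss values flow back to the user, no gradients (and hence no model weights) ever need to be exposed, which is the bidirectional-privacy point.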
arXiv Detail & Related papers (2025-09-15T16:41:08Z) - Semantic-guided LoRA Parameters Generation [22.648880814012184]
Low-Rank Adaptation (LoRA) has demonstrated strong generalization capabilities across a variety of tasks for efficiently fine-tuning AI models. SG-LoRA is the first framework of its kind to efficiently produce user-specific LoRAs without additional training on user tasks or access to user-specific data. SG-LoRA enables the real-time construction of LoRA models aligned with individual intents by distilling knowledge from prominent LoRA experts.
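One plausible reading of "distilling knowledge from prominent LoRA experts" is a similarity-weighted merge of expert adapters, sketched below. The embeddings, the softmax weighting, and merging the low-rank products are our assumptions, not the paper's exact procedure.

```python
# Similarity-weighted combination of expert LoRAs: no training, no user data.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
d, r, n_experts = 64, 4, 3
expert_desc = rng.normal(size=(n_experts, 16))   # each expert's task embedding
experts = [(rng.normal(size=(d, r)), rng.normal(size=(r, d)))
           for _ in range(n_experts)]            # (B_i, A_i) low-rank factors

user_desc = rng.normal(size=16)                  # embedding of the user's intent
sims = np.array([cosine(user_desc, e) for e in expert_desc])
w = np.exp(sims) / np.exp(sims).sum()            # softmax over similarities

# Weighting the experts' weight updates B_i @ A_i by semantic similarity
# yields a user-specific update built entirely from public experts.
delta_W = sum(w[i] * experts[i][0] @ experts[i][1] for i in range(n_experts))
print(delta_W.shape)   # (64, 64)
```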
arXiv Detail & Related papers (2025-09-05T14:43:41Z) - Personalized Language Models via Privacy-Preserving Evolutionary Model Merging [53.97323896430374]
Personalization in language models aims to tailor model behavior to individual users or user groups. We propose Privacy-Preserving Model Merging via Evolutionary Algorithms (PriME). PriME employs gradient-free methods to directly optimize utility while reducing privacy risks. Experiments on the LaMP benchmark show that PriME consistently outperforms a range of baselines, achieving up to a 45% improvement in task performance.
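The gradient-free optimization can be illustrated with a tiny evolutionary search over model-merging coefficients. The toy fitness function, population size, and mutation noise below are all assumptions made for the sake of a runnable example.

```python
# Evolutionary search over merge coefficients, in the spirit of PriME.
import numpy as np

rng = np.random.default_rng(0)
n_models = 4

def fitness(alpha: np.ndarray) -> float:
    # Stand-in for task utility of the merged model; a real system would
    # merge parameters with weights `alpha` and evaluate on user data.
    target = np.array([0.1, 0.4, 0.3, 0.2])
    return -float(np.sum((alpha - target) ** 2))

pop = rng.dirichlet(np.ones(n_models), size=16)       # candidate weightings
for gen in range(50):
    scores = np.array([fitness(a) for a in pop])
    parents = pop[np.argsort(scores)[-4:]]            # keep the top 4
    children = parents.repeat(4, axis=0) + 0.05 * rng.normal(size=(16, n_models))
    children = np.abs(children)
    pop = children / children.sum(axis=1, keepdims=True)  # back onto the simplex
best = pop[np.argmax([fitness(a) for a in pop])]
print("merge weights:", np.round(best, 3))
```

Since only fitness evaluations are needed, the search never touches gradients of the merged model, which is what keeps the approach gradient-free.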
arXiv Detail & Related papers (2025-03-23T09:46:07Z) - FedSpaLLM: Federated Pruning of Large Language Models [8.45879077052023]
Large Language Models (LLMs) achieve state-of-the-art performance but are challenging to deploy due to their high computational and storage demands. We propose FedSpaLLM, the first federated learning framework designed specifically for pruning LLMs.
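A federated pruning round might look like the sketch below: each client scores weights on private data and ships only a binary mask, which the server aggregates. The magnitude-times-activation score and the majority vote are illustrative stand-ins, not FedSpaLLM's actual aggregation rule.

```python
# Toy federated pruning: private data stays local, only masks are shared.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 16))          # shared layer weights
sparsity, n_clients = 0.5, 5

client_masks = []
for c in range(n_clients):
    local_acts = rng.normal(size=(32, 16))          # private activations stay local
    score = np.abs(W) * np.linalg.norm(local_acts, axis=0)  # per-weight importance
    thresh = np.quantile(score, sparsity)
    client_masks.append(score >= thresh)            # only the mask is shared

votes = np.sum(client_masks, axis=0)
global_mask = votes >= (n_clients // 2 + 1)         # majority vote on the server
W_pruned = W * global_mask
print(f"kept {global_mask.mean():.0%} of weights")
```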
arXiv Detail & Related papers (2024-10-18T20:33:12Z) - A Middle Path for On-Premises LLM Deployment: Preserving Privacy Without Sacrificing Model Confidentiality [20.646221081945523]
Privacy-sensitive users need to deploy large language models (LLMs) within their own infrastructure (on-premises) to safeguard private data and enable customization. Previous research on small models has explored securing only the output layer within hardware-secured devices to balance model confidentiality and customization. We propose SOLID, a novel deployment framework that secures a few bottom layers in a secure environment and introduces an efficient metric to optimize the trade-off.
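The split-deployment idea can be sketched as follows; the `SecureEnclave` class is a hypothetical placeholder for TEE execution, not SOLID's API.

```python
# Sketch of split deployment: a few bottom layers run inside a trusted
# environment, the remaining layers run on the user's own hardware.
import torch
import torch.nn as nn

class SecureEnclave:
    """Stand-in for hardware-secured execution of vendor-confidential layers."""
    def __init__(self, layers: nn.Module):
        self._layers = layers.eval()          # weights never leave this object
    def forward(self, x):
        with torch.no_grad():
            return self._layers(x)

bottom = nn.Sequential(nn.Linear(64, 64), nn.ReLU())                  # confidential
top = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))  # customizable

enclave = SecureEnclave(bottom)
x = torch.randn(2, 64)                 # private user input stays on-premises
hidden = enclave.forward(x)            # only activations cross the boundary
logits = top(hidden)                   # user may fine-tune these layers freely
print(logits.shape)
```

Securing the bottom layers rather than the output layer means an attacker holding the exposed layers still lacks the representations the model was built on, which is the confidentiality side of the trade-off.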
arXiv Detail & Related papers (2024-10-15T02:00:36Z) - Lifelong Personalized Low-Rank Adaptation of Large Language Models for Recommendation [50.837277466987345]
We focus on the field of large language models (LLMs) for recommendation.
We propose RecLoRA, which incorporates a Personalized LoRA module that maintains independent LoRAs for different users.
We also design a Few2Many Learning Strategy, using a conventional recommendation model as a lens to magnify small training spaces to full spaces.
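The "independent LoRAs for different users" design could be realized as a lazily populated per-user adapter registry, as in this sketch (the `PerUserLoRA` module and its dimensions are assumptions).

```python
# Minimal sketch of keeping an independent LoRA per user.
import torch
import torch.nn as nn

class PerUserLoRA(nn.Module):
    def __init__(self, d_model=64, rank=4):
        super().__init__()
        self.d_model, self.rank = d_model, rank
        self.users = nn.ModuleDict()          # user_id -> that user's LoRA

    def _get(self, user_id: str) -> nn.Module:
        if user_id not in self.users:
            self.users[user_id] = nn.Sequential(
                nn.Linear(self.d_model, self.rank, bias=False),   # A
                nn.Linear(self.rank, self.d_model, bias=False),   # B
            )
        return self.users[user_id]

    def forward(self, x, user_id: str):
        return x + self._get(user_id)(x)      # base path + user-specific update

layer = PerUserLoRA()
x = torch.randn(2, 64)
print(layer(x, "user_42").shape)              # a fresh LoRA is created lazily
```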
arXiv Detail & Related papers (2024-08-07T04:20:28Z) - Robust Utility-Preserving Text Anonymization Based on Large Language Models [80.5266278002083]
Anonymizing text that contains sensitive information is crucial for a wide range of applications. Existing techniques face the emerging challenge posed by the re-identification abilities of large language models. We propose a framework composed of three key components: a privacy evaluator, a utility evaluator, and an optimization component.
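The evaluator-driven loop might look like the toy sketch below, where regex redaction stands in for the paper's LLM-based rewriting and the two scoring functions stand in for its privacy and utility evaluators.

```python
# Toy anonymization loop: redact until privacy risk is low, utility permitting.
import re

def privacy_risk(text: str) -> float:
    # Toy proxy: fraction of tokens that look like names, emails, or digits.
    tokens = text.split()
    risky = [t for t in tokens if re.search(r"[A-Z][a-z]+|\d|@", t)]
    return len(risky) / max(len(tokens), 1)

def utility(text: str) -> float:
    return len(text.split()) / 20.0           # toy proxy: retained content

def anonymize_step(text: str) -> str:
    text = re.sub(r"\S+@\S+", "[EMAIL]", text)
    text = re.sub(r"\b[A-Z][a-z]+\b", "[NAME]", text, count=1)
    return re.sub(r"\d+", "[NUM]", text)

doc = "Contact Alice Smith at alice@example.com or call 555 0199."
while privacy_risk(doc) > 0.2 and utility(doc) > 0.3:
    doc = anonymize_step(doc)                 # optimize: reduce risk, keep utility
print(doc)
```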
arXiv Detail & Related papers (2024-07-16T14:28:56Z) - Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
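The distinction from example-level DP is easiest to see in code: clip each user's aggregated gradient before noising, so the guarantee covers all of a user's examples at once. The clip norm and noise multiplier below are arbitrary illustrative values.

```python
# User-level DP update: per-user clipping, then Gaussian noise.
import torch

clip_norm, noise_mult = 1.0, 0.8
user_grads = [torch.randn(10) for _ in range(5)]   # one summed gradient per user

clipped = []
for g in user_grads:
    scale = min(1.0, clip_norm / (g.norm() + 1e-12))
    clipped.append(g * scale)                      # per-user norm <= clip_norm

noisy_mean = (torch.stack(clipped).sum(0)
              + noise_mult * clip_norm * torch.randn(10)) / len(user_grads)
# `noisy_mean` is the privatized update applied to the model parameters.
```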
arXiv Detail & Related papers (2024-06-20T13:54:32Z) - The Data Minimization Principle in Machine Learning [61.17813282782266]
Data minimization aims to reduce the amount of data collected, processed or retained.
It has been endorsed by various global data protection regulations.
However, its practical implementation remains a challenge due to the lack of a rigorous formulation.
arXiv Detail & Related papers (2024-05-29T19:40:27Z) - Unsupervised Model Personalization while Preserving Privacy and Scalability: An Open Problem [55.21502268698577]
This work investigates the task of unsupervised model personalization, adapted to continually evolving, unlabeled local user images.
We provide a novel Dual User-Adaptation framework (DUA) to explore the problem.
This framework flexibly disentangles user-adaptation into model personalization on the server and local data regularization on the user device.
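A minimal sketch of the device-side half, assuming entropy minimization as the local regularizer (the paper's actual regularizer may differ):

```python
# Device-side adaptation on unlabeled local images: no labels, no upload.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

local_images = torch.randn(8, 3, 32, 32)   # private data stays on the device
for _ in range(5):
    probs = F.softmax(model(local_images), dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(1).mean()
    opt.zero_grad(); entropy.backward(); opt.step()   # sharpen predictions locally
```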
arXiv Detail & Related papers (2020-03-30T09:35:12Z)