LoFA: Learning to Predict Personalized Priors for Fast Adaptation of Visual Generative Models
- URL: http://arxiv.org/abs/2512.08785v1
- Date: Tue, 09 Dec 2025 16:39:31 GMT
- Title: LoFA: Learning to Predict Personalized Priors for Fast Adaptation of Visual Generative Models
- Authors: Yiming Hao, Mutian Xu, Chongjie Ye, Jie Qin, Shunlin Lu, Yipeng Qin, Xiaoguang Han
- Abstract summary: Methods like Low-Rank Adaptation (LoRA) remain impractical due to their demand for task-specific data and lengthy optimization. We propose LoFA, a general framework that efficiently predicts personalized priors for fast model adaptation. Our method consistently predicts high-quality personalized priors within seconds, across multiple tasks and user prompts, even outperforming conventional LoRA that requires hours of processing.
- Score: 50.46815266062554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personalizing visual generative models to meet specific user needs has gained increasing attention, yet current methods like Low-Rank Adaptation (LoRA) remain impractical due to their demand for task-specific data and lengthy optimization. While a few hypernetwork-based approaches attempt to predict adaptation weights directly, they struggle to map fine-grained user prompts to complex LoRA distributions, limiting their practical applicability. To bridge this gap, we propose LoFA, a general framework that efficiently predicts personalized priors for fast model adaptation. We first identify a key property of LoRA: structured distribution patterns emerge in the relative changes between LoRA and base model parameters. Building on this, we design a two-stage hypernetwork: first predicting relative distribution patterns that capture key adaptation regions, then using these to guide final LoRA weight prediction. Extensive experiments demonstrate that our method consistently predicts high-quality personalized priors within seconds, across multiple tasks and user prompts, even outperforming conventional LoRA that requires hours of processing. Project page: https://jaeger416.github.io/lofa/.
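The abstract builds on the standard LoRA parameterization, in which an adapted weight is W + (alpha/r) * B @ A, and on per-parameter relative changes between LoRA and base parameters. A minimal numpy sketch of these two quantities (the matrix sizes, initialization scales, and the exact ratio used for "relative change" below are illustrative assumptions, not details taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Base weight of a single projection layer (illustrative sizes).
d_out, d_in, r = 64, 64, 4
W = rng.normal(scale=0.02, size=(d_out, d_in))

# LoRA factors: the adapted weight is W + (alpha / r) * B @ A,
# so the update delta has rank at most r.
A = rng.normal(scale=0.01, size=(r, d_in))
B = rng.normal(scale=0.01, size=(d_out, r))
alpha = 8.0

delta = (alpha / r) * B @ A
W_adapted = W + delta

# Per-parameter relative change between adapted and base weights;
# the paper predicts such relative-distribution patterns first to
# locate key adaptation regions (this ratio is a hypothetical form).
rel_change = np.abs(delta) / (np.abs(W) + 1e-8)

print(W_adapted.shape, int(np.linalg.matrix_rank(delta)))
```

A two-stage hypernetwork in this setting would map a user prompt to `rel_change`-like patterns first, then condition the final prediction of `A` and `B` on them.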
Related papers
- GLAD: Generalizable Tuning for Vision-Language Models [41.071911050087586]
We propose a simpler and more general framework called GLAD (Generalizable LoRA tuning with RegulArized GraDient). We show that merely applying LoRA achieves performance on downstream tasks comparable to current state-of-the-art prompt-based methods.
arXiv Detail & Related papers (2025-07-17T12:58:15Z)
- T-LoRA: Single Image Diffusion Model Customization Without Overfitting [2.424910201171407]
This paper tackles the challenging yet highly impactful task of adapting a diffusion model using just a single concept image. We introduce T-LoRA, a Timestep-Dependent Low-Rank Adaptation framework specifically designed for diffusion model personalization. We show that higher diffusion timesteps are more prone to overfitting than lower ones, necessitating a timestep-sensitive fine-tuning strategy.
arXiv Detail & Related papers (2025-07-08T13:14:10Z)
- Mixture of Low Rank Adaptation with Partial Parameter Sharing for Time Series Forecasting [20.505925622104964]
We show that multi-task forecasting suffers from an Expressiveness Bottleneck, where predictions at different time steps share the same representation. We propose MoLA, a two-stage framework: first, pre-train a foundation model for one-step-ahead prediction; then, adapt it using step-specific LoRA modules. Experiments show that MoLA significantly improves model expressiveness and outperforms state-of-the-art time-series forecasting methods.
arXiv Detail & Related papers (2025-05-23T13:24:39Z)
- SD-LoRA: Scalable Decoupled Low-Rank Adaptation for Class Incremental Learning [73.93639228235622]
Continual Learning with foundation models has emerged as a promising paradigm to exploit abundant knowledge acquired during pre-training for tackling sequential tasks. Existing prompt-based and Low-Rank Adaptation-based (LoRA-based) methods often require expanding a prompt/LoRA pool or retaining samples of previous tasks. We propose Scalable Decoupled LoRA (SD-LoRA) for class incremental learning, which continually separates the learning of the magnitude and direction of LoRA components without rehearsal.
arXiv Detail & Related papers (2025-01-22T20:00:41Z)
- Test-Time Alignment via Hypothesis Reweighting [56.71167047381817]
Large pretrained models often struggle with underspecified tasks. We propose a novel framework to address the challenge of aligning models to test-time user intent.
arXiv Detail & Related papers (2024-12-11T23:02:26Z)
- LoRA Diffusion: Zero-Shot LoRA Synthesis for Diffusion Model Personalization [0.0]
Low-Rank Adaptation (LoRA) and other parameter-efficient fine-tuning (PEFT) methods provide low-memory, storage-efficient solutions for personalizing text-to-image models. We show that training a hypernetwork model to generate LoRA weights can achieve competitive quality for specific domains.
arXiv Detail & Related papers (2024-12-03T10:17:15Z)
- Unlocking Tuning-Free Few-Shot Adaptability in Visual Foundation Models by Recycling Pre-Tuned LoRAs [76.40876036912537]
Large Language Models (LLMs) demonstrate strong few-shot adaptability without requiring fine-tuning. Current Visual Foundation Models (VFMs) require explicit fine-tuning with sufficient tuning data. We propose a framework, LoRA Recycle, that distills a meta-LoRA from diverse pre-tuned LoRAs with a meta-learning objective.
arXiv Detail & Related papers (2024-12-03T07:25:30Z)
- IterIS: Iterative Inference-Solving Alignment for LoRA Merging [14.263218227928729]
Low-rank adaptations (LoRAs) are widely used to fine-tune large models across various domains for specific downstream tasks. LoRA merging presents an effective solution by combining multiple LoRAs into a unified adapter while maintaining data privacy.
arXiv Detail & Related papers (2024-11-21T19:04:02Z)
- Lifelong Personalized Low-Rank Adaptation of Large Language Models for Recommendation [50.837277466987345]
We focus on the field of large language models (LLMs) for recommendation.
We propose RecLoRA, which incorporates a Personalized LoRA module that maintains independent LoRAs for different users.
We also design a Few2Many Learning Strategy, using a conventional recommendation model as a lens to magnify small training spaces to full spaces.
arXiv Detail & Related papers (2024-08-07T04:20:28Z)
- Debiased Fine-Tuning for Vision-language Models by Prompt Regularization [56.48290708901531]
We present a new paradigm for fine-tuning large-scale vision pre-trained models on downstream tasks, dubbed Prompt Regularization (ProReg). ProReg uses predictions obtained by prompting the pretrained model to regularize fine-tuning. We show the consistently strong performance of ProReg compared with conventional fine-tuning, zero-shot prompting, prompt tuning, and other state-of-the-art methods.
arXiv Detail & Related papers (2023-01-29T11:53:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.