Zero-Shot Adaptation of Parameter-Efficient Fine-Tuning in Diffusion Models
- URL: http://arxiv.org/abs/2506.04244v1
- Date: Thu, 29 May 2025 20:37:04 GMT
- Title: Zero-Shot Adaptation of Parameter-Efficient Fine-Tuning in Diffusion Models
- Authors: Farzad Farhadzadeh, Debasmit Das, Shubhankar Borse, Fatih Porikli
- Abstract summary: We introduce ProLoRA, enabling zero-shot adaptation of parameter-efficient fine-tuning in text-to-image diffusion models. ProLoRA transfers pre-trained low-rank adjustments from a source to a target model without additional training data.
- Score: 48.22550575107633
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce ProLoRA, enabling zero-shot adaptation of parameter-efficient fine-tuning in text-to-image diffusion models. ProLoRA transfers pre-trained low-rank adjustments (e.g., LoRA) from a source to a target model without additional training data. This overcomes the limitation of traditional methods, which require retraining when switching base models, a step that is often challenging due to data constraints. ProLoRA achieves this via projection of source adjustments into the target model's weight space, leveraging subspace and null space similarities and selectively targeting aligned layers. Evaluations on established text-to-image models demonstrate successful knowledge transfer and comparable performance without retraining.
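As a rough picture of the projection step described in the abstract, the PyTorch sketch below projects a source LoRA update onto the dominant singular subspaces of an aligned target layer and re-factorizes the result into low-rank form. This is an illustrative reading under assumed tensor shapes, not the authors' implementation: the function name, the purely SVD-based projection, and the rank truncation are assumptions, and the null-space reasoning and layer-selection steps mentioned in the abstract are omitted.

```python
import torch

def project_lora_to_target(W_tgt, A, B, k=64):
    """Hypothetical sketch: transfer a LoRA update (B @ A) onto a target layer.

    W_tgt: base weight of the aligned target layer, shape (d_out, d_in)
    A:     LoRA down-projection, shape (r, d_in)
    B:     LoRA up-projection,   shape (d_out, r)
    k:     number of leading singular directions of W_tgt to keep (assumed)
    """
    delta_src = B @ A                                  # source update in full weight space

    # Dominant row/column subspaces of the target layer from its SVD.
    U, S, Vh = torch.linalg.svd(W_tgt, full_matrices=False)
    U, Vh = U[:, :k], Vh[:k, :]

    # Project the source update onto the target's dominant subspaces.
    delta_tgt = U @ (U.T @ delta_src @ Vh.T) @ Vh      # (d_out, d_in)

    # Re-factorize the projected update back into LoRA form at the original rank.
    r = A.shape[0]
    Ud, Sd, Vhd = torch.linalg.svd(delta_tgt, full_matrices=False)
    B_new = Ud[:, :r] * Sd[:r]                         # (d_out, r)
    A_new = Vhd[:r, :]                                 # (r, d_in)
    return A_new, B_new
```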
Related papers
- T-LoRA: Single Image Diffusion Model Customization Without Overfitting [2.424910201171407]
This paper tackles the challenging yet most impactful task of adapting a diffusion model using just a single concept image. We introduce T-LoRA, a Timestep-Dependent Low-Rank Adaptation framework specifically designed for diffusion model personalization. We show that higher diffusion timesteps are more prone to overfitting than lower ones, necessitating a timestep-sensitive fine-tuning strategy.
arXiv Detail & Related papers (2025-07-08T13:14:10Z)
- Unsupervised Parameter Efficient Source-free Post-pretraining [52.27955794126508]
We introduce UpStep, an Unsupervised Source-free post-pretraining approach to adapt a base model from a source domain to a target domain. We use various general backbone architectures, both supervised and unsupervised, trained on ImageNet as our base models.
arXiv Detail & Related papers (2025-02-28T18:54:51Z)
- LoRA-X: Bridging Foundation Models with Training-Free Cross-Model Adaptation [48.22550575107633]
A new adapter, Cross-Model Low-Rank Adaptation (LoRA-X), enables the training-free transfer of LoRA parameters across source and target models. Our experiments demonstrate the effectiveness of LoRA-X for text-to-image generation.
arXiv Detail & Related papers (2025-01-27T23:02:24Z)
- LoRA Diffusion: Zero-Shot LoRA Synthesis for Diffusion Model Personalization [0.0]
Low-Rank Adaptation (LoRA) and other parameter-efficient fine-tuning (PEFT) methods provide low-memory, storage-efficient solutions for personalizing text-to-image models. We show that training a hypernetwork model to generate LoRA weights can achieve competitive quality for specific domains.
arXiv Detail & Related papers (2024-12-03T10:17:15Z)
- Learning on LoRAs: GL-Equivariant Processing of Low-Rank Weight Spaces for Large Finetuned Models [38.197552424549514]
Low-rank adaptations (LoRAs) have revolutionized the finetuning of large foundation models.
LoRAs present opportunities for applying machine learning techniques that take these low-rank weights themselves as inputs.
In this paper, we investigate the potential of Learning on LoRAs (LoL), a paradigm where LoRA weights serve as input to machine learning models.
arXiv Detail & Related papers (2024-10-05T15:52:47Z)
- AutoLoRA: AutoGuidance Meets Low-Rank Adaptation for Diffusion Models [0.9514837871243403]
Low-rank adaptation (LoRA) is a fine-tuning technique that can be applied to conditional generative diffusion models.
We introduce AutoLoRA, a novel guidance technique for diffusion models fine-tuned with the LoRA approach.
arXiv Detail & Related papers (2024-10-04T21:57:11Z)
- SaRA: High-Efficient Diffusion Model Fine-tuning with Progressive Sparse Low-Rank Adaptation [52.6922833948127]
In this work, we investigate the importance of parameters in pre-trained diffusion models and identify those that are ineffective for generation. We propose a novel model fine-tuning method to make full use of these ineffective parameters. Our method enhances the generative capabilities of pre-trained models in downstream applications.
arXiv Detail & Related papers (2024-09-10T16:44:47Z)
- SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows for the upscaling of source models into an MoE model without extra data or further training.
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
arXiv Detail & Related papers (2024-08-19T17:32:15Z)
- Few Shot Generative Model Adaption via Relaxed Spatial Structural Alignment [130.84010267004803]
Training a generative adversarial network (GAN) with limited data has been a challenging task.
A feasible solution is to start with a GAN well-trained on a large-scale source domain and adapt it to the target domain with a few samples, termed few-shot generative model adaption.
We propose a relaxed spatial structural alignment method to calibrate the target generative models during the adaption.
arXiv Detail & Related papers (2022-03-06T14:26:25Z)
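As a similarly rough illustration of the structural-alignment idea in the last entry above, the sketch below penalizes differences between the spatial self-correlation maps of source- and target-generator features for the same latent code, so that scene layout is preserved while appearance adapts to the few target samples. It is a simplified reading, not the authors' implementation; the feature hooks, the L1 penalty, and the weighting `lam` are assumptions.

```python
import torch
import torch.nn.functional as F

def self_correlation(feat):
    """Spatial self-correlation map of a feature tensor of shape (B, C, H, W)."""
    f = F.normalize(feat.flatten(2), dim=1)     # cosine-normalize channels per position
    return torch.bmm(f.transpose(1, 2), f)      # (B, H*W, H*W) position-to-position similarity

def structural_alignment_loss(src_feat, tgt_feat):
    """Match self-correlation maps rather than raw features, so the target
    generator keeps the source's spatial structure while appearance may change."""
    return F.l1_loss(self_correlation(tgt_feat), self_correlation(src_feat).detach())

# Illustrative use during few-shot adaptation (generators, feature hooks, and lam assumed):
# z = torch.randn(batch, z_dim)
# loss = adversarial_loss(target_G(z)) + lam * structural_alignment_loss(
#     source_G_features(z), target_G_features(z))
```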