ZipLoRA: Any Subject in Any Style by Effectively Merging LoRAs
- URL: http://arxiv.org/abs/2311.13600v1
- Date: Wed, 22 Nov 2023 18:59:36 GMT
- Title: ZipLoRA: Any Subject in Any Style by Effectively Merging LoRAs
- Authors: Viraj Shah, Nataniel Ruiz, Forrester Cole, Erika Lu, Svetlana
Lazebnik, Yuanzhen Li, Varun Jampani
- Abstract summary: Low-rank adaptations (LoRA) have been proposed as a parameter-efficient way of achieving concept-driven personalization.
We propose ZipLoRA, a method to cheaply and effectively merge independently trained style and subject LoRAs.
Experiments show that ZipLoRA can generate compelling results with meaningful improvements over baselines in subject and style fidelity.
- Score: 56.85106417530364
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Methods for finetuning generative models for concept-driven personalization
generally achieve strong results for subject-driven or style-driven generation.
Recently, low-rank adaptations (LoRA) have been proposed as a
parameter-efficient way of achieving concept-driven personalization. While
recent work explores the combination of separate LoRAs to achieve joint
generation of learned styles and subjects, existing techniques do not reliably
address the problem; they often compromise either subject fidelity or style
fidelity. We propose ZipLoRA, a method to cheaply and effectively merge
independently trained style and subject LoRAs in order to achieve generation of
any user-provided subject in any user-provided style. Experiments on a wide
range of subject and style combinations show that ZipLoRA can generate
compelling results with meaningful improvements over baselines in subject and
style fidelity while preserving the ability to recontextualize. Project page:
https://ziplora.github.io
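The core of a column-wise LoRA merge can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the coefficient vectors below are fixed values, whereas ZipLoRA learns its merger coefficients (by minimizing interference between the scaled columns of the two deltas) before composing them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy low-rank updates Delta W = B @ A for one layer (d_out=6, d_in=4, rank 2).
d_out, d_in, rank = 6, 4, 2
delta_style = rng.normal(size=(d_out, rank)) @ rng.normal(size=(rank, d_in))
delta_subject = rng.normal(size=(d_out, rank)) @ rng.normal(size=(rank, d_in))

# Per-column merger coefficients (illustrative values only; in ZipLoRA these
# are trained so the scaled columns of the two deltas interfere minimally).
m_style = rng.uniform(0.0, 1.0, size=(1, d_in))
m_subject = 1.0 - m_style

# Column-wise blend of the two independently trained LoRA deltas.
merged_delta = m_style * delta_style + m_subject * delta_subject
```

The merged delta is then added to the frozen base weight at inference, so generating "subject in style" costs no more than a single LoRA.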
Related papers
- CopRA: A Progressive LoRA Training Strategy [9.847045610578073]
Low-Rank Adaptation (LoRA) is a parameter-efficient technique for fine-tuning foundation models.
In this work, we propose a novel progressive training strategy for LoRA with random layer dropping.
We refer to this method as Cooperative LoRA (CopRA).
arXiv Detail & Related papers (2024-10-30T11:07:09Z)
- Exact Aggregation for Federated and Efficient Fine-Tuning of Foundation Models [5.1613368481802455]
Low-Rank Adaptation (LoRA) is a popular technique for efficient fine-tuning of foundation models.
We propose Federated Exact LoRA, or FedEx-LoRA, which adds a residual error term to the pretrained frozen weight matrix.
Our approach achieves exact updates with minimal computational and communication overhead, preserving LoRA's efficiency.
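The residual-error idea can be illustrated numerically: averaging the LoRA factors B and A separately across clients is not the same as averaging the per-client updates B_i @ A_i, and folding the difference into the frozen weight recovers the exact aggregate. A minimal sketch (matrix shapes and client count are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
d_out, d_in, rank, n_clients = 5, 4, 2, 3

W0 = rng.normal(size=(d_out, d_in))  # frozen pretrained weight
Bs = [rng.normal(size=(d_out, rank)) for _ in range(n_clients)]
As = [rng.normal(size=(rank, d_in)) for _ in range(n_clients)]

# Naive federated averaging aggregates the factors separately ...
B_avg = np.mean(Bs, axis=0)
A_avg = np.mean(As, axis=0)

# ... which differs from the exact average of the per-client updates.
exact_update = np.mean([B @ A for B, A in zip(Bs, As)], axis=0)

# Folding the residual into the frozen weight makes the aggregate exact,
# while clients still communicate only the low-rank factors.
residual = exact_update - B_avg @ A_avg
W_aggregated = W0 + residual + B_avg @ A_avg
```

Here `W_aggregated` equals `W0 + exact_update`, so the low-rank communication pattern is preserved without the averaging error.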
arXiv Detail & Related papers (2024-10-12T08:22:44Z)
- DiffLoRA: Generating Personalized Low-Rank Adaptation Weights with Diffusion [42.38655393158855]
We propose DiffLoRA, a novel approach that leverages diffusion models as a hypernetwork to predict personalized low-rank adaptation weights.
By integrating these LoRA weights into the text-to-image model, DiffLoRA achieves personalization during inference without further training.
arXiv Detail & Related papers (2024-08-13T09:00:35Z)
- Lifelong Personalized Low-Rank Adaptation of Large Language Models for Recommendation [50.837277466987345]
We focus on the field of large language models (LLMs) for recommendation.
We propose RecLoRA, which incorporates a Personalized LoRA module that maintains independent LoRAs for different users.
We also design a Few2Many Learning Strategy, using a conventional recommendation model as a lens to magnify small training spaces to full spaces.
arXiv Detail & Related papers (2024-08-07T04:20:28Z)
- Mixture of LoRA Experts [87.50120181861362]
This paper introduces the Mixture of LoRA Experts (MoLE) approach, which harnesses hierarchical control and unfettered branch selection.
The MoLE approach achieves superior LoRA fusion performance in comparison to direct arithmetic merging.
arXiv Detail & Related papers (2024-04-21T11:59:53Z)
- Implicit Style-Content Separation using B-LoRA [61.664293840163865]
We introduce B-LoRA, a method that implicitly separates the style and content components of a single image.
By analyzing the architecture of SDXL combined with LoRA, we find that jointly learning the LoRA weights of two specific blocks achieves style-content separation.
arXiv Detail & Related papers (2024-03-21T17:20:21Z)
- Block-wise LoRA: Revisiting Fine-grained LoRA for Effective Personalization and Stylization in Text-to-Image Generation [2.2356314962198836]
The objective of personalization and stylization in text-to-image is to instruct a pre-trained diffusion model to analyze new concepts introduced by users and incorporate them into expected styles.
We propose block-wise Low-Rank Adaptation (LoRA) to perform fine-grained fine-tuning for different blocks of SD.
arXiv Detail & Related papers (2024-03-12T10:38:03Z)
- ResLoRA: Identity Residual Mapping in Low-Rank Adaption [96.59370314485074]
We propose ResLoRA, an improved framework of low-rank adaptation (LoRA).
Our method can achieve better results in fewer training steps without any extra trainable parameters or inference cost compared to LoRA.
The experiments on NLG, NLU, and text-to-image tasks demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2024-02-28T04:33:20Z)
- Style Interleaved Learning for Generalizable Person Re-identification [69.03539634477637]
We propose a novel style interleaved learning (IL) framework for DG ReID training.
Unlike conventional learning strategies, IL incorporates two forward propagations and one backward propagation for each iteration.
We show that our model consistently outperforms state-of-the-art methods on large-scale benchmarks for DG ReID.
arXiv Detail & Related papers (2022-07-07T07:41:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.