QR-LoRA: Efficient and Disentangled Fine-tuning via QR Decomposition for Customized Generation
- URL: http://arxiv.org/abs/2507.04599v2
- Date: Thu, 24 Jul 2025 09:12:08 GMT
- Title: QR-LoRA: Efficient and Disentangled Fine-tuning via QR Decomposition for Customized Generation
- Authors: Jiahui Yang, Yongjia Ma, Donglin Di, Hao Li, Wei Chen, Yan Xie, Jianxun Cui, Xun Yang, Wangmeng Zuo
- Abstract summary: We propose QR-LoRA, a novel fine-tuning framework leveraging QR decomposition for structured parameter updates. Our key insight is that the Q matrix naturally minimizes interference between different visual features. Experiments demonstrate that QR-LoRA achieves superior disentanglement in content-style fusion tasks.
- Score: 52.024845354511555
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing text-to-image models often rely on parameter fine-tuning techniques such as Low-Rank Adaptation (LoRA) to customize visual attributes. However, when combining multiple LoRA models for content-style fusion tasks, unstructured modifications of weight matrices often lead to undesired feature entanglement between content and style attributes. We propose QR-LoRA, a novel fine-tuning framework leveraging QR decomposition for structured parameter updates that effectively separate visual attributes. Our key insight is that the orthogonal Q matrix naturally minimizes interference between different visual features, while the upper triangular R matrix efficiently encodes attribute-specific transformations. Our approach fixes both Q and R matrices while only training an additional task-specific $\Delta R$ matrix. This structured design reduces trainable parameters to half of conventional LoRA methods and supports effective merging of multiple adaptations without cross-contamination due to the strong disentanglement properties between $\Delta R$ matrices. Experiments demonstrate that QR-LoRA achieves superior disentanglement in content-style fusion tasks, establishing a new paradigm for parameter-efficient, disentangled fine-tuning in generative models. The project page is available at: https://luna-ai-lab.github.io/QR-LoRA/.
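As an illustration of the structured update described in the abstract, here is a minimal, hypothetical PyTorch sketch: the pretrained weight is factorized once by QR, Q stays frozen, and each task trains only a small $\Delta R$ applied through the shared Q. The truncated-QR initialization, class name, and merge helper are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class QRDeltaLinear(nn.Module):
    """Hypothetical QR-structured adapter: W' = W + Q @ delta_R, with Q frozen."""

    def __init__(self, base_weight: torch.Tensor, rank: int = 16):
        super().__init__()
        out_features, in_features = base_weight.shape
        # Frozen pretrained weight (out_features x in_features).
        self.register_buffer("weight", base_weight)
        # Assumption: Q comes from a reduced QR factorization of the pretrained
        # weight; only its leading `rank` columns are kept, and it is never trained.
        Q, _ = torch.linalg.qr(base_weight)
        self.register_buffer("Q", Q[:, :rank])            # (out_features, rank)
        # The only trainable tensor: a task-specific Delta-R, initialized to zero
        # so the adapter starts as a no-op update.
        self.delta_R = nn.Parameter(torch.zeros(rank, in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight: frozen W plus the structured update Q @ Delta-R.
        return x @ (self.weight + self.Q @ self.delta_R).T

    @torch.no_grad()
    def merge(self, *other_delta_Rs: torch.Tensor) -> torch.Tensor:
        # Merging several adaptations reduces to summing their Delta-R matrices
        # through the shared, frozen Q (e.g. one content and one style adapter).
        merged = self.delta_R + sum(other_delta_Rs) if other_delta_Rs else self.delta_R
        return self.weight + self.Q @ merged
```

Under this reading, the trainable parameter count per layer is rank x in_features (roughly half of a conventional LoRA pair), and combining a content adaptation with a style adaptation amounts to summing their $\Delta R$ matrices before applying Q, which is where the claimed disentanglement between $\Delta R$ matrices would pay off.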
Related papers
- CSMCIR: CoT-Enhanced Symmetric Alignment with Memory Bank for Composed Image Retrieval [54.15776146365823]
Composed Image Retrieval (CIR) enables users to search for target images using both a reference image and manipulation text. We propose CSMCIR, a unified representation framework that achieves efficient query-target alignment through three synergistic components.
arXiv Detail & Related papers (2026-01-07T09:21:38Z) - Element-wise Modulation of Random Matrices for Efficient Neural Layers [0.0]
We propose a novel approach that decouples feature mixing from adaptation by utilizing a fixed random matrix modulated by lightweight, learnable element-wise parameters. This architecture drastically reduces the trainable parameter count to a linear scale while retaining reliable accuracy across various benchmarks.
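A minimal sketch of the mechanism described in this blurb, under the assumption that the element-wise modulation is factored into per-row and per-column scales so the trainable parameter count stays linear; the names and initialization are illustrative only.

```python
import torch
import torch.nn as nn

class ModulatedRandomLinear(nn.Module):
    """Sketch: a frozen random mixing matrix rescaled by learnable row/column vectors."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Fixed random matrix handles feature mixing and is never updated.
        self.register_buffer("M", torch.randn(out_features, in_features) / in_features ** 0.5)
        # Learnable element-wise modulation, kept linear in size by factoring it
        # into a per-output and a per-input vector (an assumption on our part).
        self.row_scale = nn.Parameter(torch.ones(out_features, 1))
        self.col_scale = nn.Parameter(torch.ones(1, in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (row_scale * col_scale) broadcasts to an element-wise modulation of M
        # without ever materializing O(in * out) trainable parameters.
        return x @ (self.row_scale * self.col_scale * self.M).T
```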
arXiv Detail & Related papers (2025-12-15T16:16:53Z) - Less is More: Resource-Efficient Low-Rank Adaptation [15.883867662707743]
EffiLoRA is a lightweight and generalizable approach for language, multimodal, and diffusion models. It consistently outperforms LoRA across diverse modalities, including commonsense reasoning, visual instruction tuning, and image generation.
arXiv Detail & Related papers (2025-11-30T12:52:04Z) - HyperAdaLoRA: Accelerating LoRA Rank Allocation During Training via Hypernetworks without Sacrificing Performance [27.391727025825546]
Low-Rank Adaptation (LoRA) has emerged as a promising approach to fine-tuning large language models. We propose HyperAdaLoRA, a novel framework that accelerates the convergence of AdaLoRA by leveraging a hypernetwork. Our method achieves faster convergence without sacrificing performance.
arXiv Detail & Related papers (2025-10-03T00:15:59Z) - DEFT: Decompositional Efficient Fine-Tuning for Text-to-Image Models [103.18486625853099]
DEFT, Decompositional Efficient Fine-Tuning, adapts a pre-trained weight matrix by decomposing its update into two components. We conduct experiments on the Dreambooth and Dreambench Plus datasets for personalization, the InsDet dataset for object and scene adaptation, and the VisualCloze dataset for a universal image generation framework.
arXiv Detail & Related papers (2025-09-26T18:01:15Z) - QR-LoRA: QR-Based Low-Rank Adaptation for Efficient Fine-Tuning of Large Language Models [0.0]
Low-Rank Adaptation (LoRA) is a technique for reducing the number of trainable parameters by applying low-rank updates to pretrained weights. We show that QR-LoRA matches or exceeds the performance of full fine-tuning, standard LoRA, and SVD-LoRA.
arXiv Detail & Related papers (2025-08-29T17:47:27Z) - Uni-LoRA: One Vector is All You Need [13.938834666101679]
Low-Rank Adaptation (LoRA) has become the de facto parameter-efficient fine-tuning (PEFT) method for large language models. In this paper, we show that the parameter space reduction strategies employed by these LoRA variants can be formulated within a unified framework. Under the unified view of Uni-LoRA, this design requires only a single trainable vector to reconstruct LoRA parameters for the entire LLM.
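A hedged sketch of the "one trainable vector" idea: a fixed projection maps a single small vector to all LoRA parameters of the model. The random projection and shapes below are assumptions, not Uni-LoRA's actual construction.

```python
import torch
import torch.nn as nn

class SingleVectorLoRA(nn.Module):
    """Sketch: every LoRA weight in the model is generated from one trainable vector."""

    def __init__(self, lora_shapes: list[tuple[int, int]], subspace_dim: int = 512):
        super().__init__()
        total = sum(r * c for r, c in lora_shapes)
        self.lora_shapes = lora_shapes
        # Fixed random projection from the tiny subspace to the full LoRA
        # parameter space (the specific projection is an assumption here).
        self.register_buffer("P", torch.randn(total, subspace_dim) / subspace_dim ** 0.5)
        # The single trainable vector for the whole model.
        self.v = nn.Parameter(torch.zeros(subspace_dim))

    def materialize(self) -> list[torch.Tensor]:
        # Project v up and slice the result into the individual LoRA matrices.
        flat = self.P @ self.v
        out, offset = [], 0
        for r, c in self.lora_shapes:
            out.append(flat[offset : offset + r * c].view(r, c))
            offset += r * c
        return out
```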
arXiv Detail & Related papers (2025-06-01T03:00:09Z) - TLoRA: Tri-Matrix Low-Rank Adaptation of Large Language Models [0.135975510645475]
TLoRA is a novel tri-matrix low-rank adaptation method. We show that TLoRA achieves comparable performance to existing low-rank methods.
arXiv Detail & Related papers (2025-04-25T23:11:10Z) - Reinforced Model Merging [53.84354455400038]
We present an innovative framework termed Reinforced Model Merging (RMM), which encompasses an environment and agent tailored for merging tasks. By utilizing data subsets during the evaluation process, we addressed the bottleneck in the reward feedback phase, thereby accelerating RMM by up to 100 times.
arXiv Detail & Related papers (2025-03-27T08:52:41Z) - MSPLoRA: A Multi-Scale Pyramid Low-Rank Adaptation for Efficient Model Fine-Tuning [5.412348391086257]
We propose MSPLoRA, which introduces Global Shared LoRA, Mid-Level Shared LoRA, and Layer-Specific LoRA to capture global patterns, mid-level features, and fine-grained information. Experiments on various NLP tasks demonstrate that MSPLoRA achieves more efficient adaptation and better performance while significantly reducing the number of trainable parameters.
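One possible reading of the pyramid structure, sketched below: each layer's update is the sum of a globally shared, a group-shared, and a layer-specific low-rank term. The ranks, grouping, and names are assumptions for illustration only.

```python
import torch
import torch.nn as nn

def lora_pair(d: int, rank: int) -> nn.ParameterList:
    # One low-rank pair (A, B); B starts at zero so the initial update is zero.
    return nn.ParameterList([nn.Parameter(torch.randn(d, rank) * 0.01),
                             nn.Parameter(torch.zeros(rank, d))])

class PyramidLoRA(nn.Module):
    """Sketch: a layer's update sums global, group-shared, and layer-specific LoRA terms."""

    def __init__(self, d: int, num_layers: int, group_size: int = 4,
                 ranks: tuple[int, int, int] = (8, 4, 2)):
        super().__init__()
        num_groups = (num_layers + group_size - 1) // group_size
        self.group_size = group_size
        # Shared-to-specific hierarchy with (assumed) decreasing ranks.
        self.global_lora = lora_pair(d, ranks[0])
        self.group_lora = nn.ModuleList(lora_pair(d, ranks[1]) for _ in range(num_groups))
        self.layer_lora = nn.ModuleList(lora_pair(d, ranks[2]) for _ in range(num_layers))

    def delta(self, layer: int) -> torch.Tensor:
        g = layer // self.group_size
        terms = [self.global_lora, self.group_lora[g], self.layer_lora[layer]]
        # Update for this layer: sum of the three pyramid levels.
        return sum(A @ B for A, B in terms)
```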
arXiv Detail & Related papers (2025-03-27T07:01:50Z) - In-Context Meta LoRA Generation [61.690065588534296]
Low-rank Adaptation (LoRA) has demonstrated remarkable capabilities for task-specific fine-tuning. We propose In-Context Meta LoRA (ICM-LoRA), a novel approach that efficiently achieves task-specific customization of large language models. ICM-LoRA enables more accurate LoRA parameter reconstruction than current parameter reconstruction methods.
arXiv Detail & Related papers (2025-01-29T13:12:01Z) - VELoRA: A Low-Rank Adaptation Approach for Efficient RGB-Event based Recognition [54.27379947727035]
This paper proposes a novel PEFT strategy to adapt pre-trained foundation vision models for RGB-Event-based classification. The frame difference of the dual modalities is also considered to capture motion cues via the frame difference backbone network. The source code and pre-trained models will be released at https://github.com/Event-AHU/VELoRA.
arXiv Detail & Related papers (2024-12-28T07:38:23Z) - LoRTA: Low Rank Tensor Adaptation of Large Language Models [70.32218116940393]
Low-Rank Adaptation (LoRA) is a popular parameter-efficient fine-tuning (PEFT) method. We propose a higher-order CANDECOMP/PARAFAC (CP) decomposition, enabling a more compact and flexible representation. Our method can achieve a reduction in the number of parameters while maintaining comparable performance.
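A minimal sketch of a CP-parameterized adapter consistent with this blurb, assuming the per-layer updates are stacked into a third tensor mode; the factor shapes and initialization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CPAdapter(nn.Module):
    """Sketch: all per-layer updates form one 3-D tensor parameterized by a rank-R CP model."""

    def __init__(self, num_layers: int, out_features: int, in_features: int, cp_rank: int = 8):
        super().__init__()
        # One factor matrix per tensor mode; stacking layers into a third mode
        # is our assumed reading of the abstract, not the paper's exact layout.
        self.layer_factor = nn.Parameter(torch.randn(num_layers, cp_rank) * 0.01)
        self.out_factor = nn.Parameter(torch.randn(out_features, cp_rank) * 0.01)
        self.in_factor = nn.Parameter(torch.zeros(in_features, cp_rank))

    def delta(self, layer: int) -> torch.Tensor:
        # Reconstruct one layer's update from the shared CP factors:
        # delta_W[layer] = sum_r layer_factor[layer, r] * outer(out_factor[:, r], in_factor[:, r])
        return torch.einsum("r,or,ir->oi",
                            self.layer_factor[layer], self.out_factor, self.in_factor)
```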
arXiv Detail & Related papers (2024-10-05T06:59:50Z) - CURLoRA: Stable LLM Continual Fine-Tuning and Catastrophic Forgetting Mitigation [0.0]
CURLoRA is a novel approach to fine-tuning large language models.
It mitigates catastrophic forgetting and maintains model stability and performance across tasks while significantly reducing the number of trainable parameters.
arXiv Detail & Related papers (2024-08-26T18:42:59Z) - Structured Unrestricted-Rank Matrices for Parameter Efficient Fine-tuning [38.80020737321214]
We propose a framework for parameter-efficient fine-tuning (PEFT) based on structured unrestricted-rank matrices (SURM). SURMs achieve 5-7% accuracy gains on various image classification tasks while replacing low-rank matrices in LoRA. They also yield up to a 12x reduction in the number of adapter parameters (with virtually no loss in quality) on the GLUE benchmark.
arXiv Detail & Related papers (2024-06-25T17:26:05Z) - LoTR: Low Tensor Rank Weight Adaptation [47.4904143988667]
We introduce LoTR, a novel approach for parameter-efficient fine-tuning of large language models (LLMs).
LoTR represents a gradient update to parameters in the form of a tensor decomposition.
Simultaneous compression of a sequence of layers with a low-rank tensor representation allows LoTR to achieve even better parameter efficiency than LoRA, especially for deep models.
arXiv Detail & Related papers (2024-02-02T13:00:38Z) - LQ-LoRA: Low-rank Plus Quantized Matrix Decomposition for Efficient Language Model Finetuning [66.85589263870702]
Our approach uses an iterative algorithm to decompose each pretrained matrix into a high-precision low-rank component and a memory-efficient quantized component.
Experiments on finetuning RoBERTa and LLaMA-2 demonstrate that our low-rank plus quantized matrix decomposition approach (LQ-LoRA) outperforms strong QLoRA and GPTQ-LoRA baselines.
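A rough sketch of the alternating decomposition described above, with a placeholder uniform quantizer standing in for whatever scheme LQ-LoRA actually uses; treat it as illustrative, not the paper's algorithm.

```python
import torch

def fake_quantize(x: torch.Tensor, bits: int = 4) -> torch.Tensor:
    # Placeholder uniform quantizer; LQ-LoRA's actual quantization scheme is not
    # specified in the blurb above, so this is purely illustrative.
    scale = x.abs().max() / (2 ** (bits - 1) - 1)
    return torch.round(x / scale).clamp(-(2 ** (bits - 1)), 2 ** (bits - 1) - 1) * scale

def lowrank_plus_quantized(W: torch.Tensor, rank: int = 16, iters: int = 10):
    """Alternate: quantize the residual, then re-fit the low-rank part to what quantization misses."""
    L = torch.zeros_like(W)
    for _ in range(iters):
        Q = fake_quantize(W - L)                                   # memory-efficient component
        U, S, Vh = torch.linalg.svd(W - Q, full_matrices=False)
        L = U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank]         # high-precision low-rank component
    return Q, L   # W is approximately Q + L; L would seed the trainable low-rank factors
```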
arXiv Detail & Related papers (2023-11-20T18:57:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.