SEQR: Secure and Efficient QR-based LoRA Routing
- URL: http://arxiv.org/abs/2509.18093v1
- Date: Mon, 22 Sep 2025 17:59:38 GMT
- Title: SEQR: Secure and Efficient QR-based LoRA Routing
- Authors: William Fleshman, Benjamin Van Durme
- Abstract summary: Low-Rank Adaptation (LoRA) has become a standard technique for parameter-efficient fine-tuning of large language models. Efficiently selecting the correct LoRA adapter for a given input remains a challenge. We introduce SEQR, an unsupervised LoRA routing algorithm designed to maximize efficiency while providing strict routing guarantees.
- Score: 53.52716967527183
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Low-Rank Adaptation (LoRA) has become a standard technique for parameter-efficient fine-tuning of large language models, enabling large libraries of LoRAs, each for a specific task or domain. Efficiently selecting the correct LoRA adapter for a given input remains a challenge, particularly in secure environments where supervised training of routers may raise privacy concerns. Motivated by previous approaches, we formalize the goal of unsupervised LoRA routing in terms of activation norm maximization, providing a theoretical framework for analysis. We demonstrate the discriminative power of activation norms and introduce SEQR, an unsupervised LoRA routing algorithm designed to maximize efficiency while providing strict routing guarantees. SEQR provably identifies the norm-maximizing adapter with significantly greater efficiency, making it a highly scalable and effective solution for dynamic LoRA composition. We validate our results through experiments that demonstrate improved multi-task performance and efficiency.
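The routing criterion described in the abstract, selecting the adapter whose LoRA activation has the largest norm, can be sketched in NumPy. This is a hedged illustration reconstructed from the abstract and title alone, not the authors' code: it assumes each adapter i is a pair of low-rank factors (B_i, A_i), scores an input x by the norm of B_i A_i x, and uses a QR factorization B_i = Q_i R_i so the norm can be evaluated in the rank-r space (Q_i has orthonormal columns, so the norm of B_i A_i x equals the norm of R_i A_i x). All dimensions and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r, n_adapters = 128, 8, 5  # illustrative model dim, LoRA rank, library size

# A hypothetical library of LoRA adapters: B_i is d x r, A_i is r x d.
adapters = [(rng.standard_normal((d, r)), rng.standard_normal((r, d)))
            for _ in range(n_adapters)]

# Precompute, per adapter, the r x d matrix R_i A_i from the reduced QR
# factorization B_i = Q_i R_i. Because Q_i has orthonormal columns,
# ||B_i A_i x||_2 == ||R_i A_i x||_2, so scoring stays in rank-r space.
compressed = []
for B, A in adapters:
    Q, R = np.linalg.qr(B)      # reduced QR: Q is d x r, R is r x r
    compressed.append(R @ A)    # r x d

def route(x):
    """Return the index of the norm-maximizing adapter for input x."""
    scores = [np.linalg.norm(RA @ x) for RA in compressed]
    return int(np.argmax(scores))

x = rng.standard_normal(d)
i = route(x)

# Sanity check: the cheap rank-r score matches the full activation norm.
B, A = adapters[i]
assert np.isclose(np.linalg.norm(compressed[i] @ x),
                  np.linalg.norm(B @ (A @ x)))
```

Precomputing the r x d products R_i A_i roughly halves the per-adapter matrix-vector work and avoids materializing full d-dimensional activations during routing, which is the flavor of efficiency gain the abstract claims; whether SEQR computes its guarantees exactly this way would need the paper itself.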
Related papers
- RISER: Orchestrating Latent Reasoning Skills for Adaptive Activation Steering [62.63376387138257]
We propose a plug-and-play intervention framework that adaptively steers the reasoning of large language models (LLMs) in activation space. RISER constructs a library of reusable reasoning vectors and employs a lightweight Router to dynamically compose them for each input. The Router is optimized via reinforcement learning under task-level rewards, activating latent cognitive primitives in an emergent and compositional manner.
arXiv Detail & Related papers (2026-01-14T08:04:33Z)
- Parameter-Efficient Fine-Tuning for HAR: Integrating LoRA and QLoRA into Transformer Models [0.2939891130492345]
Low-Rank Adaptation (LoRA) and Quantized LoRA (QLoRA) are investigated as scalable alternatives to full model fine-tuning for Human Activity Recognition. LoRA maintains robust performance even under limited supervision. QLoRA extends these benefits by reducing the memory footprint of frozen weights through quantization.
arXiv Detail & Related papers (2025-12-19T14:12:43Z)
- Semantic-guided LoRA Parameters Generation [22.648880814012184]
Low-Rank Adaptation (LoRA) has demonstrated strong generalization capabilities across a variety of tasks for efficiently fine-tuning AI models. SG-LoRA is the first framework of its kind to efficiently produce user-specific LoRAs without additional training on user tasks or access to user-specific data. SG-LoRA enables the real-time construction of LoRA models aligned with individual intents by distilling knowledge from prominent LoRA experts.
arXiv Detail & Related papers (2025-09-05T14:43:41Z)
- L1RA: Dynamic Rank Assignment in LoRA Fine-Tuning [0.09799637101641147]
We introduce L1RA, a technique for dynamically distributing the rank of low-rank adapters during LoRA fine-tuning. We empirically demonstrate that L1RA maintains comparable or even reduced computational overhead compared to other LoRA variants.
arXiv Detail & Related papers (2025-09-05T08:03:01Z)
- Beyond Low-Rank Tuning: Model Prior-Guided Rank Allocation for Effective Transfer in Low-Data and Large-Gap Regimes [9.4848188271008]
Low-Rank Adaptation (LoRA) has proven effective in reducing computational costs while maintaining performance comparable to fully fine-tuned foundation models. Current adaptive LoRA methods attempt to overcome this limitation by dynamically expanding or selectively allocating ranks. We introduce Stable Rank-Guided Low-Rank Adaptation (SR-LoRA), a novel framework that utilizes the stable rank of pre-trained weight matrices as a natural prior for layer-wise rank allocation.
arXiv Detail & Related papers (2025-06-30T23:54:23Z)
- LoRA-Gen: Specializing Large Language Model via Online LoRA Generation [68.01864057372067]
We propose the LoRA-Gen framework to generate LoRA parameters for edge-side models based on task descriptions. We merge the LoRA parameters into the edge-side model to achieve flexible specialization. Our method facilitates knowledge transfer between models while significantly improving the inference efficiency of the specialized model.
arXiv Detail & Related papers (2025-06-13T10:11:01Z)
- C-LoRA: Continual Low-Rank Adaptation for Pre-trained Models [26.560293264523903]
Low-Rank Adaptation (LoRA) is an efficient fine-tuning method that has been extensively applied in areas such as natural language processing and computer vision. We propose Continual Low-Rank Adaptation (C-LoRA), a novel extension of LoRA for continual learning. C-LoRA uses a learnable routing matrix to dynamically manage parameter updates across tasks.
arXiv Detail & Related papers (2025-02-25T07:35:36Z)
- BeamLoRA: Beam-Constraint Low-Rank Adaptation [51.52097743781401]
Low-Rank Adaptation (LoRA) has been widely adopted as one of the most effective parameter-efficient fine-tuning methods. We propose BeamLoRA, which conceptualizes each LoRA module as a beam in which each rank naturally corresponds to a potential sub-solution.
arXiv Detail & Related papers (2025-02-19T10:33:22Z)
- Dynamic Adaptation of LoRA Fine-Tuning for Efficient and Task-Specific Optimization of Large Language Models [0.7421845364041001]
This paper presents dynamic LoRA, a novel fine-tuning methodology for large language models. It adds dynamic adaptation mechanisms to improve efficiency and performance. The efficiency of dynamic LoRA was validated in experiments on benchmark datasets.
arXiv Detail & Related papers (2025-01-24T18:54:14Z)
- Less is More: Extreme Gradient Boost Rank-1 Adaption for Efficient Finetuning of LLMs [75.11449420928139]
Fine-tuning Large Language Models (LLMs) has become a crucial technique for adapting pre-trained models to downstream tasks.
Low-Rank Adaptation (LoRA) has emerged as a promising solution, but a gap remains between the practical performance of low-rank adaptation and its theoretical optimum.
We propose eXtreme Gradient Boosting LoRA, a novel framework that bridges this gap by leveraging the power of ensemble learning.
arXiv Detail & Related papers (2024-10-25T17:07:13Z)
- Task-Specific Directions: Definition, Exploration, and Utilization in Parameter Efficient Fine-Tuning [65.31677646659895]
Large language models demonstrate impressive performance on downstream tasks, yet fully fine-tuning all of their parameters consumes extensive resources. We propose a framework to clearly define task-specific directions (TSDs) and explore their properties and practical utilization challenges. We then introduce a novel approach, LoRA-Dash, which aims to maximize the impact of TSDs during the fine-tuning process.
arXiv Detail & Related papers (2024-09-02T08:10:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.