HiLoRA: Hierarchical Low-Rank Adaptation for Personalized Federated Learning
- URL: http://arxiv.org/abs/2603.02785v1
- Date: Tue, 03 Mar 2026 09:25:16 GMT
- Title: HiLoRA: Hierarchical Low-Rank Adaptation for Personalized Federated Learning
- Authors: Zihao Peng, Nan Zou, Jiandian Zeng, Guo Li, Ke Chen, Boyuan Li, Tian Wang
- Abstract summary: Low-Rank Adaptation (LoRA) provides an efficient and communication-friendly way to adapt Vision Transformers (ViTs). We propose HiLoRA, a hierarchical LoRA framework that places adapters at three levels: root, cluster, and leaf. We develop a LoRA-Subspace Adaptive Clustering mechanism that infers latent client groups via subspace similarity analysis.
- Score: 11.466314810697169
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vision Transformers (ViTs) have been widely adopted in vision tasks due to their strong transferability. In Federated Learning (FL), where full fine-tuning is communication-heavy, Low-Rank Adaptation (LoRA) provides an efficient and communication-friendly way to adapt ViTs. However, existing LoRA-based federated tuning methods overlook latent client structures in real-world settings, limiting shared representation learning and hindering effective adaptation to unseen clients. To address this, we propose HiLoRA, a hierarchical LoRA framework that places adapters at three levels: root, cluster, and leaf, each designed to capture global, subgroup, and client-specific knowledge, respectively. Through cross-tier orthogonality and cascaded optimization, HiLoRA separates update subspaces and aligns each tier with its residual personalized objective. In particular, we develop a LoRA-Subspace Adaptive Clustering mechanism that infers latent client groups via subspace similarity analysis, thereby facilitating knowledge sharing across structurally aligned clients. Theoretically, we establish a tier-wise generalization analysis that supports HiLoRA's design. Experiments on ViT backbones with CIFAR-100 and DomainNet demonstrate consistent improvements in both personalization and generalization.
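The three-tier composition described in the abstract can be sketched as a frozen base weight plus a sum of low-rank updates, one per tier. This is a minimal illustration of the idea, not the paper's implementation; the dimensions, initialization, and tier names below are assumptions following standard LoRA conventions (B initialized to zero so the adapters start as an identity update).

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # hypothetical hidden size and per-tier LoRA rank

W0 = rng.standard_normal((d, d))  # frozen pre-trained weight

def make_adapter(rank):
    # One low-rank adapter: delta W = B @ A.
    # B starts at zero, so the adapter initially leaves W0 unchanged.
    A = rng.standard_normal((rank, d)) * 0.01
    B = np.zeros((d, rank))
    return A, B

# Three tiers: root (global), cluster (subgroup), leaf (client-specific)
tiers = {name: make_adapter(r) for name in ("root", "cluster", "leaf")}

def effective_weight(W0, tiers):
    # Effective weight = frozen base + sum of tier-wise low-rank updates.
    return W0 + sum(B @ A for A, B in tiers.values())

x = rng.standard_normal(d)
y = effective_weight(W0, tiers) @ x
print(y.shape)  # (8,)
```

In a federated setting, the root adapter would be aggregated across all clients, each cluster adapter across its inferred group, and the leaf adapter kept local; the cross-tier orthogonality constraint mentioned in the abstract would additionally push the three update subspaces apart.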
Related papers
- Replacing Parameters with Preferences: Federated Alignment of Heterogeneous Vision-Language Models [63.70401095689976]
We argue that replacing parameters with preferences represents a more scalable and privacy-preserving future. We propose MoR, a federated alignment framework based on GRPO with Mixture-of-Rewards for heterogeneous VLMs. MoR consistently outperforms federated alignment baselines in generalization, robustness, and cross-client adaptability.
arXiv Detail & Related papers (2026-01-31T03:11:51Z) - SDFLoRA: Selective Dual-Module LoRA for Federated Fine-tuning with Heterogeneous Clients [4.862708813950415]
Federated learning for large language models (LLMs) has attracted increasing attention as a way to enable privacy-preserving adaptation over distributed data. We propose Selective Dual-module Federated LoRA (SDFLoRA), which decomposes each client into a global module that captures transferable knowledge and a local module that preserves client-specific adaptations. Experiments on GLUE benchmarks demonstrate that SDFLoRA outperforms representative federated LoRA baselines and achieves a better utility-privacy trade-off.
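The dual-module decomposition in the SDFLoRA summary can be sketched as two independent LoRA adapters per client, where only the global one is communicated. All names and dimensions here are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 6, 2  # hypothetical hidden size and adapter rank

W0 = rng.standard_normal((d, d))  # shared frozen backbone weight

# Global module: aggregated by the server across clients.
A_g, B_g = rng.standard_normal((r, d)), rng.standard_normal((d, r))
# Local module: stays on the client, preserving client-specific adaptation.
A_l, B_l = rng.standard_normal((r, d)), rng.standard_normal((d, r))

delta_global = B_g @ A_g  # uploaded for federated averaging
delta_local = B_l @ A_l   # never leaves the device

W_client = W0 + delta_global + delta_local
print(W_client.shape)  # (6, 6)
```

The communication saving comes from uploading only the global factors (2 * r * d values per layer) instead of the full d * d weight.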
arXiv Detail & Related papers (2026-01-16T11:53:38Z) - HiLoRA: Adaptive Hierarchical LoRA Routing for Training-Free Domain Generalization [39.23407996213986]
Low-Rank Adaptation (LoRA) has emerged as a widely used technique for adapting large language models to new domains. Existing methods often rely on explicit task labels or additional training, which are impractical for deployment. We propose HiLoRA, a training-free framework that performs adaptive hierarchical routing over LoRA pools.
arXiv Detail & Related papers (2025-10-14T08:19:13Z) - FedVLM: Scalable Personalized Vision-Language Models through Federated Learning [3.2948524285562866]
Vision-language models (VLMs) demonstrate impressive zero-shot and few-shot learning capabilities. Fine-tuning these models at scale remains challenging in federated environments where data is decentralized and non-iid across clients. We propose FedVLM, a federated LoRA fine-tuning framework that enables decentralized adaptation of VLMs while preserving model privacy.
arXiv Detail & Related papers (2025-07-23T00:05:02Z) - Fed-HeLLo: Efficient Federated Foundation Model Fine-Tuning with Heterogeneous LoRA Allocation [11.10244162253018]
Federated Learning has recently been utilized to collaboratively fine-tune foundation models across multiple clients. Most existing methods do not account for the heterogeneous resources of clients or lack an effective local training strategy. We propose Fed-HeLLo, a novel federated LoRA-based fine-tuning framework that enables clients to collaboratively fine-tune an FM with different local trainable LoRA layers.
arXiv Detail & Related papers (2025-06-13T20:31:17Z) - Federated Sketching LoRA: A Flexible Framework for Heterogeneous Collaborative Fine-Tuning of LLMs [37.03583502049329]
Fine-tuning large language models (LLMs) on resource-constrained clients remains a challenging problem. Recent works have fused low-rank adaptation (LoRA) techniques with federated fine-tuning to mitigate challenges associated with client model sizes and data scarcity. We propose federated sketching LoRA, which leverages a sketching mechanism to enable clients to update submatrices of global LoRA modules maintained by the server.
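The submatrix-update idea in the sketching LoRA summary can be sketched as sampling a subset of the server-side rank dimension, so a constrained client only downloads and updates a slice of each global LoRA factor. The sampling scheme and sizes below are illustrative assumptions, not the paper's actual mechanism.

```python
import numpy as np

rng = np.random.default_rng(2)
d, R = 6, 4      # hypothetical hidden size and full server-side LoRA rank
client_rank = 2  # this client can afford to update only a sketch of R

# The server maintains the full-rank factor A (the B factor is handled
# symmetrically and is omitted here for brevity).
A = rng.standard_normal((R, d))

# Sketching: sample a subset of the R rank indices; the client downloads
# and updates only the corresponding rows of A.
idx = rng.choice(R, size=client_rank, replace=False)
A_sub = A[idx, :].copy()

# Placeholder local update on the sketched submatrix (one toy step).
A_sub -= 0.1 * rng.standard_normal(A_sub.shape)

A[idx, :] = A_sub  # server merges the submatrix back into the full factor
print(sorted(idx.tolist()))
```

Because each client touches only `client_rank` of the `R` rank components, both the download and upload cost scale with the client's budget rather than the server's full rank.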
arXiv Detail & Related papers (2025-01-31T18:44:35Z) - Unlocking Tuning-Free Few-Shot Adaptability in Visual Foundation Models by Recycling Pre-Tuned LoRAs [76.40876036912537]
Large Language Models (LLMs) demonstrate strong few-shot adaptability without requiring fine-tuning. Current Visual Foundation Models (VFMs) require explicit fine-tuning with sufficient tuning data. We propose a framework, LoRA Recycle, that distills a meta-LoRA from diverse pre-tuned LoRAs with a meta-learning objective.
arXiv Detail & Related papers (2024-12-03T07:25:30Z) - Retrieval-Augmented Mixture of LoRA Experts for Uploadable Machine Learning [57.36978335727009]
Low-Rank Adaptation (LoRA) offers an efficient way to fine-tune large language models (LLMs).
In this paper, we propose a framework that adaptively retrieves and composes multiple LoRAs based on input prompts.
arXiv Detail & Related papers (2024-06-24T05:24:41Z) - FedRA: A Random Allocation Strategy for Federated Tuning to Unleash the Power of Heterogeneous Clients [50.13097183691517]
In real-world federated scenarios, there often exists a multitude of heterogeneous clients with varying computation and communication resources.
We propose a novel federated tuning algorithm, FedRA.
In each communication round, FedRA randomly generates an allocation matrix.
It reorganizes a small number of layers from the original model based on the allocation matrix and fine-tunes using adapters.
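The allocation step in the FedRA summary can be sketched as a binary matrix drawn each round, with one row per client indicating which of the original layers that client receives. The capacity profile and sizes below are hypothetical, chosen only to illustrate the idea.

```python
import numpy as np

rng = np.random.default_rng(3)
n_layers, n_clients = 12, 4  # hypothetical backbone depth and client count

# Toy capacity profile: resource-poor clients can host fewer layers.
layers_per_client = [12, 8, 6, 4]

# Each communication round, randomly allocate layers to clients.
alloc = np.zeros((n_clients, n_layers), dtype=int)
for c, k in enumerate(layers_per_client):
    chosen = rng.choice(n_layers, size=k, replace=False)
    alloc[c, chosen] = 1

# Each client then reorganizes its allocated layers into a smaller model
# and fine-tunes them with adapters; the server aggregates per-layer.
print(alloc.sum(axis=1))  # [12  8  6  4]
```

Resampling the matrix every round means that, over time, every layer of the full model gets updated by some client, even though no weak client ever holds the whole model.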
arXiv Detail & Related papers (2023-11-19T04:43:16Z) - Unlocking the Potential of Prompt-Tuning in Bridging Generalized and Personalized Federated Learning [49.72857433721424]
Vision Transformers (ViT) and Visual Prompt Tuning (VPT) achieve state-of-the-art performance with improved efficiency in various computer vision tasks.
We present a novel algorithm, SGPT, that integrates Generalized FL (GFL) and Personalized FL (PFL) approaches by employing a unique combination of both shared and group-specific prompts.
arXiv Detail & Related papers (2023-10-27T17:22:09Z) - Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z) - Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM [62.62684911017472]
Federated learning (FL) enables devices to jointly train shared models while keeping the training data local for privacy purposes.
We introduce a VFL framework with multiple heads (VIM), which takes the separate contribution of each client into account.
VIM achieves significantly higher performance and faster convergence compared with the state-of-the-art.
arXiv Detail & Related papers (2022-07-20T23:14:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.