TuckA: Hierarchical Compact Tensor Experts for Efficient Fine-Tuning
- URL: http://arxiv.org/abs/2511.06859v1
- Date: Mon, 10 Nov 2025 09:03:16 GMT
- Title: TuckA: Hierarchical Compact Tensor Experts for Efficient Fine-Tuning
- Authors: Qifeng Lei, Zhiyong Yang, Qianqian Xu, Cong Hua, Peisong Wen, Qingming Huang
- Abstract summary: We introduce Tucker Adaptation (TuckA), a method with four key properties. We develop an efficient batch-level routing mechanism, which reduces the router's parameter size by a factor of $L$. Experiments on benchmarks in natural language understanding, image classification, and mathematical reasoning demonstrate the efficacy of TuckA.
- Score: 83.93651411533533
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efficiently fine-tuning pre-trained models for downstream tasks is a key challenge in the era of foundation models. Parameter-efficient fine-tuning (PEFT) presents a promising solution, achieving performance comparable to full fine-tuning by updating only a small number of adaptation weights per layer. Traditional PEFT methods typically rely on a single expert, where the adaptation weight is a low-rank matrix. However, for complex tasks, the data's inherent diversity poses a significant challenge for such models, as a single adaptation weight cannot adequately capture the features of all samples. To address this limitation, we explore how to integrate multiple small adaptation experts into a compact structure that outperforms a single large adapter. Specifically, we propose Tucker Adaptation (TuckA), a method with four key properties: (i) We use Tucker decomposition to create a compact 3D tensor where each slice naturally serves as an expert. The low-rank nature of this decomposition ensures that the number of parameters scales efficiently as more experts are added. (ii) We introduce a hierarchical strategy that organizes these experts into groups at different granularities, allowing the model to capture both local and global data patterns. (iii) We develop an efficient batch-level routing mechanism, which reduces the router's parameter size by a factor of $L$ compared to routing at every adapted layer (where $L$ is the number of adapted layers). (iv) We propose data-aware initialization to achieve loss-free expert load balancing based on theoretical analysis. Extensive experiments on benchmarks in natural language understanding, image classification, and mathematical reasoning demonstrate the efficacy of TuckA, offering a new and effective solution to the PEFT problem.
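The following is a minimal sketch, not the authors' implementation, of how two of the ideas in the abstract could fit together: a Tucker-factored 3D tensor whose slices act as low-rank adapter experts, and a single router shared across adapted layers that picks a mixture of experts once per batch. All shapes, rank choices, the softmax mixing rule, and the mean-pooling inside the router are illustrative assumptions.

```python
# Illustrative sketch only (not the TuckA release): Tucker-factored expert
# bank plus one batch-level router shared by all adapted layers.
import torch
import torch.nn as nn

class TuckerExpertAdapter(nn.Module):
    def __init__(self, d_in, d_out, n_experts, r_out=8, r_in=8, r_e=4):
        super().__init__()
        # Tucker factors: a small 3D core plus three factor matrices.
        # Zero-initialising the core keeps the adapter's initial update at zero.
        self.core = nn.Parameter(torch.zeros(r_out, r_in, r_e))
        self.U_out = nn.Parameter(torch.randn(d_out, r_out) * 0.02)
        self.U_in = nn.Parameter(torch.randn(d_in, r_in) * 0.02)
        self.U_exp = nn.Parameter(torch.randn(n_experts, r_e) * 0.02)

    def expert_weights(self):
        # Expert e's low-rank update, reconstructed from the shared factors:
        # delta[e] = U_out @ (sum_k U_exp[e, k] * core[:, :, k]) @ U_in^T
        return torch.einsum('oik,ek,po,qi->epq',
                            self.core, self.U_exp, self.U_out, self.U_in)

    def forward(self, x, gate):
        # gate: (n_experts,) mixture weights chosen once per batch by a
        # shared router, rather than per token and per layer.
        delta = torch.einsum('e,epq->pq', gate, self.expert_weights())
        return x @ delta.T  # adapter contribution, added to the frozen layer

class BatchRouter(nn.Module):
    """One router shared by all adapted layers (batch-level routing)."""
    def __init__(self, d_in, n_experts):
        super().__init__()
        self.proj = nn.Linear(d_in, n_experts)

    def forward(self, x):
        # Pool all tokens in the batch into one vector, then score experts.
        pooled = x.reshape(-1, x.shape[-1]).mean(dim=0)
        return torch.softmax(self.proj(pooled), dim=-1)

# Toy usage: one shared router drives adapters in every adapted layer.
router = BatchRouter(d_in=64, n_experts=4)
adapter = TuckerExpertAdapter(d_in=64, d_out=64, n_experts=4)
x = torch.randn(2, 10, 64)      # (batch, seq, hidden)
gate = router(x)                # one gate vector for the whole batch
out = adapter(x, gate)          # adapter output, shape (2, 10, 64)
```

In this sketch every expert is reconstructed from the same three factor matrices and differs only through a small combination of core slices, so adding an expert costs only one extra row of `U_exp`; sharing one `BatchRouter` across all $L$ adapted layers is what removes the per-layer router parameters. How TuckA itself mixes or groups experts hierarchically is not shown here.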
Related papers
- CoSA: Compressed Sensing-Based Adaptation of Large Language Models [21.688889188355645]
CoSA (Compressed Sensing-Based Adaptation) is a new PEFT method extended from compressed sensing theory. We show that CoSA provides a principled perspective for efficient and expressive multi-scale model adaptation. We evaluate CoSA on 10 diverse tasks, including natural language understanding and generation, employing 5 models of different scales from the RoBERTa, Llama, and Qwen families.
arXiv Detail & Related papers (2026-02-05T00:11:43Z)
- High-Rank Structured Modulation for Parameter-Efficient Fine-Tuning [57.85676271833619]
Low-rank Adaptation (LoRA) uses a low-rank update method to simulate full parameter fine-tuning. We present SMoA, a high-rank Structured MOdulation Adapter that uses fewer trainable parameters while maintaining a higher rank.
arXiv Detail & Related papers (2026-01-12T13:06:17Z) - A Survey on Parameter-Efficient Fine-Tuning for Foundation Models in Federated Learning [5.280048850098648]
Foundation models have revolutionized artificial intelligence by providing robust, versatile architectures pre-trained on large-scale datasets.<n>Adapting these massive models to specific downstream tasks requires fine-tuning, which can be prohibitively expensive in computational resources.<n>This survey provides a comprehensive review of the integration of PEFT techniques within federated learning environments.
arXiv Detail & Related papers (2025-04-29T18:18:39Z) - MoLEx: Mixture of Layer Experts for Finetuning with Sparse Upcycling [2.1605931466490795]
Large-scale pre-training of deep models, followed by fine-tuning them, has become the cornerstone of natural language processing (NLP)<n>In this paper, we study layers as extractors of different types of linguistic information that are valuable when used in conjunction.<n>We propose the Mixture of Layer Experts (MoLEx), a novel sparse mixture of experts whose experts are layers in the pre-trained model.
arXiv Detail & Related papers (2025-03-14T07:22:07Z) - ALoRE: Efficient Visual Adaptation via Aggregating Low Rank Experts [71.91042186338163]
ALoRE is a novel PETL method that reuses the hypercomplex parameterized space constructed by Kronecker product to Aggregate Low Rank Experts.<n>Thanks to the artful design, ALoRE maintains negligible extra parameters and can be effortlessly merged into the frozen backbone.
arXiv Detail & Related papers (2024-12-11T12:31:30Z) - RECAST: Reparameterized, Compact weight Adaptation for Sequential Tasks [16.512587987753967]
RECAST is a novel method that dramatically reduces task-specific trainable parameters to fewer than 50.<n>We show that RECAST outperforms the state-of-the-art by up to 3% across various scales, architectures, and parameter spaces.
arXiv Detail & Related papers (2024-11-25T19:08:38Z) - ETHER: Efficient Finetuning of Large-Scale Models with Hyperplane Reflections [59.839926875976225]
We propose the ETHER transformation family, which performs Efficient fineTuning via HypErplane Reflections.
In particular, we introduce ETHER and its relaxation ETHER+, which match or outperform existing PEFT methods with significantly fewer parameters.
arXiv Detail & Related papers (2024-05-30T17:26:02Z) - Parameter-Efficient Fine-Tuning With Adapters [5.948206235442328]
This research introduces a novel adaptation method utilizing the UniPELT framework as a base.
Our method employs adapters, which enable efficient transfer of pretrained models to new tasks with minimal retraining of the base model parameters.
arXiv Detail & Related papers (2024-05-09T01:40:38Z) - Scaling Pre-trained Language Models to Deeper via Parameter-efficient
Architecture [68.13678918660872]
We design a more capable parameter-sharing architecture based on the matrix product operator (MPO).
MPO decomposition can reorganize and factorize the information of a parameter matrix into two parts.
Our architecture shares the central tensor across all layers for reducing the model size.
arXiv Detail & Related papers (2023-03-27T02:34:09Z) - BASE Layers: Simplifying Training of Large, Sparse Models [53.98145464002843]
We introduce a new balanced assignment of experts (BASE) layer for large language models.
Sparse layers can dramatically improve the efficiency of training and inference by routing each token to specialized expert modules.
We formulate token-to-expert allocation as a linear assignment problem, allowing an optimal assignment in which each expert receives an equal number of tokens (a small illustrative sketch of this assignment step follows the list below).
arXiv Detail & Related papers (2021-03-30T23:08:32Z)