TenExp: Mixture-of-Experts-Based Tensor Decomposition Structure Search Framework
- URL: http://arxiv.org/abs/2603.02720v1
- Date: Tue, 03 Mar 2026 08:19:31 GMT
- Title: TenExp: Mixture-of-Experts-Based Tensor Decomposition Structure Search Framework
- Authors: Ting-Wei Zhou, Xi-Le Zhao, Sheng Liu, Wei-Hao Wu, Yu-Bang Zheng, Deyu Meng
- Abstract summary: Current tensor decomposition structure search methods are still confined to a fixed factor-interaction family. We design a mixture-of-experts-based tensor decomposition structure search framework (termed TenExp) that dynamically selects and activates suitable tensor decompositions in an unsupervised fashion.
- Score: 68.54772029557186
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, new tensor decompositions have continued to emerge and to receive increasing attention. Selecting a suitable tensor decomposition that exactly captures the low-rank structure behind the data is at the heart of the tensor decomposition field, yet it remains a challenging and relatively under-explored problem. Current tensor decomposition structure search methods are still confined to a fixed factor-interaction family (e.g., tensor contraction) and cannot deliver a mixture of decompositions. To address this problem, we design a mixture-of-experts-based tensor decomposition structure search framework (termed TenExp), which dynamically selects and activates suitable tensor decompositions in an unsupervised fashion. The framework enjoys two advantages over state-of-the-art tensor decomposition structure search methods. First, TenExp can provide a suitable single decomposition beyond a fixed factor-interaction family. Second, TenExp can deliver a suitable mixture of decompositions beyond a single decomposition. Theoretically, we provide an approximation error bound for TenExp, which reveals its approximation capability. Extensive experiments on both synthetic and real-world datasets demonstrate the superiority of TenExp over state-of-the-art tensor decomposition-based methods.
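To make the core idea concrete, below is a minimal sketch of mixture-of-experts selection over candidate decompositions, assuming a toy gating vector that weights reconstructions from hypothetical CP and Tucker "experts". The actual TenExp architecture, gating mechanism, candidate family, and unsupervised training objective are not specified here and will differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def cp_reconstruct(A, B, C):
    # Rank-R CP expert: sum of R rank-1 outer products
    return np.einsum("ir,jr,kr->ijk", A, B, C)

def tucker_reconstruct(G, U, V, W):
    # Tucker expert: core tensor contracted with one factor matrix per mode
    return np.einsum("pqr,ip,jq,kr->ijk", G, U, V, W)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy 3-way tensor shapes and randomly initialized expert parameters
I, J, K, R = 8, 9, 10, 3
A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))
G = rng.standard_normal((R, R, R))
U, V, W = (rng.standard_normal((n, R)) for n in (I, J, K))

experts = [cp_reconstruct(A, B, C), tucker_reconstruct(G, U, V, W)]
gate_logits = rng.standard_normal(len(experts))  # in a real MoE, learned
weights = softmax(gate_logits)

# The mixture output is a convex combination of expert reconstructions;
# a sparse gate (e.g., top-1) would instead "activate" one decomposition.
mixture = sum(w * X for w, X in zip(weights, experts))
print(weights, mixture.shape)
```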
Related papers
- Bayesian Fully-Connected Tensor Network for Hyperspectral-Multispectral Image Fusion [20.64193953092791]
We present the Bayesian Fully-Connected Tensor Network (BFCTN) decomposition method for hyperspectral-multispectral image fusion. BFCTN not only achieves state-of-the-art fusion accuracy and strong robustness but also exhibits practical applicability in complex real-world scenarios.
arXiv Detail & Related papers (2025-10-21T08:19:54Z)
- Loss-Complexity Landscape and Model Structure Functions [53.92822954974537]
We develop a framework for dualizing the Kolmogorov structure function $h_x(\alpha)$. We establish a mathematical analogy between information-theoretic constructs and statistical mechanics, and explicitly prove the Legendre-Fenchel duality between the structure function and the free energy (a reference form of the transform is sketched after this entry).
arXiv Detail & Related papers (2025-07-17T21:31:45Z)
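For reference only, the Legendre-Fenchel transform underlying such a duality has the standard form below; the paper's precise definitions of $h_x(\alpha)$ and the free energy are not reproduced here.

```latex
% Standard Legendre-Fenchel transform of a function f (reference form;
% the paper relates h_x(alpha) and a free energy via such a transform):
f^{*}(\beta) = \sup_{\alpha} \bigl[ \beta \alpha - f(\alpha) \bigr]
```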
- Score-Based Model for Low-Rank Tensor Recovery [49.158601255093416]
Low-rank tensor decompositions (TDs) provide an effective framework for multiway data analysis. Traditional TD methods rely on predefined structural assumptions, such as CP or Tucker decompositions. We propose a score-based model that eliminates the need for predefined structural or distributional assumptions.
arXiv Detail & Related papers (2025-06-27T15:05:37Z)
- Low-Rank Tensor Recovery via Variational Schatten-p Quasi-Norm and Jacobian Regularization [49.85875869048434]
We propose a CP-based low-rank tensor function parameterized by neural networks for implicit neural representation. To achieve a sparser CP decomposition, we introduce a variational Schatten-p quasi-norm to prune redundant rank-1 components. For smoothness, we propose a regularization term based on the spectral norm of the Jacobian and Hutchinson's trace estimator (a sketch of the estimator follows this entry).
arXiv Detail & Related papers (2025-06-27T11:23:10Z)
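As background for the summary above, here is a minimal sketch of Hutchinson's trace estimator with Rademacher probes; `matvec` and the test matrix `A` are illustrative stand-ins, and the paper's actual Jacobian-based regularizer is not reproduced.

```python
import numpy as np

def hutchinson_trace(matvec, dim, num_samples=64, rng=None):
    """Estimate tr(A) given only matrix-vector products v -> A @ v.

    Uses Rademacher probes z, for which E[z^T A z] = tr(A).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    total = 0.0
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=dim)
        total += z @ matvec(z)
    return total / num_samples

# Sanity check against the exact trace of an explicit matrix
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
est = hutchinson_trace(lambda v: A @ v, dim=50, num_samples=2000, rng=rng)
print(est, np.trace(A))  # the two values should be approximately equal
```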
- A Scalable Factorization Approach for High-Order Structured Tensor Recovery [30.876260188209105]
Tensor decompositions, which represent an $N$-order tensor using approximately $N$ factors of much smaller dimensions, can significantly reduce the number of parameters. A computationally and memory-efficient approach to these problems is to optimize directly over the factors using local algorithms. We present a unified framework for this factorization approach to solving various tensor decomposition problems.
arXiv Detail & Related papers (2025-06-19T05:07:07Z)
- A Multi-resolution Low-rank Tensor Decomposition [10.196333441334895]
We propose a multi-resolution low-rank tensor decomposition to describe a tensor in a hierarchical fashion.
The central idea of the decomposition is to recast the tensor into multiple lower-dimensional tensors to exploit the structure at different levels of resolution.
arXiv Detail & Related papers (2024-05-27T19:44:29Z)
- Mitigating Heterogeneity among Factor Tensors via Lie Group Manifolds for Tensor Decomposition Based Temporal Knowledge Graph Embedding [15.579069282539502]
We introduce a novel method that maps factor tensors onto a unified smooth Lie group manifold so that their distribution is approximately homogeneous in tensor decomposition. The proposed method can be directly integrated into existing tensor decomposition based TKGE methods without introducing extra parameters.
arXiv Detail & Related papers (2024-04-14T06:10:46Z)
- Scalable CP Decomposition for Tensor Learning using GPU Tensor Cores [47.87810316745786]
We propose a compression-based tensor decomposition framework, namely the exascale-tensor, to support exascale tensor decomposition.
Compared to the baselines, the exascale-tensor supports 8,000x larger tensors and achieves a speedup of up to 6.95x.
We also apply our method to two real-world applications, including gene analysis and tensor layer neural networks.
arXiv Detail & Related papers (2023-11-22T21:04:59Z)
- A generalizable framework for low-rank tensor completion with numerical priors [16.3738101631138]
We present the Generalized CP Decomposition Tensor Completion (GCDTC) framework, the first generalizable framework for low-rank tensor completion with numerical priors.
We test GCDTC by further proposing the Smooth Poisson Tensor Completion (SPTC) algorithm, an instantiation of the GCDTC framework, whose performance exceeds the current state of the art.
arXiv Detail & Related papers (2023-02-12T09:50:32Z)
- Error Analysis of Tensor-Train Cross Approximation [88.83467216606778]
We provide accuracy guarantees in terms of the entire tensor for both exact and noisy measurements.
Results are verified by numerical experiments, and may have important implications for the usefulness of cross approximations for high-order tensors (a minimal tensor-train reconstruction is sketched after this entry).
arXiv Detail & Related papers (2022-07-09T19:33:59Z)
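For context, a tensor-train (TT) representation of the kind analyzed above writes each entry as a chain of matrix products over small core tensors; a minimal sketch with made-up shapes follows (cross approximation itself, which builds such cores from sampled fibers, is not shown).

```python
import numpy as np

rng = np.random.default_rng(0)

# TT cores for a 3-way tensor of shape (I, J, K) with TT ranks (1, r1, r2, 1)
I, J, K, r1, r2 = 4, 5, 6, 2, 3
G1 = rng.standard_normal((1, I, r1))
G2 = rng.standard_normal((r1, J, r2))
G3 = rng.standard_normal((r2, K, 1))

def tt_entry(i, j, k):
    # Each entry is a product of one matrix slice per core
    return (G1[:, i, :] @ G2[:, j, :] @ G3[:, k, :])[0, 0]

# Full reconstruction: contract neighboring cores over their shared ranks
# (the boundary axes a and d have size 1 and are summed away)
T = np.einsum("aib,bjc,ckd->ijk", G1, G2, G3)
assert np.isclose(T[1, 2, 3], tt_entry(1, 2, 3))
print(T.shape)
```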
- Understanding Deflation Process in Over-parametrized Tensor Decomposition [17.28303004783945]
We study the training dynamics for gradient flow on over-parametrized tensor decomposition problems.
Empirically, such a training process often first fits larger components and then discovers smaller components (a toy illustration follows this entry).
arXiv Detail & Related papers (2021-06-11T18:51:36Z)
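A toy numerical illustration of the deflation picture described above, assuming gradient descent (as a stand-in for the gradient flow studied in the paper) on an over-parametrized sum of rank-1 components fitting a two-component target; all shapes, magnitudes, and hyperparameters here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10

def unit(x):
    return x / np.linalg.norm(x)

def rank1(a, b, c):
    return np.einsum("i,j,k->ijk", a, b, c)

# Ground truth: two rank-1 components with well-separated magnitudes
u1, v1, w1, u2, v2, w2 = (unit(rng.standard_normal(n)) for _ in range(6))
T = 3.0 * rank1(u1, v1, w1) + 1.0 * rank1(u2, v2, w2)

# Over-parametrized model (R = 6 components for a rank-2 target),
# initialized small and trained by plain gradient descent
R, lr, steps = 6, 1e-2, 4000
A, B, C = (0.01 * rng.standard_normal((R, n)) for _ in range(3))
for _ in range(steps):
    resid = np.einsum("ri,rj,rk->ijk", A, B, C) - T
    gA = 2 * np.einsum("ijk,rj,rk->ri", resid, B, C)
    gB = 2 * np.einsum("ijk,ri,rk->rj", resid, A, C)
    gC = 2 * np.einsum("ijk,ri,rj->rk", resid, A, B)
    A, B, C = A - lr * gA, B - lr * gB, C - lr * gC

# Component magnitudes: typically one component locks onto the large
# ground-truth direction first, consistent with the deflation picture
norms = (np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=1)
         * np.linalg.norm(C, axis=1))
print(np.sort(norms)[::-1])
```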