NeuMatC: A General Neural Framework for Fast Parametric Matrix Operation
- URL: http://arxiv.org/abs/2511.22934v1
- Date: Fri, 28 Nov 2025 07:21:17 GMT
- Title: NeuMatC: A General Neural Framework for Fast Parametric Matrix Operation
- Authors: Chuan Wang, Xi-le Zhao, Zhilong Han, Liang Li, Deyu Meng, Michael K. Ng
- Abstract summary: We propose the Neural Matrix Computation Framework (NeuMatC), which elegantly tackles general parametric matrix operation tasks. NeuMatC learns, in an unsupervised manner, a low-rank and continuous mapping from parameters to their corresponding matrix operation results. Experimental results on both synthetic and real-world datasets demonstrate the promising performance of NeuMatC.
- Score: 75.91285900600549
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Matrix operations (e.g., inversion and singular value decomposition (SVD)) are fundamental in science and engineering. In many emerging real-world applications (such as wireless communication and signal processing), these operations must be performed repeatedly over matrices whose parameters vary continuously. However, conventional methods tackle each matrix operation independently, leaving the inherent low-rankness and continuity along the parameter dimension underexploited and resulting in significantly redundant computation. To address this challenge, we propose the Neural Matrix Computation Framework (NeuMatC), which elegantly tackles general parametric matrix operation tasks by leveraging the underlying low-rankness and continuity along the parameter dimension. Specifically, NeuMatC learns, in an unsupervised manner, a low-rank and continuous mapping from parameters to their corresponding matrix operation results. Once trained, NeuMatC enables efficient computation at arbitrary parameters using only a few basic operations (e.g., matrix multiplications and nonlinear activations), significantly reducing redundant computation. Experimental results on both synthetic and real-world datasets demonstrate the promising performance of NeuMatC, exemplified by over $3\times$ speedup in parametric inversion and $10\times$ speedup in parametric SVD compared to the widely used NumPy baseline in wireless communication, while maintaining acceptable accuracy.
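To make the parametric setting concrete, here is a minimal NumPy sketch (my illustration, not the authors' code) of the baseline the abstract refers to: a hypothetical family A(t) varying continuously with a parameter t, inverted independently at each parameter value, plus a quick check that the stacked inverses are approximately low-rank along the parameter dimension.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((32, 32)) / 32  # scaled so A(t) stays well-conditioned
C = rng.standard_normal((32, 32)) / 32

def A(t):
    # Hypothetical smooth parametric family; any well-conditioned A(t) works.
    return np.eye(32) + np.sin(t) * B + np.cos(t) * C

ts = np.linspace(0.0, 1.0, 200)
# The NumPy baseline: re-invert from scratch at every parameter value.
inverses = np.stack([np.linalg.inv(A(t)) for t in ts])

# Evidence of low-rankness along the parameter dimension: vectorize each
# inverse and inspect the singular value decay of the 200 x 1024 matrix.
stacked = inverses.reshape(len(ts), -1)
sv = np.linalg.svd(stacked, compute_uv=False)
print(sv[:10] / sv[0])  # rapid decay: the inverses live near a low-rank set
```

The fast decay of these singular values is the redundancy NeuMatC is designed to exploit.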
Related papers
- Explicit Discovery of Nonlinear Symmetries from Dynamic Data [50.20526548924647]
LieNLSD is the first method capable of determining the number of infinitesimal generators with nonlinear terms and their explicit expressions. LieNLSD shows qualitative advantages over existing methods and improves the long-rollout accuracy of neural PDE solvers by over 20%.
arXiv Detail & Related papers (2025-10-02T09:54:08Z) - Scaling Probabilistic Circuits via Monarch Matrices [109.65822339230853]
Probabilistic Circuits (PCs) are tractable representations of probability distributions. We propose a novel sparse and structured parameterization for the sum blocks in PCs.
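As a rough sketch (my illustration with assumed shapes, not the paper's parameterization), Monarch-style matrices interleave two block-diagonal factors with a fixed permutation, giving sub-quadratic matrix-vector products:

```python
import numpy as np

m = 8                                # n = m * m = 64
rng = np.random.default_rng(1)
L = rng.standard_normal((m, m, m))   # m blocks of size m x m
R = rng.standard_normal((m, m, m))

def monarch_matvec(x):
    X = x.reshape(m, m)
    Z = np.einsum('bij,bj->bi', R, X)  # block-diagonal factor R
    Z = Z.T                            # fixed permutation (transpose)
    W = np.einsum('bij,bj->bi', L, Z)  # block-diagonal factor L
    return W.T.reshape(-1)             # undo the permutation

x = rng.standard_normal(m * m)
y = monarch_matvec(x)  # ~2*m^3 = O(n^1.5) multiplies vs. n^2 for dense
print(y.shape)
```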
arXiv Detail & Related papers (2025-06-14T07:39:15Z) - Low-Bit Integerization of Vision Transformers using Operand Reordering for Efficient Hardware [0.7136205674624813]
We analyze the computation graph and propose an integerization process based on operation reordering. This enables integerized matrix multiplications and linear modules that directly process the quantized input. Experimental results show that our low-bit inference reduces per-PE power consumption for linear layers and matrix multiplications.
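As a hedged sketch of the general principle (not the paper's specific reordering scheme), integerized matrix multiplication quantizes both operands, multiplies in integer arithmetic, and folds the scales back in once at the end:

```python
import numpy as np

def quantize(x, bits=8):
    # Symmetric per-tensor quantization; scheme is illustrative only.
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1)
    q = np.clip(np.round(x / scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q.astype(np.int32), scale

rng = np.random.default_rng(2)
X = rng.standard_normal((4, 16))
W = rng.standard_normal((16, 8))

qX, sX = quantize(X)
qW, sW = quantize(W)
Y_int = qX @ qW                            # pure integer matmul (the cheap part)
Y = Y_int.astype(np.float64) * (sX * sW)   # rescale once at the end
print(np.abs(Y - X @ W).max())             # small quantization error
```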
arXiv Detail & Related papers (2025-04-11T16:09:54Z) - Generalized Tensor-based Parameter-Efficient Fine-Tuning via Lie Group Transformations [50.010924231754856]
Adapting pre-trained foundation models for diverse downstream tasks is a core practice in artificial intelligence. Because full fine-tuning of such models is costly, parameter-efficient fine-tuning (PEFT) methods like LoRA have emerged and become a growing research focus. We propose a generalization that extends matrix-based PEFT methods to higher-dimensional parameter spaces without compromising their structural properties.
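For context, the matrix-based PEFT pattern being generalized is the LoRA update W + BA; a minimal sketch with illustrative shapes and rank:

```python
import numpy as np

rng = np.random.default_rng(3)
d_out, d_in, r = 64, 64, 4
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = 0.01 * rng.standard_normal((r, d_in))  # trainable rank-r factor
B = np.zeros((d_out, r))                   # zero-init so training starts at W

x = rng.standard_normal(d_in)
# Only r*(d_in + d_out) parameters are trained, not d_out*d_in.
y = W @ x + B @ (A @ x)
```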
arXiv Detail & Related papers (2025-04-01T14:36:45Z) - ADC/DAC-Free Analog Acceleration of Deep Neural Networks with Frequency
Transformation [2.7488316163114823]
This paper proposes a novel approach to an energy-efficient acceleration of frequency-domain neural networks by utilizing analog-domain frequency-based tensor transformations.
Our approach achieves more compact cells by eliminating the need for trainable parameters in the transformation matrix.
On a 16$\times$16 crossbar, for 8-bit input processing, the proposed approach achieves an energy efficiency of 1602 tera-operations per second per watt.
arXiv Detail & Related papers (2023-09-04T19:19:39Z) - Optimized Sparse Matrix Operations for Reverse Mode Automatic Differentiation [3.72826300260966]
We present an implementation of a CSR-based sparse matrix wrapper for PyTorch with acceleration for basic matrix operations, as well as automatic differentiability.
We also present several applications of the resulting sparse kernels to optimization problems, demonstrating ease of implementation and performance measurements versus their dense counterparts.
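As a quick illustration of the kind of operation involved (using PyTorch's built-in CSR support in recent versions, not the paper's wrapper), a CSR sparse matrix multiplies a dense matrix with gradients flowing to the dense operand:

```python
import torch

crow = torch.tensor([0, 2, 3, 4])          # row pointers (rows + 1 entries)
col = torch.tensor([0, 2, 1, 2])           # column indices of nonzeros
val = torch.tensor([1.0, 2.0, 3.0, 4.0])   # nonzero values
A = torch.sparse_csr_tensor(crow, col, val, size=(3, 3))

X = torch.randn(3, 5, requires_grad=True)
Y = A @ X            # sparse-dense matmul; needs a recent PyTorch version
Y.sum().backward()   # autograd w.r.t. the dense operand
print(X.grad.shape)
```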
arXiv Detail & Related papers (2022-12-10T00:46:51Z) - Fast Differentiable Matrix Square Root and Inverse Square Root [65.67315418971688]
We propose two more efficient variants to compute the differentiable matrix square root and the inverse square root.
For the forward propagation, one method is to use the Matrix Taylor Polynomial (MTP), and the other is to use Matrix Padé Approximants (MPA).
A series of numerical tests show that both methods yield considerable speed-up compared with the SVD or the NS iteration.
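For context, the NS baseline mentioned above is itself matmul-only; here is a minimal NumPy sketch (my illustration) of the coupled Newton-Schulz iteration for the matrix square root and its inverse:

```python
import numpy as np

def ns_sqrt(A, iters=15):
    # A must be symmetric positive definite; normalizing by the Frobenius
    # norm puts its eigenvalues in (0, 1], where the iteration converges.
    norm = np.linalg.norm(A)
    I = np.eye(A.shape[0])
    Y, Z = A / norm, I.copy()
    for _ in range(iters):
        T = 0.5 * (3.0 * I - Z @ Y)
        Y, Z = Y @ T, T @ Z
    # Y -> sqrt(A/norm) and Z -> inv(sqrt(A/norm)); rescale both.
    return np.sqrt(norm) * Y, Z / np.sqrt(norm)

rng = np.random.default_rng(4)
M = rng.standard_normal((16, 16))
A = M @ M.T + 16 * np.eye(16)      # well-conditioned SPD test matrix
S, S_inv = ns_sqrt(A)
print(np.linalg.norm(S @ S - A))   # ~0: S is close to the matrix square root
```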
arXiv Detail & Related papers (2022-01-29T10:00:35Z) - Fast Differentiable Matrix Square Root [65.67315418971688]
We propose two more efficient variants to compute the differentiable matrix square root.
For the forward propagation, one method is to use the Matrix Taylor Polynomial (MTP); the other is to use Matrix Padé Approximants (MPA).
arXiv Detail & Related papers (2022-01-21T12:18:06Z) - Neural networks with trainable matrix activation functions [7.999703756441757]
This work develops a systematic approach to constructing matrix-valued activation functions.
The proposed activation functions depend on parameters that are trained along with the weights and bias vectors.
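As a loose illustration (my reading of the idea, not the paper's exact construction), a matrix-valued activation can be realized as a trainable diagonal matrix applied per feature:

```python
import torch

class TrainableDiagActivation(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.alpha = torch.nn.Parameter(torch.ones(dim))        # slope, x >= 0
        self.beta = torch.nn.Parameter(0.1 * torch.ones(dim))   # slope, x < 0

    def forward(self, x):
        # Equivalent to multiplying x by a diagonal matrix D(x) whose entries
        # switch between trainable slopes: a per-feature trainable leaky ReLU.
        return torch.where(x >= 0, self.alpha * x, self.beta * x)

act = TrainableDiagActivation(8)
y = act(torch.randn(4, 8))  # alpha and beta train alongside weights/biases
```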
arXiv Detail & Related papers (2021-09-21T04:11:26Z) - Parallelized Computation and Backpropagation Under Angle-Parametrized Orthogonal Matrices [0.0]
We show how an apparently sequential elementary rotation parametrization can be restructured into blocks of commutative operations.
We discuss parametric restrictions of interest to generative modeling and present promising performance results with a prototype GPU implementation.
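For context, here is a naive sketch (not the paper's parallel scheme) of the angle parametrization itself: an orthogonal matrix assembled as a sequential product of Givens rotations, exactly the sequential structure the paper restructures into commuting blocks:

```python
import numpy as np

def orthogonal_from_angles(n, angles):
    # One angle per coordinate pair (i, j); Q is the product of all rotations.
    Q = np.eye(n)
    k = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            c, s = np.cos(angles[k]), np.sin(angles[k])
            G = np.eye(n)
            G[i, i], G[i, j] = c, -s
            G[j, i], G[j, j] = s, c
            Q = G @ Q
            k += 1
    return Q

n = 5
angles = np.random.default_rng(5).uniform(0, 2 * np.pi, n * (n - 1) // 2)
Q = orthogonal_from_angles(n, angles)
print(np.linalg.norm(Q.T @ Q - np.eye(n)))  # ~0: Q is orthogonal
```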
arXiv Detail & Related papers (2021-05-30T00:47:03Z) - Fast and Accurate Pseudoinverse with Sparse Matrix Reordering and Incremental Approach [4.710916891482697]
The pseudoinverse, a generalization of the matrix inverse, has been extensively utilized in machine learning.
FastPI is a novel incremental singular value decomposition (SVD) based pseudoinverse method for sparse matrices.
We show that FastPI computes the pseudoinverse faster than other approximate methods without loss of accuracy.
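For reference, a compact sketch (mine, not FastPI's algorithm) of the SVD route to the pseudoinverse that incremental methods like FastPI build on: with A = U S V^T, the pseudoinverse is V S^+ U^T, where S^+ inverts only the nonzero singular values.

```python
import numpy as np

def pinv_via_svd(A, tol=1e-10):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Invert only singular values above a relative threshold.
    s_inv = np.where(s > tol * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv[:, None] * U.T)

rng = np.random.default_rng(6)
A = rng.standard_normal((8, 5)) @ rng.standard_normal((5, 12))  # rank <= 5
P = pinv_via_svd(A)
print(np.linalg.norm(A @ P @ A - A))  # ~0: Moore-Penrose condition holds
```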
arXiv Detail & Related papers (2020-11-09T07:47:10Z) - A Scalable, Adaptive and Sound Nonconvex Regularizer for Low-rank Matrix Completion [60.52730146391456]
We propose a new nonconvex low-rank regularizer, the "nuclear norm minus Frobenius norm" regularizer, which is scalable, adaptive, and sound.
It bypasses the computation of singular values and allows fast optimization by standard algorithms.
It obtains state-of-the-art recovery performance while being the fastest among existing matrix learning methods.
arXiv Detail & Related papers (2020-08-14T18:47:58Z)