Recursive Sketched Interpolation: Efficient Hadamard Products of Tensor Trains
- URL: http://arxiv.org/abs/2602.17974v1
- Date: Fri, 20 Feb 2026 04:07:37 GMT
- Title: Recursive Sketched Interpolation: Efficient Hadamard Products of Tensor Trains
- Authors: Zhaonan Meng, Yuehaw Khoo, Jiajia Li, E. Miles Stoudenmire
- Abstract summary: The Hadamard product of two tensors in the tensor-train (TT) format is a fundamental operation across various applications. We introduce Recursive Sketched Interpolation (RSI), a ``scale product'' algorithm that computes the Hadamard product of TTs at a computational cost of $\mathcal{O}(\chi^3)$.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The Hadamard product of two tensors in the tensor-train (TT) format is a fundamental operation across various applications, such as TT-based function multiplication for nonlinear differential equations or convolutions. However, conventional methods for computing this product typically scale as at least $\mathcal{O}(\chi^4)$ with respect to the TT bond dimension (TT-rank) $\chi$, creating a severe computational bottleneck in practice. By combining randomized tensor-train sketching with slice selection via interpolative decomposition, we introduce Recursive Sketched Interpolation (RSI), a ``scale product'' algorithm that computes the Hadamard product of TTs at a computational cost of $\mathcal{O}(\chi^3)$. Benchmarks across various TT scenarios demonstrate that RSI offers superior scalability compared to traditional methods while maintaining comparable accuracy. We generalize RSI to compute more complex operations, including Hadamard products of multiple TTs and other element-wise nonlinear mappings, without increasing the complexity beyond $\mathcal{O}(\chi^3)$.
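For context on the $\mathcal{O}(\chi^4)$ baseline the abstract refers to, the conventional TT Hadamard product can be sketched in a few lines of NumPy. This is the textbook core-wise Kronecker construction, not the paper's RSI algorithm, and all function names here are illustrative: each core of the product is a slice-wise Kronecker product of the input cores, so the product's bond dimension multiplies to $\chi_A \chi_B$, and recompressing that back to a moderate rank is what drives the quartic cost.

```python
import numpy as np

# Illustrative sketch of the *conventional* TT Hadamard product (not the
# paper's RSI algorithm): cores combine via a Kronecker product over the
# bond indices, so the product's bond dimension is chi_A * chi_B.
rng = np.random.default_rng(0)

def random_tt(dims, rank):
    """Random TT cores, each of shape (r_left, n_k, r_right)."""
    ranks = [1] + [rank] * (len(dims) - 1) + [1]
    return [rng.standard_normal((ranks[k], n, ranks[k + 1]))
            for k, n in enumerate(dims)]

def tt_to_dense(cores):
    """Contract all cores into the full (dense) tensor."""
    out = cores[0]                                   # (1, n_0, r_1)
    for core in cores[1:]:
        out = np.einsum('...a,aib->...ib', out, core)
    return out.squeeze(axis=(0, -1))                 # drop boundary ranks

def tt_hadamard(a_cores, b_cores):
    """Hadamard product in TT form: slice-wise Kronecker product of cores."""
    return [np.stack([np.kron(a[:, i, :], b[:, i, :])
                      for i in range(a.shape[1])], axis=1)
            for a, b in zip(a_cores, b_cores)]

dims = [3, 4, 3]
A, B = random_tt(dims, rank=2), random_tt(dims, rank=2)
C = tt_hadamard(A, B)                                # bond dimension 2*2 = 4
assert np.allclose(tt_to_dense(C), tt_to_dense(A) * tt_to_dense(B))
```

The assertion confirms the construction is exact, but note that the intermediate rank has grown from 2 to 4; avoiding the explicit formation and recompression of this enlarged TT is precisely the problem RSI addresses.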
Related papers
- Block encoding of sparse matrices with a periodic diagonal structure [67.45502291821956]
We provide an explicit quantum circuit for block encoding a sparse matrix with a periodic diagonal structure. Various applications of the presented methodology are discussed in the context of solving differential problems.
arXiv Detail & Related papers (2026-02-11T07:24:33Z) - Approximation and learning with compositional tensor trains [0.6089496237595778]
We introduce compositional tensor trains (CTTs) for the approximation of multivariate functions. CTTs combine the expressivity of compositional models with the algorithmic efficiency of tensor algebra.
arXiv Detail & Related papers (2025-12-19T20:59:48Z) - Polar Separable Transform for Efficient Orthogonal Rotation-Invariant Image Representation [10.420952921069759]
We introduce PSepT (Polar Separable Transform), a separable orthogonal transform that overcomes the non-separability barrier in polar coordinates. Results show better numerical stability, computational efficiency, and competitive classification performance on structured datasets. The framework enables high-order moment analysis previously infeasible with classical methods, opening new possibilities for robust image analysis applications.
arXiv Detail & Related papers (2025-10-10T08:26:09Z) - Tensor Decomposition Networks for Fast Machine Learning Interatomic Potential Computations [48.46721044282335]
We introduce tensor decomposition networks (TDNs) and evaluate them on PubChemQCR, a newly curated molecular relaxation dataset containing 105 million DFT-calculated snapshots. Results show that TDNs achieve competitive performance with a dramatic speedup in computations.
arXiv Detail & Related papers (2025-07-01T18:46:27Z) - Efficient Network Automatic Relevance Determination [30.611086842690426]
Network Automatic Relevance Determination (NARD) is an extension of ARD for linear probabilistic models. NARD simultaneously models sparse relationships between inputs $X \in \mathbb{R}^{d \times N}$ and outputs $Y \in \mathbb{R}^{m \times N}$.
arXiv Detail & Related papers (2025-06-14T05:20:25Z) - Tensor Train Multiplication [0.0]
The computational complexity and memory requirements of the TTM algorithm scale as $\chi^3$ and $\chi^2$, respectively.
This represents a significant improvement compared with the conventional approach.
The TTM algorithm paves the way towards GPU accelerated tensor network simulations of computational fluid dynamics problems with large bond dimensions.
arXiv Detail & Related papers (2024-10-10T12:36:49Z) - Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) frameworks, necessarily require $\Omega(d^{k^\star/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z) - Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously provide inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - CORE: Common Random Reconstruction for Distributed Optimization with
Provable Low Communication Complexity [110.50364486645852]
Communication complexity has become a major bottleneck for speeding up training and scaling up the number of machines. We propose CORE (Common Random Reconstruction), which can be used to compress information transmitted between machines.
arXiv Detail & Related papers (2023-09-23T08:45:27Z) - Equivalence Between SE(3) Equivariant Networks via Steerable Kernels and
Group Convolution [90.67482899242093]
A wide range of techniques have been proposed in recent years for designing neural networks for 3D data that are equivariant under rotation and translation of the input.
We provide an in-depth analysis of both methods and their equivalence and relate the two constructions to multiview convolutional networks.
We also derive new TFN non-linearities from our equivalence principle and test them on practical benchmark datasets.
arXiv Detail & Related papers (2022-11-29T03:42:11Z) - Average-Case Complexity of Tensor Decomposition for Low-Degree
Polynomials [93.59919600451487]
"Statistical-computational gaps" occur in many statistical inference tasks.
We consider a model for random order-3 decomposition where one component is slightly larger in norm than the rest.
We show that estimators based on the tensor entries can accurately estimate the largest component when $r \ll n^{3/2}$ but fail to do so when $r \gg n^{3/2}$.
arXiv Detail & Related papers (2022-11-10T00:40:37Z) - Complex-to-Real Sketches for Tensor Products with Applications to the
Polynomial Kernel [15.535749953841274]
Randomized sketches of a product of $p$ vectors follow a tradeoff between statistical efficiency and computational acceleration.
We propose a simple Complex-to-Real (CtR) modification of well-known sketches that replaces real random projections by complex ones.
We show that our method achieves state-of-the-art performance in terms of accuracy and speed compared to other randomized approximations from the literature.
arXiv Detail & Related papers (2022-02-04T09:15:43Z) - On Function Approximation in Reinforcement Learning: Optimism in the
Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the problem.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z) - Alternating minimization algorithms for graph regularized tensor
completion [8.26185178671935]
We consider a Canonical Polyadic (CP) decomposition approach to low-rank tensor completion (LRTC).
The usage of graph regularization entails benefits in the learning accuracy of LRTC, but at the same time, induces coupling graph Laplacian terms.
We propose efficient alternating minimization algorithms by leveraging the block structure of the underlying CP decomposition-based model.
arXiv Detail & Related papers (2020-08-28T23:20:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.