Laplace Approximation For Tensor Train Kernel Machines In System Identification
- URL: http://arxiv.org/abs/2512.02532v1
- Date: Tue, 02 Dec 2025 08:55:59 GMT
- Title: Laplace Approximation For Tensor Train Kernel Machines In System Identification
- Authors: Albert Saiapin, Kim Batselier
- Abstract summary: We introduce a Bayesian tensor train kernel machine that applies Laplace approximation to estimate the posterior distribution over a selected TT-core. Experiments show that core selection is largely independent of TT-ranks and feature structure, and that VI replaces cross-validation while offering up to 65x faster training.
- Score: 6.101839518775971
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To address the scalability limitations of Gaussian process (GP) regression, several approximation techniques have been proposed. One such method is based on tensor networks and utilizes an exponential number of basis functions without incurring exponential computational cost. However, extending this model to a fully probabilistic formulation introduces several design challenges. In particular, for tensor train (TT) models, it is unclear which TT-core should be treated in a Bayesian manner. We introduce a Bayesian tensor train kernel machine that applies Laplace approximation to estimate the posterior distribution over a selected TT-core and employs variational inference (VI) for precision hyperparameters. Experiments show that core selection is largely independent of TT-ranks and feature structure, and that VI replaces cross-validation while offering up to 65x faster training. The method's effectiveness is demonstrated on an inverse dynamics problem.
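With all other TT-cores frozen, the model is linear in the selected core, so for a Gaussian likelihood the Laplace approximation over that core reduces to an exact Gaussian posterior. The sketch below illustrates only this conditional step; the names (`Phi`, `alpha`, `beta`) and the random stand-in for the features contracted with the frozen cores are illustrative assumptions, not the authors' code, and the precisions here are fixed rather than inferred by VI as in the paper.

```python
import numpy as np

# Minimal sketch (assumptions, not the paper's implementation): Laplace
# approximation over one selected TT-core. Conditioned on the remaining
# cores, a TT kernel machine is linear in that core, so the log-posterior
# is quadratic and the Laplace approximation is exact.

rng = np.random.default_rng(0)

N, D = 200, 16                       # samples, flattened size of the selected core
Phi = rng.standard_normal((N, D))    # stand-in for features contracted with frozen cores
w_true = rng.standard_normal(D)
beta = 25.0                          # noise precision (inferred via VI in the paper)
alpha = 1.0                          # prior precision on the selected core
y = Phi @ w_true + rng.normal(scale=beta**-0.5, size=N)

# MAP estimate; the Laplace covariance is the inverse Hessian of the
# negative log-posterior evaluated at the mode.
H = alpha * np.eye(D) + beta * Phi.T @ Phi   # Hessian at the mode
Sigma = np.linalg.inv(H)                     # posterior covariance
mu = beta * Sigma @ Phi.T @ y                # posterior mean (= MAP)

# Predictive mean and variance at new inputs.
Phi_new = rng.standard_normal((5, D))
pred_mean = Phi_new @ mu
pred_var = 1.0 / beta + np.einsum("nd,de,ne->n", Phi_new, Sigma, Phi_new)
print(pred_mean, pred_var)
```

In the full method this conditional update would sit inside a sweep over candidate cores, which is where the paper's observation that core selection is largely insensitive to TT-ranks and feature structure becomes relevant.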
Related papers
- When Bayesian Tensor Completion Meets Multioutput Gaussian Processes: Functional Universality and Rank Learning [53.17227599983122]
Functional tensor decomposition can analyze multi-dimensional data with real-valued indices. We propose a rank-revealing functional low-rank tensor completion (RR-F) method. We establish the universal approximation property of the model for continuous multi-dimensional signals.
arXiv Detail & Related papers (2025-12-25T03:15:52Z) - Kernel Regression of Multi-Way Data via Tensor Trains with Hadamard Overparametrization: The Dynamic Graph Flow Case [9.941965164307843]
Kernel Regression via Tensor Trains with Hadamard overparametrization (KReTTaH) is a regression-based framework for interpretable multi-way data imputation. KReTTaH consistently outperforms state-of-the-art alternatives.
arXiv Detail & Related papers (2025-09-26T11:00:05Z) - Interpretable Bayesian Tensor Network Kernel Machines with Automatic Rank and Feature Selection [5.240890834159944]
Tensor Network Kernel Machines speed up model learning by representing parameters as low-rank TNs. We propose a fully probabilistic framework that uses sparsity-inducing hierarchical priors on TN factors to infer model complexity.
arXiv Detail & Related papers (2025-07-15T09:37:49Z) - Tensor train representations of Greeks for Fourier-based pricing of multi-asset options [0.0]
Efficient computation of Greeks for multi-asset options remains a key challenge in quantitative finance. We propose a framework to compute Greeks in a single evaluation of a tensor train (TT). Numerical experiments on a five-asset min-call option in the Black-Scholes model show significant speed-ups of up to about $10^5\times$ over Monte Carlo simulation.
arXiv Detail & Related papers (2025-07-11T10:51:17Z) - Score-Based Model for Low-Rank Tensor Recovery [49.158601255093416]
Low-rank tensor decompositions (TDs) provide an effective framework for multiway data analysis. Traditional TD methods rely on predefined structural assumptions, such as CP or Tucker decompositions. We propose a score-based model that eliminates the need for predefined structural or distributional assumptions.
arXiv Detail & Related papers (2025-06-27T15:05:37Z) - TensorGRaD: Tensor Gradient Robust Decomposition for Memory-Efficient Neural Operator Training [91.8932638236073]
We introduce TensorGRaD, a novel method that directly addresses the memory challenges associated with large structured weights. We show that TensorGRaD reduces total memory usage by over $50\%$ while maintaining and sometimes even improving accuracy.
arXiv Detail & Related papers (2025-01-04T20:51:51Z) - Unveiling the Statistical Foundations of Chain-of-Thought Prompting Methods [59.779795063072655]
Chain-of-Thought (CoT) prompting and its variants have gained popularity as effective methods for solving multi-step reasoning problems.
We analyze CoT prompting from a statistical estimation perspective, providing a comprehensive characterization of its sample complexity.
arXiv Detail & Related papers (2024-08-25T04:07:18Z) - Computational and Statistical Guarantees for Tensor-on-Tensor Regression with Tensor Train Decomposition [27.29463801531576]
We study the theoretical and algorithmic aspects of the TT-based ToT regression model. We propose two algorithms to efficiently find solutions that satisfy constrained error bounds. We establish the linear convergence rate of both IHT and RGD.
arXiv Detail & Related papers (2024-06-10T03:51:38Z) - Variance-Reducing Couplings for Random Features [57.73648780299374]
Random features (RFs) are a popular technique to scale up kernel methods in machine learning.
We find couplings to improve RFs defined on both Euclidean and discrete input spaces.
We reach surprising conclusions about the benefits and limitations of variance reduction as a paradigm.
arXiv Detail & Related papers (2024-05-26T12:25:09Z) - Handling The Non-Smooth Challenge in Tensor SVD: A Multi-Objective Tensor Recovery Framework [15.16222081389267]
We introduce a novel tensor recovery model with a learnable tensor nuclear norm to address the challenge of non-smooth changes in tensor data.
We develop a new optimization algorithm named the Alternating Proximal Multiplier Method (APMM) to iteratively solve the proposed tensor completion model.
In addition, we propose a multi-objective tensor recovery framework based on APMM to efficiently explore the correlations of tensor data across its various dimensions.
arXiv Detail & Related papers (2023-11-23T12:16:33Z) - Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels [78.6096486885658]
We introduce lower bounds to the linearized Laplace approximation of the marginal likelihood.
These bounds are amenable to gradient-based optimization and allow trading off estimation accuracy against computational complexity.
arXiv Detail & Related papers (2023-06-06T19:02:57Z) - Error Analysis of Tensor-Train Cross Approximation [88.83467216606778]
We provide accuracy guarantees in terms of the entire tensor for both exact and noisy measurements.
Results are verified by numerical experiments, and may have important implications for the usefulness of cross approximations for high-order tensors.
arXiv Detail & Related papers (2022-07-09T19:33:59Z)