When Bayesian Tensor Completion Meets Multioutput Gaussian Processes: Functional Universality and Rank Learning
- URL: http://arxiv.org/abs/2512.21486v1
- Date: Thu, 25 Dec 2025 03:15:52 GMT
- Title: When Bayesian Tensor Completion Meets Multioutput Gaussian Processes: Functional Universality and Rank Learning
- Authors: Siyuan Li, Shikai Fang, Lei Cheng, Feng Yin, Yik-Chung Wu, Peter Gerstoft, Sergios Theodoridis
- Abstract summary: Functional tensor decomposition can analyze multi-dimensional data with real-valued indices. We propose a rank-revealing functional Bayesian tensor completion (RR-FBTC) method. We establish the universal approximation property of the model for continuous multi-dimensional signals.
- Score: 53.17227599983122
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Functional tensor decomposition can analyze multi-dimensional data with real-valued indices, paving the way for applications in machine learning and signal processing. A limitation of existing approaches is the assumption that the tensor rank, a critical parameter governing model complexity, is known. However, determining the optimal rank is a non-deterministic polynomial-time hard (NP-hard) task, and there is limited understanding of the expressive power of functional low-rank tensor models for continuous signals. We propose a rank-revealing functional Bayesian tensor completion (RR-FBTC) method. Modeling the latent functions through carefully designed multioutput Gaussian processes, RR-FBTC handles tensors with real-valued indices while enabling automatic tensor rank determination during the inference process. We establish the universal approximation property of the model for continuous multi-dimensional signals, demonstrating its expressive power in a concise format. To learn this model, we employ the variational inference framework and derive an efficient algorithm with closed-form updates. Experiments on both synthetic and real-world datasets demonstrate the effectiveness and superiority of RR-FBTC over state-of-the-art approaches. The code is available at https://github.com/OceanSTARLab/RR-FBTC.
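To make the model concrete, the sketch below is a minimal illustration of the functional low-rank idea, not the authors' RR-FBTC implementation (see the repository above for that). It evaluates a rank-R functional CP model whose per-mode latent functions are drawn from Gaussian process priors, with toy per-component weights standing in for the sparsity-inducing prior that reveals the effective rank; the kernel, lengthscale, and weight values are all assumptions.

```python
# Illustrative sketch of a functional low-rank (CP) tensor model with
# GP-drawn latent functions; NOT the authors' RR-FBTC implementation.
# A rank-R functional CP model evaluates a 3-D signal at real-valued
# indices as f(x, y, z) = sum_r lambda_r * u_r(x) * v_r(y) * w_r(z).
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
R = 5                                  # assumed (over-specified) rank
x = rng.uniform(0, 1, 30)              # real-valued indices per mode
y = rng.uniform(0, 1, 30)
z = rng.uniform(0, 1, 30)

def gp_draws(t, R):
    """Draw R latent functions at inputs t from a zero-mean GP prior."""
    K = rbf_kernel(t, t) + 1e-6 * np.eye(len(t))
    return np.linalg.cholesky(K) @ rng.standard_normal((len(t), R))

U, V, W = gp_draws(x, R), gp_draws(y, R), gp_draws(z, R)

# Per-component weights; a sparsity-inducing prior on them (in the
# paper's spirit) lets unneeded components shrink toward zero during
# inference, revealing the effective rank. Toy values shown here.
lam = np.array([2.0, 1.5, 0.0, 0.0, 0.0])

# Evaluate the functional CP model on the full grid of indices.
F = np.einsum('r,ir,jr,kr->ijk', lam, U, V, W)
effective_rank = int(np.sum(np.abs(lam) > 1e-3))
print(F.shape, 'effective rank =', effective_rank)
```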
Related papers
- Tensor Network Based Feature Learning Model [6.101839518775971]
The Feature Learning (FL) model represents tensor-product features as a learnable Canonical Polyadic Decomposition (CPD).
We demonstrate the effectiveness of the FL model through experiments on real data of various dimensionality and scale.
arXiv Detail & Related papers (2025-12-02T09:17:21Z)
- Emergence in non-neural models: grokking modular arithmetic via average gradient outer product [16.911836722312152]
We show that grokking is specific neither to neural networks nor to gradient descent-based optimization.
We show that this phenomenon occurs when learning modular arithmetic with Recursive Feature Machines.
Our results demonstrate that emergence can result purely from learning task-relevant features.
arXiv Detail & Related papers (2024-07-29T17:28:58Z)
- Dynamic Tensor Decomposition via Neural Diffusion-Reaction Processes [24.723536390322582]
Tensor decomposition is an important tool for multiway data analysis.
We propose Dynamic EMbedIngs fOr dynamic Tensor dEcomposition (DEMOTE).
We show the advantage of our approach in both a simulation study and real-world applications.
arXiv Detail & Related papers (2023-10-30T15:49:45Z)
- Versatile Neural Processes for Learning Implicit Neural Representations [57.090658265140384]
We propose Versatile Neural Processes (VNP), which largely increases the capability of approximating functions.
Specifically, we introduce a bottleneck encoder that produces fewer, more informative context tokens, relieving the high computational cost.
We demonstrate the effectiveness of the proposed VNP on a variety of tasks involving 1D, 2D and 3D signals.
arXiv Detail & Related papers (2023-01-21T04:08:46Z)
- Low-Rank Tensor Function Representation for Multi-Dimensional Data Recovery [52.21846313876592]
Low-rank tensor function representation (LRTFR) can continuously represent data beyond meshgrid with infinite resolution.
We develop two fundamental concepts for tensor functions, i.e., the tensor function rank and low-rank tensor function factorization.
Experiments substantiate the superiority and versatility of our method as compared with state-of-the-art methods; a minimal sketch of the factorization idea follows this entry.
arXiv Detail & Related papers (2022-12-01T04:00:38Z)
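As referenced above, here is a minimal sketch of a low-rank tensor function in the LRTFR spirit: factor functions replace factor matrices, so the data can be queried at arbitrary real coordinates rather than only on the training meshgrid. The specific factor functions are hypothetical stand-ins, not the paper's learned representation.

```python
# Minimal sketch of low-rank tensor function factorization:
# f(x, y, z) = sum_r u_r(x) * v_r(y) * w_r(z), with factor *functions*
# instead of factor matrices. The factor functions below are
# hypothetical stand-ins, not a learned representation.
import numpy as np

R = 3
def u(x, r): return np.sin((r + 1) * np.pi * x)
def v(y, r): return np.cos((r + 1) * np.pi * y)
def w(z, r): return np.exp(-(r + 1) * z)

def f(x, y, z):
    """Evaluate the rank-R tensor function at real-valued (x, y, z)."""
    return sum(u(x, r) * v(y, r) * w(z, r) for r in range(R))

# The same model yields any sampling resolution, on or off the grid.
grid = lambda n: np.meshgrid(*([np.linspace(0, 1, n)] * 3), indexing='ij')
print(f(*grid(8)).shape, f(*grid(64)).shape)  # (8, 8, 8) (64, 64, 64)
print(f(0.123, 0.456, 0.789))                 # a single off-grid query
```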
- Softmax-free Linear Transformers [90.83157268265654]
Vision transformers (ViTs) have pushed the state of the art for visual perception tasks.
Existing methods are either theoretically flawed or empirically ineffective for visual recognition.
We propose a family of Softmax-Free Transformers (SOFT).
arXiv Detail & Related papers (2022-07-05T03:08:27Z)
- On Function Approximation in Reinforcement Learning: Optimism in the Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the learning problem.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z)
- Towards Flexible Sparsity-Aware Modeling: Automatic Tensor Rank Learning Using the Generalized Hyperbolic Prior [24.848237413017937]
Rank learning for Canonical Polyadic Decomposition (CPD) has long been deemed an essential yet challenging problem.
The optimal determination of a tensor rank is known to be a non-deterministic polynomial-time hard (NP-hard) task.
In this paper, we introduce a more advanced generalized hyperbolic (GH) prior into the probabilistic model, which is more flexible in adapting to different levels of sparsity; a toy rank-learning sketch follows this entry.
arXiv Detail & Related papers (2020-09-05T06:07:21Z)
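As referenced in the entry above, a toy sketch of automatic rank learning for a low-rank model. It substitutes a simpler Gaussian-Gamma (ARD) prior for the paper's generalized hyperbolic prior, which shrinks unused components in the same spirit; the update rules, constants, and thresholds are illustrative assumptions.

```python
# Toy automatic rank learning via a sparsity-inducing prior. The paper
# uses a generalized hyperbolic (GH) prior; this sketch uses a simpler
# Gaussian-Gamma (ARD) prior on an order-2 CP (matrix) model instead.
import numpy as np

rng = np.random.default_rng(1)
I, true_R, R = 20, 2, 6          # mode size, true rank, assumed rank
X = rng.standard_normal((I, true_R)) @ rng.standard_normal((true_R, I))

A = rng.standard_normal((I, R)) * 0.1
B = rng.standard_normal((I, R)) * 0.1
gamma = np.ones(R)               # per-component precisions (ARD)
tau = 1.0                        # noise precision (fixed for simplicity)

for _ in range(200):
    # MAP update of each factor: a per-component ridge solve.
    A = tau * X @ B @ np.linalg.inv(tau * B.T @ B + np.diag(gamma))
    B = tau * X.T @ A @ np.linalg.inv(tau * A.T @ A + np.diag(gamma))
    # Precision update: small columns get large gamma and shrink further.
    gamma = (2 * I) / (np.sum(A**2, 0) + np.sum(B**2, 0) + 1e-9)

# Components whose precision blew up are pruned; typically 2 survive.
print('learned effective rank:', int(np.sum(gamma < 1e3)))
```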
- Alternating minimization algorithms for graph regularized tensor completion [8.26185178671935]
We consider a Canonical Polyadic (CP) decomposition approach to low-rank tensor completion (LRTC).
The usage of graph regularization benefits the learning accuracy of LRTC but, at the same time, induces coupled graph Laplacian terms.
We propose efficient alternating minimization algorithms by leveraging the block structure of the underlying CP decomposition-based model; a minimal sketch of the resulting coupled update follows this entry.
arXiv Detail & Related papers (2020-08-28T23:20:49Z)
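As referenced above, a minimal sketch of the coupling issue on an order-2 example: with a graph Laplacian penalty on one factor, that factor's block update becomes a Sylvester equation rather than an independent per-row ridge solve. The sizes, weights, chain graph, and EM-style imputation of missing entries are illustrative choices, not the paper's algorithm.

```python
# Alternating minimization for graph-regularized low-rank completion,
# order-2 for brevity. The Laplacian term couples the rows of A, so the
# A-update is a Sylvester equation: alpha*L*A + A*(B^T B + beta I) = X*B.
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(2)
I, J, R = 30, 25, 3
X_true = rng.standard_normal((I, R)) @ rng.standard_normal((R, J))
mask = rng.random((I, J)) < 0.5          # 50% of entries observed
X_obs = np.where(mask, X_true, 0.0)

# Chain-graph Laplacian over the rows of A (neighbouring rows similar).
L = 2 * np.eye(I) - np.eye(I, k=1) - np.eye(I, k=-1)
alpha, beta = 0.1, 1e-3                  # graph / ridge weights

A = rng.standard_normal((I, R)) * 0.1
B = rng.standard_normal((J, R)) * 0.1
for _ in range(100):
    Xfill = np.where(mask, X_obs, A @ B.T)   # impute missing entries
    # Coupled A-update, solved as a Sylvester equation.
    A = solve_sylvester(alpha * L, B.T @ B + beta * np.eye(R), Xfill @ B)
    # B has no graph term here, so its update is an ordinary ridge solve.
    B = Xfill.T @ A @ np.linalg.inv(A.T @ A + beta * np.eye(R))

rmse = np.sqrt(np.mean((A @ B.T - X_true)[~mask] ** 2))
print(f'held-out RMSE: {rmse:.3f}')
```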
- UNIPoint: Universally Approximating Point Processes Intensities [125.08205865536577]
We provide a proof that a class of learnable functions can universally approximate any valid intensity function.
We implement UNIPoint, a novel neural point process model, using recurrent neural networks to parameterise sums of basis functions upon each event; a minimal sketch follows this entry.
arXiv Detail & Related papers (2020-07-28T09:31:56Z)
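As referenced above, a minimal sketch of the UNIPoint construction: after each event, the conditional intensity is a sum of simple parametric basis functions of the elapsed time, passed through a positivity-preserving transfer. UNIPoint produces the basis parameters with an RNN; here they are fixed constants so the sketch stays self-contained, and the softplus transfer and all values are illustrative assumptions.

```python
# Sketch of a UNIPoint-style intensity: lambda(t) is a sum of basis
# functions of the time since the last event. Basis parameters are
# fixed here; UNIPoint generates them per-event with an RNN.
import numpy as np

def softplus(x):
    """Numerically stable softplus, keeping the intensity positive."""
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0)

events = np.array([0.0, 0.8, 1.1, 2.5])   # toy event times
a = np.array([-1.0, -0.3, 0.5])           # basis slopes (hypothetical)
b = np.array([0.2, 0.1, -0.4])            # basis offsets (hypothetical)

def intensity(t):
    """lambda(t) = sum_k softplus(a_k * (t - t_last) + b_k)."""
    t_last = events[events <= t].max()     # most recent event before t
    return softplus(a * (t - t_last) + b).sum()

ts = np.linspace(0.0, 3.0, 7)
print([round(intensity(t), 3) for t in ts])
```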