Alternating linear scheme in a Bayesian framework for low-rank tensor approximation
- URL: http://arxiv.org/abs/2012.11228v1
- Date: Mon, 21 Dec 2020 10:15:30 GMT
- Title: Alternating linear scheme in a Bayesian framework for low-rank tensor approximation
- Authors: Clara Menzen, Manon Kok, Kim Batselier
- Abstract summary: We find a low-rank representation for a given tensor by solving a Bayesian inference problem.
We present an algorithm that performs the unscented transform in tensor train format.
- Score: 5.833272638548154
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Multiway data often occur naturally in a tensorial format which can be
approximately represented by a low-rank tensor decomposition. This is useful
because it significantly reduces complexity and facilitates the treatment of
large-scale data sets. In this paper, we find a low-rank representation for a
given tensor by solving a Bayesian inference problem. This is achieved by
dividing the overall inference problem into sub-problems in which we
sequentially infer the posterior distribution of one tensor decomposition
component at a time. This leads to a probabilistic interpretation of the
well-known iterative algorithm alternating linear scheme (ALS), which enables
the consideration of measurement noise, the incorporation of
application-specific prior knowledge, and the uncertainty quantification of the
low-rank tensor estimate. To compute the low-rank tensor estimate from the
posterior distributions of the tensor decomposition components, we present an
algorithm that performs the unscented transform in tensor train format.
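
As a concrete illustration of the per-component inference described above, the
following is a minimal sketch in NumPy, assuming each ALS sub-problem is a
linear-Gaussian model in one vectorized decomposition core (the function name
and interface are hypothetical, not the paper's implementation):

```python
import numpy as np

def bayesian_core_update(A, y, m0, P0, sigma2):
    """Gaussian posterior of one vectorized decomposition core.

    Model: y = A @ g + e with e ~ N(0, sigma2 * I) and prior
    g ~ N(m0, P0). Fixing all other cores makes the measurement
    linear in this core, with design matrix A.
    """
    P0_inv = np.linalg.inv(P0)
    P = np.linalg.inv(P0_inv + (A.T @ A) / sigma2)  # posterior covariance
    m = P @ (P0_inv @ m0 + (A.T @ y) / sigma2)      # posterior mean
    return m, P

# An ALS-style sweep repeats this update core by core, rebuilding A
# from the most recent estimates of the remaining cores.
```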
Related papers
- Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z)
- TERM Model: Tensor Ring Mixture Model for Density Estimation [48.622060998018206]
In this paper, we adopt the tensor ring decomposition as a density estimator, which significantly reduces the number of permutation candidates.
A mixture model that incorporates multiple permutation candidates with adaptive weights is further designed, resulting in increased expressive flexibility.
This approach acknowledges that suboptimal permutations can offer distinctive information beyond that of optimal permutations (a tensor-ring evaluation sketch follows this entry).
arXiv Detail & Related papers (2023-12-13T11:39:56Z)
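
For intuition, here is a minimal sketch of evaluating a tensor-ring representation at a multi-index, the basic operation behind such a density estimator (names and the toy nonnegative cores are assumptions, not the paper's code):

```python
import numpy as np

def tensor_ring_eval(cores, idx):
    """Evaluate a tensor-ring representation at a multi-index.

    cores : list of arrays, cores[k] with shape (r_k, n_k, r_{k+1})
            and r_d == r_0, closing the ring.
    idx   : tuple of integers (i_1, ..., i_d).
    """
    M = np.eye(cores[0].shape[0])
    for core, i in zip(cores, idx):
        M = M @ core[:, i, :]  # chain the selected lateral slices
    return np.trace(M)         # close the ring

# Toy usage: nonnegative cores give an unnormalized mass function.
rng = np.random.default_rng(0)
cores = [np.abs(rng.standard_normal((2, 4, 2))) for _ in range(3)]
value = tensor_ring_eval(cores, (1, 0, 3))
```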
- Decomposition of linear tensor transformations [0.0]
The aim of this paper is to develop a mathematical framework for exact tensor decomposition.
In the paper, three different problems are worked out to derive the decomposition.
arXiv Detail & Related papers (2023-09-14T16:14:38Z)
- Tensor Factorization via Transformed Tensor-Tensor Product for Image Alignment [3.0969191504482243]
We study the problem of aligning a batch of linearly correlated images, where the observed images are deformed by unknown domain transformations.
By stacking these images as the frontal slices of a third-order tensor, we propose to exploit the low-rankness of the underlying tensor (a stacking sketch follows this entry).
arXiv Detail & Related papers (2022-12-12T05:52:26Z)
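
A minimal sketch of the stacking step, assuming a batch of equally sized grayscale images (names are illustrative):

```python
import numpy as np

def stack_as_frontal_slices(images):
    """Stack equally sized grayscale images into a third-order
    tensor whose frontal slices are the individual images."""
    return np.stack(images, axis=2)  # shape (height, width, n_images)

# Toy usage: once aligned, linearly correlated slices are nearly
# linearly dependent, so the tensor is (approximately) low-rank.
images = [np.random.rand(32, 32) for _ in range(5)]
T = stack_as_frontal_slices(images)
assert T.shape == (32, 32, 5)
```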
- Low-Rank Tensor Function Representation for Multi-Dimensional Data Recovery [52.21846313876592]
Low-rank tensor function representation (LRTFR) can continuously represent data beyond meshgrid with infinite resolution.
We develop two fundamental concepts for tensor functions, i.e., the tensor function rank and low-rank tensor function factorization.
Experiments substantiate the superiority and versatility of our method compared with state-of-the-art methods (a simplified factorization sketch follows this entry).
arXiv Detail & Related papers (2022-12-01T04:00:38Z)
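
The following is a simplified, CP-style sketch of a low-rank tensor function, where a function of continuous coordinates factorizes into per-coordinate factor functions; the paper's actual low-rank tensor function factorization may take a different form:

```python
import numpy as np

# Rank and Fourier-style factor functions (illustrative choices).
R = 3
freqs = np.arange(1, R + 1)

def factor(t):
    """Length-R vector of factor-function values at coordinate t."""
    return np.sin(freqs * t)

def low_rank_tensor_function(x, y, z):
    """f(x, y, z) = sum_r g_r(x) * h_r(y) * k_r(z): a rank-R
    representation defined on continuous coordinates, i.e. beyond
    any fixed meshgrid."""
    return np.sum(factor(x) * factor(y) * factor(z))

value = low_rank_tensor_function(0.3, 1.2, 2.5)
```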
- Deep importance sampling using tensor trains with application to a priori and a posteriori rare event estimation [2.4815579733050153]
We propose a deep importance sampling method that is suitable for estimating rare event probabilities in high-dimensional problems.
We approximate the optimal importance distribution in a general importance sampling problem as the pushforward of a reference distribution under a composition of order-preserving transformations.
The squared tensor-train decomposition provides a scalable ansatz for building such order-preserving high-dimensional transformations via density approximations (a one-dimensional sketch follows this entry).
arXiv Detail & Related papers (2022-09-05T12:44:32Z)
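
A one-dimensional sketch of the two key ingredients, assuming a function tabulated on a grid: squaring guarantees a nonnegative density, and inverting the CDF gives a monotone, hence order-preserving, map. In the paper the function is represented in tensor train format and the construction is applied coordinate by coordinate:

```python
import numpy as np

def squared_inverse_cdf_map(grid, f_vals, u):
    """Map uniform samples u in (0, 1) to samples of the density
    proportional to f(x)**2 on a 1-D grid. The CDF is monotone,
    so its inverse is an order-preserving transformation."""
    pdf = f_vals ** 2               # squared ansatz: nonnegative
    cdf = np.cumsum(pdf)
    cdf = cdf / cdf[-1]             # normalize to a CDF
    return np.interp(u, cdf, grid)  # invert by interpolation

# Toy usage: push 1000 uniform samples through the map.
grid = np.linspace(-4.0, 4.0, 512)
f_vals = np.exp(-grid ** 2 / 2)     # f**2 is a Gaussian shape
samples = squared_inverse_cdf_map(grid, f_vals, np.random.rand(1000))
```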
- Error Analysis of Tensor-Train Cross Approximation [88.83467216606778]
We provide accuracy guarantees in terms of the entire tensor for both exact and noisy measurements.
Results are verified by numerical experiments, and may have important implications for the usefulness of cross approximations for high-order tensors.
arXiv Detail & Related papers (2022-07-09T19:33:59Z)
- Weight Vector Tuning and Asymptotic Analysis of Binary Linear Classifiers [82.5915112474988]
This paper proposes weight vector tuning of a generic binary linear classifier by parameterizing a decomposition of the discriminant with a scalar.
It is also found that weight vector tuning significantly improves the performance of Linear Discriminant Analysis (LDA) under high estimation noise (a generic tuning sketch follows this entry).
arXiv Detail & Related papers (2021-10-01T17:50:46Z)
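
A heavily simplified sketch of the idea: split the discriminant into two components, recombine them with a scalar, and tune that scalar on validation data. The split w(alpha) = w1 + alpha * w2 is a hypothetical stand-in for the paper's specific decomposition:

```python
import numpy as np

def tune_weight_vector(w1, w2, X_val, y_val, alphas):
    """Pick the scalar alpha whose recombined discriminant
    w(alpha) = w1 + alpha * w2 maximizes validation accuracy.
    Labels y_val are in {-1, +1}; prediction is sign(X @ w)."""
    def accuracy(alpha):
        preds = np.sign(X_val @ (w1 + alpha * w2))
        return np.mean(preds == y_val)
    return max(alphas, key=accuracy)
```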
- Understanding Deflation Process in Over-parametrized Tensor Decomposition [17.28303004783945]
We study the training dynamics for gradient flow on over-parametrized tensor decomposition problems.
Empirically, such training process often first fits larger components and then discovers smaller components.
arXiv Detail & Related papers (2021-06-11T18:51:36Z)
- Scaling and Scalability: Provable Nonconvex Low-Rank Tensor Estimation from Incomplete Measurements [30.395874385570007]
A fundamental task is to faithfully recover tensors from highly incomplete measurements.
We develop an algorithm to directly recover the tensor factors in the Tucker decomposition.
We show that it provably converges at a linear rate that is independent of the condition number of the ground truth tensor for two canonical problems (a gradient-step sketch follows this entry).
arXiv Detail & Related papers (2021-04-29T17:44:49Z)
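
For intuition, here is the matrix-case scaled gradient step that this line of work builds on; the paper generalizes the idea to the factors of a Tucker decomposition (names and the least-squares objective are illustrative):

```python
import numpy as np

def scaled_gd_step(L, R, Y, eta):
    """One scaled gradient step on f(L, R) = 0.5 * ||L @ R.T - Y||_F**2.

    The preconditioners (R.T @ R)^(-1) and (L.T @ L)^(-1) make the
    convergence rate independent of the condition number of the
    ground truth, unlike plain gradient descent on the factors."""
    E = L @ R.T - Y  # residual
    L_new = L - eta * (E @ R) @ np.linalg.inv(R.T @ R)
    R_new = R - eta * (E.T @ L) @ np.linalg.inv(L.T @ L)
    return L_new, R_new
```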
- Uncertainty quantification for nonconvex tensor completion: Confidence intervals, heteroscedasticity and optimality [92.35257908210316]
We study the problem of estimating a low-rank tensor given incomplete and corrupted observations.
We find that it attains unimprovable $\ell_2$ accuracy.
arXiv Detail & Related papers (2020-06-15T17:47:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.