Low-Multi-Rank High-Order Bayesian Robust Tensor Factorization
- URL: http://arxiv.org/abs/2311.05888v1
- Date: Fri, 10 Nov 2023 06:15:38 GMT
- Title: Low-Multi-Rank High-Order Bayesian Robust Tensor Factorization
- Authors: Jianan Liu and Chunguang Li
- Abstract summary: We propose a novel high-order TRPCA method, named Low-Multi-rank High-order Bayesian Robust Tensor Factorization (LMH-BRTF), within the Bayesian framework.
Specifically, we decompose the observed corrupted tensor into three parts, i.e., the low-rank component, the sparse component, and the noise component.
By constructing a low-rank model for the low-rank component based on the order-$d$ t-SVD, LMH-BRTF can automatically determine the tensor multi-rank.
- Score: 7.538654977500241
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recently proposed tensor robust principal component analysis (TRPCA)
methods based on tensor singular value decomposition (t-SVD) have achieved
numerous successes in many fields. However, most of these methods are only
applicable to third-order tensors, whereas the data obtained in practice are
often of higher order, such as fourth-order color videos, fourth-order
hyperspectral videos, and fifth-order light-field images. Additionally, in the
t-SVD framework, the multi-rank of a tensor can describe more fine-grained
low-rank structure in the tensor compared with the tubal rank. However,
determining the multi-rank of a tensor is a much more difficult problem than
determining the tubal rank. Moreover, most of the existing TRPCA methods do not
explicitly model the noises except the sparse noise, which may compromise the
accuracy of estimating the low-rank tensor. In this work, we propose a novel
high-order TRPCA method, named Low-Multi-rank High-order Bayesian Robust
Tensor Factorization (LMH-BRTF), within the Bayesian framework. Specifically,
we decompose the observed corrupted tensor into three parts, i.e., the low-rank
component, the sparse component, and the noise component. By constructing a
low-rank model for the low-rank component based on the order-$d$ t-SVD and
introducing a proper prior for the model, LMH-BRTF can automatically determine
the tensor multi-rank. Meanwhile, benefiting from the explicit modeling of both
the sparse and noise components, the proposed method can leverage information
from the noise more effectively, leading to improved TRPCA performance.
Then, an efficient variational inference algorithm is established for
parameter estimation. Empirical studies on synthetic and real-world datasets
demonstrate the effectiveness of the proposed method in terms of both
qualitative and quantitative results.
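To make the setting concrete, here is a minimal NumPy sketch (not the authors' implementation) of the order-$d$ t-SVD multi-rank and of the three-part observation model $\mathcal{Y} = \mathcal{L} + \mathcal{S} + \mathcal{N}$ described above. The FFT along modes $3$ through $d$ is the standard transform choice in the t-SVD framework; the `multirank` helper, the tensor sizes, and the corruption levels are all illustrative assumptions, and LMH-BRTF itself determines the multi-rank through its Bayesian priors rather than by thresholding singular values.

```python
import numpy as np

def multirank(X, tol=1e-8):
    """Multi-rank of an order-d tensor under the FFT-based t-SVD.

    FFT along modes 3..d, then take the numerical rank of every frontal
    slice in the transform domain. The vector of slice ranks is the
    multi-rank; its maximum is the tubal rank, so the multi-rank gives a
    strictly finer description of the low-rank structure.
    """
    X_hat = X.astype(complex)
    for axis in range(2, X.ndim):            # transform modes 3..d
        X_hat = np.fft.fft(X_hat, axis=axis)
    n1, n2 = X_hat.shape[:2]
    slices = X_hat.reshape(n1, n2, -1)       # flatten modes 3..d into one index
    svals = [np.linalg.svd(slices[:, :, k], compute_uv=False)
             for k in range(slices.shape[2])]
    cutoff = tol * max(s[0] for s in svals)  # one global numerical threshold
    return np.array([int(np.sum(s > cutoff)) for s in svals])

rng = np.random.default_rng(0)

# Order-4 toy "video": a rank-2 frame replicated along two higher modes, so
# after the FFTs all energy sits in the zero-frequency frontal slice.
frame = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 30))
L = np.tile(frame[:, :, None, None], (1, 1, 4, 5))   # low-rank component
S = np.where(rng.random(L.shape) < 0.05,             # sparse outliers
             10.0 * rng.standard_normal(L.shape), 0.0)
N = 0.01 * rng.standard_normal(L.shape)              # dense Gaussian noise
Y = L + S + N                                        # observed corrupted tensor

print(multirank(L))  # 2 at the DC slice, 0 elsewhere
print(multirank(Y))  # corruption inflates every slice rank
```

Running the sketch, the clean component has multi-rank $(2, 0, \dots, 0)$, while even mild corruption makes every frontal slice numerically full-rank; this is exactly why the multi-rank must be inferred robustly from the corrupted observation rather than read off its singular values.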
Related papers
- High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise [96.80184504268593]
Gradient clipping is one of the key algorithmic ingredients for deriving good high-probability guarantees.
However, clipping can spoil the convergence of popular methods for composite and distributed optimization.
arXiv Detail & Related papers (2023-10-03T07:49:17Z) - Nonconvex Robust High-Order Tensor Completion Using Randomized Low-Rank
Approximation [29.486258609570545]
Two efficient low-rank approximation approaches are first devised under the order-$d$ ($d \ge 3$) t-SVD framework.
The proposed method outperforms other state-of-the-art approaches in terms of both computational efficiency and estimated precision.
arXiv Detail & Related papers (2023-05-19T07:51:36Z) - Tensor Denoising via Amplification and Stable Rank Methods [3.6107666045714533]
We adapt the previously developed framework of tensor amplification to denoising synthetic tensors of various sizes, ranks, and noise levels.
We also introduce denoising methods based on two variations of rank estimates called stable $X$-rank and stable slice rank.
The experimental results show that in the low rank context, tensor-based amplification provides comparable denoising performance in high signal-to-noise ratio settings.
arXiv Detail & Related papers (2023-01-10T02:46:09Z) - Clipped Stochastic Methods for Variational Inequalities with
Heavy-Tailed Noise [64.85879194013407]
We prove the first high-probability results with logarithmic dependence on the confidence level for methods solving monotone and structured non-monotone VIPs.
Our results match the best-known ones in the light-tails case and are novel for structured non-monotone problems.
In addition, we numerically validate that the gradient noise of many practical formulations is heavy-tailed and show that clipping improves the performance of SEG/SGDA.
arXiv Detail & Related papers (2022-06-02T15:21:55Z) - DeepTensor: Low-Rank Tensor Decomposition with Deep Network Priors [45.183204988990916]
DeepTensor is a framework for low-rank decomposition of matrices and tensors using deep generative networks.
We explore a range of real-world applications, including hyperspectral image denoising, 3D MRI tomography, and image classification.
arXiv Detail & Related papers (2022-04-07T01:09:58Z) - Treatment Learning Causal Transformer for Noisy Image Classification [62.639851972495094]
In this work, we incorporate the binary information of "existence of noise" as a treatment into image classification tasks to improve prediction accuracy.
Motivated by causal variational inference, we propose a transformer-based architecture that uses a latent generative model to estimate robust feature representations for noisy image classification.
We also create new noisy image datasets incorporating a wide range of noise factors for performance benchmarking.
arXiv Detail & Related papers (2022-03-29T13:07:53Z) - Robust Tensor Principal Component Analysis: Exact Recovery via
Deterministic Model [5.414544833902815]
This paper proposes a new method to analyze robust tensor principal component analysis (RTPCA).
It is based on the recently developed tensor-tensor product and tensor singular value decomposition (t-SVD).
arXiv Detail & Related papers (2020-08-05T16:26:10Z) - Enhanced nonconvex low-rank approximation of tensor multi-modes for
tensor completion [1.3406858660972554]
We propose a novel low-rank approximation of tensor multi-modes (LRATM) model.
A block successive upper-bound minimization method-based algorithm is designed to efficiently solve the proposed model.
Numerical results on three types of public multi-dimensional datasets show that our algorithm can recover a variety of low-rank tensors.
arXiv Detail & Related papers (2020-05-28T08:53:54Z) - Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient
Clipping [69.9674326582747]
We propose a new accelerated first-order method called clipped-SSTM for smooth convex optimization with heavy-tailed distributed noise in gradients.
We prove new complexity bounds that outperform state-of-the-art results in this case.
We derive the first non-trivial high-probability complexity bounds for SGD with clipping without light-tails assumption on the noise.
arXiv Detail & Related papers (2020-05-21T17:05:27Z) - Multi-View Spectral Clustering Tailored Tensor Low-Rank Representation [105.33409035876691]
This paper explores the problem of multi-view spectral clustering (MVSC) based on tensor low-rank modeling.
We design a novel structured tensor low-rank norm tailored to MVSC.
We show that the proposed method outperforms state-of-the-art methods to a significant extent.
arXiv Detail & Related papers (2020-04-30T11:52:12Z)