Bayesian Robust Tensor Ring Model for Incomplete Multiway Data
- URL: http://arxiv.org/abs/2202.13321v1
- Date: Sun, 27 Feb 2022 09:25:24 GMT
- Title: Bayesian Robust Tensor Ring Model for Incomplete Multiway Data
- Authors: Zhenhao Huang, Guoxu Zhou, Yuning Qiu
- Abstract summary: Low-rank tensor completion aims to recover missing entries from the observed data.
In this paper, we propose a Bayesian robust tensor ring (BRTR) decomposition method for the RTC problem.
Experiments indicate that BRTR achieves better recovery performance and noise removal than other state-of-the-art methods.
- Score: 7.765112574724006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-rank tensor completion aims to recover missing entries from the observed
data. However, the observed data may be disturbed by noise and outliers.
Therefore, robust tensor completion (RTC) is proposed to solve this problem.
The recently proposed tensor ring (TR) structure has been applied to RTC owing to
its superior ability to handle high-dimensional data, but it requires a
predesigned TR rank. To avoid manual rank selection and to balance the low-rank
component against the sparse component, in this paper we propose a Bayesian
robust tensor ring (BRTR) decomposition method for the RTC problem. Furthermore,
we develop a variational Bayesian (VB) algorithm to infer the posterior
distributions of the model parameters. During the learning process, the frontal
slices of the previous core tensor and the horizontal slices of the latter core
tensor that share the same TR rank and consist only of zero components are
pruned, resulting in automatic rank determination. Compared with existing
methods, BRTR can automatically learn the TR rank without manual fine-tuning of
parameters. Extensive experiments indicate that BRTR achieves better recovery
performance and noise removal than other state-of-the-art methods.
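Because the rank-pruning mechanism above is described only in words, a small sketch may help. The following NumPy code illustrates how a full tensor is rebuilt from TR cores and how paired slices sharing a TR rank index could be pruned; the function names, the energy-based criterion, and the threshold `tol` are assumptions of this sketch, not the authors' actual VB updates.

```python
import numpy as np

def tr_reconstruct(cores):
    """Rebuild a full tensor from TR cores of shape (r_k, n_k, r_{k+1}).

    The last bond dimension wraps around to the first one, and the
    ring is closed by a trace over those two bond indices.
    """
    merged = cores[0]
    for core in cores[1:]:
        # Contract the trailing bond of `merged` with the leading bond
        # of the next core: (r0, ..., r) x (r, m, r') -> (r0, ..., m, r').
        merged = np.tensordot(merged, core, axes=([-1], [0]))
    return np.trace(merged, axis1=0, axis2=-1)

def prune_shared_rank(cores, k, tol=1e-6):
    """Shrink the TR rank shared by core k and core k+1 (indices wrap).

    Frontal slices of the previous core and horizontal slices of the
    latter core that share a rank index are removed together when their
    combined energy is (near) zero, mirroring the abstract's description
    of automatic rank determination.
    """
    k_next = (k + 1) % len(cores)
    energy = (cores[k] ** 2).sum(axis=(0, 1)) + (cores[k_next] ** 2).sum(axis=(1, 2))
    keep = energy > tol
    cores[k] = cores[k][:, :, keep]
    cores[k_next] = cores[k_next][keep, :, :]
    return cores
```

In a full BRTR implementation the pruning decision would come from the inferred posteriors during the VB iterations rather than from a fixed threshold.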
Related papers
- Estimating the Hessian Matrix of Ranking Objectives for Stochastic Learning to Rank with Gradient Boosted Trees [63.18324983384337]
We introduce the first stochastic learning-to-rank method for Gradient Boosted Decision Trees (GBDTs).
Our main contribution is a novel estimator for the second-order derivatives, i.e., the Hessian matrix.
We incorporate our estimator into the existing PL-Rank framework, which was originally designed for first-order derivatives only.
arXiv Detail & Related papers (2024-04-18T13:53:32Z)
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Low-Multi-Rank High-Order Bayesian Robust Tensor Factorization [7.538654977500241]
We propose a novel high-order tensor robust principal component analysis (TRPCA) method, named Low-Multi-rank High-order Bayesian Robust Tensor Factorization (LMH-BRTF), within the Bayesian framework.
Specifically, we decompose the observed corrupted tensor into three parts, i.e., the low-rank component, the sparse component, and the noise component.
By constructing a low-rank model for the low-rank component based on the order-$d$ t-SVD, LMH-BRTF can automatically determine the tensor multi-rank.
arXiv Detail & Related papers (2023-11-10T06:15:38Z)
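To make the three-part model in the LMH-BRTF entry above explicit, it can be written in the standard TRPCA form; the symbols below are generic notation rather than the paper's own:

```latex
\mathcal{X} = \mathcal{L} + \mathcal{S} + \mathcal{N}
```

where $\mathcal{X}$ is the observed corrupted tensor, $\mathcal{L}$ is the low-rank component (modeled via the order-$d$ t-SVD), $\mathcal{S}$ is the sparse outlier component, and $\mathcal{N}$ is dense noise.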
- Robust Data Clustering with Outliers via Transformed Tensor Low-Rank Representation [4.123899820318987]
Tensor low-rank representation (TLRR) has become a popular tool for tensor data recovery and clustering.
This paper develops an outlier-robust tensor low-rank representation (OR-TLRR) method.
OR-TLRR provides outlier detection and tensor data clustering simultaneously based on the t-SVD framework.
arXiv Detail & Related papers (2023-07-18T08:11:08Z)
- Scalable and Robust Tensor Ring Decomposition for Large-scale Data [12.02023514105999]
We propose a scalable and robust TR decomposition algorithm capable of handling large-scale tensor data with missing entries and gross corruptions.
We first develop a novel auto-weighted steepest descent method that can adaptively fill the missing entries and identify the outliers during the decomposition process.
arXiv Detail & Related papers (2023-05-15T22:08:47Z)
- Latent Class-Conditional Noise Model [54.56899309997246]
We introduce a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.
We then deduce a dynamic label regression method for LCCN, whose Gibbs sampler allows us to efficiently infer the latent true labels.
Our approach safeguards the stable update of the noise transition, avoiding the previous practice of arbitrary tuning from a mini-batch of samples.
arXiv Detail & Related papers (2023-02-19T15:24:37Z)
- Knockoffs-SPR: Clean Sample Selection in Learning with Noisy Labels [56.81761908354718]
We propose a novel theoretically guaranteed clean sample selection framework for learning with noisy labels.
Knockoffs-SPR can be regarded as a sample selection module for a standard supervised training pipeline.
We further combine it with a semi-supervised algorithm to exploit the support of noisy data as unlabeled data.
arXiv Detail & Related papers (2023-01-02T07:13:28Z)
- Bayesian Low Rank Tensor Ring Model for Image Completion [44.148303000278574]
The low-rank tensor ring model is powerful for image completion, recovering entries lost during data acquisition and transformation.
In this paper, we present a Bayesian low rank tensor ring model for image completion by automatically learning the low rank structure of data.
arXiv Detail & Related papers (2020-06-29T02:58:25Z)
- Tensor completion via nonconvex tensor ring rank minimization with guaranteed convergence [16.11872681638052]
In recent studies, the tensor ring (TR) rank has shown high effectiveness in tensor completion.
A recently proposed TR rank surrogate is based on a weighted sum of nuclear norms, which penalizes every singular value equally.
In this paper, we propose to use the logdet-based function as a nonconvex smooth relaxation instead; see the formulas after this entry.
arXiv Detail & Related papers (2020-05-14T03:13:17Z)
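To make the contrast in the entry above concrete: the nuclear norm penalizes every singular value equally, whereas a logdet-based surrogate penalizes large singular values less, so dominant structure is preserved. The generic forms below use my own notation, with a smoothing constant $\varepsilon$ that is an assumption rather than the paper's exact definition; in the TR setting such a penalty is applied to each TR unfolding and combined in a weighted sum:

```latex
\|\mathbf{X}\|_{*} = \sum_{i} \sigma_i(\mathbf{X}),
\qquad
f_{\mathrm{logdet}}(\mathbf{X}) = \sum_{i} \log\bigl(\sigma_i(\mathbf{X}) + \varepsilon\bigr),
\quad \varepsilon > 0
```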
- TRP: Trained Rank Pruning for Efficient Deep Neural Networks [69.06699632822514]
We propose Trained Rank Pruning (TRP), which alternates between low-rank approximation and training.
A nuclear-norm regularizer optimized by sub-gradient descent is used to further promote low rank in TRP; a minimal sketch of the alternation follows this entry.
A TRP-trained network inherently has a low-rank structure and can be approximated with negligible performance loss.
arXiv Detail & Related papers (2020-04-30T03:37:36Z)
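The alternation summarized in the TRP entry above is easy to prototype. The following NumPy sketch works on a single weight matrix; the learning rate, regularization weight, projection schedule, and rank are placeholder parameters of this sketch, not values from the paper.

```python
import numpy as np

def truncated_svd(W, rank):
    """Low-rank approximation step: keep the top-`rank` singular triplets."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

def trp_step(W, grad_loss, lr=1e-2, lam=1e-3, rank=None):
    """One TRP-style update: a gradient step that adds the nuclear-norm
    sub-gradient, optionally followed by a low-rank projection."""
    # U @ Vt is the standard sub-gradient of ||W||_* at W.
    U, _, Vt = np.linalg.svd(W, full_matrices=False)
    W = W - lr * (grad_loss + lam * (U @ Vt))
    if rank is not None:
        # Periodic low-rank approximation interleaved with training.
        W = truncated_svd(W, rank)
    return W
```

In TRP the projection is interleaved with training, so the final network is inherently low rank rather than being compressed only after training.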
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.