Variational Bayesian Inference for Tensor Robust Principal Component Analysis
- URL: http://arxiv.org/abs/2412.18717v1
- Date: Wed, 25 Dec 2024 00:29:09 GMT
- Title: Variational Bayesian Inference for Tensor Robust Principal Component Analysis
- Authors: Chao Wang, Huiwen Zheng, Raymond Chan, Youwen Wen
- Abstract summary: Current approaches often encounter difficulties in accurately capturing the low-rank properties of tensors.
We introduce a Bayesian framework for TRPCA, which integrates a low-rank tensor nuclear norm prior and a generalized sparsity-inducing prior.
Our method can be efficiently extended to the weighted tensor nuclear norm model.
- Score: 2.6623354466198412
- License:
- Abstract: Tensor Robust Principal Component Analysis (TRPCA) holds a crucial position in machine learning and computer vision. It aims to recover the underlying low-rank structure and characterize the sparse structure of the noise. Current approaches often encounter difficulties in accurately capturing the low-rank properties of tensors and balancing the trade-off between low-rank and sparse components, especially in a mixed-noise scenario. To address these challenges, we introduce a Bayesian framework for TRPCA, which integrates a low-rank tensor nuclear norm prior and a generalized sparsity-inducing prior. By embedding the proposed priors within the Bayesian framework, our method can automatically determine the optimal tensor nuclear norm and achieve a balance between the nuclear norm and sparse components. Furthermore, our method can be efficiently extended to the weighted tensor nuclear norm model. Experiments conducted on synthetic and real-world datasets demonstrate the effectiveness and superiority of our method compared to state-of-the-art approaches.
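For orientation, the classical (non-Bayesian) TRPCA model that this paper builds on decomposes an observed tensor Y into a low-rank part L (penalized by the tensor nuclear norm, computed via the t-SVD) and a sparse part S (penalized by the l1 norm). A minimal ADMM sketch of that baseline model is below; the function names and the fixed penalty mu are illustrative choices, not the paper's algorithm, which instead infers these trade-offs automatically via the priors.

```python
import numpy as np

def tsvd_shrink(X, tau):
    """Singular-value thresholding in the Fourier domain (t-SVD):
    the proximal operator of the tensor nuclear norm."""
    Xf = np.fft.fft(X, axis=2)          # DFT along the third (tube) mode
    out = np.zeros_like(Xf)
    for k in range(X.shape[2]):
        U, s, Vt = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        s = np.maximum(s - tau, 0.0)    # soft-threshold the singular values
        out[:, :, k] = (U * s) @ Vt
    return np.real(np.fft.ifft(out, axis=2))

def soft(X, tau):
    """Entrywise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def trpca_admm(Y, lam, mu=1.0, n_iter=200):
    """ADMM for: min ||L||_TNN + lam * ||S||_1  subject to  L + S = Y."""
    L = np.zeros_like(Y); S = np.zeros_like(Y); Z = np.zeros_like(Y)
    for _ in range(n_iter):
        L = tsvd_shrink(Y - S + Z / mu, 1.0 / mu)   # low-rank update
        S = soft(Y - L + Z / mu, lam / mu)          # sparse update
        Z = Z + mu * (Y - L - S)                    # dual ascent
    return L, S
```

In this convex baseline, lam and mu must be tuned by hand; the paper's contribution is precisely that the Bayesian priors determine these trade-offs from the data.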
Related papers
- Learnable Scaled Gradient Descent for Guaranteed Robust Tensor PCA [39.084456109467204]
We propose an efficient scaled gradient descent (SGD) approach within the t-SVD framework for the first time.
We show that RTPCA-SGD achieves linear convergence to the true low-rank tensor at a constant rate, independent of the condition number.
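The idea behind scaled gradient descent can be illustrated in the simpler matrix setting: the low-rank component is factored as L R^T and each gradient step is preconditioned by (R^T R)^{-1} and (L^T L)^{-1}, which is what removes the dependence on the condition number. The sketch below is a matrix-case illustration under that assumption, not the paper's t-SVD-domain algorithm; names and the step size eta are illustrative.

```python
import numpy as np

def scaled_gd(Y, r, eta=0.5, n_iter=100):
    """Scaled gradient descent for the factorization Y ~ L @ R.T.
    Spectral initialization, then preconditioned gradient steps."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    L = U[:, :r] * np.sqrt(s[:r])       # balanced spectral init
    R = Vt[:r].T * np.sqrt(s[:r])
    for _ in range(n_iter):
        G = L @ R.T - Y                 # residual / gradient direction
        # Preconditioners (R^T R)^{-1}, (L^T L)^{-1} equalize curvature
        L_new = L - eta * G @ R @ np.linalg.inv(R.T @ R)
        R = R - eta * G.T @ L @ np.linalg.inv(L.T @ L)
        L = L_new
    return L, R
```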
arXiv Detail & Related papers (2025-01-08T15:25:19Z)
- Tight Stability, Convergence, and Robustness Bounds for Predictive Coding Networks [60.3634789164648]
Energy-based learning algorithms, such as predictive coding (PC), have garnered significant attention in the machine learning community.
We rigorously analyze the stability, robustness, and convergence of PC through the lens of dynamical systems theory.
arXiv Detail & Related papers (2024-10-07T02:57:26Z)
- Low-Multi-Rank High-Order Bayesian Robust Tensor Factorization [7.538654977500241]
We propose a novel high-order TRPCA method, named Low-Multi-rank High-order Bayesian Robust Tensor Factorization (LMH-BRTF), within the Bayesian framework.
Specifically, we decompose the observed corrupted tensor into three parts, i.e., the low-rank component, the sparse component, and the noise component.
By constructing a low-rank model for the low-rank component based on the order-$d$ t-SVD, LMH-BRTF can automatically determine the tensor multi-rank.
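The tensor multi-rank that such t-SVD-based methods determine is, for a 3-way tensor, the vector of ranks of the frontal slices after a DFT along the third mode (LMH-BRTF generalizes this to order-d tensors). A minimal sketch, with an illustrative function name and tolerance:

```python
import numpy as np

def tensor_multi_rank(X, tol=1e-8):
    """Multi-rank of a 3-way tensor: the rank of each frontal slice
    in the Fourier domain along the third mode."""
    Xf = np.fft.fft(X, axis=2)
    return [np.linalg.matrix_rank(Xf[:, :, k], tol=tol)
            for k in range(X.shape[2])]
```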
arXiv Detail & Related papers (2023-11-10T06:15:38Z)
- Provable Guarantees for Generative Behavior Cloning: Bridging Low-Level Stability and High-Level Behavior [51.60683890503293]
We propose a theoretical framework for studying behavior cloning of complex expert demonstrations using generative modeling.
We show that pure supervised cloning can generate trajectories matching the per-time step distribution of arbitrary expert trajectories.
arXiv Detail & Related papers (2023-07-27T04:27:26Z)
- Deep Unfolded Tensor Robust PCA with Self-supervised Learning [21.710932587432396]
We describe a fast and simple self-supervised model for tensor RPCA using deep unfolding.
Our model removes the need for ground-truth labels while maintaining competitive or even greater performance.
We demonstrate these claims on a mix of synthetic data and real-world tasks.
arXiv Detail & Related papers (2022-12-21T20:34:42Z)
- Global Weighted Tensor Nuclear Norm for Tensor Robust Principal Component Analysis [25.848106663205865]
This paper develops a new Global Weighted TRPCA method (GWTRPCA).
It is the first approach to simultaneously consider the significance of intra-frontal-slice and inter-frontal-slice singular values in the Fourier domain.
Exploiting this global information, GWTRPCA penalizes the larger singular values less and assigns smaller weights to them.
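The weighting idea in such weighted-TNN models can be illustrated on a single (Fourier-domain) slice: each singular value receives a threshold inversely proportional to its magnitude, so large singular values, which carry most of the structure, are penalized less. A sketch under that assumption (the function name, base_tau, and eps are illustrative, not GWTRPCA's exact scheme):

```python
import numpy as np

def weighted_svt(M, base_tau, eps=1e-6):
    """Weighted singular-value thresholding on one matrix slice:
    weight ~ 1/singular value, so larger values shrink less."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    w = base_tau / (s + eps)            # smaller weight for larger values
    s = np.maximum(s - w, 0.0)
    return (U * s) @ Vt
```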
arXiv Detail & Related papers (2022-09-28T13:27:10Z)
- Implicit Full Waveform Inversion with Deep Neural Representation [91.3755431537592]
We propose the implicit full waveform inversion (IFWI) algorithm using continuously and implicitly defined deep neural representations.
Both theoretical and experimental analyses indicate that, given a random initial model, IFWI is able to converge to the global minimum.
IFWI has a certain degree of robustness and strong generalization ability that are exemplified in the experiments of various 2D geological models.
arXiv Detail & Related papers (2022-09-08T01:54:50Z)
- Defensive Tensorization [113.96183766922393]
We propose defensive tensorization, an adversarial defence technique that leverages a latent high-order factorization of the network.
We empirically demonstrate the effectiveness of our approach on standard image classification benchmarks.
We validate the versatility of our approach across domains and low-precision architectures by considering an audio task and binary networks.
arXiv Detail & Related papers (2021-10-26T17:00:16Z)
- Robust Tensor Principal Component Analysis: Exact Recovery via Deterministic Model [5.414544833902815]
This paper proposes a new method to analyze robust tensor principal component analysis (RTPCA).
It is based on the recently developed tensor-tensor product and tensor singular value decomposition (t-SVD).
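The tensor-tensor product (t-product) underlying the t-SVD reduces, after a DFT along the third mode, to independent matrix products on each frequency slice. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def t_product(A, B):
    """t-product of 3-way tensors: slice-wise matrix products in the
    Fourier domain along the third mode, then an inverse DFT."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)   # per-slice matrix product
    return np.real(np.fft.ifft(Cf, axis=2))
```

With a third dimension of size 1, the t-product reduces to the ordinary matrix product, which is a useful sanity check.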
arXiv Detail & Related papers (2020-08-05T16:26:10Z)
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of stochasticity in its success remains unclear.
We show that multiplicative noise commonly arises in the parameter updates and gives rise to heavy-tailed behaviour.
A detailed analysis of key factors, including step size and data, shows that state-of-the-art neural network models exhibit similar behaviour.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
- Multi-View Spectral Clustering Tailored Tensor Low-Rank Representation [105.33409035876691]
This paper explores the problem of multi-view spectral clustering (MVSC) based on tensor low-rank modeling.
We design a novel structured tensor low-rank norm tailored to MVSC.
We show that the proposed method significantly outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-04-30T11:52:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.