Tensor Robust PCA with Nonconvex and Nonlocal Regularization
- URL: http://arxiv.org/abs/2211.02404v2
- Date: Fri, 7 Jul 2023 13:25:16 GMT
- Title: Tensor Robust PCA with Nonconvex and Nonlocal Regularization
- Authors: Xiaoyu Geng, Qiang Guo, Shuaixiong Hui, Ming Yang and Caiming Zhang
- Abstract summary: We develop a nonconvex and nonlocal TRPCA (NN-TRPCA) model for low-rank data recovery.
We show that the proposed NN-TRPCA outperforms existing methods in visual data recovery.
- Score: 16.15616361268236
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tensor robust principal component analysis (TRPCA) is a classical approach to
low-rank tensor recovery, which minimizes a convex surrogate of the tensor rank
by shrinking every tensor singular value equally. However, for real-world visual
data, large singular values represent more significant information than small
singular values. In this paper, we propose a nonconvex TRPCA (N-TRPCA) model
based on the tensor adjustable logarithmic norm. Unlike TRPCA, our N-TRPCA can
adaptively shrink small singular values more and shrink large singular values
less. In addition, TRPCA assumes that the whole data tensor is of low rank.
This assumption is hardly satisfied in practice for natural visual data,
restricting the capability of TRPCA to recover the edges and texture details
from noisy images and videos. To this end, we integrate nonlocal
self-similarity into N-TRPCA, and further develop a nonconvex and nonlocal
TRPCA (NN-TRPCA) model. Specifically, similar nonlocal patches are grouped as a
tensor, and each group tensor is then recovered by our N-TRPCA. Since the
patches in one group are highly correlated, every group tensor has a strong
low-rank property, which improves recovery performance.
Experimental results demonstrate that the proposed NN-TRPCA outperforms
existing TRPCA methods in visual data recovery. The demo code is available at
https://github.com/qguo2010/NN-TRPCA.
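The key idea of the nonconvex penalty, shrinking small singular values more and large ones less, can be illustrated with a minimal sketch. This is a hypothetical example on a plain matrix, assuming a weight of the form tau / (sigma + eps), a common surrogate derived from a logarithmic penalty; the exact tensor adjustable logarithmic norm in the paper may use a different weighting and operates on tensor singular values.

```python
import numpy as np

def log_weighted_svt(X, tau, eps=1e-2):
    """Weighted singular value thresholding (illustrative sketch).

    Small singular values receive large weights and are shrunk more;
    large singular values receive small weights and are shrunk less,
    unlike the convex nuclear norm, which shrinks all values equally.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    weights = tau / (s + eps)            # adaptive, value-dependent weights
    s_shrunk = np.maximum(s - weights, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# The dominant singular value 10.0 is barely shrunk, while the
# small value 0.1 is shrunk all the way to zero.
X = np.diag([10.0, 1.0, 0.1])
Y = log_weighted_svt(X, tau=0.05)
```

In the NN-TRPCA pipeline, a shrinkage of this kind would be applied per group tensor after stacking similar nonlocal patches, so that each strongly low-rank group is denoised while its dominant structure (edges, texture) is preserved.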
Related papers
- Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the Reweighted Score function, ReScore.
arXiv Detail & Related papers (2023-03-06T14:49:59Z)
- Towards Practical Control of Singular Values of Convolutional Layers [65.25070864775793]
Convolutional neural networks (CNNs) are easy to train, but their essential properties, such as generalization error and adversarial robustness, are hard to control.
Recent research demonstrated that singular values of convolutional layers significantly affect such elusive properties.
We offer a principled approach to alleviating constraints of the prior art at the expense of an insignificant reduction in layer expressivity.
arXiv Detail & Related papers (2022-11-24T19:09:44Z)
- Fast and Provable Tensor Robust Principal Component Analysis via Scaled Gradient Descent [30.299284742925852]
This paper tackles tensor robust principal component analysis (RPCA),
which aims to recover a low-rank tensor from observations contaminated by sparse corruptions.
We show that the proposed algorithm achieves better and more scalable performance than state-of-the-art matrix and tensor RPCA algorithms.
arXiv Detail & Related papers (2022-06-18T04:01:32Z)
- Bayesian Robust Tensor Ring Model for Incomplete Multiway Data [7.765112574724006]
Low-rank tensor completion aims to recover missing entries from the observed data.
In this paper, we propose a Bayesian robust tensor ring (BRTR) decomposition method for the RTC problem.
Experiments indicate that BRTR has better recovery performance and ability to remove noise than other state-of-the-art methods.
arXiv Detail & Related papers (2022-02-27T09:25:24Z) - Efficient Tensor Robust PCA under Hybrid Model of Tucker and Tensor
Train [33.33426557160802]
We propose an efficient tensor robust principal component analysis (TRPCA) model under a hybrid of the Tucker and tensor train (TT) formats.
Specifically, in theory we reveal that TT nuclear norm (TTNN) of the original big tensor can be equivalently converted to that of a much smaller tensor via a Tucker compression format.
Numerical experiments on both synthetic and real-world tensor data verify the superiority of the proposed model.
arXiv Detail & Related papers (2021-12-20T01:15:45Z) - MTC: Multiresolution Tensor Completion from Partial and Coarse
Observations [49.931849672492305]
Existing completion formulations mostly rely on partial observations from a single tensor.
We propose an efficient Multiresolution Tensor Completion model (MTC) to solve the problem.
arXiv Detail & Related papers (2021-06-14T02:20:03Z) - Rank-R FNN: A Tensor-Based Learning Model for High-Order Data
Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z) - Enhanced Principal Component Analysis under A Collaborative-Robust
Framework [89.28334359066258]
We introduce a general collaborative-robust weight learning framework that combines weight learning and robust loss in a non-trivial way.
Under the proposed framework, only a subset of well-fitting samples is activated and given more importance during training, while the remaining samples, whose errors are large, are not simply ignored.
In particular, the negative effects of inactivated samples are alleviated by the robust loss function.
arXiv Detail & Related papers (2021-03-22T15:17:37Z) - Graph Regularized Nonnegative Tensor Ring Decomposition for Multiway
Representation Learning [38.70369173200596]
Nonnegative tensor ring (NTR) decomposition and graph regularized NTR (GNTR) decomposition are proposed.
The proposed algorithms can extract parts-based bases with rich colors and lines from tensor objects, providing a more interpretable and meaningful representation.
arXiv Detail & Related papers (2020-10-12T12:54:20Z) - TRP: Trained Rank Pruning for Efficient Deep Neural Networks [69.06699632822514]
We propose Trained Rank Pruning (TRP), which alternates between low rank approximation and training.
A nuclear-norm regularization optimized by sub-gradient descent is utilized to further promote low rank in TRP.
The TRP trained network inherently has a low-rank structure, and is approximated with negligible performance loss.
arXiv Detail & Related papers (2020-04-30T03:37:36Z)
- Sparse and Low-Rank High-Order Tensor Regression via Parallel Proximal Method [6.381138694845438]
We propose the Sparse and Low-rank Regression model for large-scale data with high-order structures.
Our model enforces sparsity and low-rankness of the tensor coefficient.
Our model's predictions exhibit meaningful interpretations on the video dataset.
arXiv Detail & Related papers (2019-11-29T06:25:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.