Global Weighted Tensor Nuclear Norm for Tensor Robust Principal
Component Analysis
- URL: http://arxiv.org/abs/2209.14084v1
- Date: Wed, 28 Sep 2022 13:27:10 GMT
- Title: Global Weighted Tensor Nuclear Norm for Tensor Robust Principal
Component Analysis
- Authors: Libin Wang, Yulong Wang, Shiyuan Wang, Youheng Liu, Yutao Hu, Longlong
Chen, Hong Chen
- Abstract summary: This paper develops a new Global Weighted TRPCA method (GWTRPCA).
It is the first approach that simultaneously considers the significance of intra-frontal slice and inter-frontal slice singular values in the Fourier domain.
Exploiting this global information, GWTRPCA assigns smaller weights to the larger singular values and thus penalizes them less.
- Score: 25.848106663205865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tensor Robust Principal Component Analysis (TRPCA), which aims to recover a
low-rank tensor corrupted by sparse noise, has attracted much attention in many
real applications. This paper develops a new Global Weighted TRPCA method
(GWTRPCA), which is the first approach that simultaneously considers the
significance of intra-frontal slice and inter-frontal slice singular values in
the Fourier domain. Exploiting this global information, GWTRPCA assigns smaller
weights to the larger singular values and thus penalizes them less. Hence, our
method can recover the low-tubal-rank components more accurately. Moreover, we
propose an effective adaptive weight learning strategy based on a Modified Cauchy
Estimator (MCE), since the weight setting plays a crucial role in the success of
GWTRPCA. To implement the GWTRPCA method, we devise an optimization algorithm
based on the Alternating Direction Method of Multipliers (ADMM). Experiments
on real-world datasets validate the effectiveness of our proposed method.
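The weighted penalty described in the abstract can be made concrete with a small numerical sketch. The Python snippet below (not the authors' code) illustrates a globally weighted tensor singular value thresholding step of the kind a GWTRPCA-style ADMM solver would apply in its low-tubal-rank update: the singular values of all frontal slices in the Fourier domain are pooled, and each singular value receives a weight that shrinks as its magnitude grows, so larger singular values are penalized less. The function name, the parameter tau, and the inverse-magnitude weight rule are illustrative assumptions; the paper's MCE-based weight learning is not reproduced here.

```python
import numpy as np

def global_weighted_tsvt(Y, tau, eps=1e-8):
    """Globally weighted tensor singular value thresholding (illustrative sketch).

    Y   : real 3-way array of shape (n1, n2, n3)
    tau : thresholding parameter (assumed to play the role of an ADMM step size)
    """
    n1, n2, n3 = Y.shape
    Yf = np.fft.fft(Y, axis=2)  # frontal slices in the Fourier domain

    # Global pass: pool the singular values of *all* frontal slices so the
    # weights reflect inter-slice as well as intra-slice information.
    svals = [np.linalg.svd(Yf[:, :, k], compute_uv=False) for k in range(n3)]
    scale = np.median(np.concatenate(svals)) + eps

    Lf = np.zeros_like(Yf)
    for k in range(n3):
        U, s, Vh = np.linalg.svd(Yf[:, :, k], full_matrices=False)
        w = scale / (s + eps)                    # placeholder rule: larger sigma -> smaller weight
        s_shrunk = np.maximum(s - tau * w, 0.0)  # weighted soft-thresholding of singular values
        Lf[:, :, k] = (U * s_shrunk) @ Vh

    return np.real(np.fft.ifft(Lf, axis=2))      # back to the original domain

# Usage inside one TRPCA-style ADMM iteration (M: observation, S: current sparse part):
# L = global_weighted_tsvt(M - S, tau=0.1)
```

In a full solver this step would alternate with a sparse update (e.g., elementwise soft-thresholding of M - L) and a dual variable update, following the standard ADMM scheme for TRPCA.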
Related papers
- Learnable Scaled Gradient Descent for Guaranteed Robust Tensor PCA [39.084456109467204]
We propose an efficient scaled gradient descent (SGD) approach within the t-SVD framework for the first time.
We show that RTPCA-SGD achieves linear convergence to the true low-rank tensor at a constant rate, independent of the condition number.
arXiv Detail & Related papers (2025-01-08T15:25:19Z)
- Alternating minimization for square root principal component pursuit [2.449191760736501]
We develop efficient algorithms for solving the square root principal component pursuit (SRPCP) problem.
Specifically, we propose a tuning-free alternating minimization (AltMin) algorithm, where each iteration involves subproblems enjoying closed-form optimal solutions.
We introduce techniques based on the variational formulation of the nuclear norm and Burer-Monteiro decomposition to further accelerate the AltMin method.
arXiv Detail & Related papers (2024-12-31T14:43:50Z)
- Re-evaluating Group Robustness via Adaptive Class-Specific Scaling [47.41034887474166]
Group distributionally robust optimization is a prominent algorithm used to mitigate spurious correlations and address dataset bias.
Existing approaches have reported improvements in robust accuracies but come at the cost of average accuracy due to inherent trade-offs.
We propose a class-specific scaling strategy, directly applicable to existing debiasing algorithms with no additional training.
We develop an instance-wise adaptive scaling technique to alleviate this trade-off, even leading to improvements in both robust and average accuracies.
arXiv Detail & Related papers (2024-12-19T16:01:51Z)
- Robust PCA Based on Adaptive Weighted Least Squares and Low-Rank Matrix Factorization [2.983818075226378]
We propose a novel RPCA model that adaptively updates the weight factor during the initial instability of the estimated components.
Our method outperforms existing regularization-based approaches, offering superior performance and efficiency.
arXiv Detail & Related papers (2024-12-19T08:31:42Z)
- Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences.
To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model.
Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
arXiv Detail & Related papers (2024-05-26T05:38:50Z)
- Policy Gradient with Active Importance Sampling [55.112959067035916]
Policy gradient (PG) methods significantly benefit from importance sampling (IS), enabling the effective reuse of previously collected samples.
However, IS is employed in RL as a passive tool for re-weighting historical samples.
We look for the best behavioral policy from which to collect samples to reduce the policy gradient variance.
arXiv Detail & Related papers (2024-05-09T09:08:09Z)
- Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
arXiv Detail & Related papers (2023-07-26T08:25:46Z)
- Stochastic Optimization of Areas Under Precision-Recall Curves with Provable Convergence [66.83161885378192]
Area under ROC (AUROC) and precision-recall curves (AUPRC) are common metrics for evaluating classification performance for imbalanced problems.
We propose a technical method to optimize AUPRC for deep learning.
arXiv Detail & Related papers (2021-04-18T06:22:21Z)
- Attentional-Biased Stochastic Gradient Descent [74.49926199036481]
We present a provable method (named ABSGD) for addressing the data imbalance or label noise problem in deep learning.
Our method is a simple modification to momentum SGD where we assign an individual importance weight to each sample in the mini-batch.
ABSGD is flexible enough to combine with other robust losses without any additional cost.
arXiv Detail & Related papers (2020-12-13T03:41:52Z)
- Unsupervised learning of disentangled representations in deep restricted kernel machines with orthogonality constraints [15.296955630621566]
Constr-DRKM is a deep kernel method for the unsupervised learning of disentangled data representations.
We quantitatively evaluate the proposed method's effectiveness in disentangled feature learning.
arXiv Detail & Related papers (2020-11-25T11:40:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.