Global Weighted Tensor Nuclear Norm for Tensor Robust Principal
Component Analysis
- URL: http://arxiv.org/abs/2209.14084v1
- Date: Wed, 28 Sep 2022 13:27:10 GMT
- Title: Global Weighted Tensor Nuclear Norm for Tensor Robust Principal
Component Analysis
- Authors: Libin Wang, Yulong Wang, Shiyuan Wang, Youheng Liu, Yutao Hu, Longlong
Chen, Hong Chen
- Abstract summary: This paper develops a new Global Weighted TRPCA method (GWTRPCA)
It is the first approach that simultaneously considers the significance of intra-frontal slice and inter-frontal slice singular values in the Fourier domain.
Exploiting this global information, GWTRPCA penalizes the larger singular values less and assigns smaller weights to them.
- Score: 25.848106663205865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tensor Robust Principal Component Analysis (TRPCA), which aims to recover a
low-rank tensor corrupted by sparse noise, has attracted much attention in many
real applications. This paper develops a new Global Weighted TRPCA method
(GWTRPCA), which is the first approach that simultaneously considers the
significance of intra-frontal slice and inter-frontal slice singular values in
the Fourier domain. Exploiting this global information, GWTRPCA penalizes the
larger singular values less and assigns smaller weights to them. Hence, our
method can recover the low-tubal-rank components more accurately. Moreover, we
propose an effective adaptive weight learning strategy based on a Modified Cauchy
Estimator (MCE), since the weight setting plays a crucial role in the success of
GWTRPCA. To implement the GWTRPCA method, we devise an optimization algorithm
using the Alternating Direction Method of Multipliers (ADMM). Experiments
on real-world datasets validate the effectiveness of our proposed method.
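To make the weighting mechanism concrete, here is a minimal numpy sketch of the weighted singular value thresholding step that tensor-nuclear-norm methods apply slice-wise in the Fourier domain. The inverse-magnitude weighting and the function name are illustrative stand-ins; the paper learns its weights with the MCE instead.

```python
import numpy as np

def global_weighted_svt(X, tau, eps=1e-8):
    """One weighted tensor singular value thresholding step.

    A minimal sketch of the GWTRPCA-style proximal operator: singular
    values of all frontal slices in the Fourier domain are weighted
    globally, with larger singular values receiving smaller weights so
    they are penalized less. The inverse-magnitude weighting here is an
    illustrative choice, not the paper's MCE-learned weights.
    """
    n3 = X.shape[2]
    Xf = np.fft.fft(X, axis=2)                    # frontal slices in Fourier domain
    Us, Ss, Vs = [], [], []
    for k in range(n3):
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        Us.append(U); Ss.append(s); Vs.append(Vh)
    S = np.stack(Ss)                              # all singular values, (n3, min(n1, n2))
    w = 1.0 / (S + eps)                           # large sigma -> small weight
    S_shr = np.maximum(S - tau * w, 0.0)          # weighted soft-thresholding
    Lf = np.empty_like(Xf)
    for k in range(n3):
        Lf[:, :, k] = Us[k] @ np.diag(S_shr[k]) @ Vs[k]
    return np.real(np.fft.ifft(Lf, axis=2))      # back to the original domain
```

Inside a full TRPCA solver, an operator like this would serve as the ADMM proximal update for the low-rank component, alternating with a soft-thresholding update for the sparse noise term.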
Related papers
- Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences.
To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model.
Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
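As a hedged sketch of what such a combined objective can look like, the snippet below pairs a DPO-style preference loss with a supervised NLL term on the chosen response; `beta`, `eta`, and the exact form are assumptions, not the paper's algorithm.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def regularized_preference_loss(logp_w, logp_l, ref_logp_w, ref_logp_l,
                                beta=0.1, eta=1.0):
    """Preference loss plus an SFT term as an implicit regularizer.

    Hedged sketch: a DPO-style preference objective combined with a
    supervised (maximum-likelihood) loss on the chosen response.
    beta and eta are illustrative hyperparameters.
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    pref_loss = -np.log(sigmoid(margin))   # preference optimization term
    sft_loss = -logp_w                     # supervised learning term (NLL of chosen)
    return pref_loss + eta * sft_loss
```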
arXiv Detail & Related papers (2024-05-26T05:38:50Z)
- Policy Gradient with Active Importance Sampling [55.112959067035916]
Policy gradient (PG) methods significantly benefit from importance sampling (IS), enabling the effective reuse of previously collected samples.
However, IS is employed in RL as a passive tool for re-weighting historical samples.
We look for the best behavioral policy from which to collect samples to reduce the policy gradient variance.
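A toy numpy illustration of the idea, assuming 1-D Gaussian policies: the "active" step is reduced to choosing, among a few candidate behavioral means, the one whose importance-sampled gradient estimates have the lowest empirical variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_logpdf(a, mu, sigma):
    return -0.5 * ((a - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def is_pg_estimates(mu_target, mu_behave, sigma=1.0, n=5000):
    """Per-sample importance-sampled policy-gradient estimates.

    Toy sketch: actions come from a *chosen* behavioral policy, and the
    target policy's gradient is re-weighted by the likelihood ratio.
    """
    a = rng.normal(mu_behave, sigma, size=n)            # actions from behavioral policy
    r = -(a - 2.0) ** 2                                  # illustrative reward
    w = np.exp(gauss_logpdf(a, mu_target, sigma)
               - gauss_logpdf(a, mu_behave, sigma))      # importance weights
    score = (a - mu_target) / sigma ** 2                 # grad_mu log pi_target(a)
    return w * score * r

# "Active" IS: pick the behavioral mean whose estimator has lowest variance.
candidates = [0.0, 1.0, 2.0]
best = min(candidates, key=lambda mu_b: is_pg_estimates(1.0, mu_b).var())
print("lowest-variance behavioral mean:", best)
```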
arXiv Detail & Related papers (2024-05-09T09:08:09Z)
- Finite-Time Convergence and Sample Complexity of Actor-Critic Multi-Objective Reinforcement Learning [20.491176017183044]
This paper tackles the multi-objective reinforcement learning (MORL) problem.
It introduces an innovative actor-critic algorithm named MOAC which finds a policy by iteratively making trade-offs among conflicting reward signals.
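As a rough sketch of the trade-off step (not MOAC itself), the snippet below combines per-objective policy gradients through a convex combination; the fixed weights stand in for whatever mechanism the algorithm uses to navigate the conflicting objectives.

```python
import numpy as np

def scalarized_actor_update(policy_grads, weights):
    """Combine per-objective policy gradients into one update direction.

    Minimal sketch of the trade-off step in multi-objective RL: each
    reward signal yields its own policy gradient, and the actor follows
    a convex combination of them. The fixed weights are illustrative.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # stay on the simplex
    return np.tensordot(weights, np.asarray(policy_grads), axes=1)

# Two conflicting objectives pulling a 3-parameter policy in opposite directions:
g = [np.array([1.0, 0.0, -1.0]), np.array([-1.0, 0.5, 1.0])]
print(scalarized_actor_update(g, [0.7, 0.3]))
```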
arXiv Detail & Related papers (2024-05-05T23:52:57Z)
- Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
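Below is a simplified numpy sketch of that control flow, with the attention module replaced by a running per-point consensus score that biases minimal-sample selection; names and thresholds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def consensus_guided_ransac(pts, iters=200, thresh=0.05):
    """Line-fitting RANSAC with a per-point estimation state.

    Simplified sketch: residuals seen so far update a per-point score
    (here a running inlier count) that biases minimal-sample selection
    toward consensus points. The paper's transformer/attention module
    is replaced by this simple statistic.
    """
    n = len(pts)
    state = np.ones(n)                      # per-point estimation state
    best_inliers, best_model = None, None
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False, p=state / state.sum())
        (x1, y1), (x2, y2) = pts[i], pts[j]
        a, b = y2 - y1, x1 - x2             # line: a*x + b*y + c = 0
        c = -(a * x1 + b * y1)
        norm = np.hypot(a, b) + 1e-12
        resid = np.abs(a * pts[:, 0] + b * pts[:, 1] + c) / norm
        inliers = resid < thresh
        state += inliers                    # consensus updates the state
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (a, b, c)
    return best_model, best_inliers
```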
arXiv Detail & Related papers (2023-07-26T08:25:46Z)
- Align-DETR: Improving DETR with Simple IoU-aware BCE loss [32.13866392998818]
We propose a metric, recall of best-regressed samples, to quantitatively evaluate the misalignment problem.
The proposed loss, IA-BCE, guides the training of DETR to build a strong correlation between classification score and localization precision.
To overcome the dramatic decrease in sample quality induced by the sparsity of queries, we introduce a prime sample weighting mechanism.
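For illustration, the sketch below implements one common IoU-aware soft-target construction for BCE; the exponent `alpha` and the target formula `t = s^alpha * IoU^(1-alpha)` are assumptions and may differ from the paper's exact definition.

```python
import numpy as np

def ia_bce_loss(scores, ious, alpha=0.25, eps=1e-8):
    """IoU-aware BCE: soft classification targets derived from IoU.

    Hedged sketch: instead of hard 0/1 labels, the positive target t
    couples classification score and localization quality via
    t = s**alpha * IoU**(1 - alpha), one common alignment metric.
    """
    scores = np.clip(scores, eps, 1 - eps)
    t = (scores ** alpha) * (ious ** (1 - alpha))     # aligned soft target
    return -(t * np.log(scores) + (1 - t) * np.log(1 - scores))

print(ia_bce_loss(np.array([0.9, 0.6]), np.array([0.8, 0.3])))
```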
arXiv Detail & Related papers (2023-04-15T10:24:51Z)
- Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the Reweighted Score function, ReScore.
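A minimal numpy sketch of where such weights enter, assuming a NOTEARS-style least-squares score for a linear structural equation model; how ReScore actually learns the weights is omitted.

```python
import numpy as np

def reweighted_ls_score(X, W, sample_weights):
    """Sample-reweighted least-squares score for a linear SEM.

    Minimal sketch of plugging adaptive per-sample weights into a
    NOTEARS-style score ||X - XW||_F^2; the adversarial learning of
    the weights themselves (ReScore's contribution) is out of scope.
    """
    w = np.asarray(sample_weights, dtype=float)
    w = w / w.sum() * len(w)                    # normalize to mean one
    resid = X - X @ W                           # residuals of the linear SEM
    return float(np.sum(w[:, None] * resid ** 2) / (2 * len(X)))
```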
arXiv Detail & Related papers (2023-03-06T14:49:59Z)
- Sample Dropout: A Simple yet Effective Variance Reduction Technique in Deep Policy Optimization [18.627233013208834]
We show that the use of importance sampling could introduce high variance in the objective estimate.
We propose a technique called sample dropout to bound the estimation variance by dropping out samples when their ratio deviation is too high.
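The masking rule is simple enough to sketch directly; the deviation threshold `delta` below is an illustrative value.

```python
import numpy as np

def sample_dropout(ratios, advantages, delta=0.25):
    """Drop samples whose importance ratio deviates too far from 1.

    Minimal sketch of the abstract's idea: masking out samples with
    |ratio - 1| > delta bounds the variance that extreme importance
    weights would otherwise inject into the surrogate objective.
    """
    keep = np.abs(ratios - 1.0) <= delta
    if not keep.any():                       # avoid an empty batch
        return 0.0
    return float(np.mean(ratios[keep] * advantages[keep]))
```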
arXiv Detail & Related papers (2023-02-05T04:44:35Z)
- Robust Sample Weighting to Facilitate Individualized Treatment Rule Learning for a Target Population [6.1210839791227745]
Learning individualized treatment rules (ITRs) is an important topic in precision medicine.
We develop a weighting framework to mitigate the impact of misspecification on optimal ITRs from a source population to a target population.
Our method can greatly improve ITR estimation for the target population compared with other weighting methods.
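As a toy illustration of the underlying weighting idea (not the paper's robust framework), source samples can be reweighted by a density ratio so that weighted source averages approximate target-population quantities.

```python
import numpy as np

def gaussian_density(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def transport_weights(x_source, mu_t=1.0, s_t=1.0, mu_s=0.0, s_s=1.0):
    """Density-ratio weights carrying source samples to a target population.

    Toy 1-D sketch: each source observation is reweighted by
    p_target(x) / p_source(x). The paper's framework additionally guards
    against misspecification of these weights, which this sketch does not.
    """
    w = gaussian_density(x_source, mu_t, s_t) / gaussian_density(x_source, mu_s, s_s)
    return w / w.mean()                      # self-normalized weights
```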
arXiv Detail & Related papers (2021-05-03T00:05:18Z)
- Stochastic Optimization of Areas Under Precision-Recall Curves with Provable Convergence [66.83161885378192]
Areas under the ROC curve (AUROC) and the precision-recall curve (AUPRC) are common metrics for evaluating classification performance on imbalanced problems.
We propose a stochastic method with provable convergence to optimize AUPRC for deep learning.
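For reference, the finite-sample quantity being targeted is average precision; the direct computation below is non-differentiable, which is precisely why a stochastic surrogate method is needed.

```python
import numpy as np

def average_precision(scores, labels):
    """Average precision, the finite-sample surrogate of AUPRC.

    Shown as the quantity such methods optimize; this direct
    computation is not itself differentiable.
    """
    order = np.argsort(-scores)              # rank by decreasing score
    y = np.asarray(labels)[order]
    cum_pos = np.cumsum(y)
    precision = cum_pos / np.arange(1, len(y) + 1)
    return float(np.sum(precision * y) / max(y.sum(), 1))

print(average_precision(np.array([0.9, 0.8, 0.3, 0.2]), np.array([1, 0, 1, 0])))
```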
arXiv Detail & Related papers (2021-04-18T06:22:21Z)
- Attentional-Biased Stochastic Gradient Descent [74.49926199036481]
We present a provable method (named ABSGD) for addressing the data imbalance or label noise problem in deep learning.
Our method is a simple modification to momentum SGD where we assign an individual importance weight to each sample in the mini-batch.
ABSGD is flexible enough to combine with other robust losses without any additional cost.
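A hedged sketch of that weighting: a softmax over scaled mini-batch losses, with the temperature `lam` and the mean-one normalization as illustrative choices.

```python
import numpy as np

def absgd_weights(losses, lam=1.0):
    """Attentional per-sample weights from mini-batch losses.

    Sketch of the weighting idea: a softmax over scaled losses gives
    each sample an individual importance weight. With lam > 0, harder
    (high-loss) samples get larger weights, which suits imbalanced data;
    the sign and scale conventions here are illustrative.
    """
    z = np.asarray(losses) / lam
    z = z - z.max()                          # numerical stability
    w = np.exp(z)
    return w * len(w) / w.sum()              # weights averaging to one

losses = np.array([0.1, 0.5, 2.0])
print(absgd_weights(losses))                 # the SGD step uses mean(w * losses)
```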
arXiv Detail & Related papers (2020-12-13T03:41:52Z)
- Unsupervised learning of disentangled representations in deep restricted kernel machines with orthogonality constraints [15.296955630621566]
Constr-DRKM is a deep kernel method for the unsupervised learning of disentangled data representations.
We quantitatively evaluate the proposed method's effectiveness in disentangled feature learning.
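As a small sketch of the constraint in question (the surrounding kernel-machine training loop is omitted), a soft orthogonality penalty on latent features can be written as follows.

```python
import numpy as np

def orthogonality_penalty(H):
    """Soft orthogonality constraint on latent features.

    Minimal sketch of the kind of constraint Constr-DRKM places on
    hidden features to encourage disentanglement: penalize deviation
    of H^T H from the identity.
    """
    H = np.asarray(H, dtype=float)
    gram = H.T @ H
    return float(np.linalg.norm(gram - np.eye(gram.shape[0]), "fro") ** 2)
```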
arXiv Detail & Related papers (2020-11-25T11:40:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.