Adversarial Robust Low Rank Matrix Estimation: Compressed Sensing and Matrix Completion
- URL: http://arxiv.org/abs/2010.13018v5
- Date: Fri, 24 May 2024 07:44:50 GMT
- Title: Adversarial Robust Low Rank Matrix Estimation: Compressed Sensing and Matrix Completion
- Authors: Takeyuki Sasai, Hironori Fujisawa
- Abstract summary: We deal with matrix compressed sensing, including lasso as a partial problem, and matrix completion.
We propose a simple unified approach based on a combination of the Huber loss function and the nuclear norm penalization.
- Score: 2.0257616108612373
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider robust low rank matrix estimation as a trace regression when outputs are contaminated by adversaries. The adversaries are allowed to add arbitrary values to arbitrary outputs. Such values can depend on any samples. We deal with matrix compressed sensing, including lasso as a partial problem, and matrix completion, and then we obtain sharp estimation error bounds. To obtain the error bounds for different models such as matrix compressed sensing and matrix completion, we propose a simple unified approach based on a combination of the Huber loss function and the nuclear norm penalization, which is a different approach from the conventional ones. Some error bounds obtained in the present paper are sharper than the past ones.
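The proposed estimator pairs the Huber loss with nuclear norm penalization. As a rough illustration only (not the authors' code; the proximal gradient solver, step size heuristic, and parameter names below are our own assumptions), a sketch for the matrix compressed sensing setting could look like:

```python
import numpy as np

def huber_grad(r, delta):
    """Derivative of the Huber loss: linear tails, quadratic core."""
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def svt(M, tau):
    """Singular value soft-thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def huber_trace_regression(X, y, lam, delta=1.0, step=None, iters=500):
    """Proximal gradient on (1/n) sum_i huber(y_i - <X_i, Theta>) + lam * ||Theta||_*.
    X: (n, d1, d2) design matrices; y: (n,) outputs, possibly contaminated."""
    n, d1, d2 = X.shape
    if step is None:
        step = n / np.sum(X ** 2)  # crude bound on the smooth part's Lipschitz constant
    Theta = np.zeros((d1, d2))
    for _ in range(iters):
        resid = y - np.tensordot(X, Theta, axes=([1, 2], [0, 1]))
        grad = -np.tensordot(huber_grad(resid, delta), X, axes=(0, 0)) / n
        Theta = svt(Theta - step * grad, step * lam)
    return Theta
```

Matrix completion fits the same template when each X_i is a scaled standard basis matrix that samples a single entry of the unknown matrix.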
Related papers
- Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
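A minimal numpy sketch of the quantity being bounded, i.e. the entrywise gap between a kernel matrix and its rank-r truncated eigendecomposition (the RBF kernel and problem sizes here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))
# Gaussian (RBF) kernel matrix
sq = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
K = np.exp(-sq / 2.0)

# rank-r approximation from the truncated eigendecomposition
r = 10
vals, vecs = np.linalg.eigh(K)   # eigenvalues in ascending order
V = vecs[:, -r:]                 # top-r eigenvectors
K_r = V @ np.diag(vals[-r:]) @ V.T

print("max entrywise error:", np.max(np.abs(K - K_r)))
print("Frobenius error:    ", np.linalg.norm(K - K_r))
```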
arXiv Detail & Related papers (2024-05-23T12:26:25Z) - Rank Overspecified Robust Matrix Recovery: Subgradient Method and Exact Recovery [37.05862765171643]
We consider a robust factorization of a low-rank matrix with no prior knowledge of the rank.
We show that our particular design of diminishing the size of the matrix effectively prevents overfitting under overspecified models.
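A hedged toy sketch of this style of method, assuming an l1 loss over a fully observed corrupted matrix, an overspecified factor rank k, and a geometric step size decay (the paper's measurement model and schedule may differ):

```python
import numpy as np

def robust_factorization(Y, k, iters=2000, step0=0.1, q=0.998, seed=0):
    """Subgradient method on the l1 loss ||F G^T - Y||_1 with an
    overspecified factor rank k and geometrically decaying steps."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    F = rng.normal(scale=0.1, size=(m, k))
    G = rng.normal(scale=0.1, size=(n, k))
    for t in range(iters):
        S = np.sign(F @ G.T - Y)      # subgradient of the l1 loss
        step = step0 * q ** t          # diminishing step size
        F, G = F - step * (S @ G), G - step * (S.T @ F)
    return F @ G.T
```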
arXiv Detail & Related papers (2021-09-23T05:54:46Z) - Meta-learning for Matrix Factorization without Shared Rows or Columns [39.56814839510978]
The proposed method uses a neural network that takes a matrix as input, and generates prior distributions of factorized matrices of the given matrix.
The neural network is meta-learned such that the expected imputation error is minimized.
In our experiments with three user-item rating datasets, we demonstrate that our proposed method can impute the missing values from a limited number of observations in unseen matrices.
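The paper meta-learns a neural network that emits prior distributions; as a loose stand-in (all names and the grid-search step are our simplifications, not the authors' method), the sketch below meta-learns a single prior strength lam for ridge-regularized alternating least squares by minimizing the average held-out imputation error over a set of training matrices:

```python
import numpy as np

rng = np.random.default_rng(1)

def als_impute(M, mask, k=3, lam=0.1, iters=20):
    """Ridge-regularized ALS on observed entries; lam plays the role of a prior strength."""
    m, n = M.shape
    U = rng.normal(scale=0.1, size=(m, k)); V = rng.normal(scale=0.1, size=(n, k))
    for _ in range(iters):
        for i in range(m):
            o = mask[i]
            U[i] = np.linalg.solve(V[o].T @ V[o] + lam * np.eye(k), V[o].T @ M[i, o])
        for j in range(n):
            o = mask[:, j]
            V[j] = np.linalg.solve(U[o].T @ U[o] + lam * np.eye(k), U[o].T @ M[o, j])
    return U @ V.T

def make_task():
    M = rng.normal(size=(15, 3)) @ rng.normal(size=(3, 12))
    mask = rng.random(M.shape) < 0.4   # only a few observed entries
    return M, mask

# meta-objective: expected imputation error on held-out entries across tasks
tasks = [make_task() for _ in range(10)]
def meta_loss(lam):
    return np.mean([np.mean((als_impute(M * mask, mask, lam=lam) - M)[~mask] ** 2)
                    for M, mask in tasks])
best_lam = min([0.01, 0.1, 1.0, 10.0], key=meta_loss)  # grid search as a crude meta-learner
```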
arXiv Detail & Related papers (2021-06-29T07:40:20Z) - Adversarially-Trained Nonnegative Matrix Factorization [77.34726150561087]
We consider an adversarially-trained version of the nonnegative matrix factorization.
In our formulation, an attacker adds an arbitrary matrix of bounded norm to the given data matrix.
We design efficient algorithms inspired by adversarial training to optimize for dictionary and coefficient matrices.
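A hedged sketch of one such scheme, assuming the attacker's bounded-Frobenius-norm perturbation is taken in closed form along the residual and the factors follow standard multiplicative updates (the paper's algorithm may differ in both respects):

```python
import numpy as np

def adv_nmf(X, k=5, eps=0.5, iters=200, seed=0):
    """Alternate between the attacker's worst-case bounded perturbation R
    and Lee-Seung multiplicative updates for W, H against the perturbed data."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)); H = rng.random((k, n))
    for _ in range(iters):
        resid = X - W @ H
        nrm = np.linalg.norm(resid)
        # maximizer of ||X + R - WH||_F over ||R||_F <= eps points along the residual
        R = eps * resid / nrm if nrm > 0 else np.zeros_like(X)
        Xa = np.maximum(X + R, 0.0)                 # keep perturbed data nonnegative
        W *= (Xa @ H.T) / (W @ H @ H.T + 1e-9)
        H *= (W.T @ Xa) / (W.T @ W @ H + 1e-9)
    return W, H
```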
arXiv Detail & Related papers (2021-04-10T13:13:17Z) - Extension of Saaty's inconsistency index to incomplete comparisons: Approximated thresholds [0.0]
This paper generalises the inconsistency index proposed by Saaty to incomplete pairwise comparison matrices.
The extension is based on the approach of filling the missing elements to minimise the eigenvalue of the incomplete matrix.
Our results can be used by practitioners as a statistical criterion for accepting/rejecting an incomplete pairwise comparison matrix.
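A small numerical sketch of the underlying recipe, assuming Saaty's index CI = (lambda_max - n)/(n - 1) and a direct numerical minimization of lambda_max over the missing entries (the optimizer choice here is ours, not the paper's):

```python
import numpy as np
from scipy.optimize import minimize

def saaty_ci(A):
    """Saaty's inconsistency index CI = (lambda_max - n) / (n - 1)."""
    n = A.shape[0]
    lam = np.max(np.linalg.eigvals(A).real)
    return (lam - n) / (n - 1)

def incomplete_ci(A, missing):
    """Fill missing pairs (i, j) (and their reciprocals) to minimize lambda_max."""
    def filled(logx):
        B = A.copy()
        for (i, j), v in zip(missing, np.exp(logx)):
            B[i, j], B[j, i] = v, 1.0 / v
        return B
    res = minimize(lambda lx: np.max(np.linalg.eigvals(filled(lx)).real),
                   x0=np.zeros(len(missing)), method="Nelder-Mead")
    return saaty_ci(filled(res.x))

# 4x4 pairwise comparison matrix with the (0, 3) comparison missing
A = np.array([[1, 2, 4, 1], [1/2, 1, 2, 1], [1/4, 1/2, 1, 1/2], [1, 1, 2, 1.0]])
print(incomplete_ci(A, missing=[(0, 3)]))
```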
arXiv Detail & Related papers (2021-02-21T08:39:37Z) - Robust Low-rank Matrix Completion via an Alternating Manifold Proximal Gradient Continuation Method [47.80060761046752]
Robust low-rank matrix completion (RMC) has been studied extensively for computer vision, signal processing and machine learning applications.
This problem aims to decompose a partially observed matrix into the superposition of a low-rank matrix and a sparse matrix, where the sparse matrix captures the grossly corrupted entries of the matrix.
A widely used approach to tackle RMC is to consider a convex formulation, which minimizes the nuclear norm of the low-rank matrix (to promote low-rankness) and the l1 norm of the sparse matrix (to promote sparsity).
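A hedged proximal gradient sketch of that convex formulation (step size and penalty weights are illustrative assumptions):

```python
import numpy as np

def soft(x, t):
    """Entrywise soft-thresholding: prox of t * l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def rmc(M, mask, mu=1.0, lam=0.1, iters=300):
    """Proximal gradient on 0.5 * ||P_Omega(L + S - M)||_F^2
    + mu * ||L||_* + lam * ||S||_1, one convex RMC formulation."""
    L = np.zeros_like(M); S = np.zeros_like(M)
    for _ in range(iters):
        G = mask * (L + S - M)   # gradient of the smooth data-fit term
        U, s, Vt = np.linalg.svd(L - 0.5 * G, full_matrices=False)
        L = U @ np.diag(np.maximum(s - 0.5 * mu, 0)) @ Vt   # prox of the nuclear norm
        S = soft(S - 0.5 * G, 0.5 * lam)                    # prox of the l1 norm
    return L, S
```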
arXiv Detail & Related papers (2020-08-18T04:46:22Z) - A Scalable, Adaptive and Sound Nonconvex Regularizer for Low-rank Matrix Completion [60.52730146391456]
We propose a new scalable nonconvex low-rank regularizer, the "nuclear Frobenius norm" regularizer, which is adaptive and sound.
It bypasses the computation of singular values and allows fast optimization.
It obtains state-of-the-art recovery performance while being the fastest among existing matrix learning methods.
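The summary does not give the regularizer's exact form; as a hedged illustration of how Frobenius-norm machinery can bypass per-iteration SVDs, the sketch below uses the classical variational identity ||X||_* = min over X = A B^T of (||A||_F^2 + ||B||_F^2) / 2, which replaces singular value thresholding with plain gradient steps:

```python
import numpy as np

def factored_completion(M, mask, k=10, lam=0.1, step=0.01, iters=2000, seed=0):
    """Gradient descent on 0.5 * ||P_Omega(A B^T - M)||_F^2
    + 0.5 * lam * (||A||_F^2 + ||B||_F^2); the Frobenius penalty
    upper-bounds lam * ||A B^T||_* without computing any SVD."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    A = rng.normal(scale=0.1, size=(m, k)); B = rng.normal(scale=0.1, size=(n, k))
    for _ in range(iters):
        G = mask * (A @ B.T - M)
        A, B = A - step * (G @ B + lam * A), B - step * (G.T @ A + lam * B)
    return A @ B.T
```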
arXiv Detail & Related papers (2020-08-14T18:47:58Z) - Robust Compressed Sensing using Generative Models [98.64228459705859]
In this paper, we propose an algorithm inspired by the Median-of-Means (MOM).
Our algorithm guarantees recovery for heavy-tailed data, even in the presence of outliers.
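A hedged sketch of the Median-of-Means idea in its simplest linear form, descending only on the block of samples whose loss is the median so that outlier-heavy blocks are ignored (the paper combines this with a generative prior, which we omit here):

```python
import numpy as np

def mom_gradient_descent(A, y, blocks=10, step=0.01, iters=1000, seed=0):
    """MOM descent for y = A x + noise/outliers.
    Assumes the number of samples n is divisible by `blocks`;
    plain least squares stands in for the generative prior."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    idx = rng.permutation(n).reshape(blocks, -1)   # random disjoint blocks
    x = np.zeros(d)
    for _ in range(iters):
        losses = [np.mean((A[b] @ x - y[b]) ** 2) for b in idx]
        b = idx[np.argsort(losses)[blocks // 2]]   # block with the median loss
        x -= step * 2 * A[b].T @ (A[b] @ x - y[b]) / len(b)
    return x
```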
arXiv Detail & Related papers (2020-06-16T19:07:41Z) - Relative Error Bound Analysis for Nuclear Norm Regularized Matrix Completion [101.83262280224729]
We develop a relative error bound for nuclear norm regularized matrix completion.
We derive a relative upper bound for recovering the best low-rank approximation of the unknown matrix.
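A small numeric sketch of the quantity such a relative bound controls: the recovery error of a nuclear norm regularized completion (a soft-impute style solver, our choice) measured against the error of the best rank-r approximation:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(60, 5)) @ rng.normal(size=(5, 50)) + 0.01 * rng.normal(size=(60, 50))
mask = rng.random(M.shape) < 0.5

def soft_impute(M, mask, lam=1.0, iters=200):
    """Nuclear norm regularized completion via iterative SVD soft-thresholding."""
    Z = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(mask * M + (~mask) * Z, full_matrices=False)
        Z = U @ np.diag(np.maximum(s - lam, 0)) @ Vt
    return Z

r = 5
U, s, Vt = np.linalg.svd(M, full_matrices=False)
M_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]   # best rank-r approximation of M
M_hat = soft_impute(M, mask)
# a relative error bound controls this ratio by a constant
print(np.linalg.norm(M_hat - M) / np.linalg.norm(M - M_r))
```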
arXiv Detail & Related papers (2015-04-26T13:12:16Z)