Generalization Bounds for Inductive Matrix Completion in Low-noise Settings
- URL: http://arxiv.org/abs/2212.08339v1
- Date: Fri, 16 Dec 2022 08:30:41 GMT
- Title: Generalization Bounds for Inductive Matrix Completion in Low-noise Settings
- Authors: Antoine Ledent, Rodrigo Alves, Yunwen Lei, Yann Guermeur and Marius
Kloft
- Abstract summary: We study inductive matrix completion (matrix completion with side information) under an i.i.d. subgaussian noise assumption.
We obtain for the first time generalization bounds with the following three properties.
- Score: 46.82705757568271
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study inductive matrix completion (matrix completion with side
information) under an i.i.d. subgaussian noise assumption at a low noise
regime, with uniform sampling of the entries. We obtain for the first time
generalization bounds with the following three properties: (1) they scale like
the standard deviation of the noise and in particular approach zero in the
exact recovery case; (2) even in the presence of noise, they converge to zero
when the sample size approaches infinity; and (3) for a fixed dimension of the
side information, they only have a logarithmic dependence on the size of the
matrix. Differently from many works in approximate recovery, we present results
both for bounded Lipschitz losses and for the absolute loss, with the latter
relying on Talagrand-type inequalities. The proofs create a bridge between two
approaches to the theoretical analysis of matrix completion, since they consist
in a combination of techniques from both the exact recovery literature and the
approximate recovery literature.
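To make the setting concrete, the sketch below simulates the kind of observation model described in the abstract, using the standard bilinear side-information formulation of inductive matrix completion: a ground truth of the form X M Y^T with known row and column features, uniformly sampled entries, and i.i.d. Gaussian (hence subgaussian) noise whose standard deviation controls the low-noise regime. All dimensions, variable names, and the Gaussian noise choice are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n1, n2 = 500, 400      # matrix dimensions
d1, d2 = 20, 15        # side-information dimensions (fixed, small)
r = 5                  # rank of the core matrix
sigma = 0.01           # noise standard deviation (low-noise regime)
n_obs = 5000           # number of uniformly sampled entries

# Side information (features for rows and columns), assumed known.
X = rng.standard_normal((n1, d1))
Y = rng.standard_normal((n2, d2))

# Low-rank core matrix M, so the ground truth is G = X @ M @ Y.T.
M = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2))
G = X @ M @ Y.T

# Uniform sampling of entries with i.i.d. Gaussian (subgaussian) noise.
rows = rng.integers(0, n1, size=n_obs)
cols = rng.integers(0, n2, size=n_obs)
observations = G[rows, cols] + sigma * rng.standard_normal(n_obs)

# A learner sees (rows, cols, observations, X, Y) and must estimate M; the
# paper's bounds control the generalization error of such estimators,
# scaling with sigma and only logarithmically in n1, n2 for fixed d1, d2.
```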
Related papers
- Non-convex matrix sensing: Breaking the quadratic rank barrier in the sample complexity [11.412228884390784]
We study the problem of reconstructing a low-rank matrix from a few measurements.
We show that factorized gradient descent with spectral initialization converges to the ground truth, with a number of samples that breaks the quadratic dependence on the rank.
arXiv Detail & Related papers (2024-08-20T14:09:28Z) - Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
- Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
arXiv Detail & Related papers (2024-05-23T12:26:25Z) - Matrix Denoising with Doubly Heteroscedastic Noise: Fundamental Limits and Optimal Spectral Methods [24.06775799553418]
- Matrix Denoising with Doubly Heteroscedastic Noise: Fundamental Limits and Optimal Spectral Methods [24.06775799553418]
We study the matrix denoising problem of estimating the singular vectors of a rank-$1$ signal corrupted by noise with both column and row correlations.
Our work establishes the information-theoretic and algorithmic limits of matrix denoising with doubly heteroscedastic noise.
arXiv Detail & Related papers (2024-05-22T18:38:10Z) - Concentration properties of fractional posterior in 1-bit matrix completion [0.0]
This work specifically addresses the scenario of binary observations, often termed 1-bit matrix completion.
We consider a general, non-uniform sampling scheme and provide theoretical assurances on the efficacy of the fractional posterior.
Our results are comparable to those found in the frequentist literature, yet require less restrictive assumptions.
arXiv Detail & Related papers (2024-04-13T11:22:53Z) - Critical Points and Convergence Analysis of Generative Deep Linear
Networks Trained with Bures-Wasserstein Loss [2.294014185517203]
We consider a deep matrix factorization model of covariance matrices trained with the Bures-Wasserstein distance.
We characterize the critical points and minimizers of the Bures-Wasserstein distance over the space of rank-bounded matrices.
We establish convergence results for gradient flow using a smooth perturbative version of the loss, as well as for gradient descent with a finite step size.
arXiv Detail & Related papers (2023-03-06T10:56:14Z) - Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z) - Global Convergence of Sub-gradient Method for Robust Matrix Recovery:
Small Initialization, Noisy Measurements, and Over-parameterization [4.7464518249313805]
The sub-gradient method (SubGM) is used to recover a low-rank matrix from a limited number of measurements.
We show that SubGM converges to the true solution, even under arbitrarily large and arbitrarily dense noise values.
arXiv Detail & Related papers (2022-02-17T17:50:04Z) - Learning Mixtures of Low-Rank Models [89.39877968115833]
- Learning Mixtures of Low-Rank Models [89.39877968115833]
We study the problem of learning mixtures of low-rank models.
We develop an algorithm that is guaranteed to recover the unknown matrices with near-optimal sample complexity.
In addition, the proposed algorithm is provably stable against random noise.
arXiv Detail & Related papers (2020-09-23T17:53:48Z) - Sample Complexity Bounds for 1-bit Compressive Sensing and Binary Stable
Embeddings with Generative Priors [52.06292503723978]
Motivated by advances in compressive sensing with generative models, we study the problem of 1-bit compressive sensing with generative models.
We first consider noiseless 1-bit measurements, and provide sample complexity bounds for approximate recovery under i.i.d. Gaussian measurements.
We demonstrate that the Binary $\epsilon$-Stable Embedding property, which characterizes the robustness of the reconstruction to measurement errors and noise, also holds for 1-bit compressive sensing with Lipschitz continuous generative models.
arXiv Detail & Related papers (2020-02-05T09:44:10Z) - Relative Error Bound Analysis for Nuclear Norm Regularized Matrix Completion [101.83262280224729]
We develop a relative error bound for nuclear norm regularized matrix completion.
We derive a relative upper bound for recovering the best low-rank approximation of the unknown matrix.
arXiv Detail & Related papers (2015-04-26T13:12:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.