Adaptive Noisy Matrix Completion
- URL: http://arxiv.org/abs/2203.08340v1
- Date: Wed, 16 Mar 2022 01:20:18 GMT
- Title: Adaptive Noisy Matrix Completion
- Authors: Ilqar Ramazanli
- Abstract summary: We assume that the matrix $\mathbf{M}$ we aim to recover is a low-rank matrix plus small bounded noise.
We study this problem in an adaptive setting, continuously estimating an upper bound on the angle between the underlying low-rank subspace and the noise-added subspace.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Low-rank matrix completion has been studied extensively under various
categories. The problem can be categorized as noisy or exact completion, and as
active (adaptive) or passive completion. In this paper we focus on adaptive
matrix completion with bounded noise. We assume that the matrix $\mathbf{M}$ we
aim to recover is a low-rank matrix plus small bounded noise. The problem was
previously studied by \cite{nina} in a fixed sampling model. Here, we study the
problem in an adaptive setting, continuously estimating an upper bound on the
angle between the underlying low-rank subspace and the noise-added subspace.
Moreover, the method suggested here can be shown to require far fewer
observations than the aforementioned method.
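To make the setting concrete, here is a minimal Python sketch of adaptively completing a bounded-noise low-rank matrix by tracking an estimate of its column space. This is an illustration under assumptions, not the paper's algorithm: the per-column sampling budget `d`, the residual threshold `tau`, and the helper `adaptive_complete` are hypothetical choices introduced only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic instance of the setting: M = L + N, with L low-rank and N small bounded noise.
n_rows, n_cols, r = 200, 300, 5
L = rng.standard_normal((n_rows, r)) @ rng.standard_normal((r, n_cols))
noise_bound = 1e-3
N = rng.uniform(-noise_bound, noise_bound, size=(n_rows, n_cols))
M = L + N


def adaptive_complete(M, d, tau):
    """Adaptively complete M, sampling d entries per column; tau is a residual
    threshold. Both are assumed tuning parameters, not values from the paper."""
    n, m = M.shape
    basis = np.zeros((n, 0))      # running orthonormal estimate of the column space
    M_hat = np.zeros_like(M)
    observed = 0
    for j in range(m):
        idx = rng.choice(n, size=d, replace=False)   # sample a few rows of column j
        y = M[idx, j]
        observed += d
        if basis.shape[1] == 0:
            coef, resid = np.zeros(0), 1.0           # nothing spanned yet: force full observation
        else:
            B = basis[idx, :]
            coef, *_ = np.linalg.lstsq(B, y, rcond=None)
            resid = np.linalg.norm(y - B @ coef) / max(np.linalg.norm(y), 1e-12)
        if resid > tau:
            # Sampled entries are poorly explained by the current subspace:
            # observe the full column and enlarge the basis.
            col = M[:, j]
            observed += n - d
            basis, _ = np.linalg.qr(np.column_stack([basis, col]))
            M_hat[:, j] = col
        else:
            # Column lies (up to noise) in the estimated subspace:
            # reconstruct it from the partial sample alone.
            M_hat[:, j] = basis @ coef
    return M_hat, observed


M_hat, observed = adaptive_complete(M, d=3 * r, tau=10 * noise_bound)
rel_err = np.linalg.norm(M_hat - M, "fro") / np.linalg.norm(M, "fro")
print(f"observed {observed} of {M.size} entries, relative Frobenius error {rel_err:.2e}")
```

Columns whose sampled entries are already well explained by the current subspace estimate are reconstructed from only `d` entries; this is where adaptive schemes save observations over fixed (passive) sampling.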
Related papers
- Information limits and Thouless-Anderson-Palmer equations for spiked matrix models with structured noise [19.496063739638924]
We consider the problem of Bayesian inference for a spiked matrix model with structured noise.
We show how to predict the statistical limits using an efficient algorithm inspired by the theory of adaptive Thouless-Anderson-Palmer equations.
arXiv Detail & Related papers (2024-05-31T16:38:35Z) - Borda Regret Minimization for Generalized Linear Dueling Bandits [65.09919504862496]
We study the Borda regret minimization problem for dueling bandits, which aims to identify the item with the highest Borda score.
We propose a rich class of generalized linear dueling bandit models, which cover many existing models.
Our algorithm achieves an $\tilde{O}(d^{2/3} T^{2/3})$ regret, which is also optimal.
arXiv Detail & Related papers (2023-03-15T17:59:27Z) - Partial Matrix Completion [29.68420094716923]
This work establishes a new framework of partial matrix completion.
The goal is to identify a large subset of the entries that can be completed with high confidence.
We propose an efficient algorithm with the following provable guarantees.
arXiv Detail & Related papers (2022-08-25T12:47:20Z) - A gradient estimator via L1-randomization for online zero-order optimization with two point feedback [93.57603470949266]
We present a novel gradient estimator based on two function evaluations and randomization.
We consider two types of assumptions on the noise of the zero-order oracle: canceling noise and adversarial noise.
We provide an anytime and completely data-driven algorithm, which is adaptive to all parameters of the problem.
arXiv Detail & Related papers (2022-05-27T11:23:57Z) - Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z) - Matrix Completion with Sparse Noisy Rows [3.42658286826597]
We study exact low-rank completion under a non-degenerate noise model.
In this paper, we assume that each row, rather than each column, can receive random noise.
arXiv Detail & Related papers (2022-04-01T05:45:43Z) - Survey of Matrix Completion Algorithms [3.42658286826597]
The matrix completion problem has been investigated under many different conditions since Netflix announced the Netflix Prize problem.
We visit some of the matrix completion methods, mainly along the passive and adaptive directions.
arXiv Detail & Related papers (2022-04-01T05:44:47Z) - Solving weakly supervised regression problem using low-rank manifold regularization [77.34726150561087]
We solve a weakly supervised regression problem.
Under "weakly" we understand that for some training points the labels are known, for some unknown, and for others uncertain due to the presence of random noise or other reasons such as lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z) - Non-Adaptive Adaptive Sampling on Turnstile Streams [57.619901304728366]
We give the first relative-error algorithms for column subset selection, subspace approximation, projective clustering, and volume maximization on turnstile streams that use space sublinear in $n$.
Our adaptive sampling procedure has a number of applications to various data summarization problems that either improve state-of-the-art or have only been previously studied in the more relaxed row-arrival model.
arXiv Detail & Related papers (2020-04-23T05:00:21Z) - Optimal Exact Matrix Completion Under new Parametrization [0.0]
We study the problem of exact completion of an $m \times n$ matrix of rank $r$ with an adaptive sampling method.
We propose matrix completion algorithms that exactly recover the target matrix.
arXiv Detail & Related papers (2020-02-06T18:31:47Z) - Contextual Linear Bandits under Noisy Features: Towards Bayesian Oracles [65.9694455739978]
We study contextual linear bandit problems under feature uncertainty, where the features are noisy and have missing entries.
Our analysis reveals that the optimal hypothesis can significantly deviate from the underlying realizability function, depending on the noise characteristics.
This implies that classical approaches cannot guarantee a non-trivial regret bound.
arXiv Detail & Related papers (2017-03-03T21:39:56Z)