Low-rank Matrix Recovery With Unknown Correspondence
- URL: http://arxiv.org/abs/2110.07959v2
- Date: Mon, 18 Oct 2021 03:10:41 GMT
- Title: Low-rank Matrix Recovery With Unknown Correspondence
- Authors: Zhiwei Tang, Tsung-Hui Chang, Xiaojing Ye, Hongyuan Zha
- Abstract summary: We show that it is possible to recover $M$ by solving a nuclear norm minimization problem under a proper low-rank condition on $M$, with a provable non-asymptotic error bound for the recovery of $M$.
Experiments on simulated data, the MovieLens 100K dataset and the Yale B database show that $\text{M}^3\text{O}$ achieves state-of-the-art performance over several baselines and can recover the ground-truth correspondence with high accuracy.
- Score: 62.634051913953485
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study a matrix recovery problem with unknown correspondence: given the
observation matrix $M_o=[A,\tilde P B]$, where $\tilde P$ is an unknown
permutation matrix, we aim to recover the underlying matrix $M=[A,B]$. Such
a problem commonly arises in applications where heterogeneous data are
utilized and the correspondence among them is unknown, e.g., due to privacy
concerns. We show that it is possible to recover $M$ via solving a nuclear norm
minimization problem under a proper low-rank condition on $M$, with provable
non-asymptotic error bound for the recovery of $M$. We propose an algorithm,
$\text{M}^3\text{O}$ (Matrix recovery via Min-Max Optimization) which recasts
this combinatorial problem as a continuous minimax optimization problem and
solves it by proximal gradient with a Max-Oracle. $\text{M}^3\text{O}$ can also
be applied to a more general scenario where we have missing entries in $M_o$
and multiple groups of data with distinct unknown correspondence. Experiments
on simulated data, the MovieLens 100K dataset and Yale B database show that
$\text{M}^3\text{O}$ achieves state-of-the-art performance over several
baselines and can recover the ground-truth correspondence with high accuracy.
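The min-max idea described in the abstract can be sketched as follows. This is an illustrative simplification, not the authors' implementation: the Max-Oracle is realized as a linear assignment over row correlations, and the nuclear-norm proximal step as singular value thresholding; the function names, hyperparameters, and the specific objective split are assumptions made for this sketch.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def svt(M, tau):
    # Singular value thresholding: the proximal operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def max_oracle(B_hat, B_obs):
    # Match each observed row of B_obs to a row of the current estimate B_hat
    # by maximizing total inner product (a linear assignment problem).
    cost = B_obs @ B_hat.T
    rows, cols = linear_sum_assignment(-cost)  # maximize correlation
    return cols  # B_obs[i] is matched to B_hat[cols[i]]

def recover(A, B_obs, tau=0.1, step=0.5, iters=100):
    # Hypothetical M3O-style loop on M_o = [A, P~ B]: alternate the
    # Max-Oracle over permutations with a proximal gradient (SVT) step
    # on the full matrix [A, B], keeping the observed block A fixed.
    B = B_obs.copy()
    for _ in range(iters):
        perm = max_oracle(B, B_obs)
        target = np.empty_like(B)
        target[perm] = B_obs            # un-permute the observations
        B = B - step * (B - target)     # gradient step on the data-fit term
        M = svt(np.hstack([A, B]), step * tau)
        B = M[:, A.shape[1]:]           # only the permuted block is updated
    return np.hstack([A, B])
```

The assignment step is exact for a fixed estimate, while the SVT step pulls the full matrix toward low rank; the paper's analysis of this interplay (and its handling of missing entries and multiple permuted groups) goes well beyond this sketch.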
Related papers
- The Communication Complexity of Approximating Matrix Rank [50.6867896228563]
We show that this problem has randomized communication complexity $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$.
As an application, we obtain an $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$ space lower bound for any streaming algorithm with $k$ passes.
arXiv Detail & Related papers (2024-10-26T06:21:42Z) - Optimal level set estimation for non-parametric tournament and crowdsourcing problems [49.75262185577198]
Motivated by crowdsourcing, we consider a problem where we partially observe the correctness of the answers of $n$ experts on $d$ questions.
In this paper, we assume that the matrix $M$ containing the probability that expert $i$ answers question $j$ correctly is bi-isotonic up to a permutation of its rows and columns.
We construct an efficient algorithm that turns out to be minimax optimal for this classification problem.
arXiv Detail & Related papers (2024-08-27T18:28:31Z) - One-sided Matrix Completion from Two Observations Per Row [95.87811229292056]
We propose a natural algorithm that involves imputing the missing values of the matrix $X^TX$.
We evaluate our algorithm on one-sided recovery of synthetic data and low-coverage genome sequencing.
arXiv Detail & Related papers (2023-06-06T22:35:16Z) - Optimal Query Complexities for Dynamic Trace Estimation [59.032228008383484]
We consider the problem of minimizing the number of matrix-vector queries needed for accurate trace estimation in the dynamic setting where our underlying matrix is changing slowly.
We provide a novel binary tree summation procedure that simultaneously estimates all $m$ traces up to $\epsilon$ error with $\delta$ failure probability.
Our lower bounds (1) give the first tight bounds for Hutchinson's estimator in the matrix-vector product model with Frobenius norm error even in the static setting, and (2) are the first unconditional lower bounds for dynamic trace estimation.
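Hutchinson's estimator, the baseline referenced above, is a standard technique that can be sketched in a few lines: for a random Rademacher vector $g$, $\mathbb{E}[g^T A g] = \mathrm{tr}(A)$, so averaging quadratic forms over matrix-vector queries estimates the trace. The function name and query budget below are illustrative choices.

```python
import numpy as np

def hutchinson_trace(matvec, n, num_queries=100, seed=None):
    # Hutchinson's estimator: for g with i.i.d. +/-1 (Rademacher) entries,
    # E[g^T A g] = tr(A); average g^T (A g) over matrix-vector queries.
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_queries):
        g = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        total += g @ matvec(g)
    return total / num_queries
```

Only matrix-vector products are needed, which is why query complexity is the natural cost measure in this line of work.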
arXiv Detail & Related papers (2022-09-30T04:15:44Z) - Unique sparse decomposition of low rank matrices [17.037882881652617]
We find a unique decomposition of a low-rank matrix $Y\in \mathbb{R}^{r\times n}$.
We prove that, up to some ambiguities, $Y\in \mathbb{R}^{r\times n}$ admits a unique sparse decomposition with $X\in \mathbb{R}^{r\times n}$.
arXiv Detail & Related papers (2021-06-14T20:05:59Z) - Non-Convex Compressed Sensing with Training Data [0.0]
We find the solution of the original problem $Ax = b$ with high probability in the range of a one layer linear neural network with comparatively few assumptions on the matrix $A$.
In this paper, we consider an alternative, where instead of suitable initial values we are provided with extra training problems $Ax = B_l$, $l=1, \dots, p$ that are related to our compressed sensing problem.
arXiv Detail & Related papers (2021-01-20T20:30:59Z) - The Average-Case Time Complexity of Certifying the Restricted Isometry Property [66.65353643599899]
In compressed sensing, the restricted isometry property (RIP) on $M \times N$ sensing matrices guarantees efficient reconstruction of sparse vectors.
We investigate the exact average-case time complexity of certifying the RIP property for $M\times N$ matrices with i.i.d. $\mathcal{N}(0,1/M)$ entries.
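The naive way to certify the order-$s$ restricted isometry constant is exhaustive: check every size-$s$ column support. The sketch below is an illustrative brute force (not the paper's method, which studies average-case complexity precisely because this enumeration over $\binom{N}{s}$ supports is prohibitively expensive); the function name is assumed.

```python
import numpy as np
from itertools import combinations

def rip_constant(A, s):
    # Exhaustive certification of the order-s restricted isometry constant:
    # delta_s = max over all supports S with |S| = s of ||A_S^T A_S - I||_2.
    # There are C(N, s) supports, so this scales exponentially in s.
    _, N = A.shape
    delta = 0.0
    for S in combinations(range(N), s):
        G = A[:, S].T @ A[:, S]           # s x s Gram matrix of the submatrix
        eig = np.linalg.eigvalsh(G)       # eigenvalues, sorted ascending
        delta = max(delta, eig[-1] - 1.0, 1.0 - eig[0])
    return delta
```

A matrix satisfies order-$s$ RIP with constant $\delta_s$ when every such Gram matrix has eigenvalues in $[1-\delta_s, 1+\delta_s]$.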
arXiv Detail & Related papers (2020-05-22T16:55:01Z) - Optimal Exact Matrix Completion Under new Parametrization [0.0]
We study the problem of exact completion for an $m \times n$ matrix of rank $r$ with the adaptive sampling method.
We propose matrix completion algorithms that exactly recover the target matrix.
arXiv Detail & Related papers (2020-02-06T18:31:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.