Deep Learning Approach for Matrix Completion Using Manifold Learning
- URL: http://arxiv.org/abs/2012.06063v1
- Date: Fri, 11 Dec 2020 01:01:54 GMT
- Title: Deep Learning Approach for Matrix Completion Using Manifold Learning
- Authors: Saeid Mehrdad, Mohammad Hossein Kahaei
- Abstract summary: This paper introduces a new latent-variable model for the data matrix that combines linear and nonlinear models.
We design a novel deep-neural-network-based matrix completion algorithm to address both linear and nonlinear relations among the entries of the data matrix.
- Score: 3.04585143845864
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Matrix completion has received a vast amount of attention due to
its wide applications in various fields of study. Existing matrix completion
methods consider only nonlinear (or only linear) relations among the entries
of a data matrix and ignore the latent linear (or nonlinear) relationships.
This paper introduces a new latent-variable model for the data matrix that
combines linear and nonlinear models, and designs a novel
deep-neural-network-based matrix completion algorithm to address both linear
and nonlinear relations among the entries. The proposed method consists of two
branches. The first branch learns the latent representations of the columns
and reconstructs the columns of the partially observed matrix through a series
of hidden neural network layers. The second branch does the same for the rows.
In addition, based on multi-task learning principles, we enforce the two
branches to work together and introduce a new regularization technique to
reduce over-fitting. More specifically, recovering the missing entries of the
data is the main task, and manifold learning is performed as an auxiliary
task. The auxiliary task constrains the weights of the network, so it can be
considered a regularizer that improves the main task and reduces over-fitting.
Experimental results on synthetic data and several real-world datasets verify
the effectiveness of the proposed method compared with state-of-the-art matrix
completion methods.
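The masked-reconstruction objective at the heart of both branches — fit only the observed entries of a partially observed matrix, with a regularizer on the learned parameters — can be illustrated with a minimal sketch. The paper's actual model is a two-branch deep network with a manifold-learning auxiliary task; the sketch below substitutes a plain low-rank factorization trained by gradient descent on a masked loss, so the function name and all hyperparameters (`rank`, `lr`, `lam`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def masked_completion(X, mask, rank=2, lr=0.01, iters=3000, lam=0.01, seed=0):
    """Gradient descent on the masked reconstruction loss
        || mask * (X - U @ V.T) ||_F^2 + lam * (||U||_F^2 + ||V||_F^2).
    A linear stand-in for the deep two-branch model: the first factor plays
    the role of row representations, the second of column representations,
    and the penalty term stands in for the auxiliary-task regularizer."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(iters):
        R = mask * (U @ V.T - X)          # residual on observed entries only
        U -= lr * (2 * R @ V + 2 * lam * U)
        V -= lr * (2 * R.T @ U + 2 * lam * V)
    return U @ V.T

# Tiny rank-1 example: hide two entries of an outer product, then recover them.
X_true = np.outer([1.0, 2.0, 3.0], [1.0, 0.5, 2.0, 1.5])
mask = np.ones_like(X_true)
mask[0, 3] = mask[2, 1] = 0.0             # two unobserved entries
X_hat = masked_completion(X_true * mask, mask, rank=1)
```

Because the hidden entries of a rank-1 matrix are determined by the observed ones, `X_hat` fills in the masked positions close to their true values; the deep model in the paper plays the same game with nonlinear encoders in place of the linear factors.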
Related papers
- Fast Dual-Regularized Autoencoder for Sparse Biological Data [65.268245109828]
We develop a shallow autoencoder for the dual neighborhood-regularized matrix completion problem.
We demonstrate the speed and accuracy advantage of our approach over the existing state-of-the-art in predicting drug-target interactions and drug-disease associations.
arXiv Detail & Related papers (2024-01-30T01:28:48Z) - An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute these covariance matrices.
arXiv Detail & Related papers (2023-09-30T15:57:14Z) - Transductive Matrix Completion with Calibration for Multi-Task Learning [3.7660066212240757]
We propose a transductive matrix completion algorithm that incorporates a calibration constraint for the features under the multi-task learning framework.
The proposed algorithm recovers the incomplete feature matrix and target matrix simultaneously.
Several synthetic data experiments are conducted, which show the proposed algorithm outperforms other existing methods.
arXiv Detail & Related papers (2023-02-20T08:47:23Z) - Bayesian Low-rank Matrix Completion with Dual-graph Embedding: Prior Analysis and Tuning-free Inference [16.82986562533071]
We propose a novel Bayesian learning algorithm that automatically learns the hyperparameters associated with dual-graph regularization.
A novel prior is devised to promote the low-rankness of the matrix and encode the dual-graph information simultaneously.
Experiments using synthetic and real-world datasets demonstrate the state-of-the-art performance of the proposed learning algorithm.
arXiv Detail & Related papers (2022-03-18T16:38:30Z) - Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z) - Meta-learning for Matrix Factorization without Shared Rows or Columns [39.56814839510978]
The proposed method uses a neural network that takes a matrix as input and generates prior distributions for the factorized matrices of that matrix.
The neural network is meta-learned such that the expected imputation error is minimized.
In our experiments with three user-item rating datasets, we demonstrate that our proposed method can impute the missing values from a limited number of observations in unseen matrices.
arXiv Detail & Related papers (2021-06-29T07:40:20Z) - Nonparametric Trace Regression in High Dimensions via Sign Series Representation [13.37650464374017]
We develop a framework for nonparametric trace regression models via structured sign series representations of high dimensional functions.
In the context of matrix completion, our framework leads to a substantially richer model based on what we coin as the "sign rank" of a matrix.
arXiv Detail & Related papers (2021-05-04T22:20:00Z) - Deep Two-way Matrix Reordering for Relational Data Analysis [41.60125423028092]
Matrix reordering is a task to permute rows and columns of a given observed matrix.
We propose a new matrix reordering method, Deep Two-way Matrix Reordering (DeepTMR), using a neural network model.
We demonstrate the effectiveness of the proposed DeepTMR by applying it to both synthetic and practical datasets.
arXiv Detail & Related papers (2021-03-26T01:31:24Z) - Learning Mixtures of Low-Rank Models [89.39877968115833]
We study the problem of learning mixtures of low-rank models.
We develop an algorithm that is guaranteed to recover the unknown matrices with near-optimal sample complexity.
In addition, the proposed algorithm is provably stable against random noise.
arXiv Detail & Related papers (2020-09-23T17:53:48Z) - Eigendecomposition-Free Training of Deep Networks for Linear Least-Square Problems [107.3868459697569]
We introduce an eigendecomposition-free approach to training a deep network.
We show that our approach is much more robust than explicit differentiation of the eigendecomposition.
Our method has better convergence properties and yields state-of-the-art results.
arXiv Detail & Related papers (2020-04-15T04:29:34Z) - Provable Meta-Learning of Linear Representations [114.656572506859]
We provide fast, sample-efficient algorithms to address the dual challenges of learning a common set of features from multiple, related tasks, and transferring this knowledge to new, unseen tasks.
We also provide information-theoretic lower bounds on the sample complexity of learning these linear features.
arXiv Detail & Related papers (2020-02-26T18:21:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.