Generalized Low-Rank Matrix Completion Model with Overlapping Group Error Representation
- URL: http://arxiv.org/abs/2407.08517v2
- Date: Sun, 21 Jul 2024 02:43:43 GMT
- Title: Generalized Low-Rank Matrix Completion Model with Overlapping Group Error Representation
- Authors: Wenjing Lu, Zhuang Fang, Liang Wu, Liming Tang, Hanxin Liu, Chuanjiang He
- Abstract summary: Low-rank matrix completion (LRMC) has achieved remarkable results in low-level vision tasks.
LRMC rests on the underlying assumption that real-world matrix data is low-rank.
However, real matrix data does not satisfy the strict low-rank property, which presents serious challenges for such matrix recovery methods.
- Score: 3.457484690890009
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-rank matrix completion (LRMC) has achieved remarkable results in low-level vision tasks. LRMC rests on the underlying assumption that real-world matrix data is low-rank. However, real matrix data does not satisfy the strict low-rank property, which presents serious challenges for the above-mentioned matrix recovery methods. Fortunately, there are feasible schemes that devise appropriate and effective prior representations for describing the intrinsic information of real data. In this paper, we first model the matrix data $\mathbf{Y}$ as the sum of a low-rank approximation component $\mathbf{X}$ and an approximation error component $\mathcal{E}$. This finer-grained data decomposition architecture enables each component of information to be portrayed more precisely. Further, we design an overlapping group error representation (OGER) function to characterize this error structure and propose a generalized low-rank matrix completion model based on OGER. Specifically, the low-rank component describes the global structure information of the matrix data, while the OGER component not only compensates for the approximation error between the low-rank component and the real data but also better captures the local block sparsity information of the matrix data. Finally, we develop an alternating direction method of multipliers (ADMM) that integrates the majorization-minimization (MM) algorithm, enabling an efficient solution of the proposed model. We analyze the convergence of the algorithm in detail, both theoretically and experimentally. Numerical experiments further demonstrate that the proposed model outperforms existing competing models.
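To make the decomposition concrete, the sketch below (a minimal illustration, not the authors' implementation) alternates singular-value thresholding for the low-rank component $\mathbf{X}$ with group-wise soft-thresholding of the error $\mathcal{E}$ over overlapping patches, the simplest stand-in for an overlapping group penalty; the patch size, stride, and all parameter values are illustrative assumptions.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def overlapping_group_shrink(E, lam, size=8, stride=4):
    """Soft-threshold the l2 norm of overlapping square patches of E,
    then average the overlapping results (a simple surrogate for an
    overlapping group penalty, not the paper's exact OGER prox)."""
    acc = np.zeros_like(E)
    cnt = np.zeros_like(E)
    for i in range(0, E.shape[0] - size + 1, stride):
        for j in range(0, E.shape[1] - size + 1, stride):
            blk = E[i:i+size, j:j+size]
            nrm = np.linalg.norm(blk)
            scale = max(1.0 - lam / nrm, 0.0) if nrm > 0 else 0.0
            acc[i:i+size, j:j+size] += scale * blk
            cnt[i:i+size, j:j+size] += 1.0
    cnt[cnt == 0] = 1.0      # uncovered border entries stay zero
    return acc / cnt

def complete(Y, mask, tau=5.0, lam=0.5, iters=100):
    """Alternating minimization for Y ≈ X + E on observed entries
    (mask is a 0/1 array of observations)."""
    X = np.zeros_like(Y)
    E = np.zeros_like(Y)
    for _ in range(iters):
        # Low-rank step: fit X to the residual, imputing missing entries.
        R = mask * (Y - E) + (1 - mask) * X
        X = svt(R, tau)
        # Error step: group-sparse shrinkage of the remaining residual.
        E = overlapping_group_shrink(mask * (Y - X), lam)
    return X, E
```

In a full ADMM/MM treatment, the group-shrinkage step would be replaced by the OGER-specific majorized subproblem.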
Related papers
- Tailed Low-Rank Matrix Factorization for Similarity Matrix Completion [14.542166904874147]
The similarity matrix serves as a fundamental tool at the core of numerous machine learning tasks.
To address the problem of missing entries, Similarity Matrix Completion (SMC) methods have been proposed, but they suffer from high computational complexity.
We present two novel, scalable, and effective algorithms, which exploit the PSD property to guide the estimation process and incorporate a low-rank regularizer to ensure a low-rank solution.
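As a rough illustration of PSD-guided completion (an alternating-projection baseline assumed here, not the paper's algorithm), one can alternate a projection onto the PSD cone with restoring the observed similarities:

```python
import numpy as np

def psd_project(S):
    """Project a symmetric matrix onto the PSD cone by clipping
    negative eigenvalues."""
    w, V = np.linalg.eigh((S + S.T) / 2)
    return (V * np.maximum(w, 0.0)) @ V.T

def complete_similarity(S_obs, mask, rank=None, iters=200):
    """Alternating projections: PSD feasibility vs. data consistency.
    mask is a boolean array marking observed entries."""
    S = S_obs.copy()
    for _ in range(iters):
        S = psd_project(S)
        if rank is not None:
            # Optional hard low-rank step: keep the top-`rank` eigenpairs.
            w, V = np.linalg.eigh(S)
            idx = np.argsort(w)[::-1][:rank]
            S = (V[:, idx] * w[idx]) @ V[:, idx].T
        S[mask] = S_obs[mask]        # restore observed similarities
        S = (S + S.T) / 2            # keep symmetry
    return S
```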
arXiv Detail & Related papers (2024-09-29T04:27:23Z) - Spectral Entry-wise Matrix Estimation for Low-Rank Reinforcement Learning [53.445068584013896]
We study matrix estimation problems arising in reinforcement learning (RL) with low-rank structure.
In low-rank bandits, the matrix to be recovered specifies the expected arm rewards, and for low-rank Markov Decision Processes (MDPs), it may for example characterize the transition kernel of the MDP.
We show that simple spectral-based matrix estimation approaches efficiently recover the singular subspaces of the matrix and exhibit nearly-minimal entry-wise error.
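A minimal version of such a spectral estimator, assuming uniform entry-wise sampling with the rate estimated from the observation mask, rescales the observed matrix and truncates its SVD:

```python
import numpy as np

def spectral_estimate(Y_obs, mask, rank):
    """Inverse-propensity rescaling followed by a rank-`rank` SVD.
    Assumes entries are observed independently with probability p,
    estimated here from the mask itself."""
    p = max(mask.mean(), 1e-12)          # estimated sampling rate
    U, s, Vt = np.linalg.svd(Y_obs * mask / p, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
```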
arXiv Detail & Related papers (2023-10-10T17:06:41Z) - Mode-wise Principal Subspace Pursuit and Matrix Spiked Covariance Model [13.082805815235975]
We introduce a novel framework called Mode-wise Principal Subspace Pursuit (MOP-UP) to extract hidden variations in both the row and column dimensions for matrix data.
The effectiveness and practical merits of the proposed framework are demonstrated through experiments on both simulated and real datasets.
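One plain spectral baseline in the same spirit (not the MOP-UP algorithm itself) extracts row and column subspaces for a sample of matrices from the two mode-wise scatter matrices:

```python
import numpy as np

def modewise_subspaces(X, r1, r2):
    """X: array of shape (n, d1, d2). Returns orthonormal bases for the
    top-r1 row subspace and top-r2 column subspace, computed from the
    mode-wise scatter matrices sum_i X_i X_i^T and sum_i X_i^T X_i."""
    S_row = np.einsum('nij,nkj->ik', X, X)   # (d1, d1) row scatter
    S_col = np.einsum('nij,nik->jk', X, X)   # (d2, d2) column scatter
    _, U = np.linalg.eigh(S_row)             # eigenvalues ascending
    _, V = np.linalg.eigh(S_col)
    return U[:, ::-1][:, :r1], V[:, ::-1][:, :r2]
```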
arXiv Detail & Related papers (2023-07-02T13:59:47Z) - Synthetic data, real errors: how (not) to publish and use synthetic data [86.65594304109567]
We show how the generative process affects the downstream ML task.
We introduce Deep Generative Ensemble (DGE) to approximate the posterior distribution over the generative process model parameters.
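Generically, the ensemble idea looks like the sketch below, with Gaussian mixtures standing in for the paper's deep generators and logistic regression as the downstream model; every modeling choice here is an illustrative assumption:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

def dge_predict(X, y, X_test, K=5, n_synth=1000, seed=0):
    """Ensemble-of-generators sketch: K class-conditional Gaussian
    mixtures (toy stand-ins for deep generators) are fit on per-class
    bootstrap resamples; downstream classifiers trained on each
    synthetic dataset are averaged to approximate the posterior."""
    rng = np.random.default_rng(seed)
    probs = []
    for k in range(K):
        Xs, ys = [], []
        for c in np.unique(y):
            Xc = X[y == c]
            boot = Xc[rng.integers(0, len(Xc), len(Xc))]   # bootstrap
            g = GaussianMixture(n_components=3, random_state=k).fit(boot)
            s, _ = g.sample(n_synth)                       # synthetic data
            Xs.append(s)
            ys.append(np.full(len(s), c))
        clf = LogisticRegression(max_iter=1000)
        clf.fit(np.vstack(Xs), np.concatenate(ys))
        probs.append(clf.predict_proba(X_test))
    return np.mean(probs, axis=0)    # ensemble-averaged prediction
```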
arXiv Detail & Related papers (2023-05-16T07:30:29Z) - Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
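As a rough stand-in for the Riemannian machinery (a Burer-Monteiro-style factorization sketch, not the paper's solver), one can fit a fixed-rank-plus-scaled-identity covariance to centered data by gradient ascent on the Gaussian log-likelihood:

```python
import numpy as np

def fit_lowrank_cov(X, k, sigma2=1.0, lr=0.01, iters=500):
    """Fit Sigma = H H^T + sigma2*I to centered data X (n, p) by
    gradient ascent on the Gaussian log-likelihood over the factor
    H (p, k), i.e. optimizing over fixed-rank PSD matrices via their
    factorization rather than an explicit Riemannian retraction."""
    n, p = X.shape
    S = X.T @ X / n                       # sample covariance
    rng = np.random.default_rng(0)
    H = 0.1 * rng.standard_normal((p, k))
    for _ in range(iters):
        Sigma = H @ H.T + sigma2 * np.eye(p)
        P = np.linalg.inv(Sigma)          # precision matrix
        G = -0.5 * (P - P @ S @ P)        # dL/dSigma
        H += lr * 2 * G @ H               # chain rule: dL/dH = 2*G*H
    return H
```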
arXiv Detail & Related papers (2022-10-21T13:19:45Z) - From graphs to DAGs: a low-complexity model and a scalable algorithm [0.0]
This paper presents a low-complexity model, called LoRAM for Low-Rank Additive Model, which combines low-rank matrix factorization with a sparsification mechanism for the continuous optimization of DAGs.
The proposed approach achieves a reduction from cubic to quadratic complexity while handling the same DAG characteristic function as NoTears.
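The DAG characteristic function referenced here is the NoTears acyclicity function $h(W) = \mathrm{tr}(e^{W \circ W}) - d$; the sketch below evaluates it on a low-rank parameterization $W = PQ^T$ directly, without LoRAM's complexity-reducing machinery:

```python
import numpy as np
from scipy.linalg import expm

def notears_h(W):
    """NoTears acyclicity function: h(W) = tr(exp(W ∘ W)) - d,
    which is zero iff the weighted graph W is a DAG."""
    d = W.shape[0]
    return np.trace(expm(W * W)) - d

def h_lowrank(P, Q):
    """Evaluate h on the low-rank parameterization W = P Q^T.
    (LoRAM additionally exploits sparsification and the low rank to
    cut the cost; this sketch just forms W explicitly.)"""
    return notears_h(P @ Q.T)

# toy check: a rank-1 weight matrix for a 4-node graph
P = np.array([[1.0], [0.0], [0.0], [0.0]])
Q = np.array([[0.0], [1.0], [0.0], [0.0]])
print(h_lowrank(P, Q))   # W has the single edge 0 -> 1, so h = 0
```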
arXiv Detail & Related papers (2022-04-10T10:22:56Z) - Weighted Low Rank Matrix Approximation and Acceleration [0.5177947445379687]
Low-rank matrix approximation (LRMA) is one of the central concepts in machine learning.
Low-rank matrix completion (LRMC) solves the LRMA problem when some observations are missing.
We propose an algorithm for solving the weighted problem, as well as two acceleration techniques.
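For reference, the classic EM-style baseline for weighted low-rank approximation (the problem setting here; the paper's accelerated algorithm is not reproduced) alternates weighted imputation with a truncated SVD:

```python
import numpy as np

def weighted_lrma(Y, W, rank, iters=100):
    """Minimize ||sqrt(W) ∘ (Y - X)||_F^2 over rank-`rank` X by the
    classic EM iteration: impute, truncate the SVD, repeat.
    W holds entry-wise weights in [0, 1]; a 0/1 W recovers LRMC."""
    X = np.zeros_like(Y)
    for _ in range(iters):
        Z = W * Y + (1.0 - W) * X        # E-step: weighted imputation
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        X = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]  # M-step
    return X
```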
arXiv Detail & Related papers (2021-09-22T22:03:48Z) - Solving weakly supervised regression problem using low-rank manifold regularization [77.34726150561087]
We solve a weakly supervised regression problem.
Under "weakly" we understand that for some training points the labels are known, for some unknown, and for others uncertain due to the presence of random noise or other reasons such as lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
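A compact stand-in for this setting (illustrative assumptions throughout: a k-NN graph Laplacian as the manifold regularizer and per-point confidence weights encoding known/uncertain/unknown labels):

```python
import numpy as np

def manifold_regression(X, y, conf, lam=1.0, k=10):
    """Solve min_f sum_i conf_i (f_i - y_i)^2 + lam * f^T L f, where
    conf_i = 1 for known labels, 0 for unknown ones, and in (0, 1)
    for uncertain ones; L is an unnormalized k-NN graph Laplacian."""
    n = len(X)
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise dist^2
    A = np.zeros((n, n))
    nn = np.argsort(D2, axis=1)[:, 1:k + 1]   # skip self at column 0
    for i in range(n):
        A[i, nn[i]] = 1.0
    A = np.maximum(A, A.T)                    # symmetrize adjacency
    L = np.diag(A.sum(1)) - A                 # graph Laplacian
    C = np.diag(conf)
    # Closed form: (C + lam * L) f = C y  (NaN labels carry zero conf)
    return np.linalg.solve(C + lam * L, C @ np.nan_to_num(y))
```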
arXiv Detail & Related papers (2021-04-13T23:21:01Z) - Entropy Minimizing Matrix Factorization [102.26446204624885]
Nonnegative Matrix Factorization (NMF) is a widely-used data analysis technique, and has yielded impressive results in many real-world tasks.
In this study, an Entropy Minimizing Matrix Factorization framework (EMMF) is developed to tackle the above problem.
Considering that outliers are usually much fewer than normal samples, a new entropy loss function is established for matrix factorization.
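The paper's exact entropy loss is not reproduced here, but the outlier-downweighting idea can be sketched with entropy-regularized sample weights: minimizing $\sum_i w_i e_i + \gamma \sum_i w_i \log w_i$ over the simplex yields softmax weights, which an alternating NMF loop can use:

```python
import numpy as np

def entropy_weighted_nmf(X, r, gamma=1.0, iters=200, seed=0):
    """NMF with entropy-regularized sample weights (a hedged sketch of
    the entropy-minimizing idea, not the paper's exact EMMF loss).
    Columns of X with large reconstruction error get weights near 0."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(iters):
        err = ((X - W @ H) ** 2).sum(0)            # per-sample error
        a = np.exp(-(err - err.min()) / gamma)     # softmax weights
        a = a / a.sum() * n                        # mean weight = 1
        Xa = X * a                                 # weight columns
        Ha = H * a
        # weighted multiplicative updates (Lee-Seung with diag weights)
        W *= (Xa @ H.T) / np.maximum(W @ (Ha @ H.T), 1e-12)
        H *= (W.T @ Xa) / np.maximum((W.T @ W) @ Ha, 1e-12)
    return W, H
```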
arXiv Detail & Related papers (2021-03-24T21:08:43Z) - Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
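One common way to instantiate this combination (an IRLS sketch of a self-reconstruction model, not necessarily the paper's exact formulation) solves a reweighted ridge problem and ranks features by the row norms of the learned matrix:

```python
import numpy as np

def l2p_feature_select(X, p=0.5, lam=1.0, iters=30, eps=1e-6):
    """Unsupervised feature selection sketch: self-reconstruction
    min_W ||X - X W||_F^2 + lam * ||W||_{2,p}^p, solved by iteratively
    reweighted least squares. Returns features ranked by the l2 norms
    of the rows of W (row-sparse rows mark discarded features)."""
    d = X.shape[1]
    XtX = X.T @ X
    W = np.linalg.solve(XtX + lam * np.eye(d), XtX)        # ridge init
    for _ in range(iters):
        row = np.linalg.norm(W, axis=1)
        D = np.diag((p / 2.0) * (row + eps) ** (p - 2.0))  # IRLS weights
        W = np.linalg.solve(XtX + lam * D, XtX)
    score = np.linalg.norm(W, axis=1)
    return np.argsort(score)[::-1]     # most important features first
```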
arXiv Detail & Related papers (2020-12-29T04:08:38Z)