Quadratic Matrix Factorization with Applications to Manifold Learning
- URL: http://arxiv.org/abs/2301.12965v1
- Date: Mon, 30 Jan 2023 15:09:00 GMT
- Title: Quadratic Matrix Factorization with Applications to Manifold Learning
- Authors: Zheng Zhai, Hengchao Chen, and Qiang Sun
- Abstract summary: We propose a quadratic matrix factorization (QMF) framework to learn the curved manifold on which the dataset lies.
Algorithmically, we propose an alternating minimization algorithm to optimize QMF and establish its theoretical convergence properties.
Experiments on a synthetic manifold learning dataset and two real datasets, including the MNIST handwritten dataset and a cryogenic electron microscopy dataset, demonstrate the superiority of the proposed method over its competitors.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Matrix factorization is a popular framework for modeling low-rank data
matrices. Motivated by manifold learning problems, this paper proposes a
quadratic matrix factorization (QMF) framework to learn the curved manifold on
which the dataset lies. Unlike local linear methods such as the local principal
component analysis, QMF can better exploit the curved structure of the
underlying manifold. Algorithmically, we propose an alternating minimization
algorithm to optimize QMF and establish its theoretical convergence properties.
Moreover, to avoid possible over-fitting, we then propose a regularized QMF
algorithm and discuss how to tune its regularization parameter. Finally, we
elaborate how to apply the regularized QMF to manifold learning problems.
Experiments on a synthetic manifold learning dataset and two real datasets,
including the MNIST handwritten dataset and a cryogenic electron microscopy
dataset, demonstrate the superiority of the proposed method over its
competitors.
Related papers
- Large-Scale OD Matrix Estimation with A Deep Learning Method [70.78575952309023]
The proposed method integrates deep learning and numerical optimization algorithms to infer matrix structure and guide numerical optimization.
We conducted tests to demonstrate the good generalization performance of our method on a large-scale synthetic dataset.
arXiv Detail & Related papers (2023-10-09T14:30:06Z)
- Large-scale gradient-based training of Mixtures of Factor Analyzers [67.21722742907981]
This article contributes both a theoretical analysis and a new method for efficient high-dimensional training by gradient descent.
We prove that MFA training and inference/sampling can be performed based on precision matrices, which does not require matrix inversions after training is completed.
Beyond the theoretical analysis, we apply MFA to typical image datasets such as SVHN and MNIST, and demonstrate the ability to perform sample generation and outlier detection.
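The precision-matrix point can be illustrated with the Woodbury identity: for a factor-analyzer covariance Psi + W Wᵀ (Psi diagonal, W a D×k loading matrix), applying the precision matrix requires solving only a k×k system rather than inverting the D×D covariance. The sketch below shows the general trick; `fa_precision_apply` is an illustrative name, not an API from the paper.

```python
import numpy as np

def fa_precision_apply(W, psi, x):
    """Compute (diag(psi) + W @ W.T)^{-1} @ x via the Woodbury identity:
    Psi^{-1} - Psi^{-1} W (I + W^T Psi^{-1} W)^{-1} W^T Psi^{-1}.
    Only a k x k linear system is solved (k = number of factors).
    Illustrative sketch of the precision-matrix trick, not the
    paper's training algorithm."""
    psi_inv_x = x / psi                                 # Psi^{-1} x (Psi diagonal)
    WtPix = W.T @ psi_inv_x                             # k-vector
    M = np.eye(W.shape[1]) + W.T @ (W / psi[:, None])   # k x k system
    return psi_inv_x - (W / psi[:, None]) @ np.linalg.solve(M, WtPix)
```

The cost is O(Dk² + k³) per application instead of O(D³), which is what makes precision-based inference attractive in high dimensions.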
arXiv Detail & Related papers (2023-08-26T06:12:33Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- A Novel Maximum-Entropy-Driven Technique for Low-Rank Orthogonal Nonnegative Matrix Factorization with $\ell_0$-Norm Sparsity Constraint [0.0]
In data-driven control and machine learning, a common requirement involves breaking down large matrices into smaller, low-rank factors.
This paper introduces an innovative solution to the orthogonal nonnegative matrix factorization (ONMF) problem.
The proposed method achieves comparable or improved reconstruction errors in line with the literature.
arXiv Detail & Related papers (2022-10-06T04:30:59Z)
- Non-Negative Matrix Factorization with Scale Data Structure Preservation [23.31865419578237]
The model described in this paper belongs to the family of non-negative matrix factorization methods designed for data representation and dimension reduction.
The idea is to add, to the NMF cost function, a penalty term to impose a scale relationship between the pairwise similarity matrices of the original and transformed data points.
The proposed clustering algorithm is compared to some existing NMF-based algorithms and to some manifold learning-based algorithms when applied to some real-life datasets.
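For context, the unpenalized baseline that such methods extend is NMF with the classic Lee-Seung multiplicative updates. The sketch below shows only those base updates; the similarity-preserving penalty term itself is omitted, and the Frobenius-norm objective is an assumption.

```python
import numpy as np

def nmf(X, r, n_iter=200, eps=1e-9, seed=0):
    """Plain Lee-Seung multiplicative-update NMF: X ~ W @ H with
    W, H >= 0, minimizing ||X - W H||_F^2.  The elementwise
    multiplicative updates keep the factors nonnegative by
    construction.  Baseline sketch only; the paper above adds a
    penalty tying together the pairwise similarity matrices of the
    original and transformed data points."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.uniform(0.1, 1.0, (m, r))
    H = rng.uniform(0.1, 1.0, (r, n))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update H, W fixed
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update W, H fixed
    return W, H
```

Each update is a rescaled gradient step with a step size chosen so the objective is non-increasing, which is why no projection onto the nonnegative orthant is needed.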
arXiv Detail & Related papers (2022-09-22T09:32:18Z)
- Unitary Approximate Message Passing for Matrix Factorization [90.84906091118084]
We consider matrix factorization (MF) with certain constraints, which finds wide applications in various areas.
We develop a Bayesian approach to MF with an efficient message passing implementation, called UAMPMF.
We show that UAMPMF significantly outperforms state-of-the-art algorithms in terms of recovery accuracy, robustness and computational complexity.
arXiv Detail & Related papers (2022-07-31T12:09:32Z)
- Learning Multiresolution Matrix Factorization and its Wavelet Networks on Graphs [11.256959274636724]
Multiresolution Matrix Factorization (MMF) is unusual amongst fast matrix factorization algorithms.
We propose a learnable version of MMF that carefully optimizes the factorization with a combination of reinforcement learning and Stiefel manifold optimization.
We show that the resulting wavelet basis far outperforms prior MMF algorithms and provides the first version of this type of factorization that can be robustly deployed on standard learning tasks.
arXiv Detail & Related papers (2021-11-02T23:14:17Z)
- Learning a Compressive Sensing Matrix with Structural Constraints via Maximum Mean Discrepancy Optimization [17.104994036477308]
We introduce a learning-based algorithm to obtain a measurement matrix for compressive sensing related recovery problems.
The recent success of such metrics in neural-network-related topics motivates a machine-learning-based solution to the problem.
arXiv Detail & Related papers (2021-10-14T08:35:54Z)
- Feature Weighted Non-negative Matrix Factorization [92.45013716097753]
We propose the Feature weighted Non-negative Matrix Factorization (FNMF) in this paper.
FNMF learns the weights of features adaptively according to their importance.
It can be solved efficiently with the suggested optimization algorithm.
arXiv Detail & Related papers (2021-03-24T21:17:17Z)
- Self-supervised Symmetric Nonnegative Matrix Factorization [82.59905231819685]
Symmetric nonnegative matrix factorization (SNMF) has been demonstrated to be a powerful method for data clustering.
Inspired by ensemble clustering, which aims to seek better clustering results, we propose self-supervised SNMF (S$^3$NMF).
We take advantage of SNMF's characteristic sensitivity to initialization, without relying on any additional information.
arXiv Detail & Related papers (2021-03-02T12:47:40Z)
- Hyperspectral Unmixing via Nonnegative Matrix Factorization with Handcrafted and Learnt Priors [14.032039261229853]
We propose an NMF-based unmixing framework which jointly uses a handcrafted regularizer and a regularizer learnt from data.
We plug in learnt priors on the abundances, so that the associated subproblem can be addressed using various image denoisers.
arXiv Detail & Related papers (2020-10-09T14:40:20Z)