Learning to Solve Multiresolution Matrix Factorization by Manifold Optimization and Evolutionary Metaheuristics
- URL: http://arxiv.org/abs/2406.00469v2
- Date: Sun, 18 Aug 2024 02:27:11 GMT
- Title: Learning to Solve Multiresolution Matrix Factorization by Manifold Optimization and Evolutionary Metaheuristics
- Authors: Truong Son Hy, Thieu Khang, Risi Kondor
- Abstract summary: We propose a ``learnable'' version of Multiresolution Matrix Factorization (MMF).
We show that the resulting wavelet basis far outperforms prior MMF algorithms.
We construct wavelet neural networks (WNNs) that learn on graphs in the spectral domain using the wavelet basis produced by our MMF learning algorithm.
- Score: 6.4671761956203655
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multiresolution Matrix Factorization (MMF) is unusual amongst fast matrix factorization algorithms in that it does not make a low rank assumption. This makes MMF especially well suited to modeling certain types of graphs with complex multiscale or hierarchical structure. While MMF promises to yield a useful wavelet basis, finding the factorization itself is hard, and existing greedy methods tend to be brittle. In this paper, we propose a ``learnable'' version of MMF that carefully optimizes the factorization using metaheuristics, specifically evolutionary algorithms and directed evolution, along with Stiefel manifold optimization through backpropagating errors. We show that the resulting wavelet basis far outperforms prior MMF algorithms and gives comparable performance on standard learning tasks on graphs. Furthermore, we construct wavelet neural networks (WNNs) that learn on graphs in the spectral domain using the wavelet basis produced by our MMF learning algorithm. Our wavelet networks are competitive against other state-of-the-art methods in molecular graph classification and node classification on citation graphs. We release our implementation at https://github.com/HySonLab/LearnMMF
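The core numerical ingredient named in the abstract, optimizing orthogonal factors on the Stiefel manifold by backpropagating errors, can be illustrated with a toy sketch. The snippet below is not the authors' implementation (that is in the linked repository) and omits the evolutionary/directed-evolution search entirely; it only shows Riemannian gradient descent with a QR retraction on the simplified MMF-style subproblem of making U A U^T as close to diagonal as possible. All function names, step sizes, and sizes are illustrative.
```python
# Minimal sketch, assuming a dense symmetric A and a single orthogonal factor U.
import numpy as np

def stiefel_step(U, grad, lr=5e-3):
    """One descent step: project the Euclidean gradient to the tangent space, then retract via QR."""
    rgrad = grad - U @ (grad.T @ U)            # Riemannian gradient (canonical metric)
    Q, R = np.linalg.qr(U - lr * rgrad)
    return Q * np.sign(np.diag(R))             # sign-fix columns so the retraction is continuous

def offdiag_energy(U, A):
    B = U @ A @ U.T
    return np.sum((B - np.diag(np.diag(B))) ** 2)

def energy_grad(U, A):
    B = U @ A @ U.T
    E = B - np.diag(np.diag(B))                # off-diagonal residual
    return 4.0 * E @ U @ A                     # gradient of ||offdiag(U A U^T)||_F^2 for symmetric A

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)); A = (A + A.T) / 2
U, _ = np.linalg.qr(rng.standard_normal((8, 8)))
print("before:", offdiag_energy(U, A))
for _ in range(2000):
    U = stiefel_step(U, energy_grad(U, A))
print("after:", offdiag_energy(U, A))
```
In the paper's setting the orthogonal factors are additionally sparse (rotations acting on small row subsets), and the metaheuristic search chooses which rows each rotation touches; the sketch keeps only the manifold-optimization half of that picture.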
Related papers
- Quadratic Matrix Factorization with Applications to Manifold Learning [1.6795461001108094]
We propose a quadratic matrix factorization (QMF) framework to learn the curved manifold on which the dataset lies.
Algorithmically, we propose an alternating minimization algorithm to optimize QMF and establish its theoretical convergence properties.
Experiments on a synthetic manifold learning dataset and two real datasets, including the MNIST handwritten digit dataset and a cryogenic electron microscopy dataset, demonstrate the superiority of the proposed method over its competitors.
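As a rough illustration of the alternating-minimization idea (a sketch under simplifying assumptions, not the paper's QMF algorithm): model each data point as a quadratic function of a low-dimensional coordinate, then alternate between a closed-form least-squares update of the quadratic coefficients and a gradient step on the coordinates. The names `lift`, `qmf_fit`, and all hyperparameters below are illustrative.
```python
import numpy as np

def lift(T):
    """Per-row quadratic feature map: [1, t, t_a * t_b for a <= b]."""
    n, d = T.shape
    pairs = [(a, b) for a in range(d) for b in range(a, d)]
    quad = np.stack([T[:, a] * T[:, b] for a, b in pairs], axis=1)
    return np.hstack([np.ones((n, 1)), T, quad]), pairs

def lift_jacobian(t, pairs, d):
    """Jacobian of the feature map at a single coordinate t (n_features x d)."""
    J = np.zeros((1 + d + len(pairs), d))
    J[1:1 + d] = np.eye(d)
    for k, (a, b) in enumerate(pairs):
        J[1 + d + k, a] += t[b]
        J[1 + d + k, b] += t[a]
    return J

def qmf_fit(X, d=1, iters=200, lr=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    T = 0.1 * rng.standard_normal((X.shape[0], d))       # latent coordinates
    for _ in range(iters):
        Phi, pairs = lift(T)
        W, *_ = np.linalg.lstsq(Phi, X, rcond=None)      # coefficient step (closed form)
        R = Phi @ W - X                                   # residuals
        for i in range(len(T)):                           # coordinate step: one gradient step each
            T[i] -= lr * 2.0 * lift_jacobian(T[i], pairs, d).T @ (W @ R[i])
    return T, W

# toy usage: points near a parabola embedded in 2-D
t = np.linspace(-1, 1, 100)
X = np.stack([t, t ** 2], axis=1) + 0.01 * np.random.default_rng(1).standard_normal((100, 2))
T, W = qmf_fit(X, d=1)
```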
arXiv Detail & Related papers (2023-01-30T15:09:00Z) - Graph Polynomial Convolution Models for Node Classification of Non-Homophilous Graphs [52.52570805621925]
We investigate efficient learning from higher-order graph convolution and learning directly from the adjacency matrix for node classification.
We show that the resulting model leads to a new graph representation and a residual scaling parameter.
We demonstrate that the proposed methods obtain improved accuracy for node classification on non-homophilous graphs.
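The higher-order propagation referred to above can be sketched as a polynomial filter in the normalized adjacency matrix. The layer below is an assumed, generic form (H = sum_k A_hat^k X W_k), not the paper's exact architecture.
```python
import torch
import torch.nn as nn

class GraphPolyConv(nn.Module):
    """Generic order-K graph polynomial convolution: H = sum_k (A_hat^k X) W_k."""
    def __init__(self, in_dim, out_dim, order=3):
        super().__init__()
        self.weights = nn.ModuleList(nn.Linear(in_dim, out_dim, bias=False)
                                     for _ in range(order + 1))

    def forward(self, A_hat, X):
        # A_hat: normalized adjacency (N x N), X: node features (N x in_dim)
        out, prop = 0.0, X
        for k, lin in enumerate(self.weights):
            if k > 0:
                prop = A_hat @ prop            # accumulate A_hat^k X incrementally
            out = out + lin(prop)
        return out

# toy usage with symmetric degree normalization
N, F_in, F_out = 6, 8, 4
A = (torch.rand(N, N) > 0.5).float(); A = torch.triu(A, 1); A = A + A.T
deg = A.sum(1).clamp(min=1.0)
A_hat = A / deg.sqrt().unsqueeze(1) / deg.sqrt().unsqueeze(0)
H = GraphPolyConv(F_in, F_out)(A_hat, torch.randn(N, F_in))
```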
arXiv Detail & Related papers (2022-09-12T04:46:55Z) - Unitary Approximate Message Passing for Matrix Factorization [90.84906091118084]
We consider matrix factorization (MF) with certain constraints, which finds wide applications in various areas.
We develop a Bayesian approach to MF with an efficient message passing implementation, called UAMPMF.
We show that UAMPMF significantly outperforms state-of-the-art algorithms in terms of recovery accuracy, robustness and computational complexity.
arXiv Detail & Related papers (2022-07-31T12:09:32Z) - FastSTMF: Efficient tropical matrix factorization algorithm for sparse data [0.0]
Matrix factorization, one of the most popular methods in machine learning, has recently benefited from the introduction of non-linearity into prediction tasks via the tropical semiring.
In our work, we propose a new method, FastSTMF, based on Sparse Tropical Matrix Factorization (STMF).
We evaluate FastSTMF on synthetic and real gene expression data from the TCGA database, and the results show that FastSTMF outperforms STMF in both accuracy and running time.
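For intuition about the tropical (max-plus) semiring that STMF and FastSTMF operate in, here is a heavily simplified sketch: the max-plus matrix product plus a naive random local search that reduces the absolute approximation error. This is illustrative only and not the published FastSTMF update scheme; all names and parameters are assumptions.
```python
import numpy as np

def tropical_matmul(U, V):
    """(U (x) V)_ij = max_k (U_ik + V_kj): + plays the role of *, max the role of +."""
    return (U[:, :, None] + V[None, :, :]).max(axis=1)

def fit_tropical(X, rank=3, iters=2000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.uniform(X.min(), X.max(), (X.shape[0], rank))
    V = rng.uniform(X.min(), X.max(), (rank, X.shape[1]))
    err = np.abs(tropical_matmul(U, V) - X).sum()
    for _ in range(iters):
        # perturb one random entry of one factor; keep the change only if the error drops
        F = U if rng.random() < 0.5 else V
        i, j = rng.integers(F.shape[0]), rng.integers(F.shape[1])
        old = F[i, j]
        F[i, j] += rng.normal(scale=step)
        new_err = np.abs(tropical_matmul(U, V) - X).sum()
        if new_err < err:
            err = new_err
        else:
            F[i, j] = old
    return U, V, err
```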
arXiv Detail & Related papers (2022-05-13T13:13:06Z) - Learning Multiresolution Matrix Factorization and its Wavelet Networks on Graphs [11.256959274636724]
Multiresolution Matrix Factorization (MMF) is unusual amongst fast matrix factorization algorithms.
We propose a learnable version of MMF that carefully optimizes the factorization with a combination of reinforcement learning and Stiefel manifold optimization.
We show that the resulting wavelet basis far outperforms prior MMF algorithms and provides the first version of this type of factorization that can be robustly deployed on standard learning tasks.
arXiv Detail & Related papers (2021-11-02T23:14:17Z) - Self-supervised Symmetric Nonnegative Matrix Factorization [82.59905231819685]
Symmetric nonnegative matrix factorization (SNMF) has been demonstrated to be a powerful method for data clustering.
Inspired by ensemble clustering, which aims to seek better clustering results, we propose self-supervised SNMF (S$^3$NMF).
We take advantage of the sensitivity-to-initialization characteristic of SNMF, without relying on any additional information.
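For context, here is a minimal sketch of the plain symmetric NMF base problem (min_{H >= 0} ||A - H H^T||_F^2 for a nonnegative similarity matrix A) using one common multiplicative-style update. This is not the paper's self-supervised S$^3$NMF procedure, only the building block it boosts; the update rule and parameters are one standard choice.
```python
import numpy as np

def symnmf(A, k, iters=300, beta=0.5, eps=1e-9, seed=0):
    """Approximate A (nonnegative, symmetric) by H H^T with H >= 0."""
    rng = np.random.default_rng(seed)
    H = rng.random((A.shape[0], k))
    for _ in range(iters):
        AH = A @ H
        HHtH = H @ (H.T @ H)
        H *= (1 - beta) + beta * AH / (HHtH + eps)   # multiplicative step keeps H nonnegative
        H = np.maximum(H, eps)
    return H

# clustering usage: assign node i to the column with the largest entry of H[i]
# labels = symnmf(similarity_matrix, k=3).argmax(axis=1)
```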
arXiv Detail & Related papers (2021-03-02T12:47:40Z) - Evolutionary Algorithms for Fuzzy Cognitive Maps [0.0]
Fuzzy Cognitive Maps (FCMs) are a complex-systems modeling technique that has lately risen in popularity.
The present study reviews the genetic algorithms used for training FCMs and gives a general overview of FCM learning algorithms.
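An illustrative sketch (not taken from the survey) of how a genetic algorithm can learn an FCM: candidate weight matrices are evolved, with fitness given by how well the simulated concept activations reproduce an observed sequence under the sigmoid update rule. All names, operators, and hyperparameters are assumptions.
```python
import numpy as np

def fcm_step(W, a, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * (W @ a)))        # sigmoid FCM state update

def fitness(W, series):
    preds = np.stack([fcm_step(W, a) for a in series[:-1]])
    return -np.mean((preds - series[1:]) ** 2)          # negative one-step prediction error

def evolve_fcm(series, n_concepts, pop=40, gens=200, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    population = [rng.uniform(-1, 1, (n_concepts, n_concepts)) for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=lambda W: fitness(W, series), reverse=True)
        parents = scored[: pop // 4]                     # truncation selection
        children = []
        while len(children) < pop - len(parents):
            p, q = rng.choice(len(parents), 2, replace=False)
            mask = rng.random((n_concepts, n_concepts)) < 0.5
            child = np.where(mask, parents[p], parents[q])   # uniform crossover
            child = np.clip(child + sigma * rng.standard_normal(child.shape), -1, 1)  # mutation
            children.append(child)
        population = parents + children
    return max(population, key=lambda W: fitness(W, series))

# usage: recover weights from a simulated activation sequence (toy example)
rng = np.random.default_rng(1)
W_true = rng.uniform(-1, 1, (5, 5))
series = [rng.random(5)]
for _ in range(20):
    series.append(fcm_step(W_true, series[-1]))
W_est = evolve_fcm(np.array(series), n_concepts=5)
```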
arXiv Detail & Related papers (2020-12-19T15:17:01Z) - Positive Semidefinite Matrix Factorization: A Connection with Phase Retrieval and Affine Rank Minimization [71.57324258813674]
We show that PSDMF algorithms can be designed based on phase retrieval (PR) and affine rank minimization (ARM) algorithms.
Motivated by this idea, we introduce a new family of PSDMF algorithms based on iterative hard thresholding (IHT)
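The iterative-hard-thresholding building block referred to here can be sketched generically for affine rank minimization with a PSD constraint: a gradient step on the measurement fit followed by projection onto rank-r PSD matrices. This template is illustrative and is not the specific PSDMF algorithms introduced in the paper; all names and sizes are assumptions.
```python
import numpy as np

def project_psd_rank(X, r):
    """Best rank-r PSD approximation via eigendecomposition."""
    w, V = np.linalg.eigh((X + X.T) / 2)
    w = np.clip(w, 0, None)
    idx = np.argsort(w)[::-1][:r]                       # keep the r largest eigenvalues
    return (V[:, idx] * w[idx]) @ V[:, idx].T

def iht_psd(measure_mats, y, n, r, iters=300, mu=None):
    """Recover a rank-r PSD X from linear measurements y_i = <A_i, X> by IHT."""
    A = np.stack([M.ravel() for M in measure_mats])      # m x n^2 measurement operator
    mu = mu or 1.0 / np.linalg.norm(A, 2) ** 2            # conservative step size
    X = np.zeros((n, n))
    for _ in range(iters):
        grad = (A.T @ (A @ X.ravel() - y)).reshape(n, n)
        X = project_psd_rank(X - mu * grad, r)
    return X

# toy usage: random Gaussian measurements of a rank-2 PSD matrix
rng = np.random.default_rng(0)
n, r, m = 10, 2, 120
B = rng.standard_normal((n, r)); X_true = B @ B.T
mats = [rng.standard_normal((n, n)) for _ in range(m)]
y = np.array([np.sum(M * X_true) for M in mats])
X_hat = iht_psd(mats, y, n, r)
```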
arXiv Detail & Related papers (2020-07-24T06:10:19Z) - Compressive MR Fingerprinting reconstruction with Neural Proximal Gradient iterations [27.259916894535404]
ProxNet is a learned proximal gradient descent framework that incorporates the forward acquisition and Bloch dynamic models within a recurrent learning mechanism.
Our numerical experiments show that the ProxNet can achieve a superior quantitative inference accuracy, much smaller storage requirement, and a comparable runtime to the recent deep learning MRF baselines.
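The unrolled iteration at the heart of a learned proximal gradient framework follows the standard proximal-gradient template, x_{k+1} = prox(x_k - mu * A^T (A x_k - y)). The sketch below shows the hand-crafted (soft-threshold) version of that template; in ProxNet the prox is instead a learned recurrent network tied to the acquisition and Bloch dynamic models. This is a generic illustration, not the paper's network.
```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def proximal_gradient(A, y, mu, tau, iters=100):
    """ISTA: x_{k+1} = prox_{tau||.||_1}(x_k - mu * A^T (A x_k - y))."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - mu * A.T @ (A @ x - y), tau)
    return x

# toy usage: sparse recovery; mu <= 1 / ||A||_2^2 for stability
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100); x_true[[3, 17, 58]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = proximal_gradient(A, y, mu=1.0 / np.linalg.norm(A, 2) ** 2, tau=0.01)
```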
arXiv Detail & Related papers (2020-06-27T03:52:22Z) - Towards Minimax Optimal Reinforcement Learning in Factored Markov Decision Processes [53.72166325215299]
We study minimax optimal reinforcement learning in episodic factored Markov decision processes (FMDPs)
The first algorithm achieves minimax-optimal regret guarantees for a rich class of factored structures.
The second enjoys better computational complexity with a slightly worse regret bound.
arXiv Detail & Related papers (2020-06-24T00:50:17Z) - Predictive Coding Approximates Backprop along Arbitrary Computation Graphs [68.8204255655161]
We develop a strategy to translate core machine learning architectures into their predictive coding equivalents.
Our models perform equivalently to backprop on challenging machine learning benchmarks.
Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry.
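A bare-bones sketch of a predictive-coding training loop of the kind the paper analyses (illustrative; layer sizes, step sizes, and the tanh parameterization are assumptions): value nodes are relaxed to minimize local prediction errors with the output clamped to the target, then weights are updated from those purely local errors, which is what ends up approximating the backprop gradient.
```python
import numpy as np

f = np.tanh
df = lambda x: 1.0 - np.tanh(x) ** 2

def pc_train_step(Ws, x0, target, infer_steps=50, gamma=0.1, lr=1e-3):
    """One predictive-coding update of the weight list Ws on a single example."""
    L = len(Ws)
    xs = [x0]
    for W in Ws:                                    # feedforward pass initializes value nodes
        xs.append(W @ f(xs[-1]))
    xs[L] = target                                   # clamp the output layer to the label
    for _ in range(infer_steps):                     # relax hidden value nodes
        es = [xs[l + 1] - Ws[l] @ f(xs[l]) for l in range(L)]
        for l in range(1, L):
            xs[l] = xs[l] + gamma * (-es[l - 1] + df(xs[l]) * (Ws[l].T @ es[l]))
    es = [xs[l + 1] - Ws[l] @ f(xs[l]) for l in range(L)]
    for l in range(L):                               # weight update from local errors only
        Ws[l] = Ws[l] + lr * np.outer(es[l], f(xs[l]))
    return Ws

# usage: two weight matrices, 8 -> 16 -> 4, one training step on a random example
rng = np.random.default_rng(0)
Ws = [0.1 * rng.standard_normal((16, 8)), 0.1 * rng.standard_normal((4, 16))]
Ws = pc_train_step(Ws, rng.standard_normal(8), rng.standard_normal(4))
```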
arXiv Detail & Related papers (2020-06-07T15:35:47Z)