Graph-based Neural Acceleration for Nonnegative Matrix Factorization
- URL: http://arxiv.org/abs/2202.00264v1
- Date: Tue, 1 Feb 2022 07:52:01 GMT
- Title: Graph-based Neural Acceleration for Nonnegative Matrix Factorization
- Authors: Jens Sjölund and Maria Bånkestad
- Abstract summary: We describe a graph-based neural acceleration technique for nonnegative matrix factorization.
We train a graph neural network that interleaves bipartite self-attention layers with updates based on the alternating direction method of multipliers.
Our evaluation on synthetic and two real-world datasets shows that we attain substantial acceleration, even though we only train in an unsupervised fashion on smaller synthetic instances.
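The matrix-to-bipartite-graph correspondence the paper builds on is straightforward to illustrate: rows and columns become the two node sets, and each matrix entry becomes a weighted edge. The sketch below is a minimal illustration under assumed conventions (the function name and the GNN-style 2×E edge-index layout are not from the paper):

```python
import numpy as np

def matrix_to_bipartite(X):
    """Map an m-by-n matrix to a bipartite graph: row i -> node i,
    column j -> node m + j, entry X[i, j] -> a weighted edge."""
    m, n = X.shape
    rows, cols = np.nonzero(X)
    edges = np.stack([rows, cols + m])  # shape (2, num_edges)
    weights = X[rows, cols]
    return edges, weights
```

For sparse matrices this touches only the nonzero entries, which is the same economy that sparse linear algebra exploits through the bipartite-graph view.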
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We describe a graph-based neural acceleration technique for nonnegative
matrix factorization that builds upon a connection between matrices and
bipartite graphs that is well-known in certain fields, e.g., sparse linear
algebra, but has not yet been exploited to design graph neural networks for
matrix computations. We first consider low-rank factorization more broadly and
propose a graph representation of the problem suited for graph neural networks.
Then, we focus on the task of nonnegative matrix factorization and propose a
graph neural network that interleaves bipartite self-attention layers with
updates based on the alternating direction method of multipliers. Our empirical
evaluation on synthetic and two real-world datasets shows that we attain
substantial acceleration, even though we only train in an unsupervised fashion
on smaller synthetic instances.
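The ADMM updates that the proposed network interleaves with bipartite self-attention layers act on the nonnegativity-constrained least-squares subproblems of NMF. A plain, non-learned sketch of that baseline iteration is given below; the splitting H = Z with Z >= 0 is the standard one, but the function names, the penalty rho, and the iteration counts are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def admm_nnls(A, B, H0, rho=1.0, iters=20):
    """Solve min_H ||B - A H||_F^2 s.t. H >= 0 via ADMM with the split H = Z, Z >= 0."""
    k = A.shape[1]
    G = np.linalg.inv(A.T @ A + rho * np.eye(k))  # cached system matrix
    AtB = A.T @ B
    Z, U = H0, np.zeros_like(H0)
    for _ in range(iters):
        H = G @ (AtB + rho * (Z - U))   # unconstrained least-squares step
        Z = np.maximum(0.0, H + U)      # projection onto the nonnegative orthant
        U = U + H - Z                   # dual update on the consensus constraint
    return Z

def nmf_admm(X, rank, outer=50, rho=1.0, seed=0):
    """Alternate ADMM solves for W and H in X ~ W H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], rank))
    H = rng.random((rank, X.shape[1]))
    for _ in range(outer):
        H = admm_nnls(W, X, H, rho)          # update H with W fixed
        W = admm_nnls(H.T, X.T, W.T, rho).T  # update W with H fixed
    return W, H
```

The neural acceleration replaces parts of this fixed iteration with learned components, so that far fewer (and cheaper) steps reach a comparable objective value.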
Related papers
- Low-Rank Covariance Completion for Graph Quilting with Applications to Functional Connectivity [8.500141848121782]
In many calcium imaging data sets, the full population of neurons is not recorded simultaneously, but instead in partially overlapping blocks.
In this paper, we introduce a novel two-step approach to Graph Quilting, which first imputes the covariance matrix using low-rank matrix completion.
We show the efficacy of these methods for estimating connectivity from calcium imaging data.
arXiv Detail & Related papers (2022-09-17T08:03:46Z) - Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN)
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z) - Graph Kernel Neural Networks [53.91024360329517]
We propose to use graph kernels, i.e. kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability.
arXiv Detail & Related papers (2021-12-14T14:48:08Z) - Scaling Up Graph Neural Networks Via Graph Coarsening [18.176326897605225]
Scalability of graph neural networks (GNNs) is one of the major challenges in machine learning.
In this paper, we propose to use graph coarsening for scalable training of GNNs.
We show that, by simply applying off-the-shelf coarsening methods, we can reduce the number of nodes by up to a factor of ten without a noticeable drop in classification accuracy.
arXiv Detail & Related papers (2021-06-09T15:46:17Z) - Multilayer Graph Clustering with Optimized Node Embedding [70.1053472751897]
Multilayer graph clustering aims at dividing the graph nodes into categories or communities.
We propose a clustering-friendly embedding of the layers of a given multilayer graph.
Experiments show that our method leads to a significant improvement.
arXiv Detail & Related papers (2021-03-30T17:36:40Z) - Graph-Time Convolutional Neural Networks [9.137554315375919]
We represent spatial relationships through product graphs with a first-principles graph-time convolutional neural network (GTCNN).
We develop a graph-time convolutional filter by following the shift-and-sum temporal operator to learn higher-level features over the product graph.
We develop a zero-pad pooling that preserves the spatial graph while reducing the number of active nodes and the parameters.
arXiv Detail & Related papers (2021-03-02T14:03:44Z) - Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method leads to improved clustering results.
arXiv Detail & Related papers (2020-10-29T09:58:02Z) - Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a form suitable for neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z) - Semiparametric Nonlinear Bipartite Graph Representation Learning with Provable Guarantees [106.91654068632882]
We consider the bipartite graph and formalize its representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.
We show that the proposed objective is strongly convex in a neighborhood around the ground truth, so that a gradient descent-based method achieves linear convergence rate.
Our estimator is robust to any model misspecification within the exponential family, which is validated in extensive experiments.
arXiv Detail & Related papers (2020-03-02T16:40:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.