Implicit SVD for Graph Representation Learning
- URL: http://arxiv.org/abs/2111.06312v1
- Date: Thu, 11 Nov 2021 16:58:17 GMT
- Title: Implicit SVD for Graph Representation Learning
- Authors: Sami Abu-El-Haija, Hesham Mostafa, Marcel Nassar, Valentino Crespi,
Greg Ver Steeg, Aram Galstyan
- Abstract summary: We make Graph Representational Learning more computationally tractable for those with modest hardware.
We derive a linear approximation of a SOTA model and train it, in closed form, via SVD of $\mathbf{M}$, without calculating the entries of $\mathbf{M}$.
Our models show competitive empirical test performance over various graphs.
- Score: 33.761179632722
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent improvements in the performance of state-of-the-art (SOTA) methods for
Graph Representational Learning (GRL) have come at the cost of significant
computational resource requirements for training, e.g., for calculating
gradients via backprop over many data epochs. Meanwhile, Singular Value
Decomposition (SVD) can find closed-form solutions to convex problems, using
merely a handful of epochs. In this paper, we make GRL more computationally
tractable for those with modest hardware. We design a framework that computes
SVD of \textit{implicitly} defined matrices, and apply this framework to
several GRL tasks. For each task, we derive a linear approximation of a SOTA
model, where we design an (expensive-to-store) matrix $\mathbf{M}$ and train
the model, in closed form, via SVD of $\mathbf{M}$, without calculating entries of
$\mathbf{M}$. By converging to a unique point in one step, and without
calculating gradients, our models show competitive empirical test performance
over various graphs such as article citation and biological interaction
networks. More importantly, SVD can initialize a deeper model that is
architected to be non-linear almost everywhere, yet behaves linearly when its
parameters reside on a hyperplane onto which SVD initializes them. The deeper
model can then be fine-tuned within only a few epochs. Overall, our procedure
trains hundreds of times faster than state-of-the-art methods, while competing
on empirical test performance. We open-source our implementation at:
https://github.com/samihaija/isvd
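To make the implicit-SVD idea concrete, here is a minimal matrix-free sketch using scipy (not the authors' implementation, which lives in the repository above): a design matrix $\mathbf{M}$ is defined only through matrix-vector products, here as a hypothetical two-hop feature propagation A @ A @ X with made-up sizes, and a truncated SVD is computed without ever materializing $\mathbf{M}$.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, svds

rng = np.random.default_rng(0)
n, d = 5000, 64                      # toy graph size and feature width (assumed)
A = sp.random(n, n, density=1e-3, format="csr", random_state=0)  # sparse adjacency stand-in
X = rng.normal(size=(n, d))          # node features

# M = A @ A @ X is expensive to store densely; expose only its action on vectors.
def matvec(v):                       # M @ v
    return A @ (A @ (X @ v))

def rmatvec(u):                      # M.T @ u
    return X.T @ (A.T @ (A.T @ u))

M_op = LinearOperator((n, d), matvec=matvec, rmatvec=rmatvec, dtype=np.float64)

# Rank-16 truncated SVD of the implicitly defined M; the solver only calls
# matvec/rmatvec, so the cost is dominated by sparse products.
U, s, Vt = svds(M_op, k=16)
print(U.shape, s.shape, Vt.shape)    # (5000, 16) (16,) (16, 64)
```

In the paper's setting, closed-form model parameters are then read off from such a truncated SVD; the rank, sizes, and the particular propagation used here are assumptions for illustration only.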
Related papers
- Compute Better Spent: Replacing Dense Layers with Structured Matrices [77.61728033234233]
Motivated by the success of convolutional networks in the image domain, we identify more efficient alternatives to dense matrices.
We show that different structures often require drastically different initialization scales and learning rates, which are crucial to performance.
We propose a novel matrix family containing Monarch matrices, the Block Tensor-Train (BTT), which we show performs better than dense layers for the same compute on multiple tasks.
arXiv Detail & Related papers (2024-06-10T13:25:43Z)
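As a rough illustration of the structured-matrix idea above (this is not the paper's Block Tensor-Train construction), the hedged sketch below contrasts a dense layer with a block-diagonal one of the same width; the sizes and the fan-in-dependent initialization scales are assumptions, echoing the summary's point that different structures need different scales.

```python
import numpy as np

rng = np.random.default_rng(0)
d, blocks = 1024, 16
bs = d // blocks                                   # block size: 64
x = rng.normal(size=d)

# Dense layer: d * d ~= 1.05M parameters.
W_dense = rng.normal(size=(d, d)) / np.sqrt(d)
y_dense = W_dense @ x

# Block-diagonal structured layer: blocks * bs * bs ~= 65K parameters, i.e.
# 16x fewer parameters and multiply-adds for the same input/output width.
# Note the different initialization scale (1/sqrt(block fan-in), not 1/sqrt(d)).
W_blocks = rng.normal(size=(blocks, bs, bs)) / np.sqrt(bs)
y_struct = np.einsum("kij,kj->ki", W_blocks, x.reshape(blocks, bs)).reshape(d)

print(y_dense.shape, y_struct.shape)               # (1024,) (1024,)
```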
- Gradients of Functions of Large Matrices [18.361820028457718]
We show how to differentiate workhorses of numerical linear algebra efficiently.
We derive previously unknown adjoint systems for Lanczos and Arnoldi iterations, implement them in JAX, and show that the resulting code can compete with Diffrax.
All this is achieved without any problem-specific code optimisation.
arXiv Detail & Related papers (2024-05-27T15:39:45Z)
- Condensing Graphs via One-Step Gradient Matching [50.07587238142548]
We propose a one-step gradient matching scheme, which performs gradient matching for only one single step without training the network weights.
Our theoretical analysis shows this strategy can generate synthetic graphs that lead to lower classification loss on real graphs.
In particular, we are able to reduce the dataset size by 90% while approximating up to 98% of the original performance.
arXiv Detail & Related papers (2022-06-15T18:20:01Z)
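To illustrate the one-step idea above, here is a hedged toy sketch with a linear least-squares surrogate rather than the paper's GNN setup: at a freshly sampled initialization, the gradients induced by the real data and by a much smaller synthetic set are compared once, without training any weights; the condensation method would then optimize the synthetic set to shrink this mismatch (outer loop omitted). All names and sizes below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for real node features/labels and a condensed set.
X_real, y_real = rng.normal(size=(1000, 16)), rng.normal(size=1000)
X_syn, y_syn = rng.normal(size=(50, 16)), rng.normal(size=50)

def grad_linear(X, y, w):
    """Gradient of the surrogate loss 0.5 * ||Xw - y||^2 / n with respect to w."""
    return X.T @ (X @ w - y) / len(y)

# One-step gradient matching: sample a random initialization and compare the
# gradients the real and synthetic sets induce there -- no weight updates.
w0 = rng.normal(size=16)
g_real = grad_linear(X_real, y_real, w0)
g_syn = grad_linear(X_syn, y_syn, w0)
matching_loss = np.sum((g_real - g_syn) ** 2)
print(f"gradient-matching loss at a random init: {matching_loss:.4f}")
```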
- uGLAD: Sparse graph recovery by optimizing deep unrolled networks [11.48281545083889]
We present a novel technique to perform sparse graph recovery by optimizing deep unrolled networks.
Our model, uGLAD, builds upon and extends the state-of-the-art model GLAD to the unsupervised setting.
We evaluate model results on synthetic Gaussian data, non-Gaussian data generated from Gene Regulatory Networks, and present a case study in anaerobic digestion.
arXiv Detail & Related papers (2022-05-23T20:20:27Z)
- Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
arXiv Detail & Related papers (2021-11-23T18:10:48Z)
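For context, a minimal sketch of online gradient descent on a pairwise objective (a pairwise hinge loss for a linear scorer with a hand-derived subgradient); this illustrates the problem setting only, not the specific algorithms or rates proposed in the paper, and the data stream below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 8, 2000
w = np.zeros(d)                                # learned linear scorer
w_true = rng.normal(size=d)                    # hidden scorer used to label pairs

for t in range(T):
    x_a, x_b = rng.normal(size=(2, d))
    if w_true @ x_a < w_true @ x_b:            # ensure x_a is the preferred item
        x_a, x_b = x_b, x_a
    # Pairwise hinge loss max(0, 1 - w.(x_a - x_b)); take one OGD step on its
    # subgradient with a decaying step size.
    if w @ (x_a - x_b) < 1.0:
        w += 0.05 / np.sqrt(t + 1) * (x_a - x_b)

# Fraction of fresh pairs on which the learned scorer agrees with the hidden one.
pairs = rng.normal(size=(500, 2, d))
diffs = pairs[:, 0] - pairs[:, 1]
accuracy = np.mean((diffs @ w) * (diffs @ w_true) > 0)
print(f"pairwise ranking accuracy: {accuracy:.2f}")
```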
- Meta-Learning with Adjoint Methods [16.753336086160598]
Model-Agnostic Meta-Learning (MAML) is widely used to find a good initialization for a family of tasks.
Despite its success, a critical challenge in MAML is to calculate the gradient w.r.t. the initialization of a long training trajectory for the sampled tasks.
We propose Adjoint MAML (A-MAML) to address this problem.
We demonstrate the advantage of our approach in both synthetic and real-world meta-learning tasks.
arXiv Detail & Related papers (2021-10-16T01:18:50Z)
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z)
- Why Approximate Matrix Square Root Outperforms Accurate SVD in Global Covariance Pooling? [59.820507600960745]
We propose a new GCP meta-layer that uses SVD in the forward pass, and Padé approximants in the backward propagation to compute the gradients.
The proposed meta-layer has been integrated into different CNN models and achieves state-of-the-art performances on both large-scale and fine-grained datasets.
arXiv Detail & Related papers (2021-05-06T08:03:45Z)
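For context, a hedged numpy sketch of the forward pass of global covariance pooling (GCP) with a matrix square root, computed here via an eigendecomposition of the covariance; the paper's contribution concerns the backward pass, where Padé approximants replace SVD-based gradients, which is not reproduced here. Feature-map sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
C, N = 64, 196                                   # channels, spatial positions (assumed)
feats = rng.normal(size=(C, N))                  # toy CNN feature map

# Global covariance pooling: channel-wise covariance of the feature map.
feats = feats - feats.mean(axis=1, keepdims=True)
cov = feats @ feats.T / (N - 1) + 1e-5 * np.eye(C)   # small ridge for stability

# Matrix square root via eigendecomposition (equivalent to SVD for this SPD matrix);
# the flattened cov_sqrt would be the pooled representation fed to the classifier.
eigvals, eigvecs = np.linalg.eigh(cov)
cov_sqrt = eigvecs @ np.diag(np.sqrt(np.clip(eigvals, 0.0, None))) @ eigvecs.T

print(np.allclose(cov_sqrt @ cov_sqrt, cov))     # True: it is a valid square root
```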
- Learning to extrapolate using continued fractions: Predicting the critical temperature of superconductor materials [5.905364646955811]
In the field of Artificial Intelligence (AI) and Machine Learning (ML), the approximation of unknown target functions $y=f(\mathbf{x})$ is a common objective.
We refer to a set $S$ of observed samples as the training set and aim to identify a low-complexity mathematical model that can effectively approximate this target function for new instances $\mathbf{x}$.
arXiv Detail & Related papers (2020-11-27T04:57:40Z)
- Learning Low-rank Deep Neural Networks via Singular Vector Orthogonality Regularization and Singular Value Sparsification [53.50708351813565]
We propose SVD training, the first method to explicitly achieve low-rank DNNs during training without applying SVD on every step.
We empirically show that SVD training can significantly reduce the rank of DNN layers and achieve a greater reduction in computational load at the same accuracy.
arXiv Detail & Related papers (2020-04-20T02:40:43Z)
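A hedged sketch of the parameterization the summary above describes: a layer kept in SVD form W = U diag(s) V^T, with an orthogonality penalty on U and V and an L1 penalty that sparsifies the singular values so the layer can be truncated to low rank. The sizes, penalty weights, and the omitted training loop are assumptions, not the paper's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 256, 512, 64                  # layer sizes and working rank (assumed)

# Layer stored in SVD form W = U @ diag(s) @ V.T instead of a dense W, so the
# effective rank can be read off (and reduced) from s directly.
U = np.linalg.qr(rng.normal(size=(d_out, r)))[0]
V = np.linalg.qr(rng.normal(size=(d_in, r)))[0]
s = np.abs(rng.normal(size=r))

def forward(x):
    return U @ (s * (V.T @ x))

def regularizers(lam_orth=1.0, lam_sparse=1e-3):
    # Keep U and V near-orthonormal so s keeps its singular-value meaning, and
    # push small entries of s toward zero so the layer becomes genuinely low rank.
    orth = (np.linalg.norm(U.T @ U - np.eye(r)) ** 2
            + np.linalg.norm(V.T @ V - np.eye(r)) ** 2)
    return lam_orth * orth + lam_sparse * np.abs(s).sum()

x = rng.normal(size=d_in)
print(forward(x).shape, round(regularizers(), 6))   # (256,) plus the penalty value
```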