Sign and Basis Invariant Networks for Spectral Graph Representation
Learning
- URL: http://arxiv.org/abs/2202.13013v1
- Date: Fri, 25 Feb 2022 23:11:59 GMT
- Title: Sign and Basis Invariant Networks for Spectral Graph Representation
Learning
- Authors: Derek Lim, Joshua Robinson, Lingxiao Zhao, Tess Smidt, Suvrit Sra,
Haggai Maron, Stefanie Jegelka
- Abstract summary: We introduce SignNet and BasisNet -- new neural architectures that are invariant to all requisite symmetries and hence process collections of eigenspaces in a principled manner.
Our networks are theoretically strong for graph representation learning -- they can approximate any spectral graph convolution.
Experiments show the strength of our networks for learning spectral graph filters and learning graph positional encodings.
- Score: 75.18802152811539
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many machine learning tasks involve processing eigenvectors derived from
data. Especially valuable are Laplacian eigenvectors, which capture useful
structural information about graphs and other geometric objects. However,
ambiguities arise when computing eigenvectors: for each eigenvector $v$, the
sign flipped $-v$ is also an eigenvector. More generally, higher dimensional
eigenspaces contain infinitely many choices of basis eigenvectors. These
ambiguities make it a challenge to process eigenvectors and eigenspaces in a
consistent way. In this work we introduce SignNet and BasisNet -- new neural
architectures that are invariant to all requisite symmetries and hence process
collections of eigenspaces in a principled manner. Our networks are universal,
i.e., they can approximate any continuous function of eigenvectors with the
proper invariances. They are also theoretically strong for graph representation
learning -- they can approximate any spectral graph convolution, can compute
spectral invariants that go beyond message passing neural networks, and can
provably simulate previously proposed graph positional encodings. Experiments
show the strength of our networks for learning spectral graph filters and
learning graph positional encodings.
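As a concrete illustration, the sketch below builds a toy graph Laplacian, exposes the sign ambiguity of its eigenvectors, and checks the sign invariance of a SignNet-style map $\rho([\phi(v_i) + \phi(-v_i)]_i)$. The one-layer random-weight $\phi$ and $\rho$ are placeholders for the learned networks, not the authors' implementation.

```python
import numpy as np

# Toy graph: a 4-cycle. Combinatorial Laplacian L = D - A.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# Columns of V are Laplacian eigenvectors; each column v is only defined up to
# sign (and up to an orthogonal change of basis inside repeated eigenvalues).
eigvals, V = np.linalg.eigh(L)

# Toy stand-ins for the learned networks phi and rho (fixed random weights).
rng = np.random.default_rng(0)
W_phi = rng.standard_normal((8, V.shape[0]))
W_rho = rng.standard_normal((4, 8 * V.shape[1]))

def phi(v):
    """Per-eigenvector encoder (a one-layer toy MLP here)."""
    return np.tanh(W_phi @ v)

def sign_invariant_encoding(V):
    """SignNet-style map rho([phi(v_i) + phi(-v_i)]_i): sign invariant by construction."""
    feats = np.concatenate([phi(V[:, i]) + phi(-V[:, i]) for i in range(V.shape[1])])
    return np.tanh(W_rho @ feats)

# Flipping the signs of arbitrary eigenvectors leaves the output unchanged.
flips = np.diag([1.0, -1.0, 1.0, -1.0])
assert np.allclose(sign_invariant_encoding(V), sign_invariant_encoding(V @ flips))
```

BasisNet extends the same idea to higher-dimensional eigenspaces, where the relevant symmetry is a full orthogonal change of basis rather than a sign flip.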
Related papers
- Towards Stable, Globally Expressive Graph Representations with Laplacian Eigenvectors (arXiv 2024-10-13) [29.055130767451036]
We propose a novel method exploiting Laplacian eigenvectors to generate stable and globally expressive graph representations.
Our method deals with numerically close eigenvalues in a smooth fashion, which makes it more robust to perturbations.
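The issue with numerically close eigenvalues can be seen in a few lines of NumPy. The sketch below is a generic illustration, not this paper's architecture: under a tiny perturbation, individual eigenvectors inside a repeated eigenvalue can change completely, while a smooth spectral function such as the heat kernel barely moves.

```python
import numpy as np

def cycle_laplacian(n):
    """Combinatorial Laplacian of the n-cycle; its spectrum has repeated eigenvalues."""
    A = np.zeros((n, n))
    idx = np.arange(n)
    A[idx, (idx + 1) % n] = 1.0
    A[(idx + 1) % n, idx] = 1.0
    return np.diag(A.sum(axis=1)) - A

def heat_kernel(L, t=1.0):
    """exp(-tL): a smooth function of the spectrum, independent of the eigenvector basis."""
    w, V = np.linalg.eigh(L)
    return V @ np.diag(np.exp(-t * w)) @ V.T

L = cycle_laplacian(6)
rng = np.random.default_rng(1)
E = rng.standard_normal(L.shape)
L_pert = L + 1e-6 * (E + E.T)        # tiny symmetric perturbation

_, V1 = np.linalg.eigh(L)
_, V2 = np.linalg.eigh(L_pert)

# Inside a (near-)repeated eigenvalue the individual eigenvectors can rotate
# arbitrarily, so the eigenvector matrices may differ by O(1) even here ...
print("eigenvector change:", np.linalg.norm(V1 - V2))
# ... while a smooth spectral feature moves only on the order of the perturbation.
print("heat-kernel change:", np.linalg.norm(heat_kernel(L) - heat_kernel(L_pert)))
```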
- Graph Generation via Spectral Diffusion (arXiv 2024-02-29) [51.60814773299899]
We present GRASP, a novel graph generative model based on 1) the spectral decomposition of the graph Laplacian matrix and 2) a diffusion process.
Specifically, we propose to use a denoising model to sample eigenvectors and eigenvalues from which we can reconstruct the graph Laplacian and adjacency matrix.
Our permutation invariant model can also handle node features by concatenating them to the eigenvectors of each node.
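The reconstruction step is simple to sketch, assuming the combinatorial Laplacian $L = D - A$; here the "sampled" eigenpairs are taken from a ground-truth graph rather than from GRASP's denoising model, which is not shown.

```python
import numpy as np

# Reconstruction only: given eigenvalues `lam` and orthonormal eigenvectors `U`
# (stand-ins for the output of a denoising model), rebuild the Laplacian and
# then the adjacency matrix, assuming the combinatorial L = D - A.
A_true = np.array([[0, 1, 1, 0],
                   [1, 0, 1, 0],
                   [1, 1, 0, 1],
                   [0, 0, 1, 0]], dtype=float)
L_true = np.diag(A_true.sum(axis=1)) - A_true
lam, U = np.linalg.eigh(L_true)          # stand-in for the sampled spectrum

L_rec = U @ np.diag(lam) @ U.T           # Laplacian from its spectral decomposition
A_rec = np.diag(np.diag(L_rec)) - L_rec  # diagonal of L holds degrees; off-diagonal is -A
A_rec = (A_rec > 0.5).astype(float)      # round to a binary adjacency matrix

assert np.array_equal(A_rec, A_true)
```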
- Expressive Sign Equivariant Networks for Spectral Geometric Learning (arXiv 2023-12-04) [47.71042325868781]
Recent work has shown the utility of developing machine learning models that respect the structure and symmetries of eigenvectors.
We show that sign invariance is theoretically limited for tasks such as building equivariant models and learning node positional encodings for link prediction in graphs.
- On the Stability of Expressive Positional Encodings for Graphs (arXiv 2023-10-04) [46.967035678550594]
Using Laplacian eigenvectors as positional encodings faces two fundamental challenges.
We introduce Stable and Expressive Positional Encodings (SPE).
SPE is the first architecture that is (1) provably stable, and (2) universally expressive for basis invariant functions.
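The sketch below is not SPE itself, only a minimal check of what basis invariance means: an eigenvalue-weighted sum of eigenvector outer products is unchanged when the eigenvectors of a repeated eigenvalue are rotated by an arbitrary orthogonal matrix.

```python
import numpy as np

# The 4-cycle Laplacian has a repeated eigenvalue, so the eigenvectors for that
# eigenvalue are only defined up to an orthogonal rotation Q within the eigenspace.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
lam, V = np.linalg.eigh(L)               # spectrum: 0, 2, 2, 4 (eigenvalue 2 is repeated)

def soft_spectral_feature(lam, V):
    """Eigenvalue-weighted sum of projectors: depends on eigenspaces, not on the basis."""
    return sum(np.exp(-l) * np.outer(V[:, i], V[:, i]) for i, l in enumerate(lam))

# Rotate the two eigenvectors of the repeated eigenvalue by a random 2x2 orthogonal Q.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((2, 2)))
V_rot = V.copy()
V_rot[:, 1:3] = V[:, 1:3] @ Q            # columns 1 and 2 span the eigenvalue-2 eigenspace

assert np.allclose(soft_spectral_feature(lam, V), soft_spectral_feature(lam, V_rot))
```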
- Graph Neural Networks with Learnable Structural and Positional Representations (arXiv 2021-10-15) [83.24058411666483]
A major issue with arbitrary graphs is the absence of canonical positional information of nodes.
We introduce positional encodings (PE) of nodes and inject them into the input layer, as in Transformers.
We observe a performance increase for molecular datasets, from 2.87% up to 64.14% when considering learnable PE for both GNN classes.
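A minimal version of injecting Laplacian positional encodings at the input layer might look as follows; the laplacian_pe helper is a hypothetical name, and the sketch covers only the "concatenate PE to node features" step, not the learnable-PE updates across layers.

```python
import numpy as np

def laplacian_pe(A, k=2):
    """First k non-trivial eigenvectors of the symmetric normalized Laplacian as PE."""
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    L = np.eye(A.shape[0]) - (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    _, V = np.linalg.eigh(L)
    return V[:, 1:k + 1]                        # skip the trivial eigenvector

# Toy graph and node features.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.arange(8, dtype=float).reshape(4, 2)     # 4 nodes, 2 input features each

# "Injecting" the PE at the input layer: concatenate PE to the raw node features
# before the first message-passing / MLP layer. The sign ambiguity of the PE is
# exactly the issue SignNet addresses; random sign flips during training are a
# common workaround.
pe = laplacian_pe(A, k=2)
X_in = np.concatenate([X, pe], axis=1)          # shape (4, 2 + 2)
print(X_in.shape)
```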
- Using Deep LSD to build operators in GANs latent space with meaning in real space (arXiv 2021-02-09) [0.0]
Generative models such as variational autoencoders (VAEs) and generative adversarial networks (GANs) are widely used in deep learning.
A lack of correlation is important because it suggests that the latent space manifold is simpler to understand and manipulate.
- Learning Sparse Graph Laplacian with K Eigenvector Prior via Iterative GLASSO and Projection (arXiv 2020-10-25) [58.5350491065936]
We consider a structural assumption on the graph Laplacian matrix $L$.
The first $K$ eigenvectors of $L$ are pre-selected, e.g., based on domain-specific criteria.
We design an efficient hybrid graphical lasso/projection algorithm to compute the most suitable graph Laplacian matrix $L^* \in \mathcal{H}_u^+$ given $\bar{C}$.
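The paper's algorithm is not reproduced here. As a rough illustration of treating the pre-selected eigenvectors as a prior enforced by projection, the sketch below computes the Frobenius-nearest symmetric matrix to a noisy estimate $C$ for which the span of the pre-selected eigenvectors $U_K$ is an invariant subspace; the paper's feasible set $\mathcal{H}_u^+$ imposes additional graph-Laplacian structure not modeled here.

```python
import numpy as np

def project_to_invariant_subspace(C, U_K):
    """Frobenius-nearest symmetric matrix to C for which span(U_K) is an invariant
    subspace: zero the off-diagonal blocks in the basis [U_K, U_K_perp]."""
    P = U_K @ U_K.T                # projector onto span(U_K)
    Q = np.eye(C.shape[0]) - P     # projector onto the orthogonal complement
    return P @ C @ P + Q @ C @ Q

# Pre-selected first-K eigenvectors (here taken from a reference Laplacian).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L_ref = np.diag(A.sum(axis=1)) - A
_, V = np.linalg.eigh(L_ref)
U_K = V[:, :2]                     # K = 2 pre-selected eigenvectors

# A noisy symmetric estimate (stand-in for an intermediate GLASSO iterate).
rng = np.random.default_rng(0)
noise = rng.standard_normal(L_ref.shape)
C = L_ref + 0.1 * (noise + noise.T)

M = project_to_invariant_subspace(C, U_K)
# span(U_K) is now invariant under M, so M has K eigenvectors inside that span.
assert np.allclose(M @ U_K, U_K @ (U_K.T @ M @ U_K))
```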
- Eigendecomposition-Free Training of Deep Networks for Linear Least-Square Problems (arXiv 2020-04-15) [107.3868459697569]
We introduce an eigendecomposition-free approach to training a deep network.
We show that our approach is much more robust than explicit differentiation of the eigendecomposition.
Our method has better convergence properties and yields state-of-the-art results.
This list is automatically generated from the titles and abstracts of the papers on this site.