Laplacian Canonization: A Minimalist Approach to Sign and Basis
Invariant Spectral Embedding
- URL: http://arxiv.org/abs/2310.18716v2
- Date: Wed, 10 Jan 2024 20:19:47 GMT
- Title: Laplacian Canonization: A Minimalist Approach to Sign and Basis
Invariant Spectral Embedding
- Authors: Jiangyan Ma, Yifei Wang, Yisen Wang
- Abstract summary: Spectral embedding is a powerful graph embedding technique that has received a lot of attention recently due to its effectiveness on Graph Transformers.
Previous methods developed costly approaches to learn new invariants and suffer from high computational complexity.
In this work, we explore a minimal approach that resolves the ambiguity issues by directly finding canonical directions for the eigenvectors.
- Score: 36.61907023057978
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spectral embedding is a powerful graph embedding technique that has received
a lot of attention recently due to its effectiveness on Graph Transformers.
However, from a theoretical perspective, the universal expressive power of
spectral embedding comes at the price of losing two important invariance
properties of graphs, sign and basis invariance, which also limits its
effectiveness on graph data. To remedy this issue, many previous methods
developed costly approaches to learn new invariants and suffer from high
computational complexity. In this work, we explore a minimal approach that
resolves the ambiguity issues by directly finding canonical directions for the
eigenvectors, named Laplacian Canonization (LC). As a pure pre-processing
method, LC is lightweight and can be applied to any existing GNN. We
provide a thorough investigation, from theory to algorithm, on this approach,
and discover an efficient algorithm named Maximal Axis Projection (MAP) that
works for both sign and basis invariance and successfully canonizes more than
90% of all eigenvectors. Experiments on real-world benchmark datasets like
ZINC, MOLTOX21, and MOLPCBA show that MAP consistently outperforms existing
methods while bringing minimal computational overhead. Code is available at
https://github.com/PKU-ML/LaplacianCanonization.
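To make the sign-fixing idea concrete, here is a minimal sketch in the spirit of MAP, based only on our reading of the abstract (the helper names are hypothetical; the actual algorithm, which also canonizes bases of repeated eigenspaces, is in the linked repository): flip each Laplacian eigenvector so that its dominant coordinate projection is positive, leaving vectors with tied projections untouched.

```python
import numpy as np

def laplacian_eigvecs(adj):
    """Eigenvectors of the combinatorial Laplacian L = D - A,
    columns sorted by ascending eigenvalue (np.linalg.eigh)."""
    lap = np.diag(adj.sum(axis=1)) - adj
    _, vecs = np.linalg.eigh(lap)
    return vecs

def sign_canonize(vecs, tol=1e-8):
    """Resolve the sign ambiguity u -> -u of each eigenvector by
    requiring its largest-magnitude entry to be positive; vectors
    whose top entries tie in magnitude stay as they are, since no
    canonical direction exists for them under this rule."""
    out = vecs.copy()
    for j in range(out.shape[1]):
        u = out[:, j]
        mags = np.abs(u)
        i = int(np.argmax(mags))
        unambiguous = int(np.isclose(mags, mags[i], atol=tol).sum()) == 1
        if unambiguous and u[i] < 0:
            out[:, j] = -u
    return out

# Toy usage: sign-canonized spectral positional encodings of a 4-node path.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
pe = sign_canonize(laplacian_eigvecs(adj))
```

Because the canonized eigenvectors are a pure pre-processing output, they can be fed as positional encodings to any downstream GNN or Graph Transformer unchanged.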
Related papers
- S-CFE: Simple Counterfactual Explanations [21.975560789792073]
We tackle the problem of finding manifold-aligned counterfactual explanations for sparse data.
Our approach effectively produces sparse, manifold-aligned counterfactual explanations.
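For context, the generic counterfactual-explanation objective this line of work builds on (a standard formulation, not the paper's exact objective; the loss \ell, distance d, and weight \lambda are our notation):

```latex
% Counterfactual x^{cf} for classifier f at input x with target
% label y': change the prediction while staying close to x.
% S-CFE additionally enforces sparsity and manifold alignment.
x^{\mathrm{cf}} \in \arg\min_{x'} \; \ell\big(f(x'), y'\big) + \lambda\, d(x, x')
```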
arXiv Detail & Related papers (2024-10-21T07:42:43Z)
- Learning Cartesian Product Graphs with Laplacian Constraints [10.15283812819547]
We study the problem of learning Cartesian product graphs under Laplacian constraints.
We establish statistical consistency for the penalized maximum likelihood estimation.
We also extend our method for efficient joint graph learning and imputation in the presence of structural missing values.
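The structure being exploited is the standard Kronecker-sum identity for Cartesian product graphs, L = L1 ⊗ I + I ⊗ L2 (a known fact, not a claim from this paper); a minimal sketch:

```python
import numpy as np

def cartesian_product_laplacian(l1, l2):
    """Kronecker sum: the Laplacian of the Cartesian product of two
    graphs, built from the Laplacians of the factors."""
    n1, n2 = l1.shape[0], l2.shape[0]
    return np.kron(l1, np.eye(n2)) + np.kron(np.eye(n1), l2)

# Toy usage: the product of two single edges (P2) is the 4-cycle C4.
l_p2 = np.array([[1.0, -1.0], [-1.0, 1.0]])
l_c4 = cartesian_product_laplacian(l_p2, l_p2)
```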
arXiv Detail & Related papers (2024-02-12T22:48:30Z)
- Efficient Graph Laplacian Estimation by Proximal Newton [12.05527862797306]
A graph learning problem can be formulated as a maximum likelihood estimation (MLE) of the precision matrix.
We develop a second-order approach to obtain an efficient solver utilizing several algorithmic features.
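The underlying problem, in its standard Laplacian-constrained form (stated from general background rather than quoted from the paper; S is the sample covariance and det* the pseudo-determinant over nonzero eigenvalues, since a Laplacian is singular):

```latex
% Maximum likelihood estimation of a Laplacian-constrained
% precision matrix; \mathcal{L} is the set of graph Laplacians
% (positive semi-definite, zero row sums, nonpositive off-diagonals).
\max_{\Theta \in \mathcal{L}} \;\; \log {\det}^{*}(\Theta) - \operatorname{tr}(S\,\Theta)
```

A proximal Newton solver applies second-order updates to this objective while respecting the constraint set.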
arXiv Detail & Related papers (2023-02-13T15:13:22Z)
- Graph Laplacian for Semi-Supervised Learning [8.477619837043214]
We propose a new type of graph-Laplacian adapted for Semi-Supervised Learning (SSL) problems.
It is based on both density and contrastive measures and allows the encoding of the labeled data directly in the operator.
arXiv Detail & Related papers (2023-01-12T12:02:26Z)
- Softmax-free Linear Transformers [90.83157268265654]
Vision transformers (ViTs) have pushed the state-of-the-art for visual perception tasks.
Existing methods are either theoretically flawed or empirically ineffective for visual recognition.
We propose a family of Softmax-Free Transformers (SOFT).
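As a generic illustration of removing softmax (a common kernelized linear-attention pattern, not SOFT's specific Gaussian-kernel construction; the feature map here is an arbitrary positive choice):

```python
import numpy as np

def linear_attention(q, k, v, eps=1e-6):
    """Softmax-free attention: with a positive feature map phi applied
    to queries and keys, attention factorizes as phi(Q)(phi(K)^T V),
    costing O(n d^2) instead of the O(n^2 d) of softmax attention."""
    phi = lambda x: np.maximum(x, 0.0) + eps  # simple positive map
    qf, kf = phi(q), phi(k)
    kv = kf.T @ v                     # aggregate keys and values once
    normalizer = qf @ kf.sum(axis=0)  # row-wise normalization
    return (qf @ kv) / normalizer[:, None]

# Toy usage: 8 tokens with head dimension 4.
rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(8, 4)) for _ in range(3))
out = linear_attention(q, k, v)
```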
arXiv Detail & Related papers (2022-07-05T03:08:27Z)
- Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov Random Fields [51.07460861448716]
This paper presents a convex-analytic framework for learning sparse graphs from data.
We show that a triangular convexity decomposition is guaranteed by a transform of the matrix corresponding to its upper part.
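For reference, the minimax concave penalty in its standard scalar form (a known definition from the sparse-estimation literature, not this paper's notation):

```latex
% MCP with threshold \lambda > 0 and concavity parameter \gamma > 1.
% It matches \lambda|t| near the origin and flattens to a constant,
% which reduces the bias the l1 penalty puts on large entries.
\rho_{\lambda,\gamma}(t) =
\begin{cases}
  \lambda |t| - \dfrac{t^{2}}{2\gamma}, & |t| \le \gamma\lambda, \\[4pt]
  \dfrac{\gamma \lambda^{2}}{2}, & |t| > \gamma\lambda.
\end{cases}
```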
arXiv Detail & Related papers (2021-09-17T17:46:12Z)
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
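As a generic sketch of the unfolding pattern itself (hypothetical, and much simpler than the paper's GDPA-based network): each iteration of a model-based solver becomes a layer whose tunable constants are learned end-to-end.

```python
import torch
import torch.nn as nn

class UnrolledGradientDescent(nn.Module):
    """Unfold T gradient-descent iterations on 0.5 * ||Ax - b||^2 into
    T layers, each with its own learnable step size; training tunes
    the step sizes by backpropagating through the iterations."""
    def __init__(self, num_layers: int = 5):
        super().__init__()
        self.steps = nn.Parameter(torch.full((num_layers,), 0.1))

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        x = torch.zeros(a.shape[1])
        for step in self.steps:
            grad = a.T @ (a @ x - b)  # gradient of the data-fit term
            x = x - step * grad       # one solver iteration = one layer
        return x
```

The parameter count is tiny (one scalar per layer here), which is what makes unrolled networks parsimonious compared with purely data-driven ones.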
arXiv Detail & Related papers (2021-09-10T07:01:15Z)
- Pseudoinverse Graph Convolutional Networks: Fast Filters Tailored for Large Eigengaps of Dense Graphs and Hypergraphs [0.0]
Graph Convolutional Networks (GCNs) have proven to be successful tools for semi-supervised classification on graph-based datasets.
We propose a new GCN variant whose three-part filter space is targeted at dense graphs.
arXiv Detail & Related papers (2020-08-03T08:48:41Z)
- Gradient Free Minimax Optimization: Variance Reduction and Faster Convergence [120.9336529957224]
In this paper, we study gradient-free minimax optimization in the nonconvex strongly-concave setting.
We show that a novel zeroth-order variance reduced descent algorithm achieves the best known query complexity.
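The core primitive in such methods is a two-point zeroth-order gradient estimator (shown here in its standard sphere-sampling form as a sketch, not the paper's exact variance-reduced estimator):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, rng=None):
    """Two-point zeroth-order gradient estimate
    g = d/(2 mu) * (f(x + mu u) - f(x - mu u)) * u, with u uniform on
    the unit sphere; only function evaluations are required."""
    if rng is None:
        rng = np.random.default_rng()
    d = x.size
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)
    return d / (2 * mu) * (f(x + mu * u) - f(x - mu * u)) * u

# Toy usage: the estimate approximates grad f(x) = x in expectation.
f = lambda z: 0.5 * float(z @ z)
g = zo_gradient(f, np.array([1.0, -2.0, 3.0]))
```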
arXiv Detail & Related papers (2020-06-16T17:55:46Z)
- Spectral Learning on Matrices and Tensors [74.88243719463053]
We show that tensor decomposition can pick up latent effects that are missed by matrix methods.
We also outline computational techniques to design efficient tensor decomposition methods.
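The central object here is the CP (CANDECOMP/PARAFAC) decomposition (standard definition, given for context):

```latex
% Rank-R CP decomposition of a third-order tensor: a sum of R
% rank-one terms. Unlike matrix factorizations, it is unique under
% mild conditions (Kruskal), which is why tensor methods can recover
% latent effects that matrix methods conflate.
\mathcal{T} \;\approx\; \sum_{r=1}^{R} \lambda_r \, a_r \otimes b_r \otimes c_r
```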
arXiv Detail & Related papers (2020-04-16T22:53:00Z)