L2G2G: a Scalable Local-to-Global Network Embedding with Graph
Autoencoders
- URL: http://arxiv.org/abs/2402.01614v1
- Date: Fri, 2 Feb 2024 18:24:37 GMT
- Title: L2G2G: a Scalable Local-to-Global Network Embedding with Graph
Autoencoders
- Authors: Ruikang Ouyang, Andrew Elliott, Stratis Limnios, Mihai Cucuringu,
Gesine Reinert
- Abstract summary: Graph representation learning is a popular tool for analysing real-world networks.
GAEs tend to be fairly accurate, but they suffer from scalability issues.
For improved speed, a Local2Global approach was shown to be fast and achieve good accuracy.
Here we propose L2G2G, a Local2Global method which improves GAE accuracy without sacrificing scalability.
- Score: 6.945992777272943
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For analysing real-world networks, graph representation learning is a popular
tool. These methods, such as a graph autoencoder (GAE), typically rely on
low-dimensional representations, also called embeddings, which are obtained
through minimising a loss function; these embeddings are used with a decoder
for downstream tasks such as node classification and edge prediction. While
GAEs tend to be fairly accurate, they suffer from scalability issues. For
improved speed, a Local2Global approach, which combines graph patch embeddings
based on eigenvector synchronisation, was shown to be fast and achieve good
accuracy. Here we propose L2G2G, a Local2Global method which improves GAE
accuracy without sacrificing scalability. This improvement is achieved by
dynamically synchronising the latent node representations while training the
GAEs. It also benefits from the decoder computing only a local patch loss.
Hence, aligning the local embeddings in each epoch utilises more information
from the graph than a single post-training alignment does, while maintaining
scalability. We illustrate on synthetic benchmarks, as well as real-world
examples, that L2G2G achieves higher accuracy than the standard Local2Global
approach and scales efficiently on the larger data sets. We find that for large
and dense networks, it even outperforms the slow, but assumed more accurate,
GAEs.
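To make the local-to-global training idea concrete, the following is a minimal sketch, assuming PyTorch, of training one small GAE per patch with a purely local reconstruction loss and re-aligning the patch embeddings after every epoch. The PatchGAE class, the align_patch helper (a plain orthogonal-Procrustes step on the overlapping nodes), and the toy two-patch split are hypothetical illustrations, not the authors' implementation; L2G2G itself performs the alignment via the eigenvector synchronisation of Local2Global.

```python
# Minimal sketch of a local-to-global training loop, assuming PyTorch.
# PatchGAE, align_patch, and the toy two-patch split are illustrative
# stand-ins, not the authors' implementation of L2G2G.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchGAE(nn.Module):
    """One-layer GCN encoder with an inner-product decoder for a single patch."""

    def __init__(self, in_dim: int, emb_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, emb_dim)

    def encode(self, adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # Symmetrically normalised adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}
        a = adj + torch.eye(adj.size(0))
        d_inv_sqrt = a.sum(dim=1).clamp(min=1.0).rsqrt()
        a_hat = d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]
        return torch.tanh(a_hat @ self.lin(x))

    def local_loss(self, z: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Reconstruction loss restricted to edges inside the patch only.
        logits = z @ z.t()
        return F.binary_cross_entropy_with_logits(logits, adj)


def align_patch(z_patch, z_ref, overlap_patch, overlap_ref):
    """Rotate z_patch so that it agrees with z_ref on their shared nodes.

    A plain orthogonal-Procrustes step; L2G2G/Local2Global instead solve a
    group synchronisation problem over all patches via eigenvectors.
    """
    m = z_patch[overlap_patch].t() @ z_ref[overlap_ref]
    u, _, vh = torch.linalg.svd(m)
    return z_patch @ (u @ vh)


# Toy usage: a small random graph split into two overlapping patches.
torch.manual_seed(0)
n, in_dim, emb_dim = 20, 8, 4
x = torch.randn(n, in_dim)
adj = (torch.rand(n, n) < 0.2).float()
adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0)

patches = [torch.arange(0, 12), torch.arange(8, 20)]       # nodes 8-11 overlap
models = [PatchGAE(in_dim, emb_dim) for _ in patches]
opt = torch.optim.Adam([p for m in models for p in m.parameters()], lr=1e-2)

for epoch in range(50):
    opt.zero_grad()
    embeddings = []
    for model, idx in zip(models, patches):
        sub_adj = adj[idx][:, idx]
        z = model.encode(sub_adj, x[idx])
        model.local_loss(z, sub_adj).backward()             # local patch loss only
        embeddings.append(z.detach())
    opt.step()
    # Per-epoch alignment: bring patch 1 into the frame of patch 0 via the overlap.
    with torch.no_grad():
        ov0 = torch.arange(8, 12)   # positions of the shared nodes in patch 0
        ov1 = torch.arange(0, 4)    # positions of the shared nodes in patch 1
        embeddings[1] = align_patch(embeddings[1], embeddings[0], ov1, ov0)
```

A full implementation would additionally merge the aligned patch embeddings into a single global embedding, for example by averaging the rows of nodes that appear in several patches, before using them with a downstream decoder.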
Related papers
- A Scalable and Effective Alternative to Graph Transformers [19.018320937729264]
Graph Transformers (GTs) were introduced, utilizing a self-attention mechanism to model pairwise node relationships.
GTs suffer from quadratic complexity w.r.t. the number of nodes in the graph, hindering their applicability to large graphs.
We present Graph-Enhanced Contextual Operator (GECO), a scalable and effective alternative to GTs.
arXiv Detail & Related papers (2024-06-17T19:57:34Z)
- FedGT: Federated Node Classification with Scalable Graph Transformer [27.50698154862779]
We propose a scalable Federated Graph Transformer (FedGT) in this paper.
FedGT computes clients' similarity based on the aligned global nodes with optimal transport.
arXiv Detail & Related papers (2024-01-26T21:02:36Z)
- Graph Transformers for Large Graphs [57.19338459218758]
This work advances representation learning on single large-scale graphs with a focus on identifying model characteristics and critical design constraints.
A key innovation of this work lies in the creation of a fast neighborhood sampling technique coupled with a local attention mechanism.
We report a 3x speedup and 16.8% performance gain on ogbn-products and snap-patents, while we also scale LargeGT on ogbn-100M with a 5.9% performance improvement.
arXiv Detail & Related papers (2023-12-18T11:19:23Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- IGLU: Efficient GCN Training via Lazy Updates [17.24386142849498]
Graph Convolution Networks (GCN) are used in numerous settings involving a large underlying graph as well as several layers.
Standard SGD-based training scales poorly here since each descent step ends up updating node embeddings for a large portion of the graph.
We introduce a new method IGLU that caches forward-pass embeddings for all nodes at various GCN layers.
arXiv Detail & Related papers (2021-09-28T19:11:00Z)
- Local2Global: Scaling global representation learning on graphs via local training [6.292766967410996]
We propose a decentralised "local2global" approach to graph representation learning.
We train local representations for each patch independently and combine the local representations into a globally consistent representation.
Preliminary results on medium-scale data sets are promising, with a graph reconstruction performance for local2global that is comparable to that of globally trained embeddings.
arXiv Detail & Related papers (2021-07-26T14:08:31Z)
- GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings [51.82434518719011]
GNNAutoScale (GAS) is a framework for scaling arbitrary message-passing GNNs to large graphs.
GAS prunes entire sub-trees of the computation graph by utilizing historical embeddings from prior training iterations.
GAS reaches state-of-the-art performance on large-scale graphs.
arXiv Detail & Related papers (2021-06-10T09:26:56Z)
- Scaling Graph Neural Networks with Approximate PageRank [64.92311737049054]
We present the PPRGo model which utilizes an efficient approximation of information diffusion in GNNs.
In addition to being faster, PPRGo is inherently scalable, and can be trivially parallelized for large datasets like those found in industry settings.
We show that training PPRGo and predicting labels for all nodes in a large graph takes under 2 minutes on a single machine, far outpacing other baselines on the same graph.
arXiv Detail & Related papers (2020-07-03T09:30:07Z)
- Fast Graph Attention Networks Using Effective Resistance Based Graph Sparsification [70.50751397870972]
FastGAT is a method to make attention-based GNNs lightweight by using spectral sparsification to generate an optimal pruning of the input graph.
We experimentally evaluate FastGAT on several large real world graph datasets for node classification tasks.
arXiv Detail & Related papers (2020-06-15T22:07:54Z)
- Graph Highway Networks [77.38665506495553]
Graph Convolution Networks (GCN) are widely used in learning graph representations due to their effectiveness and efficiency.
They suffer from the notorious over-smoothing problem, in which the learned representations converge to similar vectors when many layers are stacked.
We propose Graph Highway Networks (GHNet) which utilize gating units to balance the trade-off between homogeneity and heterogeneity in the GCN learning process.
arXiv Detail & Related papers (2020-04-09T16:26:43Z)