Fast Attributed Graph Embedding via Density of States
- URL: http://arxiv.org/abs/2110.05228v1
- Date: Mon, 11 Oct 2021 12:44:58 GMT
- Title: Fast Attributed Graph Embedding via Density of States
- Authors: Saurabh Sawlani, Lingxiao Zhao, Leman Akoglu
- Abstract summary: We propose A-DOGE, for Attributed DOS-based Graph Embedding.
A-DOGE is designed to fulfill a long list of desiderata.
Being based on the entire eigenspectrum of a graph, A-DOGE can capture structural and attribute properties at multiple "glocal" scales.
- Score: 17.360163137925994
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Given a node-attributed graph, how can we efficiently represent it with few
numerical features that expressively reflect its topology and attribute
information? We propose A-DOGE, for Attributed DOS-based Graph Embedding, based
on density of states (DOS, a.k.a. spectral density) to tackle this problem.
A-DOGE is designed to fulfill a long list of desiderata.
Most notably, it capitalizes on efficient approximation algorithms for DOS,
which we extend to blend in node labels and attributes for the first time,
making it fast and scalable for large attributed graphs and graph databases.
Being based on the entire eigenspectrum of a graph, A-DOGE can capture
structural and attribute properties at multiple ("glocal") scales. Moreover, it
is unsupervised (i.e. agnostic to any specific objective) and lends itself to
various interpretations, which makes it suitable for exploratory graph
mining tasks. Finally, it processes each graph independently of others, making it
amenable to streaming settings as well as parallelization. Through extensive
experiments, we show the efficacy and efficiency of A-DOGE on exploratory graph
analysis and graph classification tasks, where it significantly outperforms
unsupervised baselines and achieves competitive performance with modern
supervised GNNs, while offering the best trade-off between accuracy and
runtime.
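The abstract leaves the approximation machinery implicit. A standard way to estimate DOS at scale, and the style of algorithm the paper says it capitalizes on, is Chebyshev moment estimation (the kernel polynomial method) combined with stochastic trace probes. Below is a minimal sketch under that assumption: the function names are illustrative, the symmetrically normalized adjacency is used so that eigenvalues lie in [-1, 1], and the attribute-weighted variant only gestures at how node attributes can be blended into the spectrum; it is not A-DOGE's exact construction.

```python
import numpy as np
import scipy.sparse as sp

def normalized_adjacency(A):
    """Symmetrically normalized adjacency D^{-1/2} A D^{-1/2}.

    Its eigenvalues lie in [-1, 1], so Chebyshev polynomials can be
    applied without further rescaling.
    """
    deg = np.asarray(A.sum(axis=1)).ravel()
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    D = sp.diags(d_inv_sqrt)
    return D @ A @ D

def chebyshev_dos_moments(H, num_moments=50, num_probes=32, rng=None):
    """Estimate Chebyshev moments of the spectral density (DOS) of H.

    Kernel polynomial method with Hutchinson-style stochastic trace
    estimation: mu_m ~ E[z^T T_m(H) z] / n for Rademacher probes z.
    """
    rng = np.random.default_rng(rng)
    n = H.shape[0]
    mu = np.zeros(num_moments)
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        w_prev, w = z, H @ z                 # T_0(H) z and T_1(H) z
        mu[0] += z @ z
        if num_moments > 1:
            mu[1] += z @ w
        for m in range(2, num_moments):
            # Chebyshev three-term recurrence: T_{m+1} = 2 H T_m - T_{m-1}
            w_prev, w = w, 2.0 * (H @ w) - w_prev
            mu[m] += z @ w
    return mu / (num_probes * n)

def attribute_weighted_moments(H, x, num_moments=50):
    """Moments of the DOS weighted by a node-attribute vector x,
    i.e. mu_m = x^T T_m(H) x for normalized x: how the attribute's
    mass distributes over the eigenvalues. Assumes num_moments >= 2."""
    x = x / (np.linalg.norm(x) + 1e-12)
    w_prev, w = x, H @ x
    mu = np.empty(num_moments)
    mu[0], mu[1] = x @ x, x @ w
    for m in range(2, num_moments):
        w_prev, w = w, 2.0 * (H @ w) - w_prev
        mu[m] = x @ w
    return mu
```

In this spirit, the moment vectors, computed for the plain DOS and for each attribute vector, can be concatenated into a fixed-length graph embedding; in practice the raw moments are typically smoothed with Jackson damping to suppress Gibbs oscillations before use.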
Related papers
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z)
- TouchUp-G: Improving Feature Representation through Graph-Centric Finetuning [37.318961625795204]
Graph Neural Networks (GNNs) have become the state-of-the-art approach for many high-impact, real-world graph applications.
For feature-rich graphs, a prevalent practice involves utilizing a pretrained model (PM) directly to generate features.
This practice is suboptimal because the node features extracted from the PM are graph-agnostic and prevent GNNs from fully utilizing the potential correlations between the graph structure and node features.
arXiv Detail & Related papers (2023-09-25T05:44:40Z)
- Graph Mixture of Experts: Learning on Large-Scale Graphs with Explicit Diversity Modeling [60.0185734837814]
Graph neural networks (GNNs) have found extensive applications in learning from graph data.
To bolster the generalization capacity of GNNs, it has become customary to enrich the training graph structures with techniques such as graph augmentation.
This study introduces the concept of Mixture-of-Experts (MoE) to GNNs, with the aim of augmenting their capacity to adapt to a diverse range of training graph structures.
arXiv Detail & Related papers (2023-04-06T01:09:36Z)
- LSP: Acceleration and Regularization of Graph Neural Networks via Locality Sensitive Pruning of Graphs [2.4250821950628234]
Graph Neural Networks (GNNs) have emerged as highly successful tools for graph-related tasks.
Large graphs often contain many redundant components that can be removed without compromising performance.
We propose a systematic method called Locality-Sensitive Pruning (LSP) for graph pruning based on Locality-Sensitive Hashing; a generic sketch of this idea appears after this list.
arXiv Detail & Related papers (2021-11-10T14:12:28Z)
- Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT).
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z)
- Graph Networks with Spectral Message Passing [1.0742675209112622]
We introduce the Spectral Graph Network, which applies message passing to both the spatial and spectral domains.
Our results show that the Spectral GN promotes efficient training, reaching high performance with fewer training iterations despite having more parameters.
arXiv Detail & Related papers (2020-12-31T21:33:17Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
However, GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order Weisfeiler-Lehman tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of techniques for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- Pseudoinverse Graph Convolutional Networks: Fast Filters Tailored for Large Eigengaps of Dense Graphs and Hypergraphs [0.0]
Graph Convolutional Networks (GCNs) have proven to be successful tools for semi-supervised classification on graph-based datasets.
We propose a new GCN variant whose three-part filter space is targeted at dense graphs.
arXiv Detail & Related papers (2020-08-03T08:48:41Z)
- Graph Representation Learning Network via Adaptive Sampling [4.996520403438455]
Graph Attention Network (GAT) and GraphSAGE are neural network architectures that operate on graph-structured data.
One challenge raised by GraphSAGE is how to smartly combine neighbour features based on graph structure.
We propose a new architecture to address these issues that is more efficient and is capable of incorporating different edge type information.
arXiv Detail & Related papers (2020-06-08T14:36:20Z)
- Structural Temporal Graph Neural Networks for Anomaly Detection in Dynamic Graphs [54.13919050090926]
We propose an end-to-end structural temporal Graph Neural Network model for detecting anomalous edges in dynamic graphs.
In particular, we first extract the $h$-hop enclosing subgraph centered on the target edge and propose the node labeling function to identify the role of each node in the subgraph; a generic sketch of this extraction step appears after this list.
Based on the extracted features, we utilize Gated recurrent units (GRUs) to capture the temporal information for anomaly detection.
arXiv Detail & Related papers (2020-05-15T09:17:08Z)
- Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximation framework for such non-trivial ERGs that yields dyadic independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
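For the Locality-Sensitive Pruning (LSP) entry above, here is a generic illustration of LSH-based edge pruning. The random-hyperplane hash family, the pruning criterion (dropping edges whose endpoints receive dissimilar signatures), and all names are assumptions made for illustration; the paper's actual hash construction and pruning rule may differ.

```python
import numpy as np

def lsh_signatures(X, num_bits=16, rng=None):
    """Random-hyperplane LSH: hash each node's feature vector to a bit
    signature; similar vectors get similar signatures with high probability.
    (Illustrative hash family, not necessarily the paper's.)"""
    rng = np.random.default_rng(rng)
    planes = rng.standard_normal((X.shape[1], num_bits))
    return (X @ planes) > 0  # boolean matrix: n_nodes x num_bits

def prune_edges(edges, X, num_bits=16, max_hamming=4, rng=None):
    """Keep only edges whose endpoint signatures agree closely.

    One plausible variant: drop edges between locally dissimilar nodes;
    the direction of the criterion is itself an assumption here."""
    sig = lsh_signatures(X, num_bits, rng)
    kept = []
    for u, v in edges:
        # Hamming distance between the endpoints' bit signatures
        if np.count_nonzero(sig[u] != sig[v]) <= max_hamming:
            kept.append((u, v))
    return kept
```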
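And for the structural temporal GNN entry, a small sketch of the $h$-hop enclosing-subgraph extraction step it describes, using networkx. The distance-based node labeling shown is one common choice for identifying node roles relative to the target edge, not necessarily the paper's exact labeling function.

```python
import networkx as nx

def h_hop_enclosing_subgraph(G, edge, h=2):
    """Return the h-hop enclosing subgraph around a target edge (u, v):
    the subgraph induced on all nodes within h hops of u or v."""
    u, v = edge
    nodes = set()
    for root in (u, v):
        # all nodes within distance h of root, including root itself
        nodes.update(nx.single_source_shortest_path_length(G, root, cutoff=h))
    return G.subgraph(nodes).copy()

def label_by_distance(G_sub, edge):
    """Tag each node with its shortest-path distances to the two endpoints
    of the target edge (-1 if unreachable within the subgraph)."""
    u, v = edge
    du = nx.single_source_shortest_path_length(G_sub, u)
    dv = nx.single_source_shortest_path_length(G_sub, v)
    return {n: (du.get(n, -1), dv.get(n, -1)) for n in G_sub.nodes}
```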
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.