Learning Deep Graph Representations via Convolutional Neural Networks
- URL: http://arxiv.org/abs/2004.02131v2
- Date: Mon, 24 Jan 2022 14:46:07 GMT
- Title: Learning Deep Graph Representations via Convolutional Neural Networks
- Authors: Wei Ye, Omid Askarisichani, Alex Jones, Ambuj Singh
- Abstract summary: We propose a framework called DeepMap to learn deep representations for graph feature maps.
The learned deep representation for a graph is a dense and low-dimensional vector that captures complex high-order interactions.
We empirically validate DeepMap on various graph classification benchmarks and demonstrate that it achieves state-of-the-art performance.
- Score: 7.1945109570193795
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Graph-structured data arise in many scenarios. A fundamental problem is to
quantify the similarities of graphs for tasks such as classification.
R-convolution graph kernels are positive-semidefinite functions that decompose
graphs into substructures and compare them. One problem in the effective
implementation of this idea is that the substructures are not independent,
which leads to a high-dimensional feature space. In addition, graph kernels
cannot capture the high-order complex interactions between vertices. To
mitigate these two problems, we propose a framework called DeepMap to learn
deep representations for graph feature maps. The learned deep representation
for a graph is a dense and low-dimensional vector that captures complex
high-order interactions in a vertex neighborhood. DeepMap extends Convolutional
Neural Networks (CNNs) to arbitrary graphs by generating aligned vertex
sequences and building the receptive field for each vertex. We empirically
validate DeepMap on various graph classification benchmarks and demonstrate
that it achieves state-of-the-art performance.
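The abstract sketches DeepMap's recipe (align vertices into a sequence, build a receptive field per vertex, then convolve) without giving code. Below is a minimal NumPy sketch of that general recipe, assuming a degree-based alignment and 1-hop receptive fields; the function names and these specific design choices are illustrative assumptions, not the authors' implementation, which uses a learned CNN and a more careful alignment.

```python
import numpy as np

def aligned_vertex_sequence(adj):
    """Order vertices by degree (a simple, graph-invariant alignment)."""
    return np.argsort(-adj.sum(axis=1), kind="stable")

def receptive_field(adj, v, k):
    """Up to k neighbors of v ordered by degree, padded with v itself."""
    nbrs = np.flatnonzero(adj[v])
    nbrs = nbrs[np.argsort(-adj[nbrs].sum(axis=1), kind="stable")][:k]
    pad = np.full(max(0, k - len(nbrs)), v)
    return np.concatenate([[v], nbrs, pad]).astype(int)

def conv_over_graph(adj, feats, weights):
    """Slide one shared filter (shape (k+1, d)) over every receptive field."""
    k = weights.shape[0] - 1
    return np.array([np.tanh(np.sum(weights * feats[receptive_field(adj, v, k)]))
                     for v in aligned_vertex_sequence(adj)])

# Toy 4-cycle with 2-dimensional vertex features.
adj = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
feats = np.random.default_rng(0).normal(size=(4, 2))
w = np.random.default_rng(1).normal(size=(3, 2))
print(conv_over_graph(adj, feats, w))  # one activation per aligned vertex
```

Because every vertex is mapped to a fixed-length, consistently ordered field, the same filter weights apply across graphs of different sizes, which is the property that lets a CNN run on arbitrary graph-structured input.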
Related papers
- LSEnet: Lorentz Structural Entropy Neural Network for Deep Graph Clustering [59.89626219328127]
Graph clustering is a fundamental problem in machine learning.
Deep learning methods have achieved state-of-the-art results in recent years, but they still cannot work without a predefined number of clusters.
We propose to address this problem from a fresh perspective of graph information theory.
arXiv Detail & Related papers (2024-05-20T05:46:41Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
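As a rough illustration of the mechanism named in the NodeFormer summary, the sketch below samples soft all-pair edges with the Gumbel-Softmax trick and aggregates messages through them. It uses the dense O(N^2) form for clarity; the paper's kernelized operator exists precisely to avoid this cost, and all names here are assumptions made for illustration.

```python
import numpy as np

def gumbel_softmax_messages(scores, values, tau=0.5, rng=None):
    """scores: (N, N) pairwise logits; values: (N, d) node signals."""
    rng = rng if rng is not None else np.random.default_rng()
    u = rng.uniform(low=1e-12, high=1.0, size=scores.shape)
    g = -np.log(-np.log(u))                       # Gumbel(0, 1) noise
    logits = (scores + g) / tau
    logits -= logits.max(axis=1, keepdims=True)   # numerically stable softmax
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)             # soft one-hot row per node
    return w @ values                             # aggregate messages

N, d = 5, 4
rng = np.random.default_rng(0)
x = rng.normal(size=(N, d))
scores = x @ x.T                                  # similarity logits
print(gumbel_softmax_messages(scores, x, rng=rng).shape)  # (5, 4)
```

The temperature tau controls how close the sampled rows are to hard one-hot edge choices while keeping everything differentiable.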
- FoSR: First-order spectral rewiring for addressing oversquashing in GNNs [0.0]
Graph neural networks (GNNs) are able to leverage the structure of graph data by passing messages along the edges of the graph.
We propose a computationally efficient algorithm that prevents oversquashing by systematically adding edges to the graph.
We find experimentally that our algorithm outperforms existing graph rewiring methods in several graph classification tasks.
arXiv Detail & Related papers (2022-10-21T07:58:03Z)
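The FoSR summary mentions systematically adding edges to relieve oversquashing. Below is a simplified greedy variant in that spirit, assuming a Fiedler-vector heuristic rather than the paper's exact first-order spectral criterion: it repeatedly bridges the pair of unconnected vertices farthest apart in the Fiedler vector, which tends to widen the spectral gap at the bottleneck.

```python
import numpy as np

def add_edges_spectral(adj, n_new):
    """Greedily add n_new edges between Fiedler-distant vertex pairs."""
    adj = adj.copy()
    n = len(adj)
    for _ in range(n_new):
        lap = np.diag(adj.sum(axis=1)) - adj           # graph Laplacian
        _, vecs = np.linalg.eigh(lap)
        fiedler = vecs[:, 1]                           # eigenvector of lambda_2
        gap = np.abs(fiedler[:, None] - fiedler[None, :])
        gap[(adj > 0) | np.eye(n, dtype=bool)] = -1.0  # skip existing edges
        u, v = np.unravel_index(np.argmax(gap), gap.shape)
        adj[u, v] = adj[v, u] = 1.0
    return adj

# Path graph 0-1-2-3: the long bottleneck gets bridged first (edge 0-3).
path = np.diag(np.ones(3), 1)
path = path + path.T
print(add_edges_spectral(path, 1))
```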
- Boosting Graph Structure Learning with Dummy Nodes [41.83708114701956]
We extend graph kernels and graph neural networks with dummy nodes and conduct experiments on graph classification and subgraph isomorphism matching tasks.
We prove that such a dummy node can help build an efficient monomorphic edge-to-vertex transform and an epimorphic inverse to recover the original graph.
arXiv Detail & Related papers (2022-06-17T05:44:24Z)
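To make the dummy-node entry concrete, here is a minimal sketch of the edge-to-vertex (line-graph) transform it refers to, with a dummy vertex attached to every original vertex so that no vertex is lost in the transform. The construction and helper names are illustrative; the paper proves much stronger monomorphism and epimorphism guarantees than this toy code shows.

```python
import itertools
import numpy as np

def with_dummy(edges, n):
    """Append a dummy vertex n connected to every original vertex."""
    return edges + [(u, n) for u in range(n)]

def line_graph(edges):
    """Edge-to-vertex transform: vertices are edges, adjacent iff they share an endpoint."""
    m = len(edges)
    adj = np.zeros((m, m), dtype=int)
    for i, j in itertools.combinations(range(m), 2):
        if set(edges[i]) & set(edges[j]):
            adj[i, j] = adj[j, i] = 1
    return adj

edges = [(0, 1), (1, 2)]                # path on 3 vertices
lg = line_graph(with_dummy(edges, 3))   # 2 original + 3 dummy edges
print(lg.shape, lg.sum() // 2)          # (5, 5) vertices and edge count
```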
- Graph Kernel Neural Networks [53.91024360329517]
We propose to use graph kernels, i.e. kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability.
arXiv Detail & Related papers (2021-12-14T14:48:08Z)
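The Graph Kernel Neural Networks entry casts kernels as convolution filters. Here is a hedged sketch of that idea, assuming a simple degree-histogram kernel (an inner product of explicit feature maps, hence positive semidefinite) and a bank of small filter graphs; the paper's actual architecture and kernels differ.

```python
import numpy as np

def degree_hist(adj, max_deg=5):
    """Explicit feature map: a histogram of vertex degrees."""
    h = np.bincount(adj.sum(axis=1).astype(int), minlength=max_deg + 1)
    return h[: max_deg + 1]

def ego_subgraph(adj, v):
    """The subgraph induced by v and its 1-hop neighbors."""
    keep = np.flatnonzero(adj[v]).tolist() + [v]
    return adj[np.ix_(keep, keep)]

def kernel_conv(adj, filters):
    """One output channel per filter graph: k(ego(v), filter) for each v."""
    return np.array([[degree_hist(ego_subgraph(adj, v)) @ degree_hist(f)
                      for f in filters] for v in range(len(adj))])

tri = np.ones((3, 3), int) - np.eye(3, dtype=int)         # triangle filter
edge = np.array([[0, 1], [1, 0]])                         # single-edge filter
adj = np.array([[0,1,1,0], [1,0,1,0], [1,1,0,1], [0,0,1,0]])
print(kernel_conv(adj, [tri, edge]))                      # shape (4, 2)
```

Because the kernel compares subgraphs directly, the "convolution" is entirely structural: no vertex embedding of the input graph is ever computed, which matches the claim in the entry above.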
- Edge but not Least: Cross-View Graph Pooling [76.71497833616024]
This paper presents a cross-view graph pooling (Co-Pooling) method to better exploit crucial graph structure information.
Through cross-view interaction, edge-view pooling and node-view pooling seamlessly reinforce each other to learn more informative graph-level representations.
arXiv Detail & Related papers (2021-09-24T08:01:23Z)
- GraphSVX: Shapley Value Explanations for Graph Neural Networks [81.83769974301995]
Graph Neural Networks (GNNs) achieve strong performance on various learning tasks over geometric data.
In this paper, we propose a unified framework satisfied by most existing GNN explainers.
We introduce GraphSVX, a post hoc local model-agnostic explanation method specifically designed for GNNs.
arXiv Detail & Related papers (2021-04-18T10:40:37Z)
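GraphSVX assigns Shapley values to parts of the input. As a generic illustration of the underlying game-theoretic idea, the sketch below estimates node-level Shapley values by permutation sampling against a black-box scoring function; GraphSVX itself fits a surrogate model on sampled coalitions, so treat this as the concept, not the method.

```python
import numpy as np

def shapley_node_importance(model, n_nodes, n_samples=200, rng=None):
    """Permutation-sampling estimate of each node's Shapley value."""
    rng = rng if rng is not None else np.random.default_rng(0)
    phi = np.zeros(n_nodes)
    for _ in range(n_samples):
        mask = np.zeros(n_nodes)
        prev = model(mask)
        for v in rng.permutation(n_nodes):  # marginal contribution of v
            mask[v] = 1
            cur = model(mask)
            phi[v] += cur - prev
            prev = cur
    return phi / n_samples

# Toy "model": the prediction fires only when nodes 0 AND 1 are both kept,
# so each of them earns about half the credit and the rest earn none.
toy = lambda mask: float(mask[0] * mask[1])
print(shapley_node_importance(toy, 4))  # approx. [0.5, 0.5, 0.0, 0.0]
```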
- Pyramidal Reservoir Graph Neural Network [18.632681846787246]
We propose a deep Graph Neural Network (GNN) model that alternates two types of layers.
We show how graph pooling can reduce the computational complexity of the model.
Our proposed approach to the design of RC-based GNNs offers an advantageous and principled trade-off between accuracy and complexity.
arXiv Detail & Related papers (2021-04-10T08:34:09Z)
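The Pyramidal Reservoir entry alternates two layer types. The sketch below shows the reservoir-computing ingredient (a graph recurrence with fixed, untrained random weights) alternated with a toy degree-based pooling rule; both the pooling rule and the layer sizes are assumptions made for illustration, not the paper's design.

```python
import numpy as np

def reservoir_layer(adj, x, w):
    """Untrained graph recurrence: neighbor states through fixed random weights."""
    return np.tanh(adj @ x @ w)

def pool_half(adj, x):
    """Toy pooling rule: keep the higher-degree half of the vertices."""
    keep = np.argsort(-adj.sum(axis=1))[: max(1, len(adj) // 2)]
    return adj[np.ix_(keep, keep)], x[keep]

rng = np.random.default_rng(0)
adj = (rng.uniform(size=(8, 8)) < 0.4).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T                         # random undirected graph
x = rng.normal(size=(8, 3))
for _ in range(2):                        # alternate reservoir and pooling
    x = reservoir_layer(adj, x, 0.5 * rng.normal(size=(3, 3)))
    adj, x = pool_half(adj, x)
print(x.mean(axis=0))                     # mean readout as the graph vector
```

Since the reservoir weights are never trained, only a lightweight readout needs fitting, which is the source of the accuracy/complexity trade-off the entry describes; pooling shrinks the graph between layers and further cuts cost.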
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- Unsupervised Graph Representation by Periphery and Hierarchical Information Maximization [18.7475578342125]
The advent of graph neural networks has improved the state of the art for both node-level and whole-graph representations in a vector space.
For whole-graph representations, most existing graph neural networks are trained on a graph classification loss in a supervised way.
In this paper, we propose an unsupervised graph neural network that generates a vector representation of an entire graph.
arXiv Detail & Related papers (2020-06-08T15:50:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.