Tree++: Truncated Tree Based Graph Kernels
- URL: http://arxiv.org/abs/2002.09846v1
- Date: Sun, 23 Feb 2020 07:07:10 GMT
- Title: Tree++: Truncated Tree Based Graph Kernels
- Authors: Wei Ye, Zhen Wang, Rachel Redberg, Ambuj Singh
- Abstract summary: Graph kernels are often used to quantify graph similarity by decomposing graphs into substructures and comparing these substructures.
However, most existing graph kernels lack scale-adaptivity and cannot compare graphs at multiple levels of granularity. To tackle this problem, we propose a new graph kernel called Tree++ in this paper.
Tree++ achieves the best classification accuracy compared with previous graph kernels.
- Score: 5.819012768798548
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Graph-structured data arise ubiquitously in many application domains. A
fundamental problem is to quantify their similarities. Graph kernels are often
used for this purpose, which decompose graphs into substructures and compare
these substructures. However, most of the existing graph kernels do not have
the property of scale-adaptivity, i.e., they cannot compare graphs at multiple
levels of granularity. Many real-world graphs, such as molecules, exhibit
structure at varying levels of granularity. To tackle this problem, we
propose a new graph kernel called Tree++ in this paper. At the heart of Tree++
is a graph kernel called the path-pattern graph kernel. The path-pattern graph
kernel first builds a truncated BFS tree rooted at each vertex and then uses
paths from the root to every vertex in the truncated BFS tree as features to
represent graphs. The path-pattern graph kernel can only capture graph
similarity at fine granularities. In order to capture graph similarity at
coarse granularities, we incorporate a new concept called super path into it.
The super path contains truncated BFS trees rooted at the vertices in a path.
Our evaluation on a variety of real-world graphs demonstrates that Tree++
achieves the best classification accuracy compared with previous graph kernels.
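The following is a minimal sketch, not the authors' implementation, of the path-pattern feature extraction described in the abstract: a truncated (depth-limited) BFS tree is built at each vertex, and the root-to-vertex paths in that tree are counted as features. The helper names, the `depth` parameter, the `label` node attribute, and the linear (dot-product) comparison of feature histograms are illustrative assumptions rather than details taken from the paper.

```python
# Sketch of path-pattern features, assuming networkx graphs with an
# optional "label" node attribute. Not the authors' code.
from collections import Counter, deque

import networkx as nx


def truncated_bfs_tree(G, root, depth):
    """Parent pointers of a BFS tree rooted at `root`, truncated at `depth` hops."""
    parent = {root: None}
    queue = deque([(root, 0)])
    while queue:
        v, d = queue.popleft()
        if d == depth:
            continue
        for u in G.neighbors(v):
            if u not in parent:
                parent[u] = v
                queue.append((u, d + 1))
    return parent


def path_pattern_features(G, depth=2, label_key="label"):
    """Histogram of root-to-vertex label sequences over all truncated BFS trees of G."""
    features = Counter()
    for root in G.nodes():
        parent = truncated_bfs_tree(G, root, depth)
        for v in parent:
            # Walk back from v to the root inside the BFS tree to recover the path.
            path, cur = [], v
            while cur is not None:
                path.append(G.nodes[cur].get(label_key, cur))
                cur = parent[cur]
            features[tuple(reversed(path))] += 1
    return features


def path_pattern_kernel(G1, G2, depth=2):
    """Linear kernel (dot product) between the path-pattern histograms of two graphs."""
    f1 = path_pattern_features(G1, depth)
    f2 = path_pattern_features(G2, depth)
    return sum(f1[p] * f2[p] for p in f1.keys() & f2.keys())


if __name__ == "__main__":
    # Tiny example on two uniformly labeled graphs.
    G1, G2 = nx.path_graph(4), nx.cycle_graph(4)
    for G in (G1, G2):
        nx.set_node_attributes(G, "C", "label")
    print(path_pattern_kernel(G1, G2, depth=2))
```

The full Tree++ kernel extends these fine-granularity features with super paths, i.e., truncated BFS trees rooted at the vertices along each path, to capture similarity at coarse granularities; that step is omitted in this sketch.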
Related papers
- G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering [61.93058781222079]
We develop a flexible question-answering framework targeting real-world textual graphs.
We introduce the first retrieval-augmented generation (RAG) approach for general textual graphs.
G-Retriever performs RAG over a graph by formulating this task as a Prize-Collecting Steiner Tree optimization problem.
arXiv Detail & Related papers (2024-02-12T13:13:04Z)
- Finding the Missing-half: Graph Complementary Learning for Homophily-prone and Heterophily-prone Graphs [48.79929516665371]
Graphs with homophily-prone edges tend to connect nodes with the same class.
Heterophily-prone edges tend to build relationships between nodes with different classes.
Existing GNNs only take the original graph during training.
arXiv Detail & Related papers (2023-06-13T08:06:10Z)
- Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data [91.27527985415007]
Existing graph condensation methods rely on the joint optimization of nodes and structures in the condensed graph.
We advocate a new Structure-Free Graph Condensation paradigm, named SFGC, to distill a large-scale graph into a small-scale graph node set.
arXiv Detail & Related papers (2023-06-05T07:53:52Z)
- Graph Construction using Principal Axis Trees for Simple Graph Convolution [0.6554326244334866]
We introduce a graph construction scheme that constructs the adjacency matrix $A$ using unsupervised and supervised information.
We tested this graph construction scheme on two well-known Graph Neural Networks (GNNs)
arXiv Detail & Related papers (2023-02-22T12:02:23Z)
- Multi-scale Wasserstein Shortest-path Graph Kernels for Graph Classification [24.041871640927738]
We propose a novel graph kernel called the Multi-scale Wasserstein Shortest-Path graph kernel (MWSP)
We show that MWSP achieves state-of-the-art performance on most datasets.
arXiv Detail & Related papers (2022-06-02T10:50:46Z)
- Neural Trees for Learning on Graphs [19.05038106825347]
Graph Neural Networks (GNNs) have emerged as a flexible and powerful approach for learning over graphs.
We propose a new GNN architecture -- the Neural Tree.
We show that the neural tree architecture can approximate any smooth probability distribution function over an undirected graph.
arXiv Detail & Related papers (2021-05-15T17:08:20Z)
- Transport based Graph Kernels [30.541423115387097]
Graph kernels are a powerful tool for measuring the similarity between graphs.
Most existing graph kernels focus on node labels or attributes and ignore hierarchical graph structure information.
We propose a pyramid graph kernel based on optimal transport (OT).
We evaluate the proposed graph kernels on several benchmark classification tasks and compare their performance with the existing state-of-the-art graph kernels.
arXiv Detail & Related papers (2020-11-02T04:44:27Z)
- Dirichlet Graph Variational Autoencoder [65.94744123832338]
We present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors.
Motivated by the low pass characteristics in balanced graph cut, we propose a new variant of GNN named Heatts to encode the input graph into cluster memberships.
arXiv Detail & Related papers (2020-10-09T07:35:26Z)
- Multilevel Graph Matching Networks for Deep Graph Similarity Learning [79.3213351477689]
We propose a multi-level graph matching network (MGMN) framework for computing the graph similarity between any pair of graph-structured objects.
To compensate for the lack of standard benchmark datasets, we have created and collected a set of datasets for both the graph-graph classification and graph-graph regression tasks.
Comprehensive experiments demonstrate that MGMN consistently outperforms state-of-the-art baseline models on both the graph-graph classification and graph-graph regression tasks.
arXiv Detail & Related papers (2020-07-08T19:48:19Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- Learning Deep Graph Representations via Convolutional Neural Networks [7.1945109570193795]
We propose a framework called DeepMap to learn deep representations for graph feature maps.
The learned deep representation for a graph is a dense and low-dimensional vector that captures complex high-order interactions.
We empirically validate DeepMap on various graph classification benchmarks and demonstrate that it achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-04-05T08:49:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences arising from its use.