A Hierarchical Transitive-Aligned Graph Kernel for Un-attributed Graphs
- URL: http://arxiv.org/abs/2002.04425v1
- Date: Sat, 8 Feb 2020 11:46:25 GMT
- Title: A Hierarchical Transitive-Aligned Graph Kernel for Un-attributed Graphs
- Authors: Lu Bai, Lixin Cui, Edwin R. Hancock
- Abstract summary: We develop a new graph kernel, namely the Hierarchical Transitive-Aligned kernel, by transitively aligning the vertices between graphs.
The proposed kernel can outperform state-of-the-art graph kernels on standard graph-based datasets in terms of classification accuracy.
- Score: 11.51839867040302
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we develop a new graph kernel, namely the Hierarchical
Transitive-Aligned kernel, by transitively aligning the vertices between graphs
through a family of hierarchical prototype graphs. Compared to most existing
state-of-the-art graph kernels, the proposed kernel has three theoretical
advantages. First, it incorporates the locational correspondence information
between graphs into the kernel computation, and thus overcomes the shortcoming
of ignoring structural correspondences that arises in most R-convolution kernels.
Second, it guarantees transitivity between the correspondence information, which
is not available in most existing matching kernels. Third, it incorporates the
information of all graphs under comparison into the kernel computation process,
and thus encapsulates richer characteristics. Experimental evaluations with a
transductively trained C-SVM classifier demonstrate the effectiveness of the new
transitive-aligned kernel: it outperforms state-of-the-art graph kernels on
standard graph-based datasets in terms of classification accuracy.
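The core alignment idea in the abstract (transitivity obtained by aligning every graph's vertices to a common prototype graph) can be sketched as follows. The degree-based vertex features, the nearest-neighbour assignment, and the kernel that counts co-aligned vertex pairs are illustrative assumptions for this sketch, not the paper's actual hierarchical construction.

```python
# Minimal sketch of transitive vertex alignment through a shared prototype.
# Illustrative only; the paper's hierarchical family of prototype graphs
# and its kernel definition are more involved.
import numpy as np

def vertex_features(adj):
    """Vertex degree as a stand-in signature for each vertex
    (the paper uses richer vertex characteristics)."""
    return adj.sum(axis=1)

def align_to_prototype(feats, proto_feats):
    """Assign each vertex to its nearest prototype vertex."""
    # Distance from every vertex feature to every prototype feature.
    d = np.abs(feats[:, None] - proto_feats[None, :])
    return d.argmin(axis=1)

def transitive_aligned_kernel(adj_a, adj_b, proto_feats):
    """Count vertex pairs of the two graphs that map to the same
    prototype vertex. Because every graph aligns to the same prototype,
    the induced correspondences are transitive across any set of graphs."""
    map_a = align_to_prototype(vertex_features(adj_a), proto_feats)
    map_b = align_to_prototype(vertex_features(adj_b), proto_feats)
    return sum(int(pa == pb) for pa in map_a for pb in map_b)

# Two small un-attributed graphs (adjacency matrices) and a 2-vertex prototype.
g1 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])  # star on 3 vertices
g2 = np.array([[0, 1], [1, 0]])                   # single edge
proto = np.array([1.0, 2.0])                      # hypothetical prototype features
print(transitive_aligned_kernel(g1, g2, proto))   # → 4
```

Because both graphs are matched against the same prototype, a correspondence between graphs G1 and G2 and one between G2 and G3 compose into a consistent correspondence between G1 and G3, which is the transitivity property the abstract highlights.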
Related papers
- Deep Hierarchical Graph Alignment Kernels [16.574634620245487]
We introduce Deep Hierarchical Graph Alignment Kernels (DHGAK) to resolve this problem.
Specifically, the relational substructures are hierarchically aligned to cluster distributions in their deep embedding space.
DHGAK is positive semi-definite and has linear separability in the Reproducing Kernel Hilbert Space.
arXiv Detail & Related papers (2024-05-09T05:08:30Z) - Neural Tangent Kernels Motivate Graph Neural Networks with Cross-Covariance Graphs [94.44374472696272]
We investigate NTKs and alignment in the context of graph neural networks (GNNs).
Our results establish theoretical guarantees on the optimality of the alignment for a two-layer GNN.
These guarantees are characterized by the graph shift operator being a function of the cross-covariance between the input and the output data.
arXiv Detail & Related papers (2023-10-16T19:54:21Z) - Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data [91.27527985415007]
Existing graph condensation methods rely on the joint optimization of nodes and structures in the condensed graph.
We advocate a new Structure-Free Graph Condensation paradigm, named SFGC, to distill a large-scale graph into a small-scale graph node set.
arXiv Detail & Related papers (2023-06-05T07:53:52Z) - AERK: Aligned Entropic Reproducing Kernels through Continuous-time Quantum Walks [17.95088104970343]
We develop an Aligned Entropic Reproducing Kernel (AERK) for graph classification.
For a pair of graphs, the proposed AERK kernel is defined by computing a reproducing-kernel-based similarity between the quantum Shannon entropies of each pair of their aligned vertices.
The experimental evaluation on standard graph datasets demonstrates that the proposed AERK kernel is able to outperform state-of-the-art graph kernels for graph classification tasks.
arXiv Detail & Related papers (2023-03-04T16:48:39Z) - Transductive Kernels for Gaussian Processes on Graphs [7.542220697870243]
We present a novel kernel for graphs with node feature data for semi-supervised learning.
The kernel is derived from a regularization framework by treating the graph and feature data as two spaces.
We show how numerous kernel-based models on graphs are instances of our design.
arXiv Detail & Related papers (2022-11-28T14:00:50Z) - Graph Kernel Neural Networks [53.91024360329517]
We propose to use graph kernels, i.e. kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability.
arXiv Detail & Related papers (2021-12-14T14:48:08Z) - Transport based Graph Kernels [30.541423115387097]
Graph kernels are a powerful tool for measuring the similarity between graphs.
Most existing graph kernels focus on node labels or attributes and ignore hierarchical graph structure information.
We propose a pyramid graph kernel based on optimal transport (OT).
We evaluate the proposed graph kernels on several benchmark classification tasks and compare their performance with the existing state-of-the-art graph kernels.
arXiv Detail & Related papers (2020-11-02T04:44:27Z) - Neighborhood Preserving Kernels for Attributed Graphs [0.9176056742068812]
We describe the design of a reproducing kernel suitable for attributed graphs.
The similarity between the two graphs is defined based on the neighborhood information of the graph nodes.
By incorporating the proposed kernel into support vector machines, we analyze real-world datasets.
arXiv Detail & Related papers (2020-10-13T09:58:50Z) - Dirichlet Graph Variational Autoencoder [65.94744123832338]
We present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors.
Motivated by the low-pass characteristics of the balanced graph cut, we propose a new variant of GNN named Heatts to encode the input graph into cluster memberships.
arXiv Detail & Related papers (2020-10-09T07:35:26Z) - Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z) - Graph Neural Networks with Composite Kernels [60.81504431653264]
We re-interpret node aggregation from the perspective of kernel weighting.
We present a framework to consider feature similarity in an aggregation scheme.
We propose feature aggregation as the composition of the original neighbor-based kernel and a learnable kernel to encode feature similarities in a feature space.
arXiv Detail & Related papers (2020-05-16T04:44:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.