Learning subtree pattern importance for Weisfeiler-Lehman based graph kernels
- URL: http://arxiv.org/abs/2106.04739v1
- Date: Tue, 8 Jun 2021 23:47:44 GMT
- Title: Learning subtree pattern importance for Weisfeiler-Lehman based graph kernels
- Authors: Dai Hai Nguyen, Canh Hao Nguyen and Hiroshi Mamitsuka
- Abstract summary: We propose a method to learn the weights of subtree patterns in the framework of WWL kernels.
We present an efficient learning algorithm and also derive a generalization gap bound to show its convergence.
- Score: 15.139294083028782
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graphs are a common representation of relational data, which are ubiquitous in many domains such as molecules and biological and social networks. A popular approach to learning with graph-structured data is to use graph kernels, which measure the similarity between graphs and are plugged into a kernel machine such as a support vector machine. Weisfeiler-Lehman (WL) based graph kernels, which employ the WL labeling scheme to extract subtree patterns and perform node embedding, are demonstrated to achieve great performance while being efficiently computable. However, one of the main drawbacks of a general kernel is the decoupling of kernel construction and the learning process. For molecular graphs, usual kernels such as the WL subtree kernel, which are based on substructures of the molecules, treat all available substructures as equally important, which might not be suitable in practice. In this paper, we propose a method to learn the weights of subtree patterns in the framework of Wasserstein Weisfeiler-Lehman (WWL) kernels, the state-of-the-art method for the graph classification task [14]. To overcome the computational issue on large-scale data sets, we present an efficient learning algorithm and also derive a generalization gap bound to show its convergence. Finally, through experiments on synthetic and real-world data sets, we demonstrate the effectiveness of our proposed method for learning the weights of subtree patterns.
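To make the kernel concrete, here is a minimal Python sketch of the WL relabeling step that extracts subtree patterns, plus a weighted linear kernel over pattern counts: uniform weights recover the plain WL subtree kernel, while learning non-uniform weights is the idea described in the abstract. The function names and toy graphs are illustrative, and this is not the authors' WWL-based algorithm (WWL compares node embeddings with a Wasserstein distance rather than taking a plain inner product).

```python
# Minimal sketch: WL relabeling to extract subtree patterns, plus a
# hypothetical per-pattern weighting (the quantity the paper learns).
from collections import Counter

def wl_subtree_features(adj, labels, n_iter=2):
    """Count subtree patterns (compressed WL labels) over n_iter iterations.

    adj:    dict mapping node -> list of neighbor nodes
    labels: dict mapping node -> initial discrete label
    """
    features = Counter(labels.values())      # iteration-0 patterns
    current = dict(labels)
    for _ in range(n_iter):
        new = {}
        for v, nbrs in adj.items():
            # A subtree pattern = own label + sorted multiset of neighbor labels.
            new[v] = (current[v], tuple(sorted(current[u] for u in nbrs)))
        current = new
        features.update(current.values())
    return features

def weighted_kernel(f1, f2, w):
    """Weighted linear kernel over pattern counts; w[p] = 1 for all p
    gives the standard (unweighted) WL subtree kernel."""
    return sum(w.get(p, 1.0) * f1[p] * f2[p] for p in f1.keys() & f2.keys())

# Toy example: a 3-node path versus a triangle, all nodes labeled 'C'.
path = {0: [1], 1: [0, 2], 2: [1]}
tri = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
lab = {0: 'C', 1: 'C', 2: 'C'}
print(weighted_kernel(wl_subtree_features(path, lab),
                      wl_subtree_features(tri, lab), w={}))
```

The resulting Gram matrix can then be handed to a kernel machine, for instance scikit-learn's SVC(kernel='precomputed').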
Related papers
- Exploring Consistency in Graph Representations: from Graph Kernels to Graph Neural Networks [4.235378870514331]
Graph Neural Networks (GNNs) have emerged as a dominant approach in graph representation learning.
We bridge the gap between neural network methods and kernel approaches by enabling GNNs to consistently capture structures in their learned representations.
Inspired by these findings, we conjecture that the consistency in the similarities of graph representations across GNN layers is crucial in capturing relational structures and enhancing graph classification performance.
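As a hedged illustration only (not the paper's exact criterion), one way to quantify such cross-layer consistency is to compare the pairwise-similarity (Gram) matrices induced by the graph representations of two layers; gram_alignment below is a hypothetical helper, essentially an uncentered CKA-style score.

```python
# Hypothetical sketch: how consistent are the pairwise similarities of graph
# representations between two GNN layers? Compare their Gram matrices.
import numpy as np

def gram_alignment(H1, H2):
    """Cosine similarity between Gram matrices of two embedding sets.

    H1, H2: arrays of shape (n_graphs, dim), one row per graph.
    """
    G1, G2 = H1 @ H1.T, H2 @ H2.T
    return float((G1 * G2).sum() / (np.linalg.norm(G1) * np.linalg.norm(G2)))

rng = np.random.default_rng(0)
H_layer1 = rng.normal(size=(5, 16))              # layer-1 graph embeddings
H_layer2 = H_layer1 @ rng.normal(size=(16, 16))  # layer-2 graph embeddings
print(gram_alignment(H_layer1, H_layer2))
```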
arXiv Detail & Related papers (2024-10-31T09:07:08Z)
- Bures-Wasserstein Means of Graphs [60.42414991820453]
We propose a novel framework for defining a graph mean via embeddings in the space of smooth graph signal distributions.
By finding a mean in this embedding space, we can recover a mean graph that preserves structural information.
We establish the existence and uniqueness of the novel graph mean, and provide an iterative algorithm for computing it.
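For reference, the Bures-Wasserstein distance named in the title is the 2-Wasserstein distance between centered Gaussian distributions, computable from their covariance matrices. The sketch below shows only this metric; how graphs are embedded as such distributions and how the mean is found are not reproduced here.

```python
# Sketch of the Bures-Wasserstein distance between two PSD (covariance)
# matrices: d(A,B)^2 = tr(A) + tr(B) - 2 tr((A^{1/2} B A^{1/2})^{1/2}).
import numpy as np
from scipy.linalg import sqrtm

def bures_wasserstein(A, B):
    rootA = sqrtm(A)
    cross = sqrtm(rootA @ B @ rootA)
    d2 = np.trace(A) + np.trace(B) - 2.0 * np.trace(cross)
    return np.sqrt(max(np.real(d2), 0.0))  # clip tiny negative round-off

A = np.array([[2.0, 0.3], [0.3, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 1.5]])
print(bures_wasserstein(A, B))
```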
arXiv Detail & Related papers (2023-05-31T11:04:53Z)
- Labeled Subgraph Entropy Kernel [11.812319306576816]
We propose a novel labeled subgraph entropy graph kernel, which performs well in structural similarity assessment.
We design a dynamic programming subgraph enumeration algorithm, which effectively reduces the time complexity.
To test our method, we evaluate it on several real-world datasets and assess its performance on different tasks.
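As a loose illustration only: a natural ingredient of an entropy-based subgraph score is the Shannon entropy of the label distribution inside a subgraph. The paper's actual kernel and its dynamic-programming enumeration are not reproduced; label_entropy is a hypothetical helper.

```python
# Hypothetical building block: Shannon entropy (in bits) of the multiset of
# node labels found in a subgraph. Not the paper's kernel itself.
import math
from collections import Counter

def label_entropy(labels):
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(label_entropy(['C', 'C', 'O', 'N']))  # 1.5 bits
```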
arXiv Detail & Related papers (2023-03-21T12:27:21Z)
- Graph Kernel Neural Networks [53.91024360329517]
We propose to use graph kernels, i.e. kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability.
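A hedged sketch of the kernel-layer idea: describe each input graph by its kernel evaluations against a set of template graphs, giving a structural feature vector without computing node embeddings. The degree-histogram kernel and fixed templates below are stand-ins; the paper's architecture, with learnable structures, differs.

```python
# Sketch: a "kernel layer" maps graph G to [k(G, T_1), ..., k(G, T_m)].
import numpy as np

def kernel_layer(k, graphs, templates):
    return np.array([[k(g, t) for t in templates] for g in graphs])

# Any graph kernel can be plugged in; here, a toy kernel on degree histograms
# (each graph is represented simply by its degree sequence).
def degree_kernel(g1, g2):
    h1 = np.bincount(g1, minlength=8)
    h2 = np.bincount(g2, minlength=8)
    return float(h1 @ h2)

graphs = [np.array([1, 2, 1]), np.array([2, 2, 2])]   # degree sequences
templates = [np.array([1, 1]), np.array([2, 2, 2])]   # "filter" graphs
print(kernel_layer(degree_kernel, graphs, templates))
```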
arXiv Detail & Related papers (2021-12-14T14:48:08Z)
- Effective and Efficient Graph Learning for Multi-view Clustering [173.8313827799077]
We propose an effective and efficient graph learning model for multi-view clustering.
Our method exploits the similarity between graphs of different views by minimizing the tensor Schatten p-norm.
Our proposed algorithm is time-efficient, obtains stable results, and scales well with the data size.
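For context, the matrix Schatten p-norm underlying the tensor regularizer mentioned above is the l_p norm of the singular values; a minimal sketch of this building block (not the paper's multi-view objective) follows.

```python
# Schatten p-norm of a matrix: (sum_i sigma_i^p)^(1/p).
# p = 1 gives the nuclear norm; p = 2 gives the Frobenius norm.
import numpy as np

def schatten_p_norm(A, p):
    sigma = np.linalg.svd(A, compute_uv=False)
    return float(np.sum(sigma ** p) ** (1.0 / p))

A = np.array([[3.0, 0.0], [0.0, 4.0]])
print(schatten_p_norm(A, 1.0))  # 7.0 (nuclear norm)
print(schatten_p_norm(A, 2.0))  # 5.0 (Frobenius norm)
```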
arXiv Detail & Related papers (2021-08-15T13:14:28Z)
- Representation Learning of Reconstructed Graphs Using Random Walk Graph Convolutional Network [12.008472517000651]
We propose wGCN -- a novel framework that utilizes random walks to obtain the node-specific mesoscopic structures of the graph.
We believe that combining such high-order local structural information can more effectively exploit the potential of the network.
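A minimal sketch of how random walks expose multi-hop (mesoscopic) structure: average the 1..k step transition matrices of a walk and use the result to aggregate features beyond immediate neighbors. This is a generic construction, assuming every node has at least one neighbor; wGCN's exact weighting is not reproduced.

```python
# Sketch: k-step random-walk proximity matrix for multi-hop aggregation.
import numpy as np

def rw_proximity(adj, k):
    """Average of the 1..k step transition matrices of a random walk."""
    P = adj / adj.sum(axis=1, keepdims=True)  # row-stochastic transitions
    out, Pt = np.zeros_like(P), np.eye(len(adj))
    for _ in range(k):
        Pt = Pt @ P
        out += Pt
    return out / k

adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
prox = rw_proximity(adj, k=3)
# Node features H can then be mixed over multi-hop neighborhoods: H' = prox @ H
print(prox.round(2))
```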
arXiv Detail & Related papers (2021-01-02T10:31:14Z)
- Structured Graph Learning for Clustering and Semi-supervised Classification [74.35376212789132]
We propose a graph learning framework to preserve both the local and global structure of data.
Our method uses the self-expressiveness of samples to capture the global structure and an adaptive-neighbor approach to respect the local structure.
Our model is equivalent to a combination of kernel k-means and k-means methods under certain conditions.
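A minimal sketch of the self-expressiveness ingredient, under the simplifying assumption of a plain ridge penalty (the usual zero-diagonal constraint and the adaptive-neighbor local term are omitted): each sample is reconstructed from all samples, and large off-diagonal coefficients reveal global structure.

```python
# Self-expressiveness sketch: min_C ||X - X C||_F^2 + lam ||C||_F^2,
# whose closed-form solution is C = (X^T X + lam I)^{-1} X^T X.
import numpy as np

def self_expressive_coeffs(X, lam=1e-2):
    """X holds one sample per column; returns the n-by-n coefficient matrix."""
    G = X.T @ X                                # sample Gram matrix
    return np.linalg.solve(G + lam * np.eye(G.shape[0]), G)

X = np.array([[1.0, 1.1, 5.0],
              [0.9, 1.0, 5.2]])            # two near-duplicates + one outlier
print(self_expressive_coeffs(X).round(2))  # similar samples get large mutual weights
```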
arXiv Detail & Related papers (2020-08-31T08:41:20Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
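For orientation, score-based top-k pooling is the generic operation such strategies build on; in the sketch below the paper's node-proximity scores are replaced by an arbitrary score vector.

```python
# Generic top-k graph pooling: keep the k highest-scoring nodes and the
# subgraph they induce. The scoring function is the method-specific part.
import numpy as np

def topk_pool(adj, H, scores, k):
    idx = np.sort(np.argsort(scores)[-k:])    # indices of the k best nodes
    return adj[np.ix_(idx, idx)], H[idx]

adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
H = np.eye(3)                                 # toy node features
scores = np.array([0.9, 0.2, 0.7])            # e.g., from multi-hop proximity
pooled_adj, pooled_H = topk_pool(adj, H, scores, k=2)
print(pooled_adj, pooled_H, sep="\n")
```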
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- Graph Neural Networks with Composite Kernels [60.81504431653264]
We re-interpret node aggregation from the perspective of kernel weighting.
We present a framework to consider feature similarity in an aggregation scheme.
We propose feature aggregation as the composition of the original neighbor-based kernel and a learnable kernel to encode feature similarities in a feature space.
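A hedged sketch of composite-kernel aggregation: mask a feature-similarity (RBF) kernel by the neighbor-based kernel (the adjacency pattern), normalize, and aggregate. The paper's learnable kernel is replaced by a fixed RBF here for illustration.

```python
# Sketch: aggregation weights = adjacency mask * RBF feature similarity.
import numpy as np

def composite_aggregate(adj, H, gamma=1.0):
    """H' = row_normalize(adj * rbf(H, H)) @ H."""
    sq = ((H[:, None, :] - H[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    K = np.exp(-gamma * sq) * adj                        # keep only neighbors
    K = K / np.maximum(K.sum(axis=1, keepdims=True), 1e-12)
    return K @ H

adj = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]], dtype=float)  # self-loops kept
H = np.array([[0.0, 1.0], [0.1, 0.9], [5.0, 5.0]])
print(composite_aggregate(adj, H).round(2))
```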
arXiv Detail & Related papers (2020-05-16T04:44:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.