Structural Optimization Makes Graph Classification Simpler and Better
- URL: http://arxiv.org/abs/2109.02027v1
- Date: Sun, 5 Sep 2021 08:54:38 GMT
- Title: Structural Optimization Makes Graph Classification Simpler and Better
- Authors: Junran Wu, Jianhao Li, Yicheng Pan, Ke Xu
- Abstract summary: We investigate the feasibility of improving graph classification performance while simplifying the model learning process.
Inspired by progress in structural information assessment, we optimize each given data sample, transforming it from a graph into an encoding tree.
We present an implementation of the scheme in a tree kernel and a convolutional network to perform graph classification.
- Score: 5.770986723520119
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In deep neural networks, better results can often be obtained by increasing
the complexity of previously developed basic models. However, it is unclear
whether there is a way to boost performance by decreasing the complexity of
such models. Here, based on an optimization method, we investigate the
feasibility of improving graph classification performance while simplifying the
model learning process. Inspired by progress in structural information
assessment, we optimize each given data sample, transforming it from a graph
into an encoding tree. In
particular, we minimize the structural entropy of the transformed encoding tree
to decode the key structure underlying a graph. This transformation is denoted
as structural optimization. Furthermore, we propose a novel feature combination
scheme, termed hierarchical reporting, for encoding trees. In this scheme,
features are transferred from leaf nodes to root nodes by following the
hierarchical structures of encoding trees. We then present an implementation of
the scheme in a tree kernel and a convolutional network to perform graph
classification. The tree kernel follows label propagation in the
Weisfeiler-Lehman (WL) subtree kernel, but it has a lower runtime complexity of
$O(n)$. The convolutional network is a special implementation of our tree
kernel in the deep learning field and is called Encoding Tree Learning (ETL).
We empirically validate our tree kernel and convolutional network with several
graph classification benchmarks and demonstrate that our methods achieve better
performance and lower computational consumption than competing approaches.
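To make the structural optimization step concrete, the minimized objective can be written out. The following is a sketch based on structural information theory, not a formula copied verbatim from the paper: $\lambda$ is the root of the encoding tree $T$, $\alpha^-$ is the parent of node $\alpha$, $g_\alpha$ counts the edges leaving the set of graph vertices covered by $\alpha$, and $\mathrm{vol}(\cdot)$ is the total degree of such a set.

```latex
% Structural entropy of a graph G under an encoding tree T rooted at \lambda
% (a sketch following structural information theory; the paper's notation
% may differ). Structural optimization selects the tree that minimizes it.
\mathcal{H}^{T}(G)
  = -\sum_{\alpha \in T,\ \alpha \neq \lambda}
      \frac{g_{\alpha}}{\mathrm{vol}(G)}
      \log_{2}\frac{\mathrm{vol}(\alpha)}{\mathrm{vol}(\alpha^{-})},
\qquad
T^{*} = \arg\min_{T} \mathcal{H}^{T}(G)
```

Hierarchical reporting can likewise be illustrated with a minimal sketch. The code below is one plausible reading of the scheme, not the authors' implementation; the `Node` class and `hierarchical_report` function are hypothetical names, and element-wise summation stands in for whatever feature combination the paper actually uses.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    """A node of an encoding tree; leaves carry initial graph-node features."""
    children: List["Node"] = field(default_factory=list)
    feature: Optional[List[float]] = None  # set on leaves up front

def hierarchical_report(node: Node) -> List[float]:
    """Transfer features from leaf nodes to the root along the tree hierarchy.

    Each internal node combines (here, element-wise sums) the features
    reported by its children; the root's feature then summarizes the graph.
    """
    if not node.children:  # leaf: report its own feature upward
        return node.feature
    reports = [hierarchical_report(child) for child in node.children]
    node.feature = [sum(values) for values in zip(*reports)]
    return node.feature

# Usage: a two-level encoding tree over three leaves.
root = Node(children=[
    Node(children=[Node(feature=[1.0, 0.0]), Node(feature=[0.0, 1.0])]),
    Node(feature=[1.0, 1.0]),
])
print(hierarchical_report(root))  # -> [2.0, 2.0]
```

In the usage example, the two leaf reports under the internal node are combined first, then merged with the remaining leaf at the root, so the root ends up holding a single vector summary of the graph for a downstream kernel or classifier.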
Related papers
- Virtual Node Generation for Node Classification in Sparsely-Labeled Graphs [2.0060301665996016]
This paper presents a novel node generation method that infuses a small set of high-quality synthesized nodes into the graph as additional labeled nodes.
It is compatible with most popular graph pre-training (self-supervised learning), semi-supervised learning, and meta-learning methods.
Our experiments demonstrate statistically significant performance improvements over 14 baselines on 10 publicly available datasets.
arXiv Detail & Related papers (2024-09-12T02:36:44Z)
- Rapid and Precise Topological Comparison with Merge Tree Neural Networks [7.443474354626664]
We introduce the Merge Tree Neural Network (MTNN), a learned neural network model designed for merge tree comparison.
We first demonstrate how to train graph neural networks, which have emerged as effective encoders for graphs, to produce embeddings of merge trees in vector spaces.
We then formulate the novel MTNN model that further improves the similarity comparisons by integrating the tree and node embeddings with a new topological attention mechanism.
arXiv Detail & Related papers (2024-04-08T21:26:04Z)
- Learning a Decision Tree Algorithm with Transformers [75.96920867382859]
We introduce MetaTree, a transformer-based model trained via meta-learning to directly produce strong decision trees.
We fit both greedy decision trees and globally optimized decision trees on a large number of datasets, and train MetaTree to produce only the trees that achieve strong generalization performance.
arXiv Detail & Related papers (2024-02-06T07:40:53Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator (a generic Gumbel-Softmax sketch appears after this list).
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- SE-GSL: A General and Effective Graph Structure Learning Framework through Structural Entropy Optimization [67.28453445927825]
Graph Neural Networks (GNNs) are the de facto solution for structural data learning.
Existing graph structure learning (GSL) frameworks still lack robustness and interpretability.
This paper proposes a general GSL framework, SE-GSL, through structural entropy and the graph hierarchy abstracted in the encoding tree.
arXiv Detail & Related papers (2023-03-17T05:20:24Z)
- Pushing the Efficiency Limit Using Structured Sparse Convolutions [82.31130122200578]
We propose Structured Sparse Convolution (SSC), which leverages the inherent structure in images to reduce the parameters in the convolutional filter.
We show that SSC is a generalization of commonly used layers (depthwise, groupwise and pointwise convolution) in efficient architectures.
Architectures based on SSC achieve state-of-the-art performance compared to baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet classification benchmarks.
arXiv Detail & Related papers (2022-10-23T18:37:22Z)
- A Simple yet Effective Method for Graph Classification [7.397201068210497]
We investigate the feasibility of improving graph classification performance while simplifying the learning process.
Inspired by structural entropy on graphs, we transform each data sample from a graph into a coding tree.
We present a tree kernel and a convolutional network to implement our scheme for graph classification.
arXiv Detail & Related papers (2022-06-06T07:24:44Z)
- MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z)
- Graph Neural Networks with Composite Kernels [60.81504431653264]
We re-interpret node aggregation from the perspective of kernel weighting.
We present a framework to consider feature similarity in an aggregation scheme.
We propose feature aggregation as the composition of the original neighbor-based kernel and a learnable kernel to encode feature similarities in a feature space.
arXiv Detail & Related papers (2020-05-16T04:44:29Z)
- The Tree Ensemble Layer: Differentiability meets Conditional Computation [8.40843862024745]
We introduce a new layer for neural networks composed of an ensemble of differentiable decision trees (a.k.a. soft trees).
Differentiable trees demonstrate promising results in the literature, but are typically slow in training and inference as they do not support conditional computation.
We develop specialized forward and backward propagation algorithms that exploit sparsity.
arXiv Detail & Related papers (2020-02-18T18:05:31Z)
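Since the NodeFormer entry above hinges on a kernelized Gumbel-Softmax operator, here is a minimal sketch of plain Gumbel-Softmax sampling. This shows only the generic operator, not NodeFormer's kernelized variant; `gumbel_softmax` is a hypothetical name.

```python
import math
import random
from typing import List

def gumbel_softmax(logits: List[float], tau: float = 1.0) -> List[float]:
    """Draw a differentiable 'soft' sample from a categorical distribution.

    Perturbs each logit with Gumbel(0, 1) noise, -log(-log(U)), then applies
    a temperature-scaled softmax; as tau -> 0 the output approaches one-hot.
    """
    noisy = []
    for logit in logits:
        u = max(random.random(), 1e-12)  # U in (0, 1); clamp to avoid log(0)
        noisy.append((logit - math.log(-math.log(u))) / tau)
    peak = max(noisy)  # subtract the max for numerical stability
    exps = [math.exp(x - peak) for x in noisy]
    total = sum(exps)
    return [e / total for e in exps]

# Example: one soft sample over three categories at a moderate temperature.
print(gumbel_softmax([2.0, 0.5, -1.0], tau=0.5))
```

Lowering `tau` sharpens the output toward a one-hot vector while keeping the operation differentiable, which is what lets a structure learner sample edges inside a gradient-trained model.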
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.