HAQJSK: Hierarchical-Aligned Quantum Jensen-Shannon Kernels for Graph
Classification
- URL: http://arxiv.org/abs/2211.02904v2
- Date: Tue, 8 Nov 2022 04:31:50 GMT
- Title: HAQJSK: Hierarchical-Aligned Quantum Jensen-Shannon Kernels for Graph
Classification
- Authors: Lu Bai, Lixin Cui, Yue Wang, Ming Li, Edwin R. Hancock
- Abstract summary: We propose a family of novel quantum kernels, namely the Hierarchical Aligned Quantum Jensen-Shannon Kernels (HAQJSK).
We show that the proposed HAQJSK kernels reflect richer intrinsic global graph characteristics in terms of the Continuous-Time Quantum Walk (CTQW).
Unlike the previous Quantum Jensen-Shannon Kernels associated with the QJSD and the CTQW, the proposed HAQJSK kernels can simultaneously guarantee the properties of permutation invariance and positive definiteness.
- Score: 17.95088104970343
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In this work, we propose a family of novel quantum kernels, namely the
Hierarchical Aligned Quantum Jensen-Shannon Kernels (HAQJSK), for un-attributed
graphs. Different from most existing classical graph kernels, the proposed
HAQJSK kernels can incorporate hierarchical aligned structure information
between graphs and transform graphs of arbitrary sizes into fixed-sized aligned
graph structures, i.e., the Hierarchical Transitive Aligned Adjacency Matrix of
vertices and the Hierarchical Transitive Aligned Density Matrix of the
Continuous-Time Quantum Walk (CTQW). For a given pair of graphs, the
resulting HAQJSK kernels are defined by measuring the Quantum Jensen-Shannon
Divergence (QJSD) between their transitive aligned graph structures. We show
that the proposed HAQJSK kernels not only reflect richer intrinsic global graph
characteristics in terms of the CTQW, but also address the drawback of
neglecting structural correspondence information arising in most existing
R-convolution kernels. Furthermore, unlike the previous Quantum Jensen-Shannon
Kernels associated with the QJSD and the CTQW, the proposed HAQJSK kernels can
simultaneously guarantee the properties of permutation invariance and positive
definiteness, explaining the theoretical advantages of the HAQJSK kernels.
Experiments indicate the effectiveness of the proposed kernels.
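The QJSD measure underpinning the kernel definition above can be sketched numerically. The following is a minimal illustration, not the paper's method: it uses the plain adjacency matrix as the CTQW Hamiltonian and a uniform initial state (both assumptions for illustration), omits the hierarchical transitive alignment step entirely, and compares the resulting density matrices via the Quantum Jensen-Shannon Divergence.

```python
import numpy as np

def von_neumann_entropy(rho):
    """Von Neumann entropy S(rho) = -Tr(rho log2 rho) of a density matrix."""
    evals = np.linalg.eigvalsh(rho)       # real eigenvalues (Hermitian input)
    evals = evals[evals > 1e-12]          # drop numerical zeros before log
    return -np.sum(evals * np.log2(evals))

def ctqw_density_matrix(adj, t=1.0):
    """Density matrix of a CTQW on a graph after time t.

    Illustrative conventions (assumptions, not the paper's construction):
    the Hamiltonian is the adjacency matrix and the walk starts from the
    uniform superposition over vertices.
    """
    n = adj.shape[0]
    psi0 = np.ones(n) / np.sqrt(n)
    w, V = np.linalg.eigh(adj)            # spectral decomposition of H
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T  # U = exp(-iHt)
    psi_t = U @ psi0
    return np.outer(psi_t, psi_t.conj())  # pure-state density matrix

def qjsd(rho, sigma):
    """QJSD(rho, sigma) = S((rho+sigma)/2) - (S(rho) + S(sigma)) / 2."""
    m = 0.5 * (rho + sigma)
    return von_neumann_entropy(m) - 0.5 * (
        von_neumann_entropy(rho) + von_neumann_entropy(sigma)
    )
```

A kernel value between two graphs could then be formed from the divergence, e.g. as exp(-QJSD); the precise kernel construction in HAQJSK, including the transitive alignment, follows the paper.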
Related papers
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- LSEnet: Lorentz Structural Entropy Neural Network for Deep Graph Clustering [59.89626219328127]
Graph clustering is a fundamental problem in machine learning.
Deep learning methods have achieved state-of-the-art results in recent years, but they still cannot work without predefined cluster numbers.
We propose to address this problem from a fresh perspective of graph information theory.
arXiv Detail & Related papers (2024-05-20T05:46:41Z)
- Deep Hierarchical Graph Alignment Kernels [16.574634620245487]
We introduce Deep Hierarchical Graph Alignment Kernels (DHGAK) to resolve this problem.
Specifically, the relational substructures are hierarchically aligned to cluster distributions in their deep embedding space.
DHGAK is positive semi-definite and has linear separability in the Reproducing Kernel Hilbert Space.
arXiv Detail & Related papers (2024-05-09T05:08:30Z)
- Quantum Kernel Machine Learning With Continuous Variables [0.0]
The popular qubit framework has dominated recent work on quantum kernel machine learning.
There is no comparative framework to understand these concepts for continuous variable (CV) quantum computing platforms.
arXiv Detail & Related papers (2024-01-11T03:49:40Z)
- Neural Tangent Kernels Motivate Graph Neural Networks with Cross-Covariance Graphs [94.44374472696272]
We investigate NTKs and alignment in the context of graph neural networks (GNNs)
Our results establish theoretical guarantees on the optimality of the alignment for a two-layer GNN.
These guarantees are characterized by the graph shift operator being a function of the cross-covariance between the input and the output data.
arXiv Detail & Related papers (2023-10-16T19:54:21Z)
- Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and the Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z)
- AERK: Aligned Entropic Reproducing Kernels through Continuous-time Quantum Walks [17.95088104970343]
We develop an Aligned Entropic Reproducing Kernel (AERK) for graph classification.
For pairwise graphs, the proposed AERK kernel is defined by computing a reproducing-kernel-based similarity between the quantum Shannon entropies of each pair of their aligned vertices.
The experimental evaluation on standard graph datasets demonstrates that the proposed AERK kernel is able to outperform state-of-the-art graph kernels for graph classification tasks.
arXiv Detail & Related papers (2023-03-04T16:48:39Z)
- QESK: Quantum-based Entropic Subtree Kernels for Graph Classification [11.51839867040302]
We propose a novel graph kernel, namely the Quantum-based Entropic Subtree Kernel (QESK), for graph classification.
We show how this AMM matrix can be employed to compute a series of entropic subtree representations associated with the classical Weisfeiler-Lehman (WL) algorithm.
We show that the proposed QESK kernel can significantly outperform state-of-the-art graph kernels and graph deep learning methods for graph classification problems.
arXiv Detail & Related papers (2022-12-10T07:10:03Z)
- Graph Neural Network Bandits [89.31889875864599]
We consider the bandit optimization problem with the reward function defined over graph-structured data.
Key challenges in this setting are scaling to large domains and to graphs with many nodes.
We show that graph neural networks (GNNs) can be used to estimate the reward function.
arXiv Detail & Related papers (2022-07-13T18:12:36Z)
- Graph Neural Networks with Composite Kernels [60.81504431653264]
We re-interpret node aggregation from the perspective of kernel weighting.
We present a framework to consider feature similarity in an aggregation scheme.
We propose feature aggregation as the composition of the original neighbor-based kernel and a learnable kernel to encode feature similarities in a feature space.
arXiv Detail & Related papers (2020-05-16T04:44:29Z)
- A Hierarchical Transitive-Aligned Graph Kernel for Un-attributed Graphs [11.51839867040302]
We develop a new graph kernel, namely the Hierarchical Transitive-Aligned kernel, by transitively aligning the vertices between graphs.
The proposed kernel can outperform state-of-the-art graph kernels on standard graph-based datasets in terms of the classification accuracy.
arXiv Detail & Related papers (2020-02-08T11:46:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.