Gaussian process regression with Sliced Wasserstein Weisfeiler-Lehman graph kernels
- URL: http://arxiv.org/abs/2402.03838v2
- Date: Mon, 11 Mar 2024 12:16:24 GMT
- Title: Gaussian process regression with Sliced Wasserstein Weisfeiler-Lehman graph kernels
- Authors: Raphaël Carpintero Perez (CMAP), Sébastien da Veiga (ENSAI, CREST), Josselin Garnier (CMAP), Brian Staber
- Abstract summary: Supervised learning has recently garnered significant attention in the field of computational physics.
Traditionally, such datasets consist of inputs given as meshes with a large number of nodes representing the problem geometry.
This means the supervised learning model must be able to handle large and sparse graphs with continuous node attributes.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Supervised learning has recently garnered significant attention in the field
of computational physics due to its ability to effectively extract complex
patterns for tasks like solving partial differential equations or predicting
material properties. Traditionally, such datasets consist of inputs given as
meshes with a large number of nodes representing the problem geometry (seen as
graphs), and corresponding outputs obtained with a numerical solver. This means
the supervised learning model must be able to handle large and sparse graphs
with continuous node attributes. In this work, we focus on Gaussian process
regression, for which we introduce the Sliced Wasserstein Weisfeiler-Lehman
(SWWL) graph kernel. In contrast to existing graph kernels, the proposed SWWL
kernel enjoys positive definiteness and a drastic complexity reduction, which
makes it possible to process datasets that were previously impossible to
handle. The new kernel is first validated on graph classification for molecular
datasets, where the input graphs have a few tens of nodes. The efficiency of
the SWWL kernel is then illustrated on graph regression in computational fluid
dynamics and solid mechanics, where the input graphs are made up of tens of
thousands of nodes.
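To make the construction concrete, here is a minimal sketch of the SWWL recipe described in the abstract, assuming NumPy and dense adjacency matrices. The function names (wl_embed, sliced_wasserstein2, swwl_kernel), the mean-aggregation WL update, and the default numbers of iterations, projections, and quantiles are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def wl_embed(X, A, n_iter=3):
    """Continuous Weisfeiler-Lehman embedding: repeatedly average each
    node's features with its neighbors' and stack all iterations."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    feats, H = [X], X
    for _ in range(n_iter):
        H = 0.5 * (H + (A @ H) / deg)   # mean aggregation over neighbors
        feats.append(H)
    return np.hstack(feats)             # shape (n_nodes, d * (n_iter + 1))

def sliced_wasserstein2(U, V, n_proj=50, n_quantiles=100, seed=0):
    """Squared sliced 2-Wasserstein distance between two point clouds,
    estimated from random 1-D projections and quantile matching (which
    also handles graphs with different numbers of nodes)."""
    rng = np.random.default_rng(seed)   # fixed seed: same projections per pair
    thetas = rng.normal(size=(n_proj, U.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    qs = np.linspace(0.0, 1.0, n_quantiles)
    sw2 = 0.0
    for t in thetas:
        sw2 += np.mean((np.quantile(U @ t, qs) - np.quantile(V @ t, qs)) ** 2)
    return sw2 / n_proj

def swwl_kernel(graphs, sigma=1.0):
    """Gram matrix K[i, j] = exp(-SW2(G_i, G_j) / (2 sigma^2)) over a list
    of (node_features, adjacency) pairs."""
    embs = [wl_embed(X, A) for X, A in graphs]
    n = len(embs)
    K = np.eye(n)                       # SW distance of a cloud to itself is 0
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sliced_wasserstein2(embs[i], embs[j])
            K[i, j] = K[j, i] = np.exp(-d2 / (2.0 * sigma ** 2))
    return K
```

The Gram matrix K can then be handed to any Gaussian process library that accepts a precomputed kernel; reusing the same random projections for every pair keeps the matrix symmetric, and the positive definiteness claimed in the abstract rests on the Hilbertian structure of the sliced Wasserstein distance.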
Related papers
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator. A minimal sketch of the underlying operator follows below.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
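For reference, here is a minimal NumPy sketch of the generic Gumbel-Softmax relaxation that the kernelized operator builds on; the kernelization and NodeFormer's all-pair message passing are not reproduced here.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, seed=None):
    """Differentiable relaxation of categorical sampling: perturb logits
    with Gumbel(0, 1) noise, then apply a temperature-scaled softmax."""
    rng = np.random.default_rng(seed)
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel noise
    y = (logits + g) / tau
    y -= y.max(axis=-1, keepdims=True)                    # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)              # soft one-hot sample
```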
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Graph Neural Network-Inspired Kernels for Gaussian Processes in Semi-Supervised Learning
Graph neural networks (GNNs) emerged recently as a promising class of models for graph-structured data in semi-supervised learning.
We introduce this inductive bias into GPs to improve their predictive performance for graph-structured data.
We show that these graph-based kernels lead to competitive classification and regression performance, as well as advantages in computation time, compared with the respective GNNs.
arXiv Detail & Related papers (2023-02-12T01:07:56Z)
- Transductive Kernels for Gaussian Processes on Graphs
We present a novel kernel for graphs with node feature data for semi-supervised learning.
The kernel is derived from a regularization framework by treating the graph and feature data as two spaces.
We show how numerous kernel-based models on graphs are instances of our design.
arXiv Detail & Related papers (2022-11-28T14:00:50Z)
- Learning Graph Structure from Convolutional Mixtures
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN), sketched below.
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
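As a rough illustration of the unrolling idea, here is a truncated proximal gradient (ISTA) loop in NumPy; a simple quadratic data-fidelity term stands in for the paper's graph-convolutional mixture model, and a real GDN would learn the step size and threshold end to end.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm; promotes sparse edge weights."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def unrolled_ista(S_obs, n_layers=10, step=0.5, lam=0.05):
    """Truncated ISTA for min_S 0.5*||S - S_obs||_F^2 + lam*||S||_1:
    each iteration plays the role of one network layer."""
    S = np.zeros_like(S_obs)
    for _ in range(n_layers):
        grad = S - S_obs                          # gradient of the data fit
        S = soft_threshold(S - step * grad, step * lam)
    return S
```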
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Graph Kernel Neural Networks
We propose to use graph kernels, i.e. kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability.
arXiv Detail & Related papers (2021-12-14T14:48:08Z)
- Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT).
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data together with its multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- Semi-Supervised Learning on Graphs with Feature-Augmented Graph Basis Functions
We study how initial kernels in a supervised learning regime can be augmented with additional information from known priors or from unsupervised learning outputs.
As generators of the positive definite kernels, we focus on graph basis functions (GBF) that make it possible to include geometric information of the graph.
Using a regularized least squares (RLS) approach for machine learning, we will test the derived augmented kernels for the classification of data on graphs.
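A minimal sketch of the overall pipeline in NumPy, with a heat/diffusion kernel built from the graph Laplacian standing in for the paper's graph basis functions; the augmentation step with priors or unsupervised outputs is omitted.

```python
import numpy as np

def diffusion_kernel(A, t=1.0):
    """Positive definite heat kernel K = exp(-t L) built from the
    combinatorial graph Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    w, V = np.linalg.eigh(L)
    return V @ np.diag(np.exp(-t * w)) @ V.T

def rls_fit_predict(K, labeled_idx, y, reg=1e-2):
    """Kernel regularized least squares: solve (K_LL + reg*I) alpha = y on
    the labeled nodes, then predict for every node in the graph."""
    K_LL = K[np.ix_(labeled_idx, labeled_idx)]
    alpha = np.linalg.solve(K_LL + reg * np.eye(len(labeled_idx)), y)
    return K[:, labeled_idx] @ alpha
```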
arXiv Detail & Related papers (2020-03-17T11:21:43Z)
- Block-Approximated Exponential Random Graphs
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximation framework for such non-trivial ERGs that results in dyadic independence (i.e., edge-independent) distributions, illustrated below.
Our methods are scalable to sparse graphs consisting of millions of nodes.
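To illustrate what a dyadic-independence distribution is, here is a NumPy sketch that samples an undirected graph whose edges are independent Bernoulli variables with block-structured probabilities; it shows the distribution class only, not the paper's block-approximation fitting procedure.

```python
import numpy as np

def sample_block_graph(block_of, P_block, seed=None):
    """Sample an undirected graph where edge (i, j) is an independent
    Bernoulli(P_block[block_of[i], block_of[j]]) draw (no self-loops)."""
    rng = np.random.default_rng(seed)
    P = P_block[np.ix_(block_of, block_of)]        # per-edge probabilities
    U = rng.uniform(size=P.shape)
    A = np.triu((U < P).astype(int), k=1)          # one draw per dyad
    return A + A.T                                 # symmetrize
```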
arXiv Detail & Related papers (2020-02-14T11:42:16Z)