Learning to Predict Graphs with Fused Gromov-Wasserstein Barycenters
- URL: http://arxiv.org/abs/2202.03813v1
- Date: Tue, 8 Feb 2022 12:15:39 GMT
- Title: Learning to Predict Graphs with Fused Gromov-Wasserstein Barycenters
- Authors: Luc Brogat-Motte, Rémi Flamary, Céline Brouard, Juho Rousu,
Florence d'Alché-Buc
- Abstract summary: We formulate the problem as regression with the Fused Gromov-Wasserstein (FGW) loss.
We propose a predictive model relying on an FGW barycenter whose weights depend on inputs.
- Score: 2.169919643934826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces a novel and generic framework to solve the flagship
task of supervised labeled graph prediction by leveraging Optimal Transport
tools. We formulate the problem as regression with the Fused Gromov-Wasserstein
(FGW) loss and propose a predictive model relying on an FGW barycenter whose
weights depend on inputs. First, we introduce a non-parametric estimator based
on kernel ridge regression, for which theoretical results such as consistency
and an excess risk bound are proved. Next, we propose an interpretable parametric
model where the barycenter weights are modeled with a neural network and the
graphs on which the FGW barycenter is calculated are additionally learned.
Numerical experiments show the strength of the method and its ability to
interpolate in the labeled graph space on simulated data and on a difficult
metabolic identification problem where it can reach very good performance with
very little engineering.
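As a minimal sketch of the kernel-ridge step behind such a non-parametric estimator (illustrative only, not the authors' code: `rbf_kernel`, `barycenter_weights`, and all parameter values are assumptions, and both the FGW barycenter solver itself and the projection of the weights onto the simplex are omitted):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian RBF kernel matrix between the rows of X and the rows of Y.
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def barycenter_weights(X_train, x_new, lam=0.1, gamma=1.0):
    # Kernel-ridge weights alpha(x) = (K + n*lam*I)^{-1} k_x.
    # The estimator would then predict the FGW barycenter of the training
    # graphs weighted by alpha(x); that barycenter computation (e.g. with
    # an optimal-transport library) is left out of this sketch.
    n = X_train.shape[0]
    K = rbf_kernel(X_train, X_train, gamma)
    k_x = rbf_kernel(X_train, x_new[None, :], gamma).ravel()
    return np.linalg.solve(K + n * lam * np.eye(n), k_x)

# Toy usage: 5 training inputs in R^3, query at the first training point.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(5, 3))
alpha = barycenter_weights(X_train, X_train[0])
print(alpha)
```

The key design point mirrored here is that the weights vary smoothly with the input `x_new`, so the predicted graph interpolates between training graphs in the labeled graph space.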
Related papers
- An Optimal Transport Approach for Network Regression [0.6238182916866519]
We build upon recent developments in generalized regression models on metric spaces based on Fréchet means.
We propose a network regression method using the Wasserstein metric.
arXiv Detail & Related papers (2024-06-18T02:03:07Z)
- Gaussian process regression with Sliced Wasserstein Weisfeiler-Lehman graph kernels [0.0]
Supervised learning has recently garnered significant attention in the field of computational physics.
Traditionally, such datasets consist of inputs given as meshes with a large number of nodes representing the problem geometry.
This means the supervised learning model must be able to handle large and sparse graphs with continuous node attributes.
arXiv Detail & Related papers (2023-10-09T03:55:09Z)
- Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
arXiv Detail & Related papers (2023-08-16T13:10:27Z)
- Graph Out-of-Distribution Generalization with Controllable Data Augmentation [51.17476258673232]
Graph Neural Network (GNN) has demonstrated extraordinary performance in classifying graph properties.
Due to the selection bias of training and testing data, distribution deviation is widespread.
We propose OOD calibration to measure the distribution deviation of virtual samples.
arXiv Detail & Related papers (2023-01-26T11:30:56Z)
- Random Grid Neural Processes for Parametric Partial Differential Equations [5.244037702157957]
We introduce a new class of spatially probabilistic physics and data informed deep latent models for PDEs.
We solve forward and inverse problems for parametric PDEs in a way that leads to the construction of Gaussian process models of solution fields.
We show how to incorporate noisy data in a principled manner into our physics informed model to improve predictions for problems where data may be available.
arXiv Detail & Related papers (2022-05-13T07:53:49Z)
- A hybrid data driven-physics constrained Gaussian process regression framework with deep kernel for uncertainty quantification [21.972192114861873]
We propose a hybrid data driven-physics constrained Gaussian process regression framework.
We encode the physics knowledge with Boltzmann-Gibbs distribution and derive our model through maximum likelihood (ML) approach.
The proposed model achieves good results on high-dimensional problems and correctly propagates uncertainty with very limited labelled data.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-10-20T14:23:54Z)
- Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most of existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
arXiv Detail & Related papers (2021-07-06T10:10:03Z)
- T-LoHo: A Bayesian Regularization Model for Structured Sparsity and Smoothness on Graphs [0.0]
In graph-structured data, structured sparsity and smoothness tend to cluster together.
We propose a new prior for high dimensional parameters with graphical relations.
We use it to detect structured sparsity and smoothness simultaneously.
arXiv Detail & Related papers (2021-04-06T01:44:15Z)
- Hyperbolic Variational Graph Neural Network for Modeling Dynamic Graphs [77.33781731432163]
We learn dynamic graph representations in hyperbolic space, for the first time, with the aim of inferring node representations.
We present a novel Hyperbolic Variational Graph Network, referred to as HVGNN.
In particular, to model the dynamics, we introduce a Temporal GNN (TGNN) based on a theoretically grounded time encoding approach.
arXiv Detail & Related papers (2021-02-14T11:42:16Z)
- Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximate framework for such non-trivial ERGs that results in dyadic independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.