Evolving-Graph Gaussian Processes
- URL: http://arxiv.org/abs/2106.15127v1
- Date: Tue, 29 Jun 2021 07:16:04 GMT
- Title: Evolving-Graph Gaussian Processes
- Authors: David Blanco-Mulero, Markus Heinonen, Ville Kyrki
- Abstract summary: Existing approaches have focused on static structures, whereas many real-world graph datasets have a dynamic structure, limiting the applications of GGPs.
We propose evolving-Graph Gaussian Processes (e-GGPs) to overcome this.
We demonstrate the benefits of e-GGPs over static graph Gaussian Process approaches.
- Score: 20.065168755580558
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Gaussian Processes (GGPs) provide a data-efficient solution on
graph-structured domains. Existing approaches have focused on static structures,
whereas many real-world graph datasets have a dynamic structure, which limits
the applications of GGPs. To overcome this, we propose evolving-Graph Gaussian
Processes (e-GGPs). The proposed method learns the transition function of graph
vertices over time with a neighbourhood kernel that models the connectivity and
interaction changes between vertices. We assess the performance of our method
on time-series regression problems where graphs evolve over time, and
demonstrate the benefits of e-GGPs over static graph Gaussian Process
approaches.
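A minimal sketch of the general idea, assuming a mean-aggregation neighbourhood summary and a squared-exponential kernel (the paper defines its own neighbourhood kernel; nothing below is the authors' implementation, and all function names and toy data are illustrative):

```python
import numpy as np

def neighbourhood_features(X, A):
    """Concatenate each vertex's features with the mean of its
    neighbours' features, so a standard kernel on the result also
    reflects (possibly changing) connectivity."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    return np.concatenate([X, (A @ X) / deg], axis=1)

def rbf(Z1, Z2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between feature rows."""
    d2 = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(Z_train, y_train, Z_test, noise=1e-2):
    """Posterior mean of standard GP regression."""
    K = rbf(Z_train, Z_train) + noise * np.eye(len(Z_train))
    return rbf(Z_test, Z_train) @ np.linalg.solve(K, y_train)

# Toy usage: regress each vertex's next state from its current
# state and neighbourhood at time t.
rng = np.random.default_rng(0)
n, d = 30, 2
X_t = rng.normal(size=(n, d))                    # vertex states at time t
A_t = (rng.random((n, n)) < 0.15).astype(float)  # random undirected graph
A_t = np.maximum(A_t, A_t.T)
np.fill_diagonal(A_t, 0.0)
# Synthetic transition: each state drifts toward its neighbourhood mean.
y = X_t[:, 0] + 0.5 * ((A_t @ X_t[:, 0]) / A_t.sum(1).clip(min=1) - X_t[:, 0])

Z = neighbourhood_features(X_t, A_t)
print(gp_predict(Z, y, Z)[:3])  # fitted one-step predictions
```

Because `neighbourhood_features` is recomputed from whichever adjacency matrix is current, the fitted regressor can be queried as vertices gain or lose neighbours, which is the property the abstract emphasises.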
Related papers
- EggNet: An Evolving Graph-based Graph Attention Network for Particle Track Reconstruction [0.0]
We consider a one-shot object condensation (OC) approach that reconstructs particle tracks directly from a set of hits.
This approach iteratively updates the graphs and better facilitates message passing across each graph.
Preliminary studies on the TrackML dataset show better tracking performance than methods that require a fixed input graph.
arXiv Detail & Related papers (2024-07-18T22:29:24Z)
- Learning Long Range Dependencies on Graphs via Random Walks [6.7864586321550595]
Message-passing graph neural networks (GNNs) excel at capturing local relationships but struggle with long-range dependencies in graphs.
Graph transformers (GTs) enable global information exchange but often oversimplify the graph structure by representing graphs as sets of fixed-length vectors.
This work introduces a novel architecture that overcomes the shortcomings of both approaches by combining the long-range information of random walks with local message passing (a generic sketch of such a combination appears after this list).
arXiv Detail & Related papers (2024-06-05T15:36:57Z)
- Graph Condensation for Open-World Graph Learning [48.38802327346445]
Graph condensation (GC) has emerged as a promising acceleration solution for efficiently training graph neural networks (GNNs).
Existing GC methods are limited to aligning the condensed graph with merely the observed static graph distribution.
In real-world scenarios, however, graphs are dynamic and constantly evolving, with new nodes and edges being continually integrated.
We propose OpenGC, a robust GC framework that integrates structure-aware distribution shift to simulate evolving graph patterns.
arXiv Detail & Related papers (2024-05-27T09:47:09Z)
- GraphGDP: Generative Diffusion Processes for Permutation Invariant Graph Generation [43.196067037856515]
Graph generative models have broad applications in biology, chemistry and social science.
Current leading autoregressive models fail to capture the permutation-invariant nature of graphs.
We propose a continuous-time generative diffusion process for permutation invariant graph generation.
arXiv Detail & Related papers (2022-12-04T15:12:44Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression by adapting the loss function, and are inherently inductive (a generic sketch of this unrolling idea appears after this list).
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations [57.15855198512551]
We propose a novel score-based generative model for graphs with a continuous-time framework.
We show that our method is able to generate molecules that lie close to the training distribution yet do not violate the chemical valency rule.
arXiv Detail & Related papers (2022-02-05T08:21:04Z)
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data that works across node classification, link prediction, and graph classification tasks (a minimal sketch of this style of gradient-based augmentation also appears after this list).
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
- Dirichlet Graph Variational Autoencoder [65.94744123832338]
We present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors.
Motivated by the low-pass characteristics of balanced graph cuts, we propose a new variant of GNN named Heatts to encode the input graph into cluster memberships.
arXiv Detail & Related papers (2020-10-09T07:35:26Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- Graph Convolutional Gaussian Processes For Link Prediction [0.0]
Link prediction aims to reveal missing edges in a graph.
We introduce a variational inducing point method that places pseudo inputs on a graph-structured domain.
We evaluate our model on eight large graphs with up to thousands of nodes and report consistent improvements.
arXiv Detail & Related papers (2020-02-11T12:12:21Z)
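The sketches below illustrate, under clearly labelled assumptions, three of the mechanisms referenced in the list above; none is the respective paper's reference implementation. First, for the random-walk entry: a generic way to combine longer-range random-walk structure with one-hop message passing is to feed K-step random-walk landing probabilities into an ordinary aggregation layer. The layer, the choice of landing probabilities as features, and all names here are illustrative.

```python
import numpy as np

def random_walk_features(A, K=4):
    """Per-node landing probabilities of 1..K-step random walks
    (diagonals of P, P^2, ..., P^K), a simple source of structural
    information beyond one-hop neighbourhoods."""
    P = A / A.sum(axis=1, keepdims=True).clip(min=1)
    feats, Pk = [], np.eye(len(A))
    for _ in range(K):
        Pk = Pk @ P
        feats.append(np.diag(Pk))
    return np.stack(feats, axis=1)  # shape (n, K)

def message_passing_layer(X, A, W):
    """One mean-aggregation message-passing layer."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    return np.tanh(((A @ X) / deg) @ W)

# Toy usage: concatenate random-walk features with node features
# before local message passing.
rng = np.random.default_rng(2)
n, d = 12, 3
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T)
X = rng.normal(size=(n, d))
X_aug = np.concatenate([X, random_walk_features(A)], axis=1)
W = rng.normal(size=(X_aug.shape[1], 8))
print(message_passing_layer(X_aug, A, W).shape)  # (12, 8)
```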
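Next, for the Graph Deconvolution Network entry: the core move described there, unrolling truncated proximal-gradient iterations, can be sketched on a toy forward model O ≈ H + H·H with an L1 (sparsity) prox. The mixture model, step size, and threshold are assumptions; in a GDN-style network they would be learnable per layer.

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of the elementwise L1 penalty (sparsity prior)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def unrolled_deconvolution(O, steps=50, alpha=0.1, tau=0.02):
    """Truncated proximal-gradient iterations recovering a latent graph H
    from an observed mixture O ≈ H + H @ H. In a GDN-style network the
    step size, threshold, and mixture coefficients would be learned."""
    H = np.zeros_like(O)
    for _ in range(steps):
        R = (H + H @ H) - O              # residual of the forward model
        grad = R + R @ H.T + H.T @ R     # gradient of 0.5 * ||H + H@H - O||_F^2
        H = soft_threshold(H - alpha * grad, tau)
    return H

# Toy usage: recover a sparse symmetric graph from its mixture.
rng = np.random.default_rng(1)
H_true = np.triu((rng.random((20, 20)) < 0.1) * 0.5, k=1)
H_true = H_true + H_true.T
O = H_true + H_true @ H_true
H_hat = unrolled_deconvolution(O)
print(np.abs(H_hat - H_true).mean())  # mean absolute reconstruction error
```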
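Finally, for the FLAG entry: a minimal PyTorch sketch of one training step with gradient-based adversarial feature augmentation, simplified from the published recipe (the plain `model(X)` call stands in for a GNN forward pass that would also take the graph).

```python
import torch

def flag_step(model, X, y, loss_fn, optimizer, step_size=1e-3, ascent_steps=3):
    """One FLAG-style training step: ascend on a feature perturbation
    while accumulating (averaged) descent gradients for the model."""
    model.train()
    optimizer.zero_grad()
    # Random initial perturbation of the node features.
    delta = torch.empty_like(X).uniform_(-step_size, step_size).requires_grad_()
    loss = loss_fn(model(X + delta), y) / ascent_steps
    loss.backward()
    for _ in range(ascent_steps - 1):
        # Signed gradient ascent on the perturbation ("free" adversarial training).
        delta = (delta.detach() + step_size * delta.grad.sign()).requires_grad_()
        loss = loss_fn(model(X + delta), y) / ascent_steps
        loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with an MLP standing in for a GNN.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2)
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
X, y = torch.randn(100, 16), torch.randint(0, 2, (100,))
print(flag_step(model, X, y, torch.nn.functional.cross_entropy, optimizer))
```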
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.