Latent Graph Inference using Product Manifolds
- URL: http://arxiv.org/abs/2211.16199v3
- Date: Tue, 27 Jun 2023 17:17:50 GMT
- Title: Latent Graph Inference using Product Manifolds
- Authors: Haitz Sáez de Ocáriz Borde, Anees Kazi, Federico Barbero, Pietro Liò
- Abstract summary: We generalize the discrete Differentiable Graph Module (dDGM) for latent graph learning.
Our novel approach is tested on a wide range of datasets, and outperforms the original dDGM model.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks usually rely on the assumption that the graph
topology is both available to the network and optimal for the downstream task. Latent
graph inference allows models to dynamically learn the intrinsic graph
structure of problems where the connectivity patterns of data may not be
directly accessible. In this work, we generalize the discrete Differentiable
Graph Module (dDGM) for latent graph learning. The original dDGM architecture
used the Euclidean plane to encode latent features based on which the latent
graphs were generated. By incorporating Riemannian geometry into the model and
generating more complex embedding spaces, we can improve the performance of the
latent graph inference system. In particular, we propose a computationally
tractable approach to produce product manifolds of constant curvature model
spaces that can encode latent features of varying structure. The latent
representations mapped onto the inferred product manifold are used to compute
richer similarity measures that are leveraged by the latent graph learning
model to obtain optimized latent graphs. Moreover, the curvature of the product
manifold is learned during training alongside the rest of the network
parameters and based on the downstream task, rather than it being a static
embedding space. Our novel approach is tested on a wide range of datasets, and
outperforms the original dDGM model.
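The pipeline the abstract describes can be made concrete with a short sketch: latent features are split across Euclidean, hyperbolic, and spherical model spaces, per-component distances are combined into a product-manifold distance whose curvatures are trained with the rest of the network, and a discrete latent graph is sampled from those distances. The PyTorch sketch below is a minimal illustration under stated assumptions (equal per-component dimensions, softplus curvature parametrization, Gumbel top-k sampling in the spirit of dDGM); it is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProductManifoldDistance(nn.Module):
    """Pairwise distances on a product E^d x H^d x S^d of constant-curvature
    model spaces, with curvature magnitudes trained alongside the network."""

    def __init__(self, dim_per_component: int = 8):
        super().__init__()
        self.d = dim_per_component
        # Softplus keeps the curvature magnitudes strictly positive.
        self.k_hyp = nn.Parameter(torch.tensor(1.0))  # hyperbolic curvature = -softplus(k_hyp)
        self.k_sph = nn.Parameter(torch.tensor(1.0))  # spherical curvature = +softplus(k_sph)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (n, 3*d) latent features, split across the three model spaces.
        z_e, z_h, z_s = z.split(self.d, dim=-1)
        d_e = torch.cdist(z_e, z_e)               # Euclidean component
        d_h = self._hyperbolic(z_h)               # hyperboloid component
        d_s = self._spherical(z_s)                # spherical component
        # Product-manifold distance: l2 combination of the component distances.
        return (d_e ** 2 + d_h ** 2 + d_s ** 2).clamp_min(1e-12).sqrt()

    def _hyperbolic(self, z: torch.Tensor) -> torch.Tensor:
        R = 1.0 / F.softplus(self.k_hyp).sqrt()
        # Lift z in R^d onto the hyperboloid of radius R.
        x0 = (R ** 2 + (z ** 2).sum(-1, keepdim=True)).sqrt()
        lorentz = z @ z.T - x0 @ x0.T             # Lorentzian inner product
        return R * torch.acosh((-lorentz / R ** 2).clamp_min(1.0 + 1e-7))

    def _spherical(self, z: torch.Tensor) -> torch.Tensor:
        R = 1.0 / F.softplus(self.k_sph).sqrt()
        x = R * z / z.norm(dim=-1, keepdim=True).clamp_min(1e-12)  # project onto sphere
        cos = (x @ x.T / R ** 2).clamp(-1.0 + 1e-7, 1.0 - 1e-7)
        return R * torch.acos(cos)

def sample_latent_graph(dist: torch.Tensor, k: int = 5) -> torch.Tensor:
    """dDGM-style discrete sampling: pick k neighbours per node by perturbing
    negative distances with Gumbel noise."""
    gumbel = -torch.log(-torch.log(torch.rand_like(dist).clamp_min(1e-9)))
    scores = -dist + gumbel                       # closer nodes get higher scores
    scores.fill_diagonal_(float("-inf"))          # exclude self-loops
    return scores.topk(k, dim=-1).indices         # (n, k) neighbour indices
```

Because the curvature parameters receive gradients through the distances, the geometry of the embedding space can adapt to the downstream task rather than being fixed in advance.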
Related papers
- Improving embedding of graphs with missing data by soft manifolds
The reliability of graph embeddings depends on how closely the geometry of the continuous space matches the graph structure.
We introduce a new class of manifolds, called soft manifolds, to address this mismatch.
Using soft manifolds for graph embedding, we can provide continuous spaces suitable for any data-analysis task over complex datasets.
arXiv Detail & Related papers (2023-11-29T12:48:33Z)
- Equivariant Neural Operator Learning with Graphon Convolution
We propose a general architecture that combines a coefficient learning scheme with a residual operator layer for learning mappings between continuous functions in 3D Euclidean space.
arXiv Detail & Related papers (2023-11-17T23:28:22Z)
- GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks
This paper introduces a mathematical definition of this novel problem setting: learning graph structures that generalize across graphs.
We devise a general framework that coordinates a single graph-shared structure learner and multiple graph-specific GNNs.
The well-trained structure learner can directly produce adaptive structures for unseen target graphs without any fine-tuning.
arXiv Detail & Related papers (2023-06-20T03:33:22Z)
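A toy sketch of the GraphGLOW setup summarized above: a single structure learner shared across graphs produces edge probabilities from node features, while each dataset keeps its own GNN. The module sizes, the dot-product scorer, and the mean-aggregation layer are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SharedStructureLearner(nn.Module):
    """Maps node features to dense edge probabilities; reused across all graphs."""
    def __init__(self, in_dim: int, hid: int = 64):
        super().__init__()
        self.proj = nn.Linear(in_dim, hid)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.proj(x))
        scores = h @ h.T / h.shape[-1] ** 0.5   # scaled dot-product similarity
        return torch.sigmoid(scores)            # (n, n) edge probabilities

class PerGraphGNN(nn.Module):
    """Dense one-layer message passing, trained per target graph/dataset."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        deg = adj.sum(-1, keepdim=True).clamp_min(1.0)
        return torch.relu(self.lin(adj @ x / deg))  # mean aggregation

# The learner is optimized jointly with several PerGraphGNNs, then applied to
# unseen graphs without fine-tuning (the paper's key claim).
learner = SharedStructureLearner(in_dim=16)
gnn = PerGraphGNN(in_dim=16, out_dim=4)
x = torch.randn(10, 16)                 # toy node features
out = gnn(x, learner(x))
```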
- GrannGAN: Graph annotation generative adversarial networks
We consider the problem of modelling high-dimensional distributions and generating new examples of data with a complex relational feature structure coherent with a graph skeleton.
The proposed model generates data features constrained by the specific graph structure of each data point by splitting the task into two phases.
In the first phase, it models the distribution of features associated with the nodes of the given graph; in the second, it generates the edge features conditioned on the node features.
arXiv Detail & Related papers (2022-12-01T11:49:07Z)
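The two-phase factorization described above can be sketched as a pair of conditional generators: phase one produces node features over a fixed graph skeleton, phase two produces edge features conditioned on those node features. The MLP generators and noise dimensions below are assumptions; the actual model trains each phase adversarially, with discriminators omitted here.

```python
import torch
import torch.nn as nn

class NodeFeatureGenerator(nn.Module):
    """Phase 1: generate node features for a fixed graph skeleton."""
    def __init__(self, noise_dim: int = 8, feat_dim: int = 4):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, feat_dim))

    def forward(self, n_nodes: int) -> torch.Tensor:
        return self.net(torch.randn(n_nodes, self.noise_dim))

class EdgeFeatureGenerator(nn.Module):
    """Phase 2: generate edge features conditioned on the node features."""
    def __init__(self, noise_dim: int = 8, feat_dim: int = 4, edge_dim: int = 2):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(nn.Linear(2 * feat_dim + noise_dim, 32), nn.ReLU(), nn.Linear(32, edge_dim))

    def forward(self, node_feats: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        src, dst = edge_index
        noise = torch.randn(src.shape[0], self.noise_dim)
        pair = torch.cat([node_feats[src], node_feats[dst], noise], dim=-1)
        return self.net(pair)

# Toy skeleton with three directed edges; edges are given, features generated.
edge_index = torch.tensor([[0, 1, 2], [1, 2, 0]])
node_feats = NodeFeatureGenerator()(3)
edge_feats = EdgeFeatureGenerator()(node_feats, edge_index)
```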
- Learning Graph Structure from Convolutional Mixtures
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression by adapting the loss function, and are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
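A minimal sketch of the unrolling idea behind GDNs: assume the observed graph is a quadratic matrix polynomial (a "convolutional mixture") of a latent adjacency, and invert it with truncated proximal gradient iterations whose step sizes, thresholds, and mixture coefficients are learned. The quadratic mixture and the ReLU proximal operator are illustrative assumptions, not the paper's exact parameterization.

```python
import torch
import torch.nn as nn

class GraphDeconvolutionNet(nn.Module):
    """Unrolled, truncated proximal-gradient iterations that invert an assumed
    quadratic convolutional mixture h(S) = c0*I + c1*S + c2*S^2."""

    def __init__(self, n_layers: int = 8):
        super().__init__()
        self.steps = nn.Parameter(torch.full((n_layers,), 0.1))   # per-layer step sizes
        self.lams = nn.Parameter(torch.full((n_layers,), 0.05))   # per-layer l1 thresholds
        self.c = nn.Parameter(torch.tensor([0.0, 1.0, 0.5]))      # mixture coefficients

    def mixture(self, S: torch.Tensor) -> torch.Tensor:
        eye = torch.eye(S.shape[0], device=S.device)
        return self.c[0] * eye + self.c[1] * S + self.c[2] * (S @ S)

    def forward(self, A_obs: torch.Tensor) -> torch.Tensor:
        S = torch.zeros_like(A_obs)
        for alpha, lam in zip(self.steps, self.lams):
            R = self.mixture(S) - A_obs                          # residual h(S) - A
            grad = self.c[1] * R + self.c[2] * (S @ R + R @ S)   # gradient for symmetric S
            S = torch.relu(S - alpha * grad - lam)               # prox of l1 + nonnegativity
        return 0.5 * (S + S.T)                                   # symmetric latent adjacency
```

Training such a network end-to-end on (observed, latent) graph pairs is what makes the approach supervised and inductive: the learned iterations apply to unseen graphs of the same size.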
- Graph Kernel Neural Networks
We propose to use graph kernels, i.e. kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture can plug in any type of graph kernel and has the added benefit of providing some interpretability.
arXiv Detail & Related papers (2021-12-14T14:48:08Z)
- A Deep Latent Space Model for Graph Representation Learning
We propose a Deep Latent Space Model (DLSM) for directed graphs to incorporate the traditional latent variable based generative model into deep learning frameworks.
Our proposed model consists of a graph convolutional network (GCN) encoder and a decoder, which are layer-wise connected by a hierarchical variational auto-encoder architecture.
Experiments on real-world datasets show that the proposed model achieves state-of-the-art performance on both link prediction and community detection tasks.
arXiv Detail & Related papers (2021-06-22T12:41:19Z)
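A VGAE-flavoured sketch of the encoder-decoder idea in DLSM, under stated assumptions: a single mean-aggregating encoder layer, a reparameterized latent space, and separate sender/receiver embeddings so the decoder handles directed edges. DLSM's hierarchical, layer-wise-connected VAE is richer than this single-layer stand-in.

```python
import torch
import torch.nn as nn

class DirectedLatentSpaceModel(nn.Module):
    """GCN-style encoder with a reparameterized latent space and an asymmetric
    inner-product decoder for directed edges."""

    def __init__(self, in_dim: int, z_dim: int = 16):
        super().__init__()
        self.enc = nn.Linear(in_dim, 4 * z_dim)   # mu and logvar for (z_out, z_in)
        self.z_dim = z_dim

    def forward(self, x: torch.Tensor, adj: torch.Tensor):
        deg = adj.sum(-1, keepdim=True).clamp_min(1.0)
        h = self.enc(adj @ x / deg)                           # mean-aggregated layer
        mu, logvar = h.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        z_out, z_in = z.chunk(2, dim=-1)                      # sender / receiver roles
        logits = z_out @ z_in.T                               # P(i -> j) is asymmetric
        return torch.sigmoid(logits), mu, logvar
```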
- GRAND: Graph Neural Diffusion
We present Graph Neural Diffusion (GRAND), which approaches deep learning on graphs as a continuous diffusion process.
In our model, the layer structure and topology correspond to the discretisation choices of temporal and spatial operators.
Key to the success of our models is stability with respect to perturbations in the data, which is addressed for both implicit and explicit discretisation schemes.
arXiv Detail & Related papers (2021-06-21T09:10:57Z)
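The "layers as discretisation choices" view above is easy to illustrate: one explicit-Euler step of graph diffusion plays the role of one network layer. The row-normalized adjacency below stands in for GRAND's learned attention matrix, which is an assumption of this sketch.

```python
import torch

def grand_euler_step(x: torch.Tensor, adj: torch.Tensor, tau: float = 0.25) -> torch.Tensor:
    """One explicit-Euler step of dx/dt = (A(x) - I) x. In GRAND, A(x) is a
    learned attention matrix; a row-normalized adjacency stands in here."""
    A = adj / adj.sum(-1, keepdim=True).clamp_min(1.0)
    return x + tau * (A @ x - x)   # each "layer" is one time step of size tau

# Stacking L steps integrates the diffusion up to time L * tau: depth = time.
x = torch.randn(5, 3)
adj = (torch.rand(5, 5) > 0.5).float()
for _ in range(4):
    x = grand_euler_step(x, adj)
```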
- Regularization of Mixture Models for Robust Principal Graph Learning
A regularized version of Mixture Models is proposed to learn a principal graph from a distribution of $D$-dimensional data points.
Parameters of the model are iteratively estimated through an Expectation-Maximization procedure.
arXiv Detail & Related papers (2021-06-16T18:00:02Z)
- Block-Approximated Exponential Random Graphs
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximate framework for fitting such non-trivial ERGs that results in dyadic-independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
- The Power of Graph Convolutional Networks to Distinguish Random Graph Models: Short Version
Graph convolutional networks (GCNs) are a widely used method for graph representation learning.
We investigate the power of GCNs to distinguish between different random graph models on the basis of the embeddings of their sample graphs.
arXiv Detail & Related papers (2020-02-13T17:58:42Z)