AMES: A Differentiable Embedding Space Selection Framework for Latent
Graph Inference
- URL: http://arxiv.org/abs/2311.11891v1
- Date: Mon, 20 Nov 2023 16:24:23 GMT
- Title: AMES: A Differentiable Embedding Space Selection Framework for Latent
Graph Inference
- Authors: Yuan Lu, Haitz Sáez de Ocáriz Borde, Pietro Liò
- Abstract summary: We introduce the Attentional Multi-Embedding Selection (AMES) framework, a differentiable method for selecting the best embedding space for latent graph inference.
Our framework consistently achieves results comparable or superior to previous latent graph inference methods.
- Score: 6.115315198322837
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In real-world scenarios, although data entities may possess inherent
relationships, the specific graph illustrating their connections might not be
directly accessible. Latent graph inference addresses this issue by enabling
Graph Neural Networks (GNNs) to operate on point cloud data, dynamically
learning the necessary graph structure. These graphs are often derived from a
latent embedding space, which can be modeled using Euclidean, hyperbolic,
spherical, or product spaces. However, currently, there is no principled
differentiable method for determining the optimal embedding space. In this
work, we introduce the Attentional Multi-Embedding Selection (AMES) framework,
a differentiable method for selecting the best embedding space for latent graph
inference through backpropagation, guided by a downstream task. Our framework
consistently achieves results comparable or superior to those of previous
latent graph inference methods across five benchmark datasets. Importantly,
our approach eliminates the need for conducting multiple experiments to
identify the optimal embedding space. Furthermore, we explore interpretability
techniques that track the gradient contributions of different latent graphs,
shedding light on how our attention-based, fully differentiable approach learns
to choose the appropriate latent space. In line with previous works, our
experiments emphasize the advantages of hyperbolic spaces in enhancing
performance. More importantly, our interpretability framework provides a
general approach for quantitatively comparing embedding spaces across different
tasks based on their contributions, a dimension that has been overlooked in
previous literature on latent graph inference.
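The abstract describes the core mechanism: one latent graph is inferred per candidate embedding space (Euclidean, hyperbolic, spherical), and learnable attention weights fuse them so that the choice of space is trained end-to-end by backpropagation. Below is a minimal PyTorch sketch of that idea, not the paper's implementation: the `AMESSketch` class, the linear encoders, the soft-adjacency construction, and the simple projections into the Poincaré ball and unit sphere are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def euclidean_dist(z):
    # Pairwise Euclidean distances, shape (n, n).
    return torch.cdist(z, z)

def hyperbolic_dist(z, eps=1e-5):
    # Poincare-ball distance; points are first squashed into the unit ball
    # (a crude projection chosen here for illustration only).
    z = z / (1 + z.norm(dim=-1, keepdim=True))
    sq = torch.cdist(z, z) ** 2
    denom = (1 - z.norm(dim=-1) ** 2).clamp_min(eps)
    x = 1 + 2 * sq / (denom.unsqueeze(0) * denom.unsqueeze(1))
    return torch.acosh(x.clamp_min(1 + eps))

def spherical_dist(z, eps=1e-5):
    # Great-circle distance between points projected onto the unit sphere.
    z = F.normalize(z, dim=-1)
    cos = (z @ z.T).clamp(-1 + eps, 1 - eps)
    return torch.acos(cos)

class AMESSketch(nn.Module):
    """Hypothetical reading of AMES: one latent graph per candidate
    embedding space, fused by learnable attention weights so the space
    selection itself receives gradients from the downstream task."""
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.spaces = [euclidean_dist, hyperbolic_dist, spherical_dist]
        # One encoder into each candidate latent space.
        self.encoders = nn.ModuleList(
            nn.Linear(in_dim, latent_dim) for _ in self.spaces)
        # Learnable logits -> attention over embedding spaces.
        self.space_logits = nn.Parameter(torch.zeros(len(self.spaces)))
        self.temperature = nn.Parameter(torch.tensor(1.0))

    def forward(self, x):
        # Soft adjacency per space: closer points -> larger edge weight.
        adjs = []
        for enc, dist in zip(self.encoders, self.spaces):
            d = dist(enc(x))
            adjs.append(torch.softmax(-d / self.temperature.abs(), dim=-1))
        attn = torch.softmax(self.space_logits, dim=0)  # selection weights
        # Differentiable convex combination of the candidate graphs.
        return sum(w * a for w, a in zip(attn, adjs)), attn
```

A downstream GNN would consume the fused adjacency; the gradients flowing back into `space_logits` are what make the embedding space selection differentiable rather than a matter of separate trial runs.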
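The abstract also mentions interpretability via the gradient contributions of the different latent graphs. Continuing the hypothetical sketch above, one crude way to read off such contributions is the gradient of a downstream loss with respect to the per-space attention logits; `loss_fn` and the probe itself are illustrative assumptions, not the paper's actual procedure.

```python
def space_gradient_contributions(model, x, loss_fn):
    # model: an AMESSketch instance; loss_fn: any downstream loss over
    # the fused adjacency (both assumptions made for this sketch).
    adj, attn = model(x)
    loss = loss_fn(adj)
    # Gradient of the loss w.r.t. the per-space attention logits:
    # larger magnitude ~ that embedding space influences the task more.
    grads = torch.autograd.grad(loss, model.space_logits)[0]
    return {name: g.item() for name, g in
            zip(("euclidean", "hyperbolic", "spherical"), grads)}
```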
Related papers
- Towards Graph Prompt Learning: A Survey and Beyond [38.55555996765227]
Large-scale "pre-train and prompt learning" paradigms have demonstrated remarkable adaptability.
This survey categorizes over 100 relevant works in this field, summarizing general design principles and the latest applications.
arXiv Detail & Related papers (2024-08-26T06:36:42Z)
- MS-IMAP -- A Multi-Scale Graph Embedding Approach for Interpretable Manifold Learning [1.8124328823188354]
This paper introduces a framework for multi-scale graph network embedding based on spectral graph wavelets.
We show that in Paley-Wiener spaces on graphs, the spectral graph wavelet operator provides greater flexibility and control over smoothness.
An additional key advantage of the proposed embedding is its ability to establish a correspondence between the embedding and input feature spaces.
arXiv Detail & Related papers (2024-06-04T20:48:33Z)
- Improving embedding of graphs with missing data by soft manifolds [51.425411400683565]
The reliability of graph embeddings depends on how much the geometry of the continuous space matches the graph structure.
We introduce a new class of manifolds, named soft manifolds, that can address this issue.
Using soft manifolds for graph embedding provides continuous spaces suited to any downstream data analysis task on complex datasets.
arXiv Detail & Related papers (2023-11-29T12:48:33Z)
- You Only Transfer What You Share: Intersection-Induced Graph Transfer Learning for Link Prediction [79.15394378571132]
We investigate a previously overlooked phenomenon: in many cases, a densely connected, complementary graph can be found for the original graph.
The denser graph may share nodes with the original graph, which offers a natural bridge for transferring selective, meaningful knowledge.
We identify this setting as Graph Intersection-induced Transfer Learning (GITL), which is motivated by practical applications in e-commerce or academic co-authorship predictions.
arXiv Detail & Related papers (2023-02-27T22:56:06Z)
- Latent Graph Inference using Product Manifolds [0.0]
We generalize the discrete Differentiable Graph Module (dDGM) for latent graph learning.
Our novel approach is tested on a wide range of datasets and outperforms the original dDGM model.
arXiv Detail & Related papers (2022-11-26T22:13:06Z)
- Data-heterogeneity-aware Mixing for Decentralized Learning [63.83913592085953]
We characterize the dependence of convergence on the relationship between the mixing weights of the graph and the data heterogeneity across nodes.
We propose a metric that quantifies the ability of a graph to mix the current gradients.
Motivated by our analysis, we propose an approach that periodically and efficiently optimizes this metric.
arXiv Detail & Related papers (2022-04-13T15:54:35Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- Group Contrastive Self-Supervised Learning on Graphs [101.45974132613293]
We study self-supervised learning on graphs using contrastive methods.
We argue that contrasting graphs in multiple subspaces enables graph encoders to capture more abundant characteristics.
arXiv Detail & Related papers (2021-07-20T22:09:21Z)
- Understanding graph embedding methods and their applications [1.14219428942199]
Graph embedding techniques can be effective in converting high-dimensional sparse graphs into low-dimensional, dense and continuous vector spaces.
The generated nonlinear and highly informative graph embeddings in the latent space can be conveniently used to address different downstream graph analytics tasks.
arXiv Detail & Related papers (2020-12-15T00:30:22Z)
- Spatial Pyramid Based Graph Reasoning for Semantic Segmentation [67.47159595239798]
We apply graph convolution to the semantic segmentation task and propose an improved Laplacian.
The graph reasoning is directly performed in the original feature space organized as a spatial pyramid.
We achieve comparable performance while reducing computational and memory overhead.
arXiv Detail & Related papers (2020-03-23T12:28:07Z)
- Which way? Direction-Aware Attributed Graph Embedding [2.429993132301275]
Graph embedding algorithms are used to efficiently represent a graph in a continuous vector space.
One aspect that is often overlooked is whether the graph is directed or not.
This study presents a novel text-enriched, direction-aware algorithm called DIAGRAM.
arXiv Detail & Related papers (2020-01-30T13:08:19Z)