DeepWalking Backwards: From Embeddings Back to Graphs
- URL: http://arxiv.org/abs/2102.08532v1
- Date: Wed, 17 Feb 2021 02:16:12 GMT
- Title: DeepWalking Backwards: From Embeddings Back to Graphs
- Authors: Sudhanshu Chanpuriya, Cameron Musco, Konstantinos Sotiropoulos, and
Charalampos E. Tsourakakis
- Abstract summary: We study whether embeddings can be inverted to (approximately) recover the graph used to generate them.
We present algorithms for accurate embedding inversion - i.e., from the low-dimensional embedding of a graph G, we can find a graph H with a very similar embedding.
Our findings are a step towards a more rigorous understanding of exactly what information embeddings encode about the input graph, and why this information is useful for learning tasks.
- Score: 22.085932117823738
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-dimensional node embeddings play a key role in analyzing graph datasets.
However, little work studies exactly what information is encoded by popular
embedding methods, and how this information correlates with performance in
downstream machine learning tasks. We tackle this question by studying whether
embeddings can be inverted to (approximately) recover the graph used to
generate them. Focusing on a variant of the popular DeepWalk method (Perozzi et
al., 2014; Qiu et al., 2018), we present algorithms for accurate embedding
inversion - i.e., from the low-dimensional embedding of a graph G, we can find
a graph H with a very similar embedding. We perform numerous experiments on
real-world networks, observing that significant information about G, such as
specific edges and bulk properties like triangle density, is often lost in H.
However, community structure is often preserved or even enhanced. Our findings
are a step towards a more rigorous understanding of exactly what information
embeddings encode about the input graph, and why this information is useful for
learning tasks.
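To make the inversion problem concrete, here is a minimal sketch in the spirit of the paper: it embeds a graph via a rank-k spectral factorization of its adjacency matrix (a simplification of the DeepWalk/NetMF matrix the paper actually inverts) and recovers a graph H by rounding the low-rank reconstruction. The 0.5 threshold and the toy two-community graph are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def embed(A, k):
    # Rank-k spectral embedding of a symmetric adjacency matrix
    # (a stand-in for the NetMF/DeepWalk matrix the paper inverts).
    vals, vecs = np.linalg.eigh(A)
    top = np.argsort(vals)[-k:]                 # k largest eigenvalues
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

def invert_embedding(X, threshold=0.5):
    # Rebuild the rank-k matrix from the embedding and round its
    # entries to {0, 1} to obtain a graph H with a similar embedding.
    M = X @ X.T
    H = (M > threshold).astype(int)
    np.fill_diagonal(H, 0)                      # no self-loops
    return H

# Toy demo: two dense communities with sparse cross edges.
rng = np.random.default_rng(0)
n = 40
A = (rng.random((n, n)) < 0.05).astype(int)
A[:20, :20] |= (rng.random((20, 20)) < 0.6).astype(int)
A[20:, 20:] |= (rng.random((20, 20)) < 0.6).astype(int)
A = np.triu(A, 1)
A = A + A.T                                     # symmetric, zero diagonal

H = invert_embedding(embed(A, k=4))
print("entrywise agreement between A and H:", (A == H).mean())
```

Consistent with the paper's findings, the coarse two-block community structure tends to survive in H even when individual edges do not.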
Related papers
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z)
- GiGaMAE: Generalizable Graph Masked Autoencoder via Collaborative Latent Space Reconstruction [76.35904458027694]
Masked autoencoder models lack good generalization ability on graph data.
We propose a novel graph masked autoencoder framework called GiGaMAE.
Our results will shed light on the design of foundation models on graph-structured data.
arXiv Detail & Related papers (2023-08-18T16:30:51Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- Expander Graph Propagation [0.0]
We propose an elegant approach based on propagating information over expander graphs.
We show that EGP is able to address these concerns, including oversquashing, while requiring minimal effort to set up.
We believe our analysis paves the way to a novel class of scalable methods to counter oversquashing in GNNs.
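As a rough illustration of the idea (not the authors' architecture), one can interleave message passing on the input graph with propagation over a fixed expander; below, a random regular graph stands in for the expander, and the mean-aggregation update is an assumption of this sketch.

```python
import networkx as nx
import numpy as np

def propagate(adj, X):
    # One round of mean-aggregation message passing: X' = D^{-1} A X.
    deg = adj.sum(axis=1, keepdims=True)
    return (adj @ X) / np.maximum(deg, 1.0)

n, d = 64, 4
G = nx.erdos_renyi_graph(n, 0.05, seed=0)   # input graph
E = nx.random_regular_graph(d, n, seed=0)   # expander stand-in
A_g, A_e = nx.to_numpy_array(G), nx.to_numpy_array(E)

X = np.random.default_rng(0).random((n, 8)) # node features
for _ in range(3):
    X = propagate(A_g, X)                   # local step on the input graph
    X = propagate(A_e, X)                   # global step on the expander
```

Because the expander has small diameter, the global steps move information between distant nodes without the bottlenecks that cause oversquashing.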
arXiv Detail & Related papers (2022-10-06T15:36:37Z)
- Scaling R-GCN Training with Graph Summarization [71.06855946732296]
Training of Relational Graph Convolutional Networks (R-GCNs) does not scale well with the size of the graph.
In this work, we experiment with the use of graph summarization techniques to compress the graph.
We obtain reasonable results on the AIFB, MUTAG and AM datasets.
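One elementary form of graph summarization is a quotient graph over a node partition; the degree-based partition below is an illustrative assumption, not the summarization technique used in the paper.

```python
import networkx as nx

def summarize_by_degree(G):
    # Merge nodes of equal degree into supernodes; training then runs
    # on the (much smaller) quotient graph instead of the full graph.
    groups = {}
    for node, deg in G.degree():
        groups.setdefault(deg, set()).add(node)
    return nx.quotient_graph(G, list(groups.values()))

G = nx.barabasi_albert_graph(500, 3, seed=0)
S = summarize_by_degree(G)
print(len(G), "nodes summarized into", len(S), "supernodes")
```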
arXiv Detail & Related papers (2022-03-05T00:28:43Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present a Graph Model Inversion attack (GraphMI), which aims to extract the private training graph data by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
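The projected gradient idea can be sketched as follows: relax the adjacency matrix to continuous edge weights, take gradient steps on an attack loss, and project back onto symmetric matrices with entries in [0, 1] after each step. The quadratic surrogate loss below is a placeholder assumption; in GraphMI the gradients would come from the trained target GNN.

```python
import numpy as np

def projected_gradient_attack(grad_fn, n, steps=200, lr=0.1, init=0.05):
    # Continuous relaxation of the discrete adjacency matrix, with a
    # projection onto the feasible box [0, 1]^{n x n} after each step.
    A = np.full((n, n), init)
    for _ in range(steps):
        A = A - lr * grad_fn(A)
        A = np.clip((A + A.T) / 2.0, 0.0, 1.0)  # symmetry + box projection
        np.fill_diagonal(A, 0.0)
    return (A > 0.5).astype(int)                # round back to edges

# Placeholder objective ||A - M||_F^2 against a synthetic symmetric
# target M; GraphMI would instead backpropagate through the GNN.
rng = np.random.default_rng(1)
M = rng.random((30, 30))
M = (M + M.T) / 2.0
A_hat = projected_gradient_attack(lambda A: 2.0 * (A - M), n=30)
```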
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- DIG: A Turnkey Library for Diving into Graph Deep Learning Research [39.58666190541479]
DIG: Dive into Graphs is a research-oriented library that integrates unified implementations of common graph deep learning algorithms for several advanced tasks.
For each direction, we provide unified implementations of data interfaces, common algorithms, and evaluation metrics.
arXiv Detail & Related papers (2021-03-23T15:05:10Z)
- Co-embedding of Nodes and Edges with Graph Neural Networks [13.020745622327894]
Graph embedding transforms and encodes graph-structured data, which lies in a high-dimensional and non-Euclidean feature space.
CensNet is a general graph embedding framework that embeds both nodes and edges into a latent feature space.
Our approach achieves or matches the state-of-the-art performance in four graph learning tasks.
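One common device for treating edges as first-class objects is the line graph, whose nodes are the original graph's edges; the sketch below embeds both graphs with the same spectral recipe, which is a loose illustration rather than CensNet's alternating node/edge convolutions.

```python
import networkx as nx
import numpy as np

def spectral_embedding(H, k=4):
    # Top-k eigenvectors of the adjacency matrix as a simple embedding.
    A = nx.to_numpy_array(H)
    vals, vecs = np.linalg.eigh(A)
    return vecs[:, np.argsort(vals)[-k:]]

G = nx.karate_club_graph()
L = nx.line_graph(G)               # nodes of L are the edges of G

node_emb = spectral_embedding(G)   # one row per node of G
edge_emb = spectral_embedding(L)   # one row per edge of G
print(node_emb.shape, edge_emb.shape)
```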
arXiv Detail & Related papers (2020-10-25T22:39:31Z)
- Graph Information Bottleneck for Subgraph Recognition [103.37499715761784]
We propose a framework of Graph Information Bottleneck (GIB) for the subgraph recognition problem in deep graph learning.
Under this framework, one can recognize the maximally informative yet compressive subgraph, named IB-subgraph.
We evaluate the properties of the IB-subgraph in three application scenarios: improvement of graph classification, graph interpretation and graph denoising.
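The objective behind the IB-subgraph is the usual information-bottleneck trade-off; in notation of our own choosing (the excerpt does not spell it out), the subgraph should stay predictive of the label Y while compressing away the rest of the input graph G:

```latex
\max_{G_{\mathrm{sub}}} \; I\left(G_{\mathrm{sub}}; Y\right) \;-\; \beta \, I\left(G_{\mathrm{sub}}; G\right)
```

Larger values of the trade-off parameter \beta favor smaller, more aggressively compressed subgraphs.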
arXiv Detail & Related papers (2020-10-12T09:32:20Z)
- About Graph Degeneracy, Representation Learning and Scalability [2.029783382155471]
We present two techniques taking advantage of the K-Core Decomposition to reduce the time and memory consumption of walk-based Graph Representation Learning algorithms.
We evaluate the proposed techniques on several academic datasets, measuring both embedding quality and computational resource usage.
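As a minimal illustration (not the paper's exact procedure), one can restrict walk-based embedding to the k-core of the graph, since peeled-off peripheral nodes inflate walk time and memory and can be placed afterwards from their neighbors.

```python
import networkx as nx

def core_reduced(G, k=2):
    # Run the walk-based embedding only on the k-core; the peeled
    # low-degree periphery is handled separately afterwards.
    return nx.k_core(G, k)

G = nx.karate_club_graph()
H = core_reduced(G, k=3)
print(len(G), "nodes ->", len(H), "nodes in the 3-core")
```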
arXiv Detail & Related papers (2020-09-04T09:39:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.