DINE: Dimensional Interpretability of Node Embeddings
- URL: http://arxiv.org/abs/2310.01162v1
- Date: Mon, 2 Oct 2023 12:47:42 GMT
- Title: DINE: Dimensional Interpretability of Node Embeddings
- Authors: Simone Piaggesi, Megha Khosla, André Panisson, Avishek Anand
- Abstract summary: Graph representation learning methods, such as node embeddings, are powerful approaches to map nodes into a latent vector space.
We develop new metrics that measure the global interpretability of embedding vectors.
We then introduce DINE, a novel approach that can retrofit existing node embeddings by making them more interpretable without sacrificing their task performance.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graphs are ubiquitous due to their flexibility in representing social and
technological systems as networks of interacting elements. Graph representation
learning methods, such as node embeddings, are powerful approaches to map nodes
into a latent vector space, allowing their use for various graph tasks. Despite
their success, only a few studies have focused on explaining node embeddings
locally. Moreover, global explanations of node embeddings remain unexplored,
limiting interpretability and debugging potential. We address this gap by
developing human-understandable explanations for dimensions in node embeddings.
Towards that, we first develop new metrics that measure the global
interpretability of embedding vectors based on the marginal contribution of the
embedding dimensions to predicting graph structure. We say that an embedding
dimension is more interpretable if it can faithfully map to an understandable
sub-structure in the input graph - like community structure. Having observed
that standard node embeddings have low interpretability, we then introduce DINE
(Dimension-based Interpretable Node Embedding), a novel approach that can
retrofit existing node embeddings by making them more interpretable without
sacrificing their task performance. We conduct extensive experiments on
synthetic and real-world graphs and show that we can simultaneously learn
highly interpretable node embeddings with effective performance in link
prediction.
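The abstract's core idea — scoring how much each embedding dimension contributes to predicting graph structure — can be illustrated with a toy metric. This is a hedged sketch, not DINE's exact formulation: it scores each dimension by the link-prediction AUC it achieves on its own, as a stand-in for the paper's marginal-contribution measure. The graph, embedding, and `dimension_auc` helper are all illustrative constructions.

```python
import numpy as np

# Hedged sketch (not DINE's exact metric): score each embedding dimension
# by how well it alone reconstructs edges, a toy proxy for the marginal
# contribution of a dimension to predicting graph structure.

rng = np.random.default_rng(0)

# Toy graph: two communities of 4 nodes, all edges inside, none across.
A = np.zeros((8, 8))
A[:4, :4] = 1.0
A[4:, 4:] = 1.0
np.fill_diagonal(A, 0.0)

# Toy embedding: dimension 0 encodes the community split, dimension 1 is noise.
Z = np.zeros((8, 2))
Z[:4, 0], Z[4:, 0] = 1.0, -1.0
Z[:, 1] = rng.normal(size=8)

def dimension_auc(A, z):
    """AUC of single-dimension edge scores z_i * z_j against adjacency A."""
    iu = np.triu_indices_from(A, k=1)
    scores = np.outer(z, z)[iu]
    labels = A[iu]
    pos, neg = scores[labels == 1], scores[labels == 0]
    # Probability that a random edge outscores a random non-edge (ties = 0.5).
    gt = (pos[:, None] > neg[None, :]).mean()
    eq = (pos[:, None] == neg[None, :]).mean()
    return gt + 0.5 * eq

aucs = [dimension_auc(A, Z[:, d]) for d in range(Z.shape[1])]
```

Under this proxy, the community-encoding dimension scores a perfect AUC while the noise dimension does not — an embedding is "dimensionally interpretable" in this toy sense when each high-scoring dimension maps to an identifiable sub-structure such as a community.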
Related papers
- Disentangled and Self-Explainable Node Representation Learning [1.4002424249260854]
We introduce DiSeNE, a framework that generates self-explainable embeddings in an unsupervised manner.
Our method employs disentangled representation learning to produce dimension-wise interpretable embeddings.
We formalize novel desiderata for disentangled and interpretable embeddings, which drive our new objective functions.
arXiv Detail & Related papers (2024-10-28T13:58:52Z)
- DGNN: Decoupled Graph Neural Networks with Structural Consistency between Attribute and Graph Embedding Representations [62.04558318166396]
Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures.
A novel GNNs framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes.
Experimental results conducted on several graph benchmark datasets verify DGNN's superiority in node classification task.
arXiv Detail & Related papers (2024-01-28T06:43:13Z) - Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z) - NodeFormer: A Scalable Graph Structure Learning Transformer for Node
Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
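The Gumbel-Softmax trick that underlies NodeFormer's differentiable edge sampling can be sketched in a few lines. This is an illustrative stand-alone sampler only; the paper's contribution is kernelizing this operator so that all-pair message passing avoids materializing every node pair, which is omitted here.

```python
import numpy as np

# Hedged sketch of the Gumbel-Softmax trick behind NodeFormer's
# differentiable neighbour sampling (the kernelized all-pair version
# from the paper is not reproduced here).

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=0.5):
    """Differentiable approximation of drawing one neighbour from logits."""
    # Gumbel(0, 1) noise perturbs the logits so the softmax approximates
    # a discrete sample; tau controls how close to one-hot it gets.
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / tau
    e = np.exp(y - y.max())
    return e / e.sum()

# Soft "edge weights" from one node to 4 candidate neighbours.
logits = np.array([2.0, 0.5, -1.0, 0.0])
weights = gumbel_softmax(logits)
```

The output is a valid probability vector that concentrates on high-logit neighbours as `tau` shrinks, while remaining differentiable with respect to the logits.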
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Self-Supervised Node Representation Learning via Node-to-Neighbourhood Alignment [10.879056662671802]
Self-supervised node representation learning aims to learn node representations from unlabelled graphs that rival the supervised counterparts.
In this work, we present simple-yet-effective self-supervised node representation learning via aligning the hidden representations of nodes and their neighbourhood.
We learn node representations that achieve promising node classification performance on a set of graph-structured datasets from small- to large-scale.
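The alignment idea — pulling each node's representation toward an aggregate of its neighbours' representations — can be written as a simple loss. This is a hedged sketch of the alignment term alone, using mean aggregation and cosine similarity as assumed choices; the paper's full method adds further machinery.

```python
import numpy as np

# Hedged sketch of node-to-neighbourhood alignment: penalize the cosine
# distance between each node's representation and the mean of its
# neighbours' representations. (Only the alignment term is shown.)

def alignment_loss(Z, A):
    """Mean (1 - cosine similarity) between each node and its neighbour mean."""
    deg = A.sum(axis=1, keepdims=True)
    neigh_mean = (A @ Z) / np.clip(deg, 1, None)
    num = (Z * neigh_mean).sum(axis=1)
    denom = np.linalg.norm(Z, axis=1) * np.linalg.norm(neigh_mean, axis=1) + 1e-12
    return float(np.mean(1.0 - num / denom))

A = np.ones((4, 4)) - np.eye(4)            # a 4-node clique
Z_good = np.ones((4, 3))                   # identical representations
Z_bad = np.ones((4, 3)); Z_bad[0] = -1.0   # node 0 points the wrong way

loss_good = alignment_loss(Z_good, A)
loss_bad = alignment_loss(Z_bad, A)
```

On the clique, perfectly aligned representations incur near-zero loss, while a node pointing away from its neighbourhood is penalized — minimizing this term is what drives the self-supervised training signal.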
arXiv Detail & Related papers (2023-02-09T13:21:18Z)
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
- Finding MNEMON: Reviving Memories of Node Embeddings [39.206574462957136]
We show that an adversary can recover edges with decent accuracy by only gaining access to the node embedding matrix of the original graph.
We demonstrate the effectiveness and applicability of our graph recovery attack through extensive experiments.
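The attack surface described here can be illustrated with a toy embedding-inversion sketch. This is a deliberately simplified stand-in for MNEMON's attack, which is considerably more elaborate: here the adversary merely ranks node pairs by embedding similarity and predicts the top pairs as edges. The community-structured toy graph and the leaked embedding are both assumed constructions.

```python
import numpy as np
from itertools import combinations

# Hedged sketch of an embedding-inversion attack in the spirit of MNEMON:
# given only the node embedding matrix, guess that the most similar node
# pairs are edges. (The paper's graph recovery attack is more elaborate.)

# Toy setting: edges exist exactly within two communities, and the leaked
# embedding happens to encode community membership.
n = 6
communities = [0, 0, 0, 1, 1, 1]
true_edges = {(i, j) for i, j in combinations(range(n), 2)
              if communities[i] == communities[j]}

Z = np.zeros((n, 2))
for node, c in enumerate(communities):
    Z[node, c] = 1.0  # one-hot community indicator as the "leaked" embedding

# Attack: rank all pairs by embedding dot product, predict the top-|E| as edges.
scores = {(i, j): float(Z[i] @ Z[j]) for i, j in combinations(range(n), 2)}
k = len(true_edges)
recovered = set(sorted(scores, key=scores.get, reverse=True)[:k])
```

In this idealized setting the attack recovers the edge set exactly, which is the intuition behind why releasing an embedding matrix alone can leak graph structure.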
arXiv Detail & Related papers (2022-04-14T13:44:26Z)
- Towards Efficient Scene Understanding via Squeeze Reasoning [71.1139549949694]
We propose a novel framework called Squeeze Reasoning.
Instead of propagating information on the spatial map, we first learn to squeeze the input feature into a channel-wise global vector.
We show that our approach can be modularized as an end-to-end trained block and can be easily plugged into existing networks.
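The "squeeze" step can be sketched minimally: collapse a spatial feature map into a single channel-wise vector before reasoning on it. This toy version uses global average pooling as the assumed squeeze operation; the paper's block is richer than a plain pooling.

```python
import numpy as np

# Hedged sketch of the squeeze step: compress a (C, H, W) feature map
# into a channel-wise global vector instead of reasoning on the spatial
# map. (Global average pooling stands in for the paper's full block.)

def squeeze(feature):
    """Collapse the spatial dimensions of a (C, H, W) feature to a (C,) vector."""
    return feature.mean(axis=(1, 2))

feat = np.arange(24, dtype=float).reshape(2, 3, 4)  # C=2, H=3, W=4
vec = squeeze(feat)
```

Reasoning then operates on the compact `(C,)` vector, which is what makes the block cheap enough to plug into existing networks end to end.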
arXiv Detail & Related papers (2020-11-06T12:17:01Z)
- Spectral Embedding of Graph Networks [76.27138343125985]
We introduce an unsupervised graph embedding that trades off local node similarity and connectivity, and global structure.
The embedding is based on a generalized graph Laplacian, whose eigenvectors compactly capture both network structure and neighborhood proximity in a single representation.
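The Laplacian-eigenvector construction can be sketched directly. This uses the standard symmetric normalized Laplacian rather than the paper's generalized variant, so treat it as an illustration of the general recipe: build the Laplacian, take the eigenvectors of the smallest non-trivial eigenvalues as coordinates.

```python
import numpy as np

# Illustrative sketch: spectral node embedding from Laplacian eigenvectors.
# (Standard symmetric normalized Laplacian, not the paper's generalized one.)

# Toy graph: two triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt  # symmetric normalized Laplacian

# Eigenvectors of the smallest non-trivial eigenvalues form the embedding.
eigvals, eigvecs = np.linalg.eigh(L)          # ascending eigenvalues
embedding = eigvecs[:, 1:3]                   # skip the trivial eigenvector

fiedler = embedding[:, 0]  # first coordinate: the Fiedler vector
```

On this toy graph the Fiedler vector's sign cleanly separates the two triangles, which is the sense in which such eigenvectors capture both global structure and local proximity in one representation.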
arXiv Detail & Related papers (2020-09-30T04:59:10Z)
- Integrating Network Embedding and Community Outlier Detection via Multiclass Graph Description [15.679313861083239]
We propose a novel unsupervised graph embedding approach (called DMGD) which integrates outlier and community detection with node embedding.
We show the theoretical bounds on the number of outliers detected by DMGD.
Our formulation boils down to an interesting minimax game between the outliers, community assignments and the node embedding function.
arXiv Detail & Related papers (2020-07-20T16:21:07Z)
- Self-Supervised Graph Representation Learning via Global Context Prediction [31.07584920486755]
This paper introduces a novel self-supervised strategy for graph representation learning by exploiting natural supervision provided by the data itself.
We randomly select pairs of nodes in a graph and train a well-designed neural net to predict the contextual position of one node relative to the other.
Our underlying hypothesis is that the representations learned from such within-graph context would capture the global topology of the graph and finely characterize the similarity and differentiation between nodes.
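The "natural supervision" idea can be made concrete with a small sketch: for a sampled node pair, the free training label is their relative position in the graph. Here shortest-path hop distance stands in as an assumed context label; the paper's contextual-position targets are richer.

```python
from collections import deque

# Hedged sketch of the self-supervision signal: for random node pairs,
# the "free" label is their relative position in the graph. Hop distance
# via BFS stands in for the paper's contextual-position labels.

edges = [(0, 1), (1, 2), (2, 3), (3, 4)]  # a 5-node path graph
adj = {i: [] for i in range(5)}
for i, j in edges:
    adj[i].append(j)
    adj[j].append(i)

def hop_distance(src, dst):
    """BFS shortest-path distance, usable as a self-supervised target."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        if node == dst:
            return d
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                queue.append((nb, d + 1))
    return -1  # dst unreachable from src

label = hop_distance(0, 3)  # training target for the sampled pair (0, 3)
```

A network trained to predict such labels from node representations is pushed to encode global topology without any hand-annotated supervision.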
arXiv Detail & Related papers (2020-03-03T15:46:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.