Probing Graph Representations
- URL: http://arxiv.org/abs/2303.03951v1
- Date: Tue, 7 Mar 2023 14:58:18 GMT
- Title: Probing Graph Representations
- Authors: Mohammad Sadegh Akhondzadeh, Vijay Lingam and Aleksandar Bojchevski
- Abstract summary: We use a probing framework to quantify the amount of meaningful information captured in graph representations.
Our findings on molecular datasets show the potential of probing for understanding the inductive biases of graph-based models.
We advocate for probing as a useful diagnostic tool for evaluating graph-based models.
- Score: 77.7361299039905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Today we have a good theoretical understanding of the representational power
of Graph Neural Networks (GNNs). For example, their limitations have been
characterized in relation to a hierarchy of Weisfeiler-Lehman (WL) isomorphism
tests. However, we do not know what is encoded in the learned representations.
This is our main question. We answer it using a probing framework to quantify
the amount of meaningful information captured in graph representations. Our
findings on molecular datasets show the potential of probing for understanding
the inductive biases of graph-based models. We compare different families of
models and show that transformer-based models capture more chemically relevant
information compared to models based on message passing. We also study the
effect of different design choices such as skip connections and virtual nodes.
We advocate for probing as a useful diagnostic tool for evaluating graph-based
models.
Related papers
- Neural Scaling Laws on Graphs [54.435688297561015]
We study neural scaling laws on graphs from both model and data perspectives.
For model scaling, we investigate the phenomenon of scaling law collapse and identify overfitting as the potential reason.
For data scaling, we suggest that the number of graphs cannot effectively measure graph data volume in scaling laws, since the sizes of different graphs are highly irregular.
arXiv Detail & Related papers (2024-02-03T06:17:21Z) - From Shallow to Deep: Compositional Reasoning over Graphs for Visual Question Answering [3.7094119304085584]
It is essential to learn to answer deeper questions that require compositional reasoning on the image and external knowledge.
We propose a Hierarchical Graph Neural Module Network (HGNMN) that reasons over multi-layer graphs with neural modules.
Our model consists of several well-designed neural modules that perform specific functions over graphs.
arXiv Detail & Related papers (2022-06-25T02:20:02Z) - Can Language Models Capture Graph Semantics? From Graphs to Language Model and Vice-Versa [5.340730281227837]
We conduct a study to examine if the deep learning model can compress a graph and then output the same graph with most of the semantics intact.
Our experiments show that Transformer models are not able to express the full semantics of the input knowledge graph.
arXiv Detail & Related papers (2022-06-18T18:12:20Z) - Representation Power of Graph Neural Networks: Improved Expressivity via Algebraic Analysis [124.97061497512804]
We show that standard Graph Neural Networks (GNNs) produce more discriminative representations than the Weisfeiler-Lehman (WL) algorithm.
We also show that simple convolutional architectures with white inputs produce equivariant features that count the closed paths in the graph.
arXiv Detail & Related papers (2022-05-19T18:40:25Z) - OrphicX: A Causality-Inspired Latent Variable Model for Interpreting Graph Neural Networks [42.539085765796976]
This paper proposes a new eXplanation framework, called OrphicX, for generating causal explanations for graph neural networks (GNNs).
We construct a distinct generative model and design an objective function that encourages the generative model to produce causal, compact, and faithful explanations.
We show that OrphicX can effectively identify the causal semantics for generating causal explanations, significantly outperforming its alternatives.
arXiv Detail & Related papers (2022-03-29T03:08:33Z) - Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
arXiv Detail & Related papers (2022-02-07T08:04:59Z) - GraphSVX: Shapley Value Explanations for Graph Neural Networks [81.83769974301995]
Graph Neural Networks (GNNs) achieve significant performance for various learning tasks on geometric data.
In this paper, we propose a unified framework satisfied by most existing GNN explainers.
We introduce GraphSVX, a post hoc local model-agnostic explanation method specifically designed for GNNs.
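GraphSVX builds on Shapley values from cooperative game theory. As a toy illustration of the underlying idea only (not the GraphSVX method itself), the sketch below computes exact Shapley values by enumerating coalitions; the three-feature value function is a made-up additive example, chosen so that each feature's Shapley value equals its own contribution.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values by summing over all coalitions (exponential cost,
    feasible only for small player sets; real explainers approximate this)."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Weight of a coalition of size k in the Shapley formula.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of p when joining coalition S.
                total += weight * (value(set(S) | {p}) - value(set(S)))
        phi[p] = total
    return phi

# Hypothetical additive game: features 'a', 'b', 'c' contribute 3, 1, 0.
contrib = {"a": 3.0, "b": 1.0, "c": 0.0}
vals = shapley_values(list(contrib), lambda S: sum(contrib[p] for p in S))
print(vals)
```

For an additive game like this, the Shapley value of each player reduces to its individual contribution; in a GNN explainer the value function would instead be the model's prediction when only the chosen nodes or features are present.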
arXiv Detail & Related papers (2021-04-18T10:40:37Z) - Non-Parametric Graph Learning for Bayesian Graph Neural Networks [35.88239188555398]
We propose a novel non-parametric graph model for constructing the posterior distribution of graph adjacency matrices.
We demonstrate the advantages of this model in three different problem settings: node classification, link prediction and recommendation.
arXiv Detail & Related papers (2020-06-23T21:10:55Z) - A Heterogeneous Graph with Factual, Temporal and Logical Knowledge for Question Answering Over Dynamic Contexts [81.4757750425247]
We study question answering over a dynamic textual environment.
We develop a graph neural network over the constructed graph, and train the model in an end-to-end manner.
arXiv Detail & Related papers (2020-04-25T04:53:54Z) - The Power of Graph Convolutional Networks to Distinguish Random Graph Models: Short Version [27.544219236164764]
Graph convolutional networks (GCNs) are a widely used method for graph representation learning.
We investigate the power of GCNs to distinguish between different random graph models on the basis of the embeddings of their sample graphs.
arXiv Detail & Related papers (2020-02-13T17:58:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.