Encoding Concepts in Graph Neural Networks
- URL: http://arxiv.org/abs/2207.13586v1
- Date: Wed, 27 Jul 2022 15:34:14 GMT
- Title: Encoding Concepts in Graph Neural Networks
- Authors: Lucie Charlotte Magister and Pietro Barbiero and Dmitry Kazhdan and
Federico Siciliano and Gabriele Ciravegna and Fabrizio Silvestri and Pietro
Lio and Mateja Jamnik
- Abstract summary: We introduce the Concept Encoder Module, the first differentiable concept-discovery approach for graph networks.
The proposed approach makes graph networks explainable by design by first discovering graph concepts and then using these to solve the task.
Our results demonstrate that this approach allows graph networks to attain model accuracy comparable with their equivalent vanilla versions.
- Score: 6.129235861306906
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The opaque reasoning of Graph Neural Networks induces a lack of human trust.
Existing graph network explainers attempt to address this issue by providing
post-hoc explanations; however, they fail to make the model itself more
interpretable. To fill this gap, we introduce the Concept Encoder Module, the
first differentiable concept-discovery approach for graph networks. The
proposed approach makes graph networks explainable by design by first
discovering graph concepts and then using these to solve the task. Our results
demonstrate that this approach allows graph networks to: (i) attain model
accuracy comparable with their equivalent vanilla versions, (ii) discover
meaningful concepts that achieve high concept completeness and purity scores,
(iii) provide high-quality concept-based logic explanations for their
prediction, and (iv) support effective interventions at test time: these can
increase human trust as well as significantly improve model performance.
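The two-stage idea in the abstract (first discover graph concepts, then solve the task from those concepts alone) can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: it assumes node embeddings already produced by a GNN, fixed concept prototypes standing in for the learned concept space, and a linear head over graph-level concept activations.

```python
import numpy as np

def concept_bottleneck(node_embeddings, prototypes, class_weights):
    """Hypothetical sketch of a concept-encoder step.

    node_embeddings: (n_nodes, d) GNN output for one graph
    prototypes:      (k, d) concept centroids (learned in the real model)
    class_weights:   (k, n_classes) linear head over concept activations
    """
    # Concept discovery: assign each node to its nearest concept prototype.
    dists = np.linalg.norm(
        node_embeddings[:, None, :] - prototypes[None, :, :], axis=-1
    )
    assignments = dists.argmin(axis=1)  # (n_nodes,)

    # Graph-level concept activations: fraction of nodes per concept.
    k = prototypes.shape[0]
    activations = np.bincount(assignments, minlength=k) / len(assignments)

    # Predict from concepts only; the bottleneck makes reasoning inspectable,
    # and editing `activations` here would amount to a test-time intervention.
    logits = activations @ class_weights
    return activations, logits
```

Because the prediction depends only on `activations`, a human can read off which concepts drove the output, which is the sense in which such a model is explainable by design.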
Related papers
- Revealing Combinatorial Reasoning of GNNs via Graph Concept Bottleneck Layer [28.886850252681754]
We develop a graph concept layer that can be integrated into any GNN architecture. The predicted concept scores are projected to class labels by the selected discriminative layer. This enforces sparse reasoning in the GNN's predictions so that they fit a soft logical rule over graph concepts.
arXiv Detail & Related papers (2026-03-02T16:07:24Z) - FaCT: Faithful Concept Traces for Explaining Neural Network Decisions [56.796533084868884]
Deep networks have shown remarkable performance across a wide range of tasks, yet obtaining a global, concept-level understanding of how they function remains a key challenge. We put emphasis on the faithfulness of concept-based explanations and propose a new model with model-inherent mechanistic concept explanations. Our concepts are shared across classes and, from any layer, their contribution to the logit and their input visualization can be faithfully traced.
arXiv Detail & Related papers (2025-10-29T13:35:46Z) - GIN-Graph: A Generative Interpretation Network for Model-Level Explanation of Graph Neural Networks [0.5702263832571335]
We propose GIN-Graph, a new Generative Interpretation Network for model-level explanation of Graph Neural Networks. GIN-Graph generates reliable and high-quality model-level explanation graphs. Experimental results indicate that GIN-Graph can be applied to interpret GNNs trained on a variety of graph datasets.
arXiv Detail & Related papers (2025-03-08T22:39:36Z) - Graph Reasoning Networks [9.18586425686959]
Graph Reasoning Networks (GRNs) are a novel approach that combines the strengths of fixed and learned graph representations with a reasoning module based on a differentiable satisfiability solver.
Results on real-world datasets show comparable performance to GNNs.
Experiments on synthetic datasets demonstrate the potential of the newly proposed method.
arXiv Detail & Related papers (2024-07-08T10:53:49Z) - Foundations and Frontiers of Graph Learning Theory [81.39078977407719]
Recent advancements in graph learning have revolutionized the way to understand and analyze data with complex structures.
Graph Neural Networks (GNNs), i.e. neural network architectures designed for learning graph representations, have become a popular paradigm.
This article provides a comprehensive summary of the theoretical foundations and breakthroughs concerning the approximation and learning behaviors intrinsic to prevalent graph learning models.
arXiv Detail & Related papers (2024-07-03T14:07:41Z) - ResolvNet: A Graph Convolutional Network with multi-scale Consistency [47.98039061491647]
We introduce the concept of multi-scale consistency.
At the graph-level, multi-scale consistency refers to the fact that distinct graphs describing the same object at different resolutions should be assigned similar feature vectors.
We introduce ResolvNet, a flexible graph neural network based on the mathematical concept of resolvents.
arXiv Detail & Related papers (2023-09-30T16:46:45Z) - DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - A Theory of Link Prediction via Relational Weisfeiler-Leman on Knowledge Graphs [6.379544211152605]
Graph neural networks are prominent models for representation learning over graph-structured data.
Our goal is to provide a systematic understanding of the landscape of graph neural networks for knowledge graphs.
arXiv Detail & Related papers (2023-02-04T17:40:03Z) - MEGAN: Multi-Explanation Graph Attention Network [1.1470070927586016]
We propose a multi-explanation graph attention network (MEGAN).
Unlike existing graph explainability methods, our network can produce node and edge attributional explanations along multiple channels.
Our attention-based network is fully differentiable and explanations can actively be trained in an explanation-supervised manner.
arXiv Detail & Related papers (2022-11-23T16:10:13Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Algorithmic Concept-based Explainable Reasoning [0.3149883354098941]
Recent research on graph neural network (GNN) models successfully applied GNNs to classical graph algorithms and optimisation problems.
A key hindrance of these approaches is their lack of explainability, since GNNs are black-box models that cannot be interpreted directly.
We introduce concept-bottleneck GNNs, which rely on a modification to the GNN readout mechanism.
arXiv Detail & Related papers (2021-07-15T17:44:51Z) - Contrastive Graph Neural Network Explanation [13.234975857626749]
Graph Neural Networks achieve remarkable results on problems with structured data but come as black-box predictors.
We argue that explanations must use graphs compliant with the distribution underlying the training data.
We present a novel Contrastive GNN Explanation technique following this paradigm.
arXiv Detail & Related papers (2020-10-26T15:32:42Z) - Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations. However, stacking many graph convolution layers often degrades performance; several recent studies attribute this deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z) - Dynamic Inference: A New Approach Toward Efficient Video Action Recognition [69.9658249941149]
Action recognition in videos has achieved great success recently, but it remains a challenging task due to the massive computational cost.
We propose a general dynamic inference idea to improve inference efficiency by leveraging the variation in the distinguishability of different videos.
arXiv Detail & Related papers (2020-02-09T11:09:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.