Shapley Flow: A Graph-based Approach to Interpreting Model Predictions
- URL: http://arxiv.org/abs/2010.14592v3
- Date: Fri, 26 Feb 2021 15:49:29 GMT
- Title: Shapley Flow: A Graph-based Approach to Interpreting Model Predictions
- Authors: Jiaxuan Wang, Jenna Wiens, Scott Lundberg
- Abstract summary: Shapley Flow is a novel approach to interpreting machine learning models.
It considers the entire causal graph and assigns credit to edges instead of treating nodes as the fundamental unit of credit assignment.
- Score: 12.601158020289105
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many existing approaches for estimating feature importance are problematic
because they ignore or hide dependencies among features. A causal graph, which
encodes the relationships among input variables, can aid in assigning feature
importance. However, current approaches that assign credit to nodes in the
causal graph fail to explain the entire graph. In light of these limitations,
we propose Shapley Flow, a novel approach to interpreting machine learning
models. It considers the entire causal graph, and assigns credit to
\textit{edges} instead of treating nodes as the fundamental unit of credit
assignment. Shapley Flow is the unique solution to a generalization of the
Shapley value axioms to directed acyclic graphs. We demonstrate the benefit of
using Shapley Flow to reason about the impact of a model's input on its output.
In addition to maintaining insights from existing approaches, Shapley Flow
extends the flat, set-based, view prevalent in game theory based explanation
methods to a deeper, \textit{graph-based}, view. This graph-based view enables
users to understand the flow of importance through a system, and reason about
potential interventions.
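The "flat, set-based view" that Shapley Flow generalizes is the classic Shapley value: each feature's attribution is its marginal contribution averaged over all orderings of the feature set. A minimal sketch of that baseline computation (the value function `v` and feature names here are hypothetical toy examples, not from the paper, and this illustrates only the set-based view, not the edge-based credit assignment Shapley Flow itself performs):

```python
from itertools import permutations

def shapley_values(value_fn, features):
    """Exact Shapley values: average each feature's marginal
    contribution over all orderings of the feature set."""
    contrib = {f: 0.0 for f in features}
    orderings = list(permutations(features))
    for order in orderings:
        coalition = set()
        prev = value_fn(coalition)
        for f in order:
            coalition.add(f)
            cur = value_fn(coalition)
            contrib[f] += cur - prev
            prev = cur
    return {f: c / len(orderings) for f, c in contrib.items()}

# Hypothetical value function: additive effects plus one interaction term.
def v(coalition):
    x = {"a": 1.0, "b": 2.0}            # toy feature contributions
    total = sum(x[f] for f in coalition)
    if {"a", "b"} <= coalition:         # interaction credited when both present
        total += 1.0
    return total

phi = shapley_values(v, ["a", "b"])
# Efficiency axiom: attributions sum to v(all features) - v(empty set)
assert abs(sum(phi.values()) - (v({"a", "b"}) - v(set()))) < 1e-9
```

Because this view only sees feature subsets, dependencies among inputs are invisible to it; Shapley Flow's contribution is to replace the subset lattice with a causal DAG and distribute the same style of credit along its edges.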
Related papers
- Invariant Graph Transformer [0.0]
In the graph machine learning context, graph rationalization can enhance model performance.
A key technique named "intervention" is applied to ensure the discriminative power of the extracted rationale subgraphs.
In this paper, we propose well-tailored intervention strategies on graph data.
arXiv Detail & Related papers (2023-12-13T02:56:26Z)
- CLEAR: Generative Counterfactual Explanations on Graphs [60.30009215290265]
We study the problem of counterfactual explanation generation on graphs.
A few studies have explored counterfactual explanations on graphs, but many challenges of this problem are still not well-addressed.
We propose a novel framework CLEAR which aims to generate counterfactual explanations on graphs for graph-level prediction models.
arXiv Detail & Related papers (2022-10-16T04:35:32Z)
- Interpretations Steered Network Pruning via Amortized Inferred Saliency Maps [85.49020931411825]
Convolutional Neural Networks (CNNs) compression is crucial to deploying these models in edge devices with limited resources.
We propose to address the channel pruning problem from a novel perspective by leveraging the interpretations of a model to steer the pruning process.
We tackle this challenge by introducing a selector model that predicts real-time smooth saliency masks for pruned models.
arXiv Detail & Related papers (2022-09-07T01:12:11Z)
- Graph Pooling with Maximum-Weight $k$-Independent Sets [12.251091325930837]
We introduce a graph coarsening mechanism based on the graph-theoretic concept of maximum-weight $k$-independent sets.
We prove theoretical guarantees for distortion bounds on path lengths, as well as the ability to preserve key topological properties in the coarsened graphs.
arXiv Detail & Related papers (2022-08-06T14:12:47Z)
- GRAPHSHAP: Explaining Identity-Aware Graph Classifiers Through the Language of Motifs [11.453325862543094]
GRAPHSHAP is able to provide motif-based explanations for identity-aware graph classifiers.
We show how a simple kernel can efficiently approximate explanation scores, thus allowing GRAPHSHAP to scale on scenarios with a large explanation space.
Our experiments highlight how the classification provided by a black-box model can be effectively explained by few connectomics patterns.
arXiv Detail & Related papers (2022-02-17T18:29:30Z)
- Graph-wise Common Latent Factor Extraction for Unsupervised Graph Representation Learning [40.70562886682939]
We propose a new principle for unsupervised graph representation learning: Graph-wise Common latent Factor EXtraction (GCFX).
GCFX explicitly extracts common latent factors from an input graph and achieves improved results on downstream tasks over the current state-of-the-art.
Through extensive experiments and analysis, we demonstrate that GCFX is beneficial for graph-level tasks to alleviate distractions caused by local variations of individual nodes or local neighbourhoods.
arXiv Detail & Related papers (2021-12-16T12:22:49Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract private graph data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- Towards Efficient Scene Understanding via Squeeze Reasoning [71.1139549949694]
We propose a novel framework called Squeeze Reasoning.
Instead of propagating information on the spatial map, we first learn to squeeze the input feature into a channel-wise global vector.
We show that our approach can be modularized as an end-to-end trained block and can be easily plugged into existing networks.
arXiv Detail & Related papers (2020-11-06T12:17:01Z)
- Non-Parametric Graph Learning for Bayesian Graph Neural Networks [35.88239188555398]
We propose a novel non-parametric graph model for constructing the posterior distribution of graph adjacency matrices.
We demonstrate the advantages of this model in three different problem settings: node classification, link prediction and recommendation.
arXiv Detail & Related papers (2020-06-23T21:10:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.