Negative Flux Aggregation to Estimate Feature Attributions
- URL: http://arxiv.org/abs/2301.06989v2
- Date: Sat, 13 May 2023 08:47:31 GMT
- Title: Negative Flux Aggregation to Estimate Feature Attributions
- Authors: Xin Li, Deng Pan, Chengyin Li, Yao Qiang and Dongxiao Zhu
- Abstract summary: There are increasing demands for understanding the behavior of deep neural networks (DNNs), spurred by growing security and transparency concerns.
To enhance the explainability of DNNs, we estimate input features' attributions to the prediction task using divergence and flux.
Inspired by the divergence theorem in vector analysis, we develop a novel Negative Flux Aggregation (NeFLAG) formulation and an efficient approximation algorithm to estimate attribution maps.
- Score: 15.411534490483495
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There are increasing demands for understanding deep neural networks'
(DNNs) behavior, spurred by growing security and transparency concerns. Due to
the multi-layer nonlinearity of deep neural network architectures, explaining
DNN predictions remains an open problem, preventing us from gaining a deeper
understanding of their mechanisms. To enhance the explainability of DNNs, we
estimate input features' attributions to the prediction task using divergence
and flux. Inspired by the divergence theorem in vector analysis, we develop a
novel Negative Flux Aggregation (NeFLAG) formulation and an efficient
approximation algorithm to estimate attribution maps. Unlike previous
techniques, ours neither relies on fitting a surrogate model nor requires any
path integration of gradients. Both qualitative and quantitative experiments
demonstrate the superior performance of NeFLAG in generating more faithful
attribution maps than competing methods. Our code is available at
\url{https://github.com/xinli0928/NeFLAG}
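The divergence theorem that motivates NeFLAG states that the outward flux of a vector field through a closed surface equals the integral of the field's divergence over the enclosed volume. As a minimal numerical illustration of that identity in two dimensions (a toy check of the theorem itself, not the paper's NeFLAG algorithm), both sides can be approximated and compared:

```python
import math

def flux_through_circle(F, radius=1.0, n=10000):
    # Approximate the line integral of F . n around a circle of the given
    # radius, sampling n points along the boundary.
    total = 0.0
    for k in range(n):
        theta = 2 * math.pi * k / n
        nx, ny = math.cos(theta), math.sin(theta)  # outward unit normal
        fx, fy = F(radius * nx, radius * ny)
        total += (fx * nx + fy * ny) * (2 * math.pi * radius / n)
    return total

def disk_integral_of_divergence(div_F, radius=1.0, n=200):
    # Midpoint Riemann sum of the divergence over the disk of the given radius.
    h = 2 * radius / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            x = -radius + (i + 0.5) * h
            y = -radius + (j + 0.5) * h
            if x * x + y * y <= radius * radius:
                total += div_F(x, y) * h * h
    return total

# F(x, y) = (x, y) has constant divergence 2, so over the unit disk both
# sides of the divergence theorem equal 2 * pi (about 6.283).
F = lambda x, y: (x, y)
div_F = lambda x, y: 2.0
print(flux_through_circle(F))
print(disk_integral_of_divergence(div_F))
```

NeFLAG exploits this kind of equivalence on the gradient field of a DNN: rather than integrating gradients along a path (as in path-based attribution methods), it aggregates flux information around the input. The exact aggregation scheme is defined in the paper.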
Related papers
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of the GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z) - Empowering Counterfactual Reasoning over Graph Neural Networks through Inductivity [7.094238868711952]
Graph neural networks (GNNs) have various practical applications, such as drug discovery, recommendation engines, and chip design.
Counterfactual reasoning is used to make minimal changes to the input graph of a GNN in order to alter its prediction.
arXiv Detail & Related papers (2023-06-07T23:40:18Z) - Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z) - On Consistency in Graph Neural Network Interpretation [34.25952902469481]
Instance-level GNN explanation aims to discover critical input elements, like nodes or edges, that the target GNN relies upon for making predictions.
Various algorithms are proposed, but most of them formalize this task by searching the minimal subgraph.
We propose a simple yet effective countermeasure by aligning embeddings.
arXiv Detail & Related papers (2022-05-27T02:58:07Z) - Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z) - Explainable Artificial Intelligence for Bayesian Neural Networks: Towards trustworthy predictions of ocean dynamics [0.0]
The trustworthiness of neural networks is often challenged because they lack the ability to express uncertainty and explain their skill.
This can be problematic given the increasing use of neural networks in high stakes decision-making such as in climate change applications.
We address both issues by successfully implementing a Bayesian Neural Network (BNN), whose parameters are distributions rather than deterministic values, and applying novel implementations of explainable AI (XAI) techniques.
arXiv Detail & Related papers (2022-04-30T08:35:57Z) - Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z) - How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks [80.55378250013496]
We study how neural networks trained by gradient descent extrapolate what they learn outside the support of the training distribution.
Graph Neural Networks (GNNs) have shown some success in more complex tasks.
arXiv Detail & Related papers (2020-09-24T17:48:59Z) - Streaming Probabilistic Deep Tensor Factorization [27.58928876734886]
We propose SPIDER, a Streaming ProbabilistIc Deep tEnsoR factorization method.
We develop an efficient streaming posterior inference algorithm in the assumed-density-filtering and expectation propagation framework.
We show the advantages of our approach in four real-world applications.
arXiv Detail & Related papers (2020-07-14T21:25:39Z) - Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks [60.22494363676747]
It is known that current graph neural networks (GNNs) are difficult to make deep due to the problem known as over-smoothing.
Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem.
We derive the optimization and generalization guarantees of transductive learning algorithms that include multi-scale GNNs.
arXiv Detail & Related papers (2020-06-15T17:06:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.