Towards Formal Approximated Minimal Explanations of Neural Networks
- URL: http://arxiv.org/abs/2210.13915v1
- Date: Tue, 25 Oct 2022 11:06:37 GMT
- Title: Towards Formal Approximated Minimal Explanations of Neural Networks
- Authors: Shahaf Bassan and Guy Katz
- Abstract summary: Deep neural networks (DNNs) are now being used in numerous domains.
DNNs are "black-boxes", and cannot be interpreted by humans.
We propose an efficient, verification-based method for finding minimal explanations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid growth of machine learning, deep neural networks (DNNs) are
now being used in numerous domains. Unfortunately, DNNs are "black-boxes", and
cannot be interpreted by humans, which is a substantial concern in
safety-critical systems. To mitigate this issue, researchers have begun working
on explainable AI (XAI) methods, which can identify a subset of input features
that are the cause of a DNN's decision for a given input. Most existing
techniques are heuristic, and cannot guarantee the correctness of the
explanation provided. In contrast, recent and exciting attempts have shown that
formal methods can be used to generate provably correct explanations. Although
these methods are sound, the computational complexity of the underlying
verification problem limits their scalability; and the explanations they
produce might sometimes be overly complex. Here, we propose a novel approach to
tackle these limitations. We (1) suggest an efficient, verification-based
method for finding minimal explanations, which constitute a provable
approximation of the global, minimum explanation; (2) show how DNN verification
can assist in calculating lower and upper bounds on the optimal explanation;
(3) propose heuristics that significantly improve the scalability of the
verification process; and (4) suggest the use of bundles, which allows us to
arrive at more succinct and interpretable explanations. Our evaluation shows
that our approach significantly outperforms state-of-the-art techniques, and
produces explanations that are more useful to humans. We thus regard this work
as a step toward leveraging verification technology in producing DNNs that are
more reliable and comprehensible.
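To make the core idea concrete, below is a minimal, illustrative sketch (not the paper's actual algorithm) of the greedy loop that verification-based minimal-explanation methods build on: a feature (or a whole bundle of features, mirroring contribution (4)) is dropped from the explanation only if a verification query confirms that fixing the remaining features to their input values forces the prediction. The verification oracle here is a brute-force stand-in over tiny discrete domains; a real implementation would issue symbolic queries to a DNN verifier. All function names, parameters, and the toy model are hypothetical assumptions for illustration.

```python
"""Illustrative sketch of a greedy, verification-style minimal-explanation search.
The oracle enumerates a small discrete domain; a real system would call a DNN verifier."""

from itertools import product
from typing import Callable, Dict, List, Optional, Sequence


def prediction_is_fixed(
    model: Callable[[Sequence[float]], int],
    x: Sequence[float],
    fixed: List[int],
    domains: Dict[int, Sequence[float]],
) -> bool:
    """Return True if fixing the features in `fixed` to their values in `x`
    forces the model's prediction, no matter how the free features vary over
    their (finite, illustrative) domains."""
    target = model(x)
    free = [i for i in range(len(x)) if i not in fixed]
    for values in product(*(domains[i] for i in free)):
        candidate = list(x)
        for i, v in zip(free, values):
            candidate[i] = v
        if model(candidate) != target:
            return False  # found a perturbation that flips the decision
    return True


def minimal_explanation(
    model: Callable[[Sequence[float]], int],
    x: Sequence[float],
    domains: Dict[int, Sequence[float]],
    bundles: Optional[List[List[int]]] = None,
) -> List[int]:
    """Greedily drop features (or whole bundles of features); a drop is kept
    only if the prediction remains fixed. The result is minimal with respect
    to removal, not necessarily a minimum-size explanation."""
    groups = bundles if bundles is not None else [[i] for i in range(len(x))]
    kept = list(range(len(groups)))
    for g in range(len(groups)):
        trial = [i for i in kept if i != g]
        trial_features = [f for i in trial for f in groups[i]]
        if prediction_is_fixed(model, x, trial_features, domains):
            kept = trial  # this group is not needed for the decision
    return sorted(f for i in kept for f in groups[i])


if __name__ == "__main__":
    # Toy model: predicts 1 iff x0 + x1 > 1; x2 is irrelevant.
    model = lambda x: int(x[0] + x[1] > 1)
    x = [1.0, 1.0, 0.0]
    domains = {0: [0.0, 1.0], 1: [0.0, 1.0], 2: [0.0, 1.0]}
    print(minimal_explanation(model, x, domains))  # -> [0, 1]
```

Passing coarser `bundles` (e.g., grouping neighboring pixels) trades per-feature precision for fewer, more interpretable verification queries, which is the intuition behind the bundle-based explanations mentioned in the abstract.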
Related papers
- QUCE: The Minimisation and Quantification of Path-Based Uncertainty for Generative Counterfactual Explanations [1.649938899766112]
Quantified Uncertainty Counterfactual Explanations (QUCE) is a method designed to minimize path uncertainty.
We show that QUCE quantifies uncertainty when presenting explanations and generates more certain counterfactual examples.
We showcase the performance of the QUCE method by comparing it with competing methods for both path-based explanations and generative counterfactual examples.
arXiv Detail & Related papers (2024-02-27T14:00:08Z)
- Formally Explaining Neural Networks within Reactive Systems [3.0579224738630595]
Deep neural networks (DNNs) are increasingly being used as controllers in reactive systems.
DNNs are highly opaque, which makes it difficult to explain and justify their actions.
We propose a formal DNN-verification-based XAI technique for reasoning about multi-step, reactive systems.
arXiv Detail & Related papers (2023-07-31T20:19:50Z)
- Efficient GNN Explanation via Learning Removal-based Attribution [56.18049062940675]
We propose a GNN explanation framework named LeArn Removal-based Attribution (LARA) to address this problem.
The explainer in LARA learns to generate removal-based attribution which enables providing explanations with high fidelity.
In particular, LARA is 3.5 times faster and achieves higher fidelity than the state-of-the-art method on the large dataset ogbn-arxiv.
arXiv Detail & Related papers (2023-06-09T08:54:20Z)
- Empowering Counterfactual Reasoning over Graph Neural Networks through Inductivity [7.094238868711952]
Graph neural networks (GNNs) have various practical applications, such as drug discovery, recommendation engines, and chip design.
Counterfactual reasoning is used to make minimal changes to the input graph of a GNN in order to alter its prediction.
arXiv Detail & Related papers (2023-06-07T23:40:18Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Task-Agnostic Graph Explanations [50.17442349253348]
Graph Neural Networks (GNNs) have emerged as powerful tools to encode graph structured data.
Existing learning-based GNN explanation approaches are task-specific in training.
We propose a Task-Agnostic GNN Explainer (TAGE) trained under self-supervision with no knowledge of downstream tasks.
arXiv Detail & Related papers (2022-02-16T21:11:47Z)
- SEEN: Sharpening Explanations for Graph Neural Networks using Explanations from Neighborhoods [0.0]
We propose a method to improve the explanation quality of node classification tasks through aggregation of auxiliary explanations.
Applying SEEN does not require modification of a graph and can be used with diverse explainability techniques.
Experiments on matching motif-participating nodes from a given graph show improvements in explanation accuracy of up to 12.71%.
arXiv Detail & Related papers (2021-06-16T03:04:46Z)
- Explainability in Graph Neural Networks: A Taxonomic Survey [42.95574260417341]
Graph neural networks (GNNs) and their explainability are experiencing rapid developments.
There is neither a unified treatment of GNN explainability methods, nor a standard benchmark and testbed for evaluations.
This work provides a unified methodological treatment of GNN explainability and a standardized testbed for evaluations.
arXiv Detail & Related papers (2020-12-31T04:34:27Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent work toward attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks [60.22494363676747]
It is known that current graph neural networks (GNNs) are difficult to make deep due to the problem known as over-smoothing.
Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem.
We derive the optimization and generalization guarantees of transductive learning algorithms that include multi-scale GNNs.
arXiv Detail & Related papers (2020-06-15T17:06:17Z)
- Towards an Efficient and General Framework of Robust Training for Graph Neural Networks [96.93500886136532]
Graph Neural Networks (GNNs) have made significant advances on several fundamental inference tasks.
Despite GNNs' impressive performance, it has been observed that carefully crafted perturbations on graph structures lead them to make wrong predictions.
We propose a general framework which leverages greedy search algorithms and zeroth-order methods to obtain robust GNNs.
arXiv Detail & Related papers (2020-02-25T15:17:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.