How to Explain Neural Networks: A perspective of data space division
- URL: http://arxiv.org/abs/2105.07831v1
- Date: Mon, 17 May 2021 13:43:37 GMT
- Title: How to Explain Neural Networks: A perspective of data space division
- Authors: Hangcheng Dong, Bingguo Liu, Fengdong Chen, Dong Ye and Guodong Liu
- Abstract summary: Interpretability of algorithms represented by deep learning remains an open problem.
We discuss the shortcomings of existing explainability methods in terms of two attributes of explanation, called completeness and explicitness.
From the perspective of data space division, the principle of complete local interpretable model-agnostic explanations (CLIMEP) is proposed in this paper.
- Score: 2.4499092754102874
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Interpretability of intelligent algorithms represented by deep learning
remains an open problem. We discuss the shortcomings of existing explainability
methods in terms of two attributes of explanation, called completeness and
explicitness. Furthermore, we point out that a model that relies entirely on
feed-forward mapping tends to be inexplicable, because it is hard to quantify
the relationship between this mapping and the final model. From the perspective
of data space division, the principle of complete local interpretable
model-agnostic explanations (CLIMEP) is proposed in this paper. For
classification problems, we further discuss the equivalence of the CLIMEP and
the decision boundary. In practice, however, CLIMEP is difficult to implement.
To tackle this challenge, motivated by the fact that a fully-connected neural
network (FCNN) with piece-wise linear activation functions (PWLs) can partition
the input space into several linear regions, we extend this result to arbitrary
FCNNs by linearizing the activation functions. Applying this technique to
classification problems, we obtain, for the first time, the complete decision
boundary of FCNNs. Finally, we propose the DecisionNet (DNet), which divides
the input space by the hyper-planes of the decision boundary, so that each
linear interval of the DNet contains only samples of the same label.
Experiments show the surprising model-compression efficiency of the DNet at
arbitrarily controlled precision.
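The linear-region fact the abstract builds on can be sketched in a few lines: once the activation pattern of an input is fixed, a ReLU FCNN reduces to an affine map on the region sharing that pattern. A minimal illustration with a toy two-layer network and random weights (illustrative only, not the paper's DNet or its boundary-extraction procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 2-layer ReLU FCNN: R^2 -> R (weights are illustrative, not from the paper)
W1, b1 = rng.standard_normal((4, 2)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((1, 4)), rng.standard_normal(1)

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def local_affine_map(x):
    """On the linear region containing x, the net equals W_eff @ x + b_eff."""
    pattern = (W1 @ x + b1 > 0).astype(float)  # activation pattern = region id
    D = np.diag(pattern)                       # fixes ReLU to a linear map here
    W_eff = W2 @ D @ W1
    b_eff = W2 @ D @ b1 + b2
    return W_eff, b_eff

x = np.array([0.3, -0.7])
W_eff, b_eff = local_affine_map(x)
assert np.allclose(forward(x), W_eff @ x + b_eff)
```

Enumerating the distinct activation patterns yields the partition into linear regions; the paper's contribution is extending this kind of analysis beyond piece-wise linear activations and assembling the complete decision boundary from it.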
Related papers
- GLinSAT: The General Linear Satisfiability Neural Network Layer By Accelerated Gradient Descent [12.409030267572243]
We make a batch of neural network outputs satisfy bounded and general linear constraints.
This is the first general linear satisfiability layer in which all the operations are differentiable and matrix-factorization-free.
arXiv Detail & Related papers (2024-09-26T03:12:53Z) - Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z) - A cusp-capturing PINN for elliptic interface problems [0.0]
We introduce a cusp-enforced level set function as an additional feature input to the network to retain the inherent solution properties.
The proposed neural network has the advantage of being mesh-free, so it can easily handle problems in irregular domains.
We conduct a series of numerical experiments to demonstrate the effectiveness of the cusp-capturing technique and the accuracy of the present network model.
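The cusp-capturing idea above can be sketched simply: feeding a level-set-derived feature such as $|\phi(x)|$ to the network alongside $x$ lets the model represent a derivative jump (cusp) across the interface $\{\phi = 0\}$ while staying mesh-free. The level set and tiny linear model below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def phi(x):
    # Illustrative level-set function: zero on the unit circle (the interface)
    return x[0] ** 2 + x[1] ** 2 - 1.0

def features(x):
    # Augmented input: (x1, x2, |phi(x)|); |phi| is smooth away from the
    # interface but has a kink across phi(x) = 0
    return np.array([x[0], x[1], abs(phi(x))])

# A linear model in these features inherits the cusp across the interface
w = np.array([0.5, -0.2, 1.0])
u = lambda x: w @ features(x)
```

In the actual PINN, the network weights replace `w` and are trained against the PDE residual; the augmented feature is what allows the learned solution to retain the cusp.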
arXiv Detail & Related papers (2022-10-16T03:05:18Z) - On the Effective Number of Linear Regions in Shallow Univariate ReLU
Networks: Convergence Guarantees and Implicit Bias [50.84569563188485]
We show that gradient flow converges in direction when labels are determined by the sign of a target network with $r$ neurons.
Our result may already hold for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
arXiv Detail & Related papers (2022-05-18T16:57:10Z) - Deep Learning Approximation of Diffeomorphisms via Linear-Control
Systems [91.3755431537592]
We consider a control system of the form $\dot{x} = \sum_{i=1}^{l} F_i(x)\,u_i$, with linear dependence in the controls.
We use the corresponding flow to approximate the action of a diffeomorphism on a compact ensemble of points.
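A flow of this control system can be sketched with forward-Euler integration: piecewise-constant controls $u_i$ steer each point, and training tunes them so the flow matches a target diffeomorphism on an ensemble of points. The two vector fields below are hypothetical examples, not the paper's construction:

```python
import numpy as np

# Hypothetical vector fields F_1, F_2 on R^2 (illustrative only)
def F1(x):
    return np.array([1.0, 0.0])      # constant translation field

def F2(x):
    return np.array([-x[1], x[0]])   # rotation field

def flow(x0, controls, dt=0.01):
    """Forward-Euler integration of dot(x) = F1(x)*u1 + F2(x)*u2.
    `controls` is a sequence of (u1, u2) pairs, one per time step."""
    x = np.asarray(x0, dtype=float)
    for u1, u2 in controls:
        x = x + dt * (F1(x) * u1 + F2(x) * u2)
    return x

# Pure rotation for one time unit: (1, 0) flows toward (cos 1, sin 1),
# up to Euler discretization error
x_end = flow([1.0, 0.0], [(0.0, 1.0)] * 100)
```

Replacing the hand-picked controls with learned, time-varying ones is what lets the flow approximate the action of a diffeomorphism on the whole ensemble.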
arXiv Detail & Related papers (2021-10-24T08:57:46Z) - Boundary Attributions Provide Normal (Vector) Explanations [27.20904776964045]
Boundary Attribution (BA) is a new explanation method to address this question.
BA involves computing normal vectors of the local decision boundaries for the target input.
We prove two theorems for ReLU networks: for randomized smoothed networks or robustly trained networks, BA is much closer to non-boundary attribution methods than it is for standard networks.
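The core of the boundary-normal idea is easy to see in the affine case: the decision boundary $\{x : s_1(x) = s_2(x)\}$ between two class scores is a hyper-plane, and the gradient of the score gap is its normal vector. A minimal sketch with hypothetical weights (for a deep ReLU network, the same computation applies piecewise on each linear region):

```python
import numpy as np

# Illustrative affine two-class scores (not the paper's networks)
w1, b1 = np.array([2.0, 1.0]), 0.0   # class-1 score: w1 @ x + b1
w2, b2 = np.array([-1.0, 3.0]), 1.0  # class-2 score: w2 @ x + b2

def score_gap(x):
    return (w1 - w2) @ x + (b1 - b2)

# Unit normal of the boundary (constant for an affine model):
n = (w1 - w2) / np.linalg.norm(w1 - w2)

x_b = np.array([1.0, 1.0])           # a point on the boundary: score_gap = 0
t = np.array([-n[1], n[0]])          # tangent direction, orthogonal to n
assert abs(score_gap(x_b)) < 1e-12
assert abs(score_gap(x_b + 0.5 * t)) < 1e-9  # moving along t stays on boundary
```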
arXiv Detail & Related papers (2021-03-20T22:36:39Z) - Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z) - An Integer Linear Programming Framework for Mining Constraints from Data [81.60135973848125]
We present a general framework for mining constraints from data.
In particular, we consider the inference in structured output prediction as an integer linear programming (ILP) problem.
We show that our approach can learn to solve 9x9 Sudoku puzzles and minimal spanning tree problems from examples without providing the underlying rules.
arXiv Detail & Related papers (2020-06-18T20:09:53Z) - Scalable Partial Explainability in Neural Networks via Flexible
Activation Functions [13.71739091287644]
High-dimensional features and decisions given by deep neural networks (NN) require new algorithms and methods to expose their mechanisms.
Current state-of-the-art NN interpretation methods focus more on the direct relationship between NN outputs and inputs than on the NN structure and operations themselves.
In this paper, we achieve a partially explainable learning model by symbolically explaining the role of activation functions (AF) under a scalable topology.
arXiv Detail & Related papers (2020-06-10T20:30:15Z) - Convex Geometry and Duality of Over-parameterized Neural Networks [70.15611146583068]
We develop a convex analytic approach to analyze finite width two-layer ReLU networks.
We show that an optimal solution to the regularized training problem can be characterized as extreme points of a convex set.
In higher dimensions, we show that the training problem can be cast as a finite dimensional convex problem with infinitely many constraints.
arXiv Detail & Related papers (2020-02-25T23:05:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.