A copula-based visualization technique for a neural network
- URL: http://arxiv.org/abs/2003.12317v1
- Date: Fri, 27 Mar 2020 10:32:27 GMT
- Title: A copula-based visualization technique for a neural network
- Authors: Yusuke Kubo, Yuto Komori, Toyonobu Okuyama, Hiroshi Tokieda
- Abstract summary: Interpretability of machine learning is defined as the extent to which humans can comprehend the reasons for a decision.
We propose a new algorithm that reveals which feature values a trained neural network considers important.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Interpretability of machine learning is defined as the extent to which humans
can comprehend the reasons for a decision. However, a neural network is not
considered interpretable because of the ambiguity in its decision-making process.
In this study, we therefore propose a new algorithm that reveals which feature
values the trained neural network considers important and which paths are
mainly traced in the decision-making process. The proposed algorithm defines a
score estimated from the correlation coefficients between neural network
layers, which can be calculated by applying the concept of a pair copula. In
our experiment, we compared the estimated score with the feature importance
values of Random Forest, which is sometimes regarded as a highly interpretable
algorithm, and confirmed that the results were consistent with each other.
Because the algorithm identifies the paths that contribute to the
classification or prediction results, it also suggests an approach to
compressing a neural network and tuning its parameters.
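The path-scoring idea in the abstract can be illustrated with a minimal sketch. This is not the authors' exact algorithm: as a stand-in for the pair-copula-based correlation it uses Kendall's tau, a rank correlation that, like a copula, depends only on the ranks of the data, and the toy two-layer network, its fixed random weights, and the product-of-dependences path score are all illustrative assumptions.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Toy "trained" network: 2 inputs -> 3 hidden units -> 1 output (fixed weights).
W1 = rng.normal(size=(2, 3))
W2 = rng.normal(size=(3, 1))

def forward(X):
    """Return the activations of every layer for a batch of inputs."""
    H = np.tanh(X @ W1)          # hidden-layer activations
    Y = H @ W2                   # output-layer activations
    return [X, H, Y]

def kendall_tau(a, b):
    """Rank correlation between two activation sequences; copulas depend
    only on ranks, so tau is a natural dependence measure here."""
    n = len(a)
    s = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(a[i] - a[j]) * np.sign(b[i] - b[j])
    return 2.0 * s / (n * (n - 1))

# Probe the network with random inputs, then score every input->hidden->output
# path by the product of absolute pairwise dependences along its edges.
X = rng.normal(size=(200, 2))
acts = forward(X)
scores = {}
for i, h, o in product(range(2), range(3), range(1)):
    t1 = kendall_tau(acts[0][:, i], acts[1][:, h])
    t2 = kendall_tau(acts[1][:, h], acts[2][:, o])
    scores[(i, h, o)] = abs(t1) * abs(t2)

best = max(scores, key=scores.get)
print("most influential input->hidden->output path:", best)
```

High-scoring paths are candidates for the "mainly traced" decision paths the abstract describes; paths whose score is near zero are the natural candidates for pruning when compressing the network.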
Related papers
- SEF: A Method for Computing Prediction Intervals by Shifting the Error Function in Neural Networks [0.0]
The SEF (Shifting the Error Function) method presented in this paper is a new approach to computing prediction intervals.
The proposed approach involves training a single neural network three times, thus generating an estimate along with the corresponding upper and lower bounds for a given problem.
This innovative process effectively produces PIs, resulting in a robust and efficient technique for uncertainty quantification.
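The three-pass process above can be sketched in miniature. The summary does not give the exact form of the shifted error function, so this hypothetical stand-in shifts the training targets by empirical residual quantiles, and a one-parameter regressor trained by gradient descent stands in for the neural network; none of these choices should be read as the SEF method itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data: y = 2x + noise.
X = rng.uniform(-1, 1, size=200)
y = 2.0 * X + rng.normal(scale=0.3, size=200)

def train(targets, lr=0.1, epochs=500):
    """Fit y ~ w*x + b by plain gradient descent (stand-in for a NN)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        err = w * X + b - targets
        w -= lr * np.mean(err * X)
        b -= lr * np.mean(err)
    return w, b

# Pass 1: train on the raw targets to obtain the point estimate.
w0, b0 = train(y)
resid = y - (w0 * X + b0)

# Passes 2 and 3: retrain on shifted targets, yielding lower- and
# upper-bound models (the quantile shift is an illustrative choice).
w_lo, b_lo = train(y + np.quantile(resid, 0.05))
w_hi, b_hi = train(y + np.quantile(resid, 0.95))

x_new = 0.5
print("estimate:", w0 * x_new + b0)
print("interval:", (w_lo * x_new + b_lo, w_hi * x_new + b_hi))
```

The point is the shape of the procedure: one model trained three times, producing an estimate bracketed by a lower and an upper bound, rather than an ensemble or a modified architecture.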
arXiv Detail & Related papers (2024-09-08T19:46:45Z) - Probabilistic Verification of Neural Networks using Branch and Bound [3.0567348883549816]
Probabilistic verification of neural networks is concerned with formally analysing the output of a neural network under a probability distribution of the inputs.
We present a new algorithm, based on branch and bound, for probabilistic verification of neural networks.
arXiv Detail & Related papers (2024-05-27T18:00:03Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - A new approach to generalisation error of machine learning algorithms:
Estimates and convergence [0.0]
We introduce a new approach to the estimation of the (generalisation) error and to convergence.
Our results include estimates of the error without any structural assumption on the neural networks.
arXiv Detail & Related papers (2023-06-23T20:57:31Z) - The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP).
Unlike FF, our framework directly outputs label distributions at each cascaded block, which does not require generation of additional negative samples.
In our framework, each block can be trained independently, so it can be easily deployed on parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z) - Semantic Strengthening of Neuro-Symbolic Learning [85.6195120593625]
Neuro-symbolic approaches typically resort to fuzzy approximations of a probabilistic objective.
We show how to compute this efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles.
arXiv Detail & Related papers (2023-02-28T00:04:22Z) - Scalable computation of prediction intervals for neural networks via
matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z) - PAC-Bayesian Learning of Aggregated Binary Activated Neural Networks
with Probabilities over Representations [2.047424180164312]
We study the expectation of a probabilistic neural network as a predictor by itself, focusing on the aggregation of binary activated neural networks with normal distributions over real-valued weights.
We show that the exact computation remains tractable for deep but narrow neural networks, thanks to a dynamic programming approach.
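The tractability claim can be illustrated in miniature: for a single hidden layer of sign-activated units whose weights are independent Gaussians, each unit fires independently with a probability given by the normal CDF, so the finitely many binary hidden states can be enumerated exactly. The enumeration below is a stand-in for the paper's dynamic programming over deeper, narrow networks, and the sizes, activations, and parameter values are illustrative assumptions.

```python
import numpy as np
from itertools import product
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# One hidden layer of 3 sign-activated units; every weight is an
# independent N(mu, sigma^2) variable (illustrative sizes/parameters).
x = np.array([0.8, -0.4])
mu_hidden = np.array([[1.0, -0.5], [0.3, 0.9], [-0.7, 0.2]])   # 3x2 weight means
sigma = 0.5
mu_out = np.array([0.6, -1.2, 0.8])                            # output weight means

# P(sign(w_j . x) = +1) for w_j ~ N(mu_j, sigma^2 I):
# w_j . x is Gaussian with mean mu_j . x and std sigma * ||x||.
p_plus = np.array([phi((m @ x) / (sigma * np.linalg.norm(x))) for m in mu_hidden])

# Enumerate all 2^3 hidden states h in {-1,+1}^3; independence of the
# weight rows makes each state's probability a product of unit terms.
expectation = 0.0
for h in product([-1.0, 1.0], repeat=3):
    h = np.array(h)
    prob = np.prod(np.where(h > 0, p_plus, 1.0 - p_plus))
    expectation += prob * (mu_out @ h)   # E[output | h] with Gaussian out-weights

print("exact expected output:", expectation)
```

For a single hidden layer this sum collapses to a closed form; the value of a dynamic-programming formulation is that it keeps the same state-by-state bookkeeping tractable when several narrow binary layers are stacked, where naive enumeration over all layers would blow up.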
arXiv Detail & Related papers (2021-10-28T14:11:07Z) - Learning Structures for Deep Neural Networks [99.8331363309895]
We propose to adopt the efficient coding principle, rooted in information theory and developed in computational neuroscience.
We show that sparse coding can effectively maximize the entropy of the output signals.
Our experiments on a public image classification dataset demonstrate that using the structure learned from scratch by our proposed algorithm, one can achieve a classification accuracy comparable to the best expert-designed structure.
arXiv Detail & Related papers (2021-05-27T12:27:24Z) - The Connection Between Approximation, Depth Separation and Learnability
in Neural Networks [70.55686685872008]
We study the connection between learnability and approximation capacity.
We show that learnability with deep networks of a target function depends on the ability of simpler classes to approximate the target.
arXiv Detail & Related papers (2021-01-31T11:32:30Z) - A biologically plausible neural network for local supervision in
cortical microcircuits [17.00937011213428]
We derive an algorithm for training a neural network that avoids explicit error computation and backpropagation.
Our algorithm maps onto a neural network that bears a remarkable resemblance to the connectivity structure and learning rules of the cortex.
arXiv Detail & Related papers (2020-11-30T17:35:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.