Explaining Regression Based Neural Network Model
- URL: http://arxiv.org/abs/2004.06918v1
- Date: Wed, 15 Apr 2020 07:38:40 GMT
- Title: Explaining Regression Based Neural Network Model
- Authors: Mégane Millan and Catherine Achard
- Abstract summary: We propose a new method, named AGRA for Accurate Gradient, based on several trainings that decrease the noise present in most state-of-the-art results.
Comparative results show that the proposed method outperforms state-of-the-art methods for locating time-steps where errors occur in the signal.
- Score: 4.94950858749529
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Several methods have been proposed to explain Deep Neural Networks (DNNs).
However, to our knowledge, only classification networks have been studied in
an attempt to determine which input dimensions motivated the decision.
Furthermore, as there is no ground truth for this problem, results are only
assessed qualitatively with regard to what would be meaningful for a human. In
this work, we design an experimental setting where the ground truth can be
established: we generate ideal signals and disrupted signals with errors, and
train a neural
network that determines the quality of the signals. This quality is simply a
score based on the distance between the disrupted signals and the corresponding
ideal signal. We then try to find out how the network estimated this score,
hoping to recover the time-steps and dimensions of the signal where errors are
present. This experimental setting enables us to compare several methods for
network explanation and to propose a new method, named AGRA for Accurate
Gradient, based on several trainings that decrease the noise present in most
state-of-the-art results. Comparative results show that the proposed method
outperforms state-of-the-art methods for locating time-steps where errors occur
in the signal.
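The core loop suggested by the abstract, retraining the regression network several times and averaging the input gradients so that per-training noise cancels out, can be sketched as follows. This is a minimal reading of the idea in PyTorch, with `make_model` and `train` as hypothetical placeholders rather than the authors' code.

```python
import torch

def averaged_input_gradient(make_model, train, signal, n_trainings=10):
    """Average the input gradient of the predicted score over several trainings."""
    grad_sum = torch.zeros_like(signal)
    for seed in range(n_trainings):
        torch.manual_seed(seed)                    # fresh initialization per training
        model = train(make_model())                # retrain the regression network
        x = signal.clone().detach().requires_grad_(True)
        score = model(x.unsqueeze(0)).squeeze()    # predicted quality score (scalar)
        score.backward()                           # gradient of the score w.r.t. the input
        grad_sum += x.grad
    return grad_sum / n_trainings                  # averaging suppresses per-training noise
```

Large entries in the averaged gradient would then point at the time-steps and dimensions the network relied on when scoring the signal.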
Related papers
- SEF: A Method for Computing Prediction Intervals by Shifting the Error Function in Neural Networks [0.0]
The SEF (Shifting the Error Function) method presented in this paper is a new method in this category of prediction-interval techniques.
The proposed approach involves training a single neural network three times, thus generating an estimate along with the corresponding upper and lower bounds for a given problem.
This innovative process effectively produces PIs, resulting in a robust and efficient technique for uncertainty quantification.
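A loose sketch of the three-pass idea follows: the same architecture is trained three times, once for the point estimate and twice for the bounds. The summary does not spell out how SEF shifts the error function, so the bound networks below use quantile (pinball) losses as a stand-in assumption, not SEF itself.

```python
import torch
import torch.nn as nn

def pinball_loss(pred, target, tau):
    """Quantile regression loss; tau near 0/1 yields lower/upper bounds."""
    err = target - pred
    return torch.mean(torch.maximum(tau * err, (tau - 1) * err))

def train_once(make_model, loader, loss_fn, epochs=50, lr=1e-3):
    model = make_model()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x).squeeze(-1), y).backward()
            opt.step()
    return model

def interval_by_three_trainings(make_model, loader, alpha=0.1):
    """One architecture, three trainings: estimate plus lower/upper bounds."""
    estimate = train_once(make_model, loader, nn.MSELoss())
    lower = train_once(make_model, loader, lambda p, y: pinball_loss(p, y, alpha / 2))
    upper = train_once(make_model, loader, lambda p, y: pinball_loss(p, y, 1 - alpha / 2))
    return estimate, lower, upper
```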
arXiv Detail & Related papers (2024-09-08T19:46:45Z)
- Rolling bearing fault diagnosis method based on generative adversarial enhanced multi-scale convolutional neural network model [7.600902237804825]
A rolling bearing fault diagnosis method based on a generative adversarial
enhanced multi-scale convolutional neural network model is proposed.
Compared with the ResNet method, the experimental results show that the proposed method has better generalization and anti-noise performance.
arXiv Detail & Related papers (2024-03-21T06:42:35Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom holds that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
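One hedged way to act on this observation: for an MSE-trained regressor, the constant that predictions drift toward is roughly the mean training label, so closeness to that constant can trigger a cautious fallback. The threshold and fallback below are hypothetical choices, not the paper's procedure.

```python
import torch

def risk_sensitive_predict(model, x, y_train_mean, threshold=0.1, fallback=None):
    """Abstain when the prediction has collapsed toward the constant solution."""
    pred = model(x)
    # Distance from the (approximate) optimal constant solution under squared error.
    drift = torch.abs(pred - y_train_mean)
    if drift.mean() < threshold:   # likely OOD: output is hugging the constant
        return fallback            # defer / abstain instead of trusting the model
    return pred
```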
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
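As a rough illustration of the input format described above, the sketch below generates a noisy sinusoid mixture and quantizes its in-phase and quadrature components with a b-bit mid-rise quantizer; all parameter choices are illustrative, not taken from the paper.

```python
import numpy as np

def uniform_quantize(v, bits, scale):
    """Mid-rise uniform quantizer with 2**bits levels over [-scale, scale]."""
    levels = 2 ** bits
    step = 2 * scale / levels
    idx = np.clip(np.floor(v / step), -levels // 2, levels // 2 - 1)
    return (idx + 0.5) * step

def quantized_iq(freqs, amps, n_samples=64, bits=3, seed=0):
    """Quantized I/Q samples of a noisy sinusoid mixture (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_samples)
    x = sum(a * np.exp(2j * np.pi * f * t) for f, a in zip(freqs, amps))
    x += 0.05 * (rng.standard_normal(n_samples) + 1j * rng.standard_normal(n_samples))
    scale = np.max(np.abs(np.concatenate([x.real, x.imag])))
    return uniform_quantize(x.real, bits, scale) + 1j * uniform_quantize(x.imag, bits, scale)

iq = quantized_iq(freqs=[0.10, 0.27], amps=[1.0, 0.6])  # network input; labels are (freqs, amps)
```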
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- Convolutional Neural Networks for Sleep Stage Scoring on a Two-Channel EEG Signal [63.18666008322476]
Sleep disorders are among the most prevalent health problems worldwide.
The basic tool used by specialists is the polysomnogram, a collection of different signals recorded during sleep.
Specialists have to score the different signals according to one of the standard guidelines.
arXiv Detail & Related papers (2021-03-30T09:59:56Z)
- Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of the sign function in the Fourier frequency domain, using a combination of sine functions, for training BNNs.
Experiments on several benchmark datasets and neural architectures show that the binary network learned with our method achieves state-of-the-art accuracy.
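A minimal sketch of the stated idea: binarize with sign in the forward pass, and backpropagate through the derivative of a truncated Fourier sine series of the sign function. The number of series terms is an assumed setting, not the paper's.

```python
import math
import torch

class FourierSign(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, n_terms=5):
        ctx.save_for_backward(x)
        ctx.n_terms = n_terms
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # sign(x) ~ (4/pi) * sum_k sin((2k+1)x) / (2k+1), so its derivative is
        # approximated by (4/pi) * sum_k cos((2k+1)x), used here as the estimator.
        grad = torch.zeros_like(x)
        for k in range(ctx.n_terms):
            grad += torch.cos((2 * k + 1) * x)
        grad *= 4.0 / math.pi
        return grad_out * grad, None

binarize = FourierSign.apply  # drop-in replacement for sign in a BNN forward pass
```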
arXiv Detail & Related papers (2021-03-01T08:25:26Z)
- The distance between the weights of the neural network is meaningful [9.329400348695435]
In the application of neural networks, we need to select a suitable model based on the problem complexity and the dataset scale.
This paper proves that the distance between the neural network weights at different training stages can be used to directly estimate the information the network accumulates during training.
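A minimal sketch of the measurement this claim rests on, the distance between weight snapshots taken at two training stages; the `run_training` hook is a hypothetical placeholder supplied by the caller.

```python
import torch

def flat_weights(model):
    """Concatenate all parameters into one vector."""
    return torch.cat([p.detach().reshape(-1) for p in model.parameters()])

def weight_distance(model, run_training, **train_kwargs):
    """L2 distance between the weights before and after a training stage."""
    w_before = flat_weights(model)
    run_training(model, **train_kwargs)   # hypothetical training loop
    w_after = flat_weights(model)
    return torch.norm(w_after - w_before).item()
```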
arXiv Detail & Related papers (2021-01-31T06:44:49Z)
- Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained from multi-channel data of the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
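For context, a sketch of how low-SNR multi-channel array data of this kind is commonly simulated: snapshots from a half-wavelength uniform linear array, reduced to a sample covariance that can be fed to a CNN. Geometry and SNR values are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def ula_snapshots(doa_deg, n_antennas=8, n_snapshots=200, snr_db=-10, seed=0):
    """Sample covariance of noisy snapshots from a uniform linear array."""
    rng = np.random.default_rng(seed)
    k = np.arange(n_antennas)
    a = np.exp(1j * np.pi * k * np.sin(np.deg2rad(doa_deg)))  # steering vector, d = lambda/2
    s = (rng.standard_normal(n_snapshots) + 1j * rng.standard_normal(n_snapshots)) / np.sqrt(2)
    sigma = 10 ** (-snr_db / 20)                              # unit-power source; noise sets the SNR
    n = sigma * (rng.standard_normal((n_antennas, n_snapshots))
                 + 1j * rng.standard_normal((n_antennas, n_snapshots))) / np.sqrt(2)
    x = np.outer(a, s) + n
    return x @ x.conj().T / n_snapshots                       # common CNN input representation
```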
arXiv Detail & Related papers (2020-11-17T12:52:18Z)
- Training of mixed-signal optical convolutional neural network with reduced quantization level [1.3381749415517021]
Mixed-signal artificial neural networks (ANNs) that employ analog matrix-multiplication accelerators can achieve higher speed and improved power efficiency.
Here we report a training method for mixed-signal ANNs with two types of errors in their analog signals: random noise and deterministic errors (distortions).
The results showed that mixed-signal ANNs trained with our proposed method can achieve an equivalent classification accuracy with noise level up to 50% of the ideal quantization step size.
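A minimal sketch of training with the two error types named above, injecting a deterministic distortion and random noise into a layer's analog output during the forward pass; the quadratic distortion model and the noise level are assumptions, not the paper's hardware model.

```python
import torch
import torch.nn as nn

class NoisyAnalogLinear(nn.Linear):
    """Linear layer whose output carries an assumed analog distortion and noise."""
    def __init__(self, *args, noise_std=0.05, distortion=0.02, **kwargs):
        super().__init__(*args, **kwargs)
        self.noise_std = noise_std
        self.distortion = distortion

    def forward(self, x):
        y = super().forward(x)
        y = y + self.distortion * y ** 2                       # deterministic distortion (assumed quadratic)
        if self.training:                                      # noise injected only while training
            y = y + self.noise_std * torch.randn_like(y)       # random analog noise
        return y
```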
arXiv Detail & Related papers (2020-08-20T20:46:22Z)
- Effective Version Space Reduction for Convolutional Neural Networks [61.84773892603885]
In active learning, sampling bias could pose a serious inconsistency problem and hinder the algorithm from finding the optimal hypothesis.
We examine active learning with convolutional neural networks through the principled lens of version space reduction.
arXiv Detail & Related papers (2020-06-22T17:40:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.