A Novel Explainable Out-of-Distribution Detection Approach for Spiking
Neural Networks
- URL: http://arxiv.org/abs/2210.00894v1
- Date: Fri, 30 Sep 2022 11:16:35 GMT
- Title: A Novel Explainable Out-of-Distribution Detection Approach for Spiking
Neural Networks
- Authors: Aitor Martinez Seras, Javier Del Ser, Jesus L. Lobo, Pablo
Garcia-Bringas, Nikola Kasabov
- Abstract summary: This work presents a novel OoD detector that can identify whether test examples input to a Spiking Neural Network belong to the distribution of the data over which it was trained.
We characterize the internal activations of the hidden layers of the network in the form of spike count patterns.
A local explanation method is devised to produce attribution maps revealing which parts of the input instance push most towards the detection of an example as an OoD sample.
- Score: 6.100274095771616
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Research on Spiking Neural Networks has surged in recent years owing
to their advantages over traditional neural networks, including their
efficient processing and inherent ability to model complex temporal
dynamics. Despite these differences, Spiking Neural Networks face issues
similar to those of other neural computation approaches when deployed in real-world
settings. This work addresses one of the practical circumstances that can
hinder the trustworthiness of this family of models: the possibility of
querying a trained model with samples far from the distribution of its training
data (also referred to as Out-of-Distribution or OoD data). Specifically, this
work presents a novel OoD detector that can identify whether test examples
input to a Spiking Neural Network belong to the distribution of the data over
which it was trained. For this purpose, we characterize the internal
activations of the hidden layers of the network in the form of spike count
patterns, which lay the basis for determining when the activations induced by a
test instance are atypical. Furthermore, a local explanation method is devised
to produce attribution maps revealing which parts of the input instance push
most towards the detection of an example as an OoD sample. Experiments are
performed over several image classification datasets to compare the
proposed detector to other OoD detection schemes from the literature. As the
obtained results clearly show, the proposed detector performs competitively
against such alternative schemes, and produces relevance attribution maps that
conform to expectations for synthetically created OoD instances.
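The abstract describes the detector only at a high level. As a complement, the following is a minimal sketch, in plain NumPy, of the general idea it outlines: summarize hidden-layer activations of in-distribution data as spike-count patterns, then flag test inputs whose patterns fall far from that characterization. This is not the authors' implementation; the function names, the median reference pattern, the percentile threshold and the toy attribution map below are all illustrative assumptions.

```python
# Hedged sketch of spike-count-based OoD detection (illustrative assumptions,
# not the paper's actual detector): characterize in-distribution hidden-layer
# spike counts, then flag test patterns that are atypically far from them.
import numpy as np

def fit_reference_pattern(train_counts, percentile=95):
    """train_counts: (N, D) spike counts of one hidden layer for N in-distribution samples.
    Returns a reference (median) pattern and a distance threshold calibrated
    on the training distances (an assumed calibration choice)."""
    reference = np.median(train_counts, axis=0)
    train_dist = np.linalg.norm(train_counts - reference, axis=1)
    return reference, np.percentile(train_dist, percentile)

def ood_score(test_counts, reference):
    """Distance of a test sample's spike-count pattern to the reference pattern."""
    return np.linalg.norm(test_counts - reference)

def attribution_map(test_counts, reference):
    """Toy per-dimension attribution: how strongly each spike-count dimension
    pushes the sample away from the reference pattern."""
    return np.abs(test_counts - reference)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.poisson(lam=5.0, size=(1000, 64)).astype(float)   # stand-in for in-distribution spike counts
    reference, threshold = fit_reference_pattern(train)
    test_in = rng.poisson(lam=5.0, size=64).astype(float)
    test_ood = rng.poisson(lam=15.0, size=64).astype(float)       # synthetic OoD-like activation pattern
    for name, x in (("in-dist", test_in), ("ood-like", test_ood)):
        score = ood_score(x, reference)
        verdict = "flagged as OoD" if score > threshold else "accepted"
        print(f"{name}: score={score:.2f} ({verdict})")
```

Note that in the paper the characterization is built from the hidden layers' spike count patterns and the attribution maps are produced over the input instance itself, not over the spike counts, so the sketch above only conveys the thresholding logic.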
Related papers
- Hypothesis-Driven Deep Learning for Out of Distribution Detection [0.8191518216608217]
We propose a hypothesis-driven approach to quantify whether a new sample is InD or OoD.
We adapt our method to detect an unseen sample of bacteria with a trained deep learning model, and show that it reveals interpretable differences between InD and OoD latent responses.
arXiv Detail & Related papers (2024-03-21T01:06:47Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - On the Effectiveness of Generative Adversarial Network on Anomaly
Detection [1.6244541005112747]
GANs rely on the rich contextual information of these models to identify the actual training distribution.
We suggest a new unsupervised model based on GANs: a combination of an autoencoder and a GAN.
A new scoring function is introduced to target anomalies, in which a linear combination of the discriminator's internal representation, the generator's visual representation, and the autoencoder's encoded representation defines the proposed anomaly score.
arXiv Detail & Related papers (2021-12-31T16:35:47Z) - Out-of-Distribution Example Detection in Deep Neural Networks using
Distance to Modelled Embedding [0.0]
We present Distance to Modelled Embedding (DIME) that we use to detect out-of-distribution examples during prediction time.
By approximating the training set embedding into feature space as a linear hyperplane, we derive a simple, unsupervised, highly performant and computationally efficient method (a rough sketch of this idea is given after this list).
arXiv Detail & Related papers (2021-08-24T12:28:04Z) - Adversarial Examples Detection with Bayesian Neural Network [57.185482121807716]
We propose a new framework to detect adversarial examples motivated by the observations that random components can improve the smoothness of predictors.
We propose a novel Bayesian adversarial example detector, short for BATer, to improve the performance of adversarial example detection.
arXiv Detail & Related papers (2021-05-18T15:51:24Z) - Explainable Adversarial Attacks in Deep Neural Networks Using Activation
Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised
Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - On the Transferability of Adversarial Attacks against Neural Text
Classifier [121.6758865857686]
We investigate the transferability of adversarial examples for text classification models.
We propose a genetic algorithm to find an ensemble of models that can induce adversarial examples to fool almost all existing models.
We derive word replacement rules that can be used for model diagnostics from these adversarial examples.
arXiv Detail & Related papers (2020-11-17T10:45:05Z) - Toward Scalable and Unified Example-based Explanation and Outlier
Detection [128.23117182137418]
We argue for a broader adoption of prototype-based student networks capable of providing an example-based explanation for their prediction.
We show that our prototype-based networks, which go beyond similarity kernels, deliver meaningful explanations and promising outlier detection results without compromising classification accuracy.
arXiv Detail & Related papers (2020-11-11T05:58:17Z) - Ramifications of Approximate Posterior Inference for Bayesian Deep
Learning in Adversarial and Out-of-Distribution Settings [7.476901945542385]
We show that Bayesian deep learning models on certain occasions marginally outperform conventional neural networks.
Preliminary investigations indicate the potential inherent role of bias due to choices of initialisation, architecture or activation functions.
arXiv Detail & Related papers (2020-09-03T16:58:15Z)
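For the Distance to Modelled Embedding entry above, a rough sketch of the general distance-to-a-linear-model idea could look as follows (this is not the DIME authors' code; the rank, the calibration-free scoring and all names are assumptions): fit a low-rank linear approximation of the training embeddings and score test embeddings by their residual distance to it.

```python
# Rough sketch of a distance-to-modelled-embedding style score (illustrative
# assumptions, not the DIME implementation): approximate the training
# embeddings with a linear subspace and measure how far a test embedding
# falls from that subspace.
import numpy as np

def fit_linear_model(train_embeddings, rank=10):
    """Fit a rank-`rank` linear approximation (via SVD) of the centred training embeddings."""
    mean = train_embeddings.mean(axis=0)
    _, _, vt = np.linalg.svd(train_embeddings - mean, full_matrices=False)
    return mean, vt[:rank]                          # orthonormal basis of the modelled subspace

def distance_to_model(x, mean, basis):
    """Residual distance of one embedding to the fitted subspace."""
    centred = x - mean
    projection = basis.T @ (basis @ centred)
    return np.linalg.norm(centred - projection)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_basis = rng.normal(size=(2, 32))           # hidden low-dimensional structure of the embeddings
    train = rng.normal(size=(500, 2)) @ true_basis
    mean, basis = fit_linear_model(train, rank=2)
    test_in = rng.normal(size=2) @ true_basis       # lies in the modelled subspace
    test_ood = rng.normal(size=32) * 3.0            # arbitrary direction, far from it
    print("in-distribution distance:", round(distance_to_model(test_in, mean, basis), 3))
    print("out-of-distribution distance:", round(distance_to_model(test_ood, mean, basis), 3))
```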
This list is automatically generated from the titles and abstracts of the papers in this site.