Forensicability of Deep Neural Network Inference Pipelines
- URL: http://arxiv.org/abs/2102.00921v1
- Date: Mon, 1 Feb 2021 15:41:49 GMT
- Title: Forensicability of Deep Neural Network Inference Pipelines
- Authors: Alexander Schlögl, Tobias Kupek, Rainer Böhme
- Abstract summary: We propose methods to infer properties of the execution environment of machine learning pipelines by tracing characteristic numerical deviations in observable outputs.
Results from a series of proof-of-concept experiments give rise to possible forensic applications, such as the identification of the hardware platform used to produce deep neural network predictions.
- Score: 68.8204255655161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose methods to infer properties of the execution environment of
machine learning pipelines by tracing characteristic numerical deviations in
observable outputs. Results from a series of proof-of-concept experiments
obtained on local and cloud-hosted machines give rise to possible forensic
applications, such as the identification of the hardware platform used to
produce deep neural network predictions. Finally, we introduce boundary samples
that amplify the numerical deviations in order to distinguish machines by their
predicted label only.
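The core idea lends itself to a small illustration: run the same model on the same inputs in two execution environments and compare the raw output vectors element-wise. The NumPy sketch below is a minimal illustration of that comparison, not the authors' implementation; the saved-output file names and the zero tolerance for identical setups are assumptions.

```python
import numpy as np

# Hypothetical setup: the same pretrained model was run on the same input
# batch on two machines, and the raw output vectors were saved to disk.
# File names are illustrative assumptions.
outputs_a = np.load("outputs_machine_a.npy")  # shape: (n_samples, n_classes)
outputs_b = np.load("outputs_machine_b.npy")  # same shape, same inputs

# Element-wise numerical deviations between the two execution environments.
deviation = outputs_a - outputs_b

# Simple summary statistics that can serve as a coarse fingerprint.
max_abs_dev = np.max(np.abs(deviation))
mean_abs_dev = np.mean(np.abs(deviation))
print(f"max |deviation|  = {max_abs_dev:.3e}")
print(f"mean |deviation| = {mean_abs_dev:.3e}")

# Assumption: identical hardware and software stacks reproduce outputs
# bit-exactly, so any nonzero deviation hints at a different environment.
SAME_ENVIRONMENT_TOL = 0.0
print("environments differ:", max_abs_dev > SAME_ENVIRONMENT_TOL)

# With label-only access, boundary samples amplify these deviations so that
# the predicted class itself differs across machines.
labels_a = np.argmax(outputs_a, axis=1)
labels_b = np.argmax(outputs_b, axis=1)
print("label disagreements:", int(np.sum(labels_a != labels_b)))
```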
Related papers
- Signal Processing for Implicit Neural Representations [80.38097216996164]
Implicit Neural Representations (INRs) encode continuous multi-media data via multi-layer perceptrons.
Existing works manipulate such continuous representations by processing their discretized instances.
We propose an implicit neural signal processing network, dubbed INSP-Net, built via differential operators on INRs.
arXiv Detail & Related papers (2022-10-17T06:29:07Z)
- A Novel Explainable Out-of-Distribution Detection Approach for Spiking Neural Networks [6.100274095771616]
This work presents a novel OoD detector that can identify whether test examples input to a Spiking Neural Network belong to the distribution of the data over which it was trained.
We characterize the internal activations of the hidden layers of the network in the form of spike count patterns.
A local explanation method is devised to produce attribution maps revealing which parts of the input instance push most towards the detection of an example as an OoD sample.
arXiv Detail & Related papers (2022-09-30T11:16:35Z)
- Variational Neural Networks [88.24021148516319]
We propose a method for uncertainty estimation in neural networks called the Variational Neural Network (VNN).
VNN generates parameters for the output distribution of a layer by transforming its inputs with learnable sub-layers.
In uncertainty quality estimation experiments, we show that VNNs achieve better uncertainty quality than Monte Carlo Dropout or Bayes By Backpropagation methods.
arXiv Detail & Related papers (2022-07-04T15:41:02Z)
- Generalized multiscale feature extraction for remaining useful life prediction of bearings with generative adversarial networks [4.988898367111902]
Bearings are key components in industrial machinery, and their failure may lead to unwanted downtime and economic loss.
It is necessary to predict the remaining useful life (RUL) of bearings.
We propose a novel generalized multiscale feature extraction method with generative adversarial networks.
arXiv Detail & Related papers (2021-09-26T07:11:55Z)
- Out-of-Distribution Example Detection in Deep Neural Networks using Distance to Modelled Embedding [0.0]
We present Distance to Modelled Embedding (DIME), which we use to detect out-of-distribution examples at prediction time.
By approximating the training set's embedding in feature space as a linear hyperplane, we derive a simple, unsupervised, highly performant, and computationally efficient method.
arXiv Detail & Related papers (2021-08-24T12:28:04Z)
- iNNformant: Boundary Samples as Telltale Watermarks [68.8204255655161]
We show that it is possible to generate sets of boundary samples that can identify any of four tested microarchitectures.
These sets can be built so that no sample has a peak signal-to-noise ratio worse than 70 dB (see the PSNR sketch after this list).
arXiv Detail & Related papers (2021-06-14T11:18:32Z)
- Adversarial Examples Detection with Bayesian Neural Network [57.185482121807716]
We propose a new framework to detect adversarial examples, motivated by the observation that random components can improve the smoothness of predictors.
We propose a novel Bayesian adversarial example detector, abbreviated BATer, to improve the performance of adversarial example detection.
arXiv Detail & Related papers (2021-05-18T15:51:24Z)
- Toward Scalable and Unified Example-based Explanation and Outlier Detection [128.23117182137418]
We argue for a broader adoption of prototype-based student networks capable of providing an example-based explanation for their predictions.
We show that our prototype-based networks, which go beyond similarity kernels, deliver meaningful explanations and promising outlier detection results without compromising classification accuracy.
arXiv Detail & Related papers (2020-11-11T05:58:17Z)
- Extending machine learning classification capabilities with histogram reweighting [0.0]
We propose the use of Monte Carlo histogram reweighting to extrapolate predictions of machine learning methods.
We treat the output from a convolutional neural network as an observable in a statistical system, enabling its extrapolation over continuous ranges in parameter space.
arXiv Detail & Related papers (2020-04-29T17:20:16Z)
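For the iNNformant entry above, the 70 dB constraint refers to the peak signal-to-noise ratio between a boundary sample and the original image it was derived from. The sketch below shows only this quality check under the standard PSNR definition; it is not the boundary-sample generation procedure itself, and the image shape, value range, and perturbation magnitude are assumptions.

```python
import numpy as np

def psnr(original: np.ndarray, perturbed: np.ndarray, max_value: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between an original image and its
    perturbed (boundary-sample) version. Assumes pixel values in [0, max_value]."""
    mse = np.mean((original.astype(np.float64) - perturbed.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)

# Illustrative check: accept a candidate boundary sample only if its
# distortion stays above the 70 dB threshold mentioned in the abstract.
rng = np.random.default_rng(0)
original = rng.random((32, 32, 3))                          # hypothetical image
candidate = original + rng.normal(0, 1e-4, original.shape)  # tiny perturbation
print(f"PSNR = {psnr(original, candidate):.1f} dB")
print("accepted:", psnr(original, candidate) >= 70.0)
```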
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.