Monitizer: Automating Design and Evaluation of Neural Network Monitors
- URL: http://arxiv.org/abs/2405.10350v1
- Date: Thu, 16 May 2024 13:19:51 GMT
- Title: Monitizer: Automating Design and Evaluation of Neural Network Monitors
- Authors: Muqsit Azeem, Marta Grobelna, Sudeep Kanav, Jan Kretinsky, Stefanie Mohr, Sabine Rieder
- Abstract summary: The behavior of neural networks (NNs) on previously unseen types of data (out-of-distribution or OOD) is typically unpredictable.
This can be dangerous if the network's output is used for decision-making in a safety-critical system.
We present a tool for users and developers of NN monitors.
- Score: 0.9236074230806581
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The behavior of neural networks (NNs) on previously unseen types of data (out-of-distribution or OOD) is typically unpredictable. This can be dangerous if the network's output is used for decision-making in a safety-critical system. Hence, detecting that an input is OOD is crucial for the safe application of the NN. Verification approaches do not scale to practical NNs, making runtime monitoring more appealing for practical use. While various monitors have been suggested recently, their optimization for a given problem, as well as comparison with each other and reproduction of results, remain challenging. We present a tool for users and developers of NN monitors. It allows for (i) application of various types of monitors from the literature to a given input NN, (ii) optimization of the monitor's hyperparameters, and (iii) experimental evaluation and comparison to other approaches. Besides, it facilitates the development of new monitoring approaches. We demonstrate the tool's usability on several use cases of different types of users as well as on a case study comparing different approaches from recent literature.
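The abstract describes a three-step workflow: apply a monitor to a given NN, optimize the monitor's hyperparameters, and evaluate it against OOD data. The sketch below illustrates that kind of workflow generically; the class and function names are hypothetical placeholders and do not reflect Monitizer's actual API.

```python
# Illustrative sketch only: the names below are hypothetical placeholders,
# NOT Monitizer's actual interface.
import numpy as np

class ThresholdMonitor:
    """A generic score-based monitor: flags an input as OOD when its score exceeds a threshold."""
    def __init__(self, score_fn, threshold=0.0):
        self.score_fn = score_fn          # maps NN features/outputs to anomaly scores
        self.threshold = threshold

    def flags(self, features):
        return self.score_fn(features) > self.threshold

def tune_threshold(monitor, id_val, ood_val, candidates):
    """Hyperparameter-optimization step: pick the threshold with the best balanced detection rate."""
    best_t, best_acc = None, -1.0
    for t in candidates:
        monitor.threshold = t
        tnr = 1.0 - monitor.flags(id_val).mean()   # in-distribution inputs kept
        tpr = monitor.flags(ood_val).mean()        # OOD inputs flagged
        acc = 0.5 * (tnr + tpr)
        if acc > best_acc:
            best_t, best_acc = t, acc
    monitor.threshold = best_t
    return best_t, best_acc

# Evaluation step on held-out data (random features stand in for real NN activations).
rng = np.random.default_rng(0)
id_val, ood_val = rng.normal(0, 1, (500, 64)), rng.normal(3, 1, (500, 64))
monitor = ThresholdMonitor(score_fn=lambda x: np.linalg.norm(x, axis=1))
t, acc = tune_threshold(monitor, id_val, ood_val, candidates=np.linspace(0, 50, 101))
print(f"chosen threshold={t:.2f}, balanced accuracy={acc:.3f}")
```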
Related papers
- Monitoring Robustness and Individual Fairness [7.922558880545528]
We propose runtime monitoring of input-output robustness of deployed, black-box AI models.
We show that the monitoring problem can be cast as the fixed-radius nearest neighbor (FRNN) search problem.
We present our tool Clemont, which offers a number of lightweight monitors.
arXiv Detail & Related papers (2025-05-31T10:27:54Z)
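A minimal sketch of the FRNN reduction described in the entry above, assuming an L-infinity radius and brute-force search; Clemont's own monitors implement the same idea far more efficiently.

```python
# Robustness monitoring cast as fixed-radius nearest neighbor (FRNN) search.
# Brute-force numpy version for clarity only.
import numpy as np

class FRNNRobustnessMonitor:
    def __init__(self, epsilon):
        self.epsilon = epsilon            # L-infinity radius defining "similar" inputs
        self.inputs, self.outputs = [], []

    def observe(self, x, y):
        """Return True if (x, y) violates robustness w.r.t. the history, then store it."""
        violation = False
        if self.inputs:
            X = np.stack(self.inputs)
            close = np.max(np.abs(X - x), axis=1) <= self.epsilon   # FRNN query
            violation = bool(np.any(np.array(self.outputs)[close] != y))
        self.inputs.append(x)
        self.outputs.append(y)
        return violation

monitor = FRNNRobustnessMonitor(epsilon=0.1)
print(monitor.observe(np.array([0.0, 0.0]), 1))   # False: nothing seen yet
print(monitor.observe(np.array([0.05, 0.0]), 0))  # True: nearby input, different output
```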
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
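A hedged illustration of how the reversion-to-a-constant observation above can support risk-sensitive decisions: abstain when the softmax output is close to a constant reference prediction. The KL-based rule and threshold are illustrative, not the paper's exact procedure.

```python
# Illustrative sketch (not the paper's exact procedure): abstain when the model's
# softmax output has drifted close to a constant reference prediction, e.g. the
# marginal label distribution observed during training.
import numpy as np

def kl(p, q, eps=1e-12):
    p, q = np.clip(p, eps, 1), np.clip(q, eps, 1)
    return float(np.sum(p * np.log(p / q)))

def risk_sensitive_decision(softmax_out, marginal, min_divergence=0.5):
    """Act on the prediction only if it is sufficiently far from the constant solution."""
    if kl(softmax_out, marginal) < min_divergence:
        return "abstain"                 # likely OOD: prediction carries little information
    return int(np.argmax(softmax_out))

marginal = np.array([0.25, 0.25, 0.25, 0.25])
print(risk_sensitive_decision(np.array([0.9, 0.05, 0.03, 0.02]), marginal))   # class index
print(risk_sensitive_decision(np.array([0.28, 0.26, 0.24, 0.22]), marginal))  # "abstain"
```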
- An LSTM-Based Predictive Monitoring Method for Data with Time-varying Variability [3.5246670856011035]
This paper explores the ability of the recurrent neural network structure to monitor processes.
It proposes a control chart based on long short-term memory (LSTM) prediction intervals for data with time-varying variability.
The proposed method is also applied to time series sensor data, which confirms that the proposed method is an effective technique for detecting abnormalities.
arXiv Detail & Related papers (2023-09-05T06:13:09Z)
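A minimal sketch of an LSTM-based prediction-interval control chart for the entry above, assuming one-step-ahead forecasting and a Gaussian mu +/- 3*sigma interval with sigma estimated from training residuals; the paper's interval construction may differ in detail.

```python
# Control chart based on LSTM one-step-ahead prediction intervals (illustrative sketch).
import torch
import torch.nn as nn

class OneStepLSTM(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)   # next-value prediction

def make_windows(series, w=20):
    X = torch.stack([series[i:i + w] for i in range(len(series) - w)]).unsqueeze(-1)
    return X, series[w:]

torch.manual_seed(0)
train = torch.sin(torch.linspace(0, 20, 400)) + 0.05 * torch.randn(400)
X, y = make_windows(train)
model = OneStepLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(150):                            # short training loop for the sketch
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    sigma = (model(X) - y).std()                # residual spread on training data
    test = torch.sin(torch.linspace(20, 25, 100)) + 0.05 * torch.randn(100)
    test[60:] += 1.0                            # injected shift the chart should flag
    Xt, yt = make_windows(test)
    pred = model(Xt)
    alarms = (yt < pred - 3 * sigma) | (yt > pred + 3 * sigma)
    print("alarm indices:", torch.nonzero(alarms).flatten().tolist())
```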
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
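The counting problem above is easiest to see on a toy domain. The brute-force sketch below counts grid points of a discretized input box on which a small random ReLU network violates a safety property; the paper's exact and approximate counting procedures are far more scalable than this.

```python
# Brute-force illustration of the counting idea behind #DNN-Verification.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 8)), rng.normal(size=8)    # a tiny random ReLU network
W2, b2 = rng.normal(size=(8, 1)), rng.normal(size=1)

def net(x):                                             # x: (n, 2) -> (n,)
    return (np.maximum(x @ W1 + b1, 0.0) @ W2 + b2).ravel()

# Safety property: for inputs in [-1, 1]^2 the output must stay below 2.0.
grid = np.linspace(-1.0, 1.0, 201)
X = np.array([[a, b] for a in grid for b in grid])
violations = int(np.sum(net(X) >= 2.0))
print(f"{violations} of {len(X)} grid inputs violate the property "
      f"({100.0 * violations / len(X):.2f}%)")
```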
- Detection of out-of-distribution samples using binary neuron activation patterns [0.26249027950824505]
The ability to identify previously unseen inputs as novel is crucial in safety-critical applications such as self-driving cars, unmanned aerial vehicles, and robots.
Existing approaches to detect OOD samples treat a DNN as a black box and evaluate the confidence score of the output predictions.
In this work, we introduce a novel method for OOD detection. Our method is motivated by theoretical analysis of neuron activation patterns (NAP) in ReLU-based architectures.
arXiv Detail & Related papers (2022-12-29T11:42:46Z)
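A sketch of the neuron-activation-pattern (NAP) idea from the entry above: record which ReLU neurons fire for training inputs and flag test inputs whose binary pattern is far, in Hamming distance, from every recorded pattern. Layer choice and threshold are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch of binary neuron activation pattern (NAP) monitoring for OOD detection.
import numpy as np

rng = np.random.default_rng(2)
W, b = rng.normal(size=(10, 32)), rng.normal(size=32)   # stand-in hidden layer

def activation_pattern(x):
    """Binary vector: which ReLU neurons fire for input x (batch supported)."""
    return (x @ W + b > 0).astype(np.uint8)

class NAPMonitor:
    def __init__(self, train_inputs, max_hamming=3):
        self.patterns = np.unique(activation_pattern(train_inputs), axis=0)
        self.max_hamming = max_hamming

    def is_ood(self, x):
        p = activation_pattern(x.reshape(1, -1))
        dists = np.sum(self.patterns != p, axis=1)       # Hamming distance to stored patterns
        return int(dists.min()) > self.max_hamming

train = rng.normal(0.0, 1.0, size=(1000, 10))            # in-distribution data
monitor = NAPMonitor(train)
print(monitor.is_ood(rng.normal(0.0, 1.0, size=10)))      # likely False (known pattern region)
print(monitor.is_ood(rng.normal(8.0, 1.0, size=10)))      # likely True  (unseen pattern)
```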
- Runtime Monitoring for Out-of-Distribution Detection in Object Detection Neural Networks [0.0]
Monitoring provides a more realistic and applicable alternative to verification in the setting of real neural networks used in industry.
We extend a runtime-monitoring approach previously proposed for classification networks to perception systems capable of identification and localization of multiple objects.
arXiv Detail & Related papers (2022-12-15T12:50:42Z)
- Out-Of-Distribution Detection Is Not All You Need [0.0]
We argue that OOD detection is not a well-suited framework to design efficient runtime monitors.
We show that studying monitors in the OOD setting can be misleading.
We also show that removing erroneous training data samples helps to train better monitors.
arXiv Detail & Related papers (2022-11-29T12:40:06Z)
- Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating backprojection (BP) set over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
arXiv Detail & Related papers (2022-09-28T13:17:28Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- Provably-Robust Runtime Monitoring of Neuron Activation Patterns [0.0]
It is desirable to monitor at runtime whether the input to a deep neural network is similar to the data used in training.
We address this challenge by integrating formal symbolic reasoning inside the monitor construction process.
The provable robustness is further generalized to cases where monitoring a single neuron can use more than one bit.
arXiv Detail & Related papers (2020-11-24T08:37:18Z)
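A simplified sketch in the spirit of the provably-robust activation monitoring above: store per-neuron activation intervals from training data, enlarged by a Lipschitz-based margin so that inputs within an L2 ball of radius delta around training inputs cannot raise an alarm. The paper's symbolic reasoning and multi-bit abstractions are not reproduced here.

```python
# Interval-abstraction sketch of provably-robust activation monitoring (illustrative only).
import numpy as np

rng = np.random.default_rng(3)
W, b = rng.normal(size=(10, 32)), rng.normal(size=32)    # stand-in hidden layer

def hidden(x):
    return np.maximum(x @ W + b, 0.0)

class IntervalMonitor:
    def __init__(self, train_inputs, delta=0.1):
        acts = hidden(train_inputs)
        margin = delta * np.linalg.norm(W, axis=0)        # per-neuron Lipschitz bound * delta
        self.low = acts.min(axis=0) - margin
        self.high = acts.max(axis=0) + margin

    def is_ood(self, x):
        a = hidden(x.reshape(1, -1)).ravel()
        return bool(np.any((a < self.low) | (a > self.high)))

train = rng.normal(0.0, 1.0, size=(1000, 10))
monitor = IntervalMonitor(train, delta=0.1)
print(monitor.is_ood(train[0] + 0.02 * rng.normal(size=10)))  # small perturbation: no alarm
print(monitor.is_ood(rng.normal(6.0, 1.0, size=10)))          # far-away input: likely alarm
```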
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
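A minimal sketch of a gradient-norm score in the spirit of GraN: score an input by the parameter-gradient norm of the loss taken against the model's own predicted label; large scores suggest adversarial or misclassified inputs. GraN additionally learns a per-layer calibration, which is omitted here.

```python
# Gradient-norm based anomaly score (illustrative sketch, not GraN's exact formulation).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))

def grad_norm_score(model, x):
    """Larger scores indicate less confident / more suspicious inputs."""
    model.zero_grad()
    logits = model(x.unsqueeze(0))
    pred = logits.argmax(dim=1)                   # the model's own prediction as pseudo-label
    loss = F.cross_entropy(logits, pred)
    loss.backward()
    sq = sum((p.grad ** 2).sum() for p in model.parameters())
    return float(torch.sqrt(sq))

x_clean = torch.randn(20)
x_perturbed = x_clean + 0.5 * torch.randn(20)     # stand-in for an adversarial/odd input
print("clean score:    ", grad_norm_score(model, x_clean))
print("perturbed score:", grad_norm_score(model, x_perturbed))
# A detector would compare these scores against a threshold fitted on validation data.
```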