Runtime Monitoring DNN-Based Perception
- URL: http://arxiv.org/abs/2310.03999v1
- Date: Fri, 6 Oct 2023 03:57:56 GMT
- Title: Runtime Monitoring DNN-Based Perception
- Authors: Chih-Hong Cheng, Michael Luttenberger, Rongjie Yan
- Abstract summary: This tutorial aims to provide readers with a glimpse of techniques proposed in the literature.
We start with classical methods proposed in the machine learning community, then highlight a few techniques proposed by the formal methods community.
We conclude by highlighting the need to rigorously design monitors, where data availability outside the operational domain plays an important role.
- Score: 5.518665721709856
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) are instrumental in realizing complex perception
systems. As many of these applications are safety-critical by design,
engineering rigor is required to ensure that the functional insufficiency of
the DNN-based perception is not the source of harm. In addition to conventional
static verification and testing techniques employed during the design phase,
there is a need for runtime verification techniques that can detect critical
events, diagnose issues, and even enforce requirements. This tutorial aims to
provide readers with a glimpse of techniques proposed in the literature. We
start with classical methods proposed in the machine learning community, then
highlight a few techniques proposed by the formal methods community. While we
surely can observe similarities in the design of monitors, how the decision
boundaries are created varies between the two communities. We conclude by
highlighting the need to rigorously design monitors, where data availability
outside the operational domain plays an important role.
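As a concrete illustration of the "classical methods proposed in the machine learning community" that the abstract refers to, the sketch below implements a maximum-softmax-probability monitor: it raises an alarm whenever the classifier's top-1 confidence falls below a threshold. This is a generic baseline, not the paper's own method; the threshold value and the toy logits are illustrative assumptions.

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

class MaxSoftmaxMonitor:
    """Runtime monitor that flags an input as suspicious when the
    classifier's top-1 softmax score drops below a fixed threshold
    (a classical ML-community baseline; threshold is illustrative)."""

    def __init__(self, threshold=0.7):
        self.threshold = threshold

    def check(self, logits):
        conf = max(softmax(logits))
        verdict = "ACCEPT" if conf >= self.threshold else "RAISE_ALARM"
        return verdict, conf

monitor = MaxSoftmaxMonitor(threshold=0.7)
print(monitor.check([8.0, 0.5, 0.1]))   # confident prediction -> ACCEPT
print(monitor.check([1.0, 0.9, 0.8]))   # near-uniform logits -> RAISE_ALARM
```

In practice the threshold would be calibrated on held-out in-distribution data; the formal-methods techniques surveyed in the tutorial construct the decision boundary differently, e.g. from abstractions of observed feature values.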
Related papers
- Safety Monitoring of Machine Learning Perception Functions: a Survey [7.193217430660011]
New dependability challenges arise when Machine Learning predictions are used in safety-critical applications.
The use of fault tolerance mechanisms, such as safety monitors, is essential to ensure the safe behavior of the system.
This paper presents an extensive literature review on safety monitoring of perception functions using ML in a safety-critical context.
arXiv Detail & Related papers (2024-12-09T10:58:50Z)
- Verifying the Generalization of Deep Learning to Out-of-Distribution Domains [1.5774380628229037]
Deep neural networks (DNNs) play a crucial role in the field of machine learning.
DNNs may occasionally exhibit challenges with generalization, i.e., may fail to handle inputs that were not encountered during training.
This limitation is a significant challenge when it comes to deploying deep learning for safety-critical tasks.
arXiv Detail & Related papers (2024-06-04T07:02:59Z)
- Real-time Threat Detection Strategies for Resource-constrained Devices [1.4815508281465273]
We present an end-to-end process designed to effectively address DNS-tunneling attacks in a router.
We demonstrate that utilizing stateless features for training the ML model, along with features chosen to be independent of the network configuration, leads to highly accurate results.
The deployment of this carefully crafted model, optimized for embedded devices across diverse environments, resulted in high DNS-tunneling attack detection with minimal latency.
arXiv Detail & Related papers (2024-03-22T10:02:54Z)
- Towards Rigorous Design of OoD Detectors [0.0]
Out-of-distribution (OoD) detection techniques are instrumental for safety-related neural networks.
Current performance-oriented OoD detection techniques, geared towards matching benchmark metrics, are not sufficient for establishing safety claims.
What is missing is a rigorous design approach for developing, verifying, and validating OoD detectors.
arXiv Detail & Related papers (2023-06-14T11:38:36Z)
- Prescriptive Process Monitoring: Quo Vadis? [64.39761523935613]
The paper studies existing methods in this field via a Systematic Literature Review (SLR).
The SLR provides insights into challenges and areas for future research that could enhance the usefulness and applicability of prescriptive process monitoring methods.
arXiv Detail & Related papers (2021-12-03T08:06:24Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various use cases in manufacturing.
Most research has focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the criteria a machine learning system must meet to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to protect the model against different classes of unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Intrusion Detection Systems for IoT: opportunities and challenges offered by Edge Computing [1.7589792057098648]
Intrusion Detection Systems (IDSs) are key components of current cybersecurity methods.
IDSs can be based either on cross-checking monitored events against a database of known intrusion experiences (signature-based detection), or on learning the normal behavior of the system (anomaly-based detection).
This work is dedicated to the application to the Internet of Things (IoT) network where edge computing is used to support the IDS implementation.
arXiv Detail & Related papers (2020-12-02T13:07:27Z)
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains comparable results over standard benchmarks with respect to formal verifiers.
Our approach enables efficient evaluation of safety properties for decision-making models in practical applications.
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
- Survey of Network Intrusion Detection Methods from the Perspective of the Knowledge Discovery in Databases Process [63.75363908696257]
We review the methods that have been applied to network data with the purpose of developing an intrusion detector.
We discuss the techniques used for the capture, preparation and transformation of the data, as well as the data mining and evaluation methods.
As a result of this literature review, we investigate some open issues which will need to be considered for further research in the area of network security.
arXiv Detail & Related papers (2020-01-27T11:21:05Z)
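Several entries above concern out-of-distribution detection, where the formal-methods community typically derives the monitor's decision boundary from an abstraction of values observed during training rather than from a learned score. The sketch below shows the simplest such abstraction, a per-dimension interval ("box") over feature vectors; the feature values are toy data and the design is a generic illustration, not any specific paper's construction.

```python
class BoxMonitor:
    """Abstraction-based OoD monitor: records, per feature dimension,
    the min/max interval observed on in-distribution (training) data
    and raises an alarm for any runtime vector outside that box.
    All data below is toy/illustrative."""

    def __init__(self):
        self.lo = None
        self.hi = None

    def fit(self, features):
        # Compute the bounding box of the observed feature vectors.
        dims = len(features[0])
        self.lo = [min(f[d] for f in features) for d in range(dims)]
        self.hi = [max(f[d] for f in features) for d in range(dims)]

    def is_in_distribution(self, x):
        # Accept only vectors lying inside the recorded box.
        return all(l <= v <= h for v, l, h in zip(x, self.lo, self.hi))

train = [[0.1, 1.0], [0.4, 1.5], [0.2, 1.2]]
monitor = BoxMonitor()
monitor.fit(train)
print(monitor.is_in_distribution([0.3, 1.1]))  # inside the box -> True
print(monitor.is_in_distribution([2.0, 1.1]))  # first coordinate out of range -> False
```

A box abstraction is sound with respect to the recorded data but coarse; tighter abstractions (unions of boxes, clustering per class) reduce false acceptances, which is one reason the rigorous-design question raised in the entries above matters.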
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.