Enhancing Deep Learning with Scenario-Based Override Rules: a Case Study
- URL: http://arxiv.org/abs/2301.08114v1
- Date: Thu, 19 Jan 2023 15:06:32 GMT
- Title: Enhancing Deep Learning with Scenario-Based Override Rules: a Case Study
- Authors: Adiel Ashrov and Guy Katz
- Abstract summary: Deep neural networks (DNNs) have become a crucial instrument in the software development toolkit.
DNNs are highly opaque, and can behave in an unexpected manner when they encounter unfamiliar input.
One promising approach is to extend DNN-based systems with hand-crafted override rules.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep neural networks (DNNs) have become a crucial instrument in the software
development toolkit, due to their ability to efficiently solve complex
problems. Nevertheless, DNNs are highly opaque, and can behave in an unexpected
manner when they encounter unfamiliar input. One promising approach for
addressing this challenge is to extend DNN-based systems with hand-crafted
override rules, which override the DNN's output when certain conditions are
met. Here, we advocate crafting such override rules using the well-studied
scenario-based modeling paradigm, which produces rules that are simple,
extensible, and powerful enough to ensure the safety of the DNN, while also
rendering the system more translucent. We report on two extensive case studies,
which demonstrate the feasibility of the approach; and through them, propose an
extension to scenario-based modeling, which facilitates its integration with
DNN components. We regard this work as a step towards creating safer and more
reliable DNN-based systems and models.
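The override-rule pattern the abstract describes can be made concrete with a short sketch. The Python snippet below is a minimal illustration of the general idea only, not the paper's actual scenario-based case studies; the policy network, state layout, and rule thresholds are all hypothetical placeholders.

```python
import numpy as np

def dnn_policy(state: np.ndarray) -> float:
    """Stand-in for an opaque DNN controller (here just a fixed linear map)."""
    weights = np.array([0.8, -0.3, 0.1])
    return float(weights @ state)

def apply_override_rules(state: np.ndarray, proposed: float) -> float:
    """Hand-crafted rule: veto the DNN's action when a safety condition
    holds; otherwise let the DNN's action pass through unchanged."""
    distance_to_obstacle = state[0]          # hypothetical state layout
    if distance_to_obstacle < 1.0:           # rule condition: too close
        return -1.0                          # forced safe action (brake)
    return proposed

def controlled_step(state: np.ndarray) -> float:
    return apply_override_rules(state, dnn_policy(state))

print(controlled_step(np.array([5.0, 0.2, 0.0])))   # DNN action used
print(controlled_step(np.array([0.5, 0.2, 0.0])))   # rule overrides to -1.0
```

The key design point is that the rule layer sits outside the network: the DNN remains untouched, and the rules alone determine when its output is distrusted.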
Related papers
- Make Me a BNN: A Simple Strategy for Estimating Bayesian Uncertainty from Pre-trained Models [40.38541033389344]
Deep Neural Networks (DNNs) are powerful tools for various computer vision tasks, yet they often struggle with reliable uncertainty quantification.
We introduce the Adaptable Bayesian Neural Network (ABNN), a simple and scalable strategy to seamlessly transform DNNs into BNNs.
We conduct extensive experiments across multiple datasets for image classification and semantic segmentation tasks, and our results demonstrate that ABNN achieves state-of-the-art performance.
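The summary above does not spell out ABNN's transformation. As a generic stand-in for post-hoc uncertainty estimation from a pretrained network, the sketch below uses Monte Carlo dropout, a different and well-known technique, on a toy numpy model with made-up weights.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))                     # stand-in "pretrained" weights

def mc_forward(x: np.ndarray, drop_p: float = 0.5) -> np.ndarray:
    keep = (rng.random(x.shape) > drop_p) / (1.0 - drop_p)  # inverted dropout
    h = W.T @ (x * keep)                        # randomly masked linear layer
    e = np.exp(h - h.max())
    return e / e.sum()                          # softmax "class probabilities"

x = np.array([1.0, 0.5, -0.2, 0.3])
samples = np.stack([mc_forward(x) for _ in range(200)])
print("mean prediction:", samples.mean(axis=0))
print("per-class std  :", samples.std(axis=0))  # spread = uncertainty proxy
```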
arXiv Detail & Related papers (2023-12-23T16:39:24Z)
- VNN: Verification-Friendly Neural Networks with Hard Robustness Guarantees [3.208888890455612]
We propose a novel framework to generate Verification-Friendly Neural Networks (VNNs).
We present a post-training optimization framework to achieve a balance between preserving prediction performance and verification-friendliness.
arXiv Detail & Related papers (2023-12-15T12:39:27Z)
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- Assumption Generation for the Verification of Learning-Enabled Autonomous Systems [7.580719272198119]
We present an assume-guarantee style compositional approach for the formal verification of system-level safety properties.
We illustrate our approach on a case study taken from the autonomous airplanes domain.
arXiv Detail & Related papers (2023-05-27T23:30:27Z)
- Verifying Generalization in Deep Learning [3.4948705785954917]
Deep neural networks (DNNs) are the workhorses of deep learning.
DNNs are notoriously prone to poor generalization, i.e., they may prove inadequate on inputs not encountered during training.
We propose a novel, verification-driven methodology for identifying DNN-based decision rules that generalize well to new input domains.
arXiv Detail & Related papers (2023-02-11T17:08:15Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
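Real #DNN-Verification tools rely on specialized solvers; purely to make the counting problem concrete, the toy sketch below brute-forces an exact count over a small discretized input grid, with a made-up network and safety property.

```python
import itertools
import numpy as np

W = np.array([[1.0, -2.0], [0.5, 1.5]])          # made-up weights

def net(x: np.ndarray) -> float:
    return float(np.maximum(W @ x, 0.0).sum())   # tiny ReLU network

def is_safe(x: np.ndarray) -> bool:
    return net(x) <= 2.5                         # toy safety property: output bound

grid = np.linspace(-1.0, 1.0, 21)                # discretize each input dimension
violations = sum(not is_safe(np.array(p))
                 for p in itertools.product(grid, grid))
print(f"unsafe inputs: {violations} / {len(grid) ** 2}")
```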
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
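As a rough illustration of the interval machinery involved (not the paper's INN-specific analysis, which must handle implicit fixed-point equations), the sketch below propagates an input box through one affine layer and a ReLU using standard interval bound propagation; all weights are made up.

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Propagate an input box [lo, hi] through x -> Wx + b exactly:
    positive weights take the lower bound, negative weights the upper."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

W = np.array([[1.0, -1.0], [2.0, 0.5]])
b = np.array([0.1, -0.2])
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])

lo1, hi1 = affine_bounds(lo, hi, W, b)
lo1, hi1 = np.maximum(lo1, 0.0), np.maximum(hi1, 0.0)  # ReLU is monotone
print("output bounds:", list(zip(lo1, hi1)))
```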
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- On the benefits of robust models in modulation recognition [53.391095789289736]
Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications.
In other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations.
We propose a novel framework to test the robustness of current state-of-the-art models.
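The paper's testing framework itself is not reproduced here; as the standard building block of such robustness tests, the sketch below applies the fast gradient sign method (FGSM) to a toy linear classifier, where the loss gradient has a closed form. All weights are hypothetical.

```python
import numpy as np

W = np.array([[1.0, -0.5], [-0.3, 0.8]])    # 2-class linear "model"

def logits(x: np.ndarray) -> np.ndarray:
    return W @ x

def fgsm(x: np.ndarray, true_class: int, eps: float = 0.1) -> np.ndarray:
    # For a linear model, the gradient of (logit_other - logit_true)
    # with respect to x is simply W[other] - W[true_class].
    other = 1 - true_class
    grad = W[other] - W[true_class]
    return x + eps * np.sign(grad)           # step that hurts the true class

x = np.array([1.0, 0.2])
x_adv = fgsm(x, true_class=0)
print("clean logits:", logits(x), "-> adversarial logits:", logits(x_adv))
```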
arXiv Detail & Related papers (2021-03-27T19:58:06Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
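The classic intuition behind ANN-to-SNN conversion is that an integrate-and-fire neuron's firing rate approximates a ReLU unit's activation. The sketch below demonstrates only that intuition, not the paper's layer-wise learning framework; all constants are made up.

```python
def if_neuron_rate(input_current: float, steps: int = 1000,
                   threshold: float = 1.0) -> float:
    """Simulate an integrate-and-fire neuron under constant input and
    return its firing rate (spikes per time step)."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += input_current          # integrate the input each step
        if v >= threshold:          # fire, then reset by subtraction
            spikes += 1
            v -= threshold
    return spikes / steps

for a in [-0.5, 0.0, 0.3, 0.7]:
    print(f"ReLU={max(a, 0.0):.2f}  spike rate={if_neuron_rate(a):.2f}")
```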
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
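To make the symmetry idea concrete, the sketch below checks whether a toy model is invariant under 90-degree rotations, and enforces invariance by averaging predictions over the rotation group (test-time symmetrization). The model is a made-up placeholder, not a technique from the survey.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 8))

def model(img: np.ndarray) -> float:
    return float((W * img).sum())                 # not invariant by construction

def symmetrized(img: np.ndarray) -> float:
    rots = [np.rot90(img, k) for k in range(4)]   # the C4 rotation group
    return float(np.mean([model(r) for r in rots]))

img = rng.normal(size=(8, 8))
print("plain model :", model(img), "vs", model(np.rot90(img)))
print("symmetrized :", symmetrized(img), "vs", symmetrized(np.rot90(img)))
```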
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- CodNN -- Robust Neural Networks From Coded Classification [27.38642191854458]
Deep Neural Networks (DNNs) are a revolutionary force in the ongoing information revolution.
DNNs are highly sensitive to noise, whether adversarial or random.
This poses a fundamental challenge for hardware implementations of DNNs, and for their deployment in critical applications such as autonomous driving.
By our approach, either the data or internal layers of the DNN are coded with error correcting codes, and successful computation under noise is guaranteed.
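CodNN's actual constructions use proper error-correcting codes; the sketch below illustrates only the simplest analog of the idea, protecting a toy network's input with a repetition code and recovering from random sign flips by a per-coordinate majority vote.

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(3, 4))                    # toy network weights

def net(x: np.ndarray) -> np.ndarray:
    return np.maximum(W @ x, 0.0)              # tiny ReLU layer

def encode(x: np.ndarray, copies: int = 5) -> np.ndarray:
    return np.tile(x, (copies, 1))             # repetition code

def noisy_channel(coded: np.ndarray, flip_p: float = 0.15) -> np.ndarray:
    flips = rng.random(coded.shape) < flip_p   # random sign flips
    return np.where(flips, -coded, coded)

def decode(coded: np.ndarray) -> np.ndarray:
    return np.median(coded, axis=0)            # per-coordinate majority vote

x = np.array([1.0, -1.0, 1.0, 1.0])
x_hat = decode(noisy_channel(encode(x)))
print("recovered input exactly:", np.array_equal(x_hat, x))
print("network outputs match  :", np.allclose(net(x_hat), net(x)))
```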
arXiv Detail & Related papers (2020-04-22T17:07:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.