DelBugV: Delta-Debugging Neural Network Verifiers
- URL: http://arxiv.org/abs/2305.18558v1
- Date: Mon, 29 May 2023 18:42:03 GMT
- Title: DelBugV: Delta-Debugging Neural Network Verifiers
- Authors: Raya Elsaleh and Guy Katz
- Abstract summary: Deep neural networks (DNNs) are becoming a key component in diverse systems across the board.
Despite their success, they often err miserably; and this has triggered significant interest in formally verifying them.
Here, we present a novel tool, named DelBugV, that applies automated delta-debugging techniques to DNN verifiers.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) are becoming a key component in diverse systems
across the board. However, despite their success, they often err miserably; and
this has triggered significant interest in formally verifying them.
Unfortunately, DNN verifiers are intricate tools, and are themselves
susceptible to soundness bugs. Due to the complexity of DNN verifiers, as well
as the sizes of the DNNs being verified, debugging such errors is a daunting
task. Here, we present a novel tool, named DelBugV, that uses automated delta
debugging techniques on DNN verifiers. Given a malfunctioning DNN verifier and
a correct verifier as a point of reference (or, in some cases, just a single,
malfunctioning verifier), DelBugV can produce much simpler DNN verification
instances that still trigger undesired behavior -- greatly facilitating the
task of debugging the faulty verifier. Our tool is modular and extensible, and
can easily be enhanced with additional network simplification methods and
strategies. For evaluation purposes, we ran DelBugV on 4 DNN verification
engines, which were observed to produce incorrect results at the 2021 neural
network verification competition (VNN-COMP'21). We were able to simplify many
of the verification queries that trigger these faulty behaviors, by as much as
99%. We regard our work as a step towards the ultimate goal of producing
reliable and trustworthy DNN-based software.
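As a rough illustration of the delta-debugging loop that DelBugV automates (all names below are hypothetical stand-ins, not DelBugV's actual API), the core idea is to repeatedly apply network-simplification steps to the failing query and keep each step only if the bug still reproduces:

```python
# Minimal sketch of a delta-debugging loop over DNN verification queries.
# Query, the simplification steps, and the verifier interface are all
# hypothetical illustrations, not DelBugV's actual API.

from typing import Callable, List, Optional

Verifier = Callable[["Query"], str]  # returns "SAT" or "UNSAT"

class Query:
    """A DNN verification instance: a network plus a property to check."""
    def __init__(self, network, prop):
        self.network = network
        self.prop = prop

def bug_reproduces(query: Query, faulty: Verifier,
                   reference: Optional[Verifier]) -> bool:
    """The bug is observed when the faulty verifier disagrees with the
    reference verifier (or, with no reference, returns an answer known to
    be wrong, e.g. a spurious counterexample that fails to replay)."""
    answer = faulty(query)
    if reference is None:
        return answer == "SAT"  # assumed known-wrong in this sketch
    return answer != reference(query)

def delta_debug(query: Query, faulty: Verifier, reference: Optional[Verifier],
                simplifications: List[Callable[[Query], Optional[Query]]]) -> Query:
    """Greedily apply simplification steps (e.g., merging neurons or
    dropping layers), keeping each one only if the bug still reproduces."""
    changed = True
    while changed:
        changed = False
        for simplify in simplifications:
            candidate = simplify(query)  # None if the step is inapplicable
            if candidate is not None and bug_reproduces(candidate, faulty, reference):
                query = candidate        # accept: smaller query, same bug
                changed = True
    return query                         # a (locally) minimal failing instance
```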
Related papers
- ELEGANT: Certified Defense on the Fairness of Graph Neural Networks [94.10433608311604]
Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks.
However, malicious attackers could easily corrupt the fairness level of their predictions by adding perturbations to the input graph data.
We propose a principled framework named ELEGANT to study a novel problem of certifiable defense on the fairness level of GNNs.
arXiv Detail & Related papers (2023-11-05T20:29:40Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
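For intuition only (the paper's contribution is an exact counting method, not enumeration), the quantity being counted can be sketched as a brute-force count over a discretized input box:

```python
# Toy illustration of the #DNN-Verification counting problem: count the
# points of a discretized input grid on which a network's output violates
# a safety predicate. Brute force is for intuition only.

import itertools
import numpy as np

def count_violations(network, is_safe, lows, highs, steps=10):
    """Count grid points in the input box [lows, highs] whose network
    outputs violate the safety predicate `is_safe`."""
    axes = [np.linspace(lo, hi, steps) for lo, hi in zip(lows, highs)]
    return sum(
        1
        for point in itertools.product(*axes)
        if not is_safe(network(np.array(point)))
    )
```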
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Neural Network Verification with Proof Production [7.898605407936655]
We present a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities.
Our proof production is based on an efficient adaptation of the well-known Farkas' lemma.
Our evaluation on a safety-critical system for airborne collision avoidance shows that proof production succeeds in almost all cases.
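For context, the certificates behind such proofs rest on Farkas' lemma (in one standard form): exactly one of the two systems below is satisfiable, so a suitable vector y is a compact, independently checkable proof that a query's constraints are infeasible.

```latex
% Farkas' lemma: exactly one of the following two systems has a solution.
\[
\exists x \in \mathbb{R}^n :\; Ax \le b
\qquad \text{or} \qquad
\exists y \in \mathbb{R}^m :\; y \ge 0,\; y^\top A = 0,\; y^\top b < 0 .
\]
% A vector y of the second kind certifies that Ax <= b is infeasible,
% i.e., it witnesses an UNSAT answer of a Simplex-based verifier.
```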
arXiv Detail & Related papers (2022-06-01T14:14:37Z)
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) to the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
- Minimal Multi-Layer Modifications of Deep Neural Networks [0.0]
We present a new tool, called 3M-DNN, for repairing a given Deep Neural Network (DNN).
3M-DNN computes a modification to the network's weights that corrects its behavior, and attempts to minimize this change via a sequence of calls to a backend verification engine.
To the best of our knowledge, our method is the first one that allows repairing the network by simultaneously modifying multiple layers.
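One plausible shape for such a minimization, sketched purely for illustration (this is not 3M-DNN's actual procedure, and `find_repair_within` is a hypothetical backend call):

```python
# Illustrative sketch: minimizing a repair's magnitude with a verification
# engine in the loop. `find_repair_within` is a hypothetical backend call
# that returns corrected weights within a change bound, or None if no such
# repair exists. Assumes some repair exists within the initial bound `hi`.

def minimal_repair(network, spec, find_repair_within, hi=1.0, tol=1e-3):
    """Binary-search the smallest weight-change bound that still admits a
    repair making `network` satisfy `spec`."""
    lo, best = 0.0, find_repair_within(network, spec, bound=hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        repaired = find_repair_within(network, spec, bound=mid)
        if repaired is not None:
            best, hi = repaired, mid   # a repair exists: try a smaller bound
        else:
            lo = mid                   # none within `mid`: keep a larger bound
    return best
```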
arXiv Detail & Related papers (2021-10-18T10:20:27Z)
- HufuNet: Embedding the Left Piece as Watermark and Keeping the Right Piece for Ownership Verification in Deep Neural Networks [16.388046449021466]
We propose a novel solution for watermarking deep neural networks (DNNs).
HufuNet is highly robust against model fine-tuning/pruning, kernels cutoff/supplement, functionality-equivalent attack, and fraudulent ownership claims.
arXiv Detail & Related papers (2021-03-25T06:55:22Z)
- Continuous Safety Verification of Neural Networks [1.7056768055368385]
This paper considers approaches for transferring results established for the original DNN safety verification problem to a modified problem setting.
The overall concept is evaluated on a $1/10$-scale vehicle equipped with a DNN controller that determines the visual waypoint from the perceived image.
arXiv Detail & Related papers (2020-10-12T13:28:04Z)
- Enhancing Graph Neural Network-based Fraud Detectors against Camouflaged Fraudsters [78.53851936180348]
We introduce two types of camouflages based on recent empirical studies, i.e., the feature camouflage and the relation camouflage.
Existing GNNs have not addressed these two camouflages, which results in their poor performance in fraud detection problems.
We propose a new model named CAmouflage-REsistant GNN (CARE-GNN) to enhance the GNN aggregation process with three unique modules against camouflages.
arXiv Detail & Related papers (2020-08-19T22:33:12Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- CodNN -- Robust Neural Networks From Coded Classification [27.38642191854458]
Deep Neural Networks (DNNs) are a revolutionary force in the ongoing information revolution.
DNNs are highly sensitive to noise, whether adversarial or random.
This poses a fundamental challenge for hardware implementations of DNNs, and for their deployment in critical applications such as autonomous driving.
By our approach, either the data or internal layers of the DNN are coded with error correcting codes, and successful computation under noise is guaranteed.
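As a toy version of this idea, one can protect a layer's binary outputs with a repetition code and majority-vote decoding (the simplest error-correcting code; CodNN's actual constructions are more sophisticated):

```python
# Toy repetition code over binary activations: each bit is replicated
# r times and recovered by majority vote, tolerating up to (r - 1) // 2
# flipped copies per bit.

import numpy as np

def encode(bits: np.ndarray, r: int = 3) -> np.ndarray:
    """Replicate each bit r times."""
    return np.repeat(bits, r)

def decode(noisy: np.ndarray, r: int = 3) -> np.ndarray:
    """Majority-vote each group of r copies back to one bit."""
    groups = noisy.reshape(-1, r)
    return (groups.sum(axis=1) > r // 2).astype(noisy.dtype)

# Example: one flipped copy per bit is corrected.
bits = np.array([1, 0, 1, 1])
noisy = encode(bits).copy()
noisy[0] ^= 1  # flip one copy of the first bit
assert np.array_equal(decode(noisy), bits)
```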
arXiv Detail & Related papers (2020-04-22T17:07:15Z)
- Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies [73.39668293190019]
Deep neural networks can be easily fooled by small adversarial perturbations on the input.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
arXiv Detail & Related papers (2020-03-02T04:32:38Z)