gRoMA: a Tool for Measuring the Global Robustness of Deep Neural Networks
- URL: http://arxiv.org/abs/2301.02288v3
- Date: Thu, 28 Dec 2023 07:01:45 GMT
- Title: gRoMA: a Tool for Measuring the Global Robustness of Deep Neural Networks
- Authors: Natan Levy and Raz Yerushalmi and Guy Katz
- Abstract summary: Deep neural networks (DNNs) are at the forefront of cutting-edge technology, and have been achieving remarkable performance in a variety of complex tasks.
Their integration into safety-critical systems, such as in the aerospace or automotive domains, poses a significant challenge due to the threat of adversarial inputs.
Here, we present gRoMA, an innovative and scalable tool that implements a probabilistic approach to measure the global categorial robustness of a DNN.
- Score: 3.2228025627337864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) are at the forefront of cutting-edge technology,
and have been achieving remarkable performance in a variety of complex tasks.
Nevertheless, their integration into safety-critical systems, such as in the
aerospace or automotive domains, poses a significant challenge due to the
threat of adversarial inputs: perturbations in inputs that might cause the DNN
to make grievous mistakes. Multiple studies have demonstrated that even modern
DNNs are susceptible to adversarial inputs, and this risk must thus be measured
and mitigated to allow the deployment of DNNs in critical settings. Here, we
present gRoMA (global Robustness Measurement and Assessment), an innovative and
scalable tool that implements a probabilistic approach to measure the global
categorial robustness of a DNN. Specifically, gRoMA measures the probability of
encountering adversarial inputs for a specific output category. Our tool
operates on pre-trained, black-box classification DNNs, and generates input
samples belonging to an output category of interest. It measures the DNN's
susceptibility to adversarial inputs around these inputs, and aggregates the
results to infer the overall global categorial robustness of the DNN up to some
small bounded statistical error.
We evaluate our tool on the popular Densenet DNN model over the CIFAR10
dataset. Our results reveal significant gaps in the robustness of the different
output categories. This experiment demonstrates the usefulness and scalability
of our approach and its potential for allowing DNNs to be deployed within
critical systems of interest.
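For intuition, the following is a minimal sketch of the kind of Monte Carlo estimate the abstract describes: sample inputs of one output category, probe each sample with random bounded perturbations to gauge its local susceptibility to adversarial inputs, and aggregate the results into a global categorial estimate with a statistical error bound. The names (`predict_fn`, `sample_category_inputs`), the uniform-noise probing, and the Hoeffding bound are illustrative assumptions, not gRoMA's actual API or algorithm.

```python
import numpy as np

# Sketch of a probabilistic global-robustness estimate in the spirit of gRoMA.
# All names and parameters here are illustrative assumptions.

def local_susceptibility(predict_fn, x, label, epsilon, n_perturb, rng):
    """Fraction of random perturbations within an L-inf ball of radius epsilon
    that flip the predicted label (a Monte Carlo proxy for local adversarial
    susceptibility around input x)."""
    flips = 0
    for _ in range(n_perturb):
        delta = rng.uniform(-epsilon, epsilon, size=x.shape)
        if predict_fn(np.clip(x + delta, 0.0, 1.0)) != label:
            flips += 1
    return flips / n_perturb

def global_categorial_robustness(predict_fn, sample_category_inputs, label,
                                 epsilon=0.01, n_inputs=200, n_perturb=100,
                                 confidence=0.95, seed=0):
    """Estimate the probability that a randomly drawn input of the given
    category admits an adversarial perturbation, with a Hoeffding-style
    error bound at the requested confidence level."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_inputs):
        x = sample_category_inputs(label, rng)   # black-box input sampler (assumed)
        scores.append(local_susceptibility(predict_fn, x, label,
                                           epsilon, n_perturb, rng))
    susceptibility = float(np.mean(scores))      # aggregated per-category estimate
    # Hoeffding bound: with probability >= confidence, the true mean lies
    # within +/- error of the empirical mean.
    error = float(np.sqrt(np.log(2.0 / (1.0 - confidence)) / (2.0 * n_inputs)))
    return 1.0 - susceptibility, error           # robustness = 1 - susceptibility
```

The key design point the abstract highlights is that only black-box access to the classifier is needed, and the error term shrinks as more category samples are drawn.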
Related papers
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- A Survey of Graph Neural Networks in Real world: Imbalance, Noise, Privacy and OOD Challenges [75.37448213291668]
This paper systematically reviews existing real-world Graph Neural Networks (GNNs).
We first highlight the four key challenges faced by existing GNNs, paving the way for our exploration of real-world GNN models.
arXiv Detail & Related papers (2024-03-07T13:10:37Z)
- Harnessing Neuron Stability to Improve DNN Verification [42.65507402735545]
We present VeriStable, a novel extension of the recently proposed DPLL-based constraint-solving approach to DNN verification.
We evaluate the effectiveness of VeriStable across a range of challenging benchmarks, including fully-connected feedforward networks (FNNs), convolutional neural networks (CNNs), and residual networks (ResNets).
Preliminary results show that VeriStable is competitive and outperforms state-of-the-art verification tools, including $\alpha$-$\beta$-CROWN and MN-BaB, the first- and second-place performers in the VNN-COMP, respectively.
arXiv Detail & Related papers (2024-01-19T23:48:04Z)
- Make Me a BNN: A Simple Strategy for Estimating Bayesian Uncertainty from Pre-trained Models [40.38541033389344]
Deep Neural Networks (DNNs) are powerful tools for various computer vision tasks, yet they often struggle with reliable uncertainty quantification.
We introduce the Adaptable Bayesian Neural Network (ABNN), a simple and scalable strategy to seamlessly transform DNNs into BNNs.
We conduct extensive experiments across multiple datasets for image classification and semantic segmentation tasks, and our results demonstrate that ABNN achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-12-23T16:39:24Z)
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Trustworthy Graph Neural Networks: Aspects, Methods and Trends [115.84291569988748]
Graph neural networks (GNNs) have emerged as competent graph learning methods for diverse real-world scenarios.
Performance-oriented GNNs have exhibited potential adverse effects like vulnerability to adversarial attacks.
To avoid these unintentional harms, it is necessary to build competent GNNs characterised by trustworthiness.
arXiv Detail & Related papers (2022-05-16T02:21:09Z)
- A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability [59.80140875337769]
Graph Neural Networks (GNNs) have developed rapidly in recent years.
However, GNNs can leak private information, are vulnerable to adversarial attacks, and can inherit and magnify societal bias from training data.
This paper gives a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability.
arXiv Detail & Related papers (2022-04-18T21:41:07Z)
- Verification-Aided Deep Ensemble Selection [4.290931412096984]
Deep neural networks (DNNs) have become the technology of choice for realizing a variety of complex tasks.
Even an imperceptible perturbation to a correctly classified input can lead to misclassification by a DNN.
This paper devises a methodology for identifying ensemble compositions that are less prone to simultaneous errors.
arXiv Detail & Related papers (2022-02-08T14:36:29Z)
- Understanding Local Robustness of Deep Neural Networks under Natural Variations [18.638234554232994]
Deep Neural Networks (DNNs) are being deployed in a wide range of settings today.
Recent research has shown that DNNs can be brittle to even slight variations of the input data.
arXiv Detail & Related papers (2020-10-09T21:42:16Z)