Increasing the Confidence of Deep Neural Networks by Coverage Analysis
- URL: http://arxiv.org/abs/2101.12100v1
- Date: Thu, 28 Jan 2021 16:38:26 GMT
- Title: Increasing the Confidence of Deep Neural Networks by Coverage Analysis
- Authors: Giulio Rossolini, Alessandro Biondi, Giorgio Carlo Buttazzo
- Abstract summary: This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
- Score: 71.57324258813674
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The great performance of machine learning algorithms and deep neural networks
in several perception and control tasks is pushing the industry to adopt such
technologies in safety-critical applications, such as autonomous robots and
self-driving vehicles. At present, however, several issues need to be solved to
make deep learning methods more trustworthy, predictable, safe, and secure
against adversarial attacks. Although several methods have been proposed to
improve the trustworthiness of deep neural networks, most of them are tailored
for specific classes of adversarial examples, hence failing to detect other
corner cases or unsafe inputs that heavily deviate from the training samples.
This paper presents a lightweight monitoring architecture based on coverage
paradigms to enhance the model robustness against different unsafe inputs. In
particular, four coverage analysis methods are proposed and tested in the
architecture for evaluating multiple detection logics. Experimental results
show that the proposed approach is effective in detecting both powerful
adversarial examples and out-of-distribution inputs, while introducing limited
extra execution time and memory requirements.
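
The abstract does not spell out the four coverage analysis methods, but the general coverage paradigm can be illustrated with a minimal sketch: profile per-neuron activation ranges on trusted training data, then flag inputs that activate too many neurons outside the profiled ranges. The PyTorch code below is an assumption-based illustration of this idea; the class, method names, and threshold are hypothetical, not the paper's actual API or detection logic.

```python
# Minimal sketch of a coverage-based runtime monitor (illustrative only,
# not the paper's exact method). Per-neuron activation ranges are profiled
# on trusted training data; at inference time, an input whose activations
# fall outside the profiled ranges at too many neurons is flagged as
# potentially unsafe (adversarial or out-of-distribution).
import torch
import torch.nn as nn


class RangeCoverageMonitor:
    """Tracks per-neuron [min, max] activation ranges at a chosen layer."""

    def __init__(self, model: nn.Module, layer: nn.Module):
        self.low = None   # per-neuron minimum seen on trusted data
        self.high = None  # per-neuron maximum seen on trusted data
        self._acts = None
        # Capture the layer's flattened activations on every forward pass.
        layer.register_forward_hook(
            lambda mod, inp, out: setattr(self, "_acts", out.detach().flatten(1))
        )
        self.model = model

    @torch.no_grad()
    def profile(self, loader):
        """Record activation ranges over trusted (training) inputs."""
        for x, _ in loader:
            self.model(x)
            lo, hi = self._acts.min(dim=0).values, self._acts.max(dim=0).values
            self.low = lo if self.low is None else torch.minimum(self.low, lo)
            self.high = hi if self.high is None else torch.maximum(self.high, hi)

    @torch.no_grad()
    def is_unsafe(self, x, threshold=0.05):
        """Flag inputs whose out-of-range neuron fraction exceeds a threshold."""
        self.model(x)
        out_of_range = (self._acts < self.low) | (self._acts > self.high)
        return out_of_range.float().mean(dim=1) > threshold


# Toy usage: a small classifier monitored at its penultimate layer.
if __name__ == "__main__":
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
    monitor = RangeCoverageMonitor(model, layer=model[1])
    train_loader = [(torch.randn(32, 20), None) for _ in range(10)]
    monitor.profile(train_loader)
    flags = monitor.is_unsafe(torch.randn(4, 20) * 5.0)  # unusually large inputs
    print(flags)  # boolean tensor, True = flagged as unsafe
```

In this sketch the detection logic is a single threshold on the fraction of out-of-range neurons; the paper evaluates multiple detection logics, which would replace `is_unsafe` in an analogous monitor.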
Related papers
- Detecting Adversarial Attacks in Semantic Segmentation via Uncertainty Estimation: A Deep Analysis [12.133306321357999]
We propose an uncertainty-based method for detecting adversarial attacks on neural networks for semantic segmentation.
We conduct a detailed analysis of uncertainty-based detection of adversarial attacks on various state-of-the-art neural networks.
Our numerical experiments show the effectiveness of the proposed uncertainty-based detection method.
arXiv Detail & Related papers (2024-08-19T14:13:30Z)
- Advancing Security in AI Systems: A Novel Approach to Detecting Backdoors in Deep Neural Networks [3.489779105594534]
Backdoors in deep neural networks (DNNs) and cloud services for data processing can be exploited by malicious actors.
Our approach leverages advanced tensor decomposition algorithms to meticulously analyze the weights of pre-trained DNNs and distinguish between backdoored and clean models.
This advancement enhances the security of deep learning and AI in networked systems, providing essential cybersecurity against evolving threats in emerging technologies.
arXiv Detail & Related papers (2024-03-13T03:10:11Z)
- Adversarial Robustness of Deep Neural Networks: A Survey from a Formal Verification Perspective [7.821877331499578]
Adversarial robustness, which concerns the reliability of a neural network when dealing with maliciously manipulated inputs, is one of the hottest topics in security and machine learning.
We survey existing literature in adversarial robustness verification for neural networks and collect 39 diversified research works across machine learning, security, and software engineering domains.
We provide a taxonomy from a formal verification perspective for a comprehensive understanding of this topic.
arXiv Detail & Related papers (2022-06-24T11:53:12Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution [83.84968082791444]
Deep neural networks are vulnerable to intentionally crafted adversarial examples.
Various methods have been proposed to defend against adversarial word-substitution attacks for neural NLP models.
arXiv Detail & Related papers (2021-08-29T08:11:36Z)
- Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge that limits the widespread adoption of deep learning models has been their fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results using the case of image classification demonstrate the effectiveness and efficacy of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analyses on several natural image datasets and practical systems confirm the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed, utilizing a Bayesian optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, recall, and a low false-alarm rate.
arXiv Detail & Related papers (2020-08-05T19:29:35Z)
- A cognitive based Intrusion detection system [0.0]
Intrusion detection is one of the important mechanisms that provide computer network security.
This paper proposes a new approach based on a deep neural network and a support vector machine classifier.
The proposed model predicts attacks with better accuracy than similar intrusion detection methods.
arXiv Detail & Related papers (2020-05-19T13:30:30Z)
- Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning [40.989393438716476]
Deep Neural Network-based systems are now the state-of-the-art in many robotics tasks, but their application in safety-critical domains remains dangerous without formal guarantees on network robustness.
Small perturbations to sensor inputs are often enough to change network-based decisions, which was recently shown to cause an autonomous vehicle to swerve into another lane.
This work leverages research on certified adversarial robustness to develop an online, certifiably robust defense for deep reinforcement learning algorithms (see the sketch below this entry).
arXiv Detail & Related papers (2020-04-11T21:36:13Z)
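
As a rough illustration of the certified-robustness idea referenced in the entry above (a hedged, assumption-based sketch, not the cited paper's exact algorithm), the following Python code propagates interval bounds through a small Q-network and selects the action with the best certified worst-case value under bounded observation perturbation. All function names and the toy network are hypothetical.

```python
# Hedged sketch of robust action selection via certified Q-value bounds.
# Interval bound propagation (IBP) gives a lower bound on each action's
# Q-value over all observations within an L-infinity ball of radius eps;
# the agent then picks the action with the best worst-case value.
import torch
import torch.nn as nn


def ibp_bounds(layers, lo, hi):
    """Propagate interval bounds [lo, hi] through Linear/ReLU layers."""
    for layer in layers:
        if isinstance(layer, nn.Linear):
            mid, rad = (lo + hi) / 2, (hi - lo) / 2
            mid = layer(mid)                  # W @ mid + b
            rad = rad @ layer.weight.abs().T  # |W| @ rad
            lo, hi = mid - rad, mid + rad
        elif isinstance(layer, nn.ReLU):
            lo, hi = lo.clamp(min=0), hi.clamp(min=0)
    return lo, hi


def robust_action(q_net: nn.Sequential, obs: torch.Tensor, eps: float) -> int:
    """Pick the action maximizing the certified lower bound on Q(obs', a)
    over all obs' with ||obs' - obs||_inf <= eps."""
    lower, _ = ibp_bounds(list(q_net), obs - eps, obs + eps)
    return int(lower.argmax(dim=-1).item())


# Toy usage with a random Q-network over 4 observation features and 3 actions.
if __name__ == "__main__":
    q_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 3))
    obs = torch.randn(1, 4)
    nominal = int(q_net(obs).argmax(dim=-1).item())
    robust = robust_action(q_net, obs, eps=0.1)
    print(f"nominal action: {nominal}, certified-robust action: {robust}")
```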
This list is automatically generated from the titles and abstracts of the papers on this site.