Developing and Defeating Adversarial Examples
- URL: http://arxiv.org/abs/2008.10106v1
- Date: Sun, 23 Aug 2020 21:00:33 GMT
- Title: Developing and Defeating Adversarial Examples
- Authors: Ian McDiarmid-Sterling and Allan Moser
- Abstract summary: Recent research has demonstrated that deep neural networks (DNNs) can be attacked through adversarial examples.
In this work we develop adversarial examples to attack the Yolo V3 object detector.
We then study strategies to detect and neutralize these examples.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Breakthroughs in machine learning have resulted in state-of-the-art deep
neural networks (DNNs) performing classification tasks in safety-critical
applications. Recent research has demonstrated that DNNs can be attacked
through adversarial examples, which are small perturbations to input data that
cause the DNN to misclassify objects. The proliferation of DNNs raises
important safety concerns about designing systems that are robust to
adversarial examples. In this work we develop adversarial examples to attack
the Yolo V3 object detector [1] and then study strategies to detect and
neutralize these examples. Python code for this project is available at
https://github.com/ianmcdiarmidsterling/adversarial
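The abstract does not say how the perturbations are constructed; the project's actual attack code is in the linked repository. As a generic, hypothetical illustration of the kind of small-perturbation attack being described, the sketch below applies the well-known fast gradient sign method (FGSM) to a classifier-style model. The function name, model, labels, and epsilon are placeholders, not the paper's Yolo V3 pipeline.

```python
# Minimal FGSM-style sketch (hypothetical; not the attack code from the paper's repo).
# One step in the direction of the sign of the loss gradient increases the loss
# while keeping the perturbation small (bounded by epsilon per pixel).
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module,
                x: torch.Tensor,        # batch of images in [0, 1], shape (N, C, H, W)
                labels: torch.Tensor,   # ground-truth class labels, shape (N,)
                epsilon: float = 8 / 255) -> torch.Tensor:
    """Return adversarially perturbed copies of x, clipped back to [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), labels)     # loss the attacker wants to increase
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()  # single signed-gradient step
        x_adv = x_adv.clamp(0.0, 1.0)                # keep the result a valid image
    return x_adv.detach()
```

Against an object detector such as Yolo V3, the same gradient-sign idea would normally be applied to the detector's objectness, box, and class losses rather than plain cross-entropy, and usually iterated; the abstract does not specify which variant the authors use.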
Related papers
- Not So Robust After All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks [5.024667090792856]
Deep neural networks (DNNs) have gained prominence in various applications, such as classification, recognition, and prediction.
A fundamental weakness of traditional DNNs is their vulnerability to modifications of the input data, which has motivated the investigation of adversarial attacks.
This study aims to challenge the efficacy and generalization of contemporary defense mechanisms against adversarial attacks.
arXiv Detail & Related papers (2023-08-12T05:21:34Z)
- A Review of Adversarial Attack and Defense for Classification Methods [78.50824774203495]
This paper focuses on the generation and guarding of adversarial examples.
The authors hope this paper will encourage more statisticians to work on this important and exciting field of generating and defending against adversarial examples.
arXiv Detail & Related papers (2021-11-18T22:13:43Z)
- ADC: Adversarial attacks against object Detection that evade Context consistency checks [55.8459119462263]
We show that even context consistency checks can be brittle to properly crafted adversarial examples.
We propose an adaptive framework to generate examples that subvert such defenses.
Our results suggest that how to robustly model context and check its consistency is still an open problem.
arXiv Detail & Related papers (2021-10-24T00:25:09Z)
- Understanding Adversarial Examples Through Deep Neural Network's Response Surface and Uncertainty Regions [1.8047694351309205]
We study the root cause of DNN adversarial examples.
Existing attack algorithms can generate from a handful to a few hundred adversarial examples.
We show there are infinitely many adversarial images given one clean sample, all within a small neighborhood of the clean sample.
arXiv Detail & Related papers (2021-06-30T02:38:17Z)
- Towards Adversarial-Resilient Deep Neural Networks for False Data Injection Attack Detection in Power Grids [7.351477761427584]
False data injection attacks (FDIAs) pose a significant security threat to power system state estimation.
Recent studies have proposed machine learning (ML) techniques, particularly deep neural networks (DNNs), for FDIA detection.
arXiv Detail & Related papers (2021-02-17T22:26:34Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising direction, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- On Intrinsic Dataset Properties for Adversarial Machine Learning [0.76146285961466]
We study the effect of intrinsic dataset properties on the performance of adversarial attack and defense methods.
We find that input size and image contrast play key roles in attack and defense success.
arXiv Detail & Related papers (2020-05-19T02:24:14Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups; a rough sketch of a gradient-norm score in this spirit appears after this list.
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
- Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies [73.39668293190019]
Deep neural networks can be easily fooled by small perturbations to the input.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
arXiv Detail & Related papers (2020-03-02T04:32:38Z)
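The GraN paper above is the one entry whose defense mechanism is hinted at directly: score inputs by a gradient norm and flag outliers. Its exact features are not given in the summary, so the sketch below is only a loose, hypothetical illustration of gradient-norm scoring, using the norm of the loss gradient with respect to the input and the model's own predicted label; the function name and threshold are assumptions.

```python
# Hypothetical gradient-norm score for flagging suspicious inputs.
# A loose illustration of the idea behind gradient-norm detectors,
# not the published GraN method.
import torch
import torch.nn.functional as F

def grad_norm_score(model: torch.nn.Module, x: torch.Tensor) -> float:
    """L2 norm of d(loss)/d(input), using the model's own prediction as the label.
    Larger scores tend to indicate inputs that sit close to a decision boundary."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    logits = model(x.unsqueeze(0))        # add a batch dimension
    pred = logits.argmax(dim=1)           # the model's predicted class
    loss = F.cross_entropy(logits, pred)
    (grad,) = torch.autograd.grad(loss, x)
    return grad.norm().item()

# Usage: calibrate a threshold on clean data, then flag inputs that exceed it.
# is_suspicious = grad_norm_score(classifier, image) > threshold
```

How well a single input-gradient norm separates clean from adversarial inputs depends heavily on the model and the attack; the papers in this list evaluate such detectors far more carefully than this sketch suggests.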
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.