Benchmarking Adversarial Robustness
- URL: http://arxiv.org/abs/1912.11852v1
- Date: Thu, 26 Dec 2019 12:37:01 GMT
- Title: Benchmarking Adversarial Robustness
- Authors: Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao,
Jun Zhu
- Abstract summary: We establish a comprehensive, rigorous, and coherent benchmark to evaluate adversarial robustness on image classification tasks.
Based on the evaluation results, we draw several important findings and provide insights for future research.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are vulnerable to adversarial examples, which
has become one of the most important research problems in the development of
deep learning. While many efforts have been made in recent years, it is of great
significance to perform correct and complete evaluations of the adversarial
attack and defense algorithms. In this paper, we establish a comprehensive,
rigorous, and coherent benchmark to evaluate adversarial robustness on image
classification tasks. After briefly reviewing plenty of representative attack
and defense methods, we perform large-scale experiments with two robustness
curves as the fair-minded evaluation criteria to fully understand the
performance of these methods. Based on the evaluation results, we draw several
important findings and provide insights for future research.
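
The paper's robustness curves trace accuracy as the attack budget grows. Below is a minimal sketch of that idea in PyTorch, assuming a trained classifier `model` and a test `loader`; the single-step FGSM attack stands in for the paper's full attack suite, so this illustrates the curve, not the benchmark's exact protocol.

```python
# Hedged sketch: accuracy vs. l_inf budget under FGSM. The paper's benchmark
# uses a full attack suite; FGSM here is only a stand-in attack.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step l_inf attack: move each pixel by eps in the gradient's sign."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def robustness_curve(model, loader, eps_grid):
    """Return (eps, adversarial accuracy) pairs; eps = 0 gives clean accuracy."""
    model.eval()
    curve = []
    for eps in eps_grid:
        correct, total = 0, 0
        for x, y in loader:
            x_adv = fgsm(model, x, y, eps) if eps > 0 else x
            with torch.no_grad():
                correct += (model(x_adv).argmax(dim=1) == y).sum().item()
            total += y.numel()
        curve.append((eps, correct / total))
    return curve
```

Sweeping eps over a grid (e.g., [0, 2/255, 4/255, 8/255]) and plotting the resulting pairs yields the accuracy-vs-perturbation-budget curve.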
Related papers
- A Survey of Neural Network Robustness Assessment in Image Recognition [4.581878177334397]
In recent years, the robustness assessment of neural networks has received significant attention.
The robustness problem of deep learning is particularly significant, as highlighted by the discovery of adversarial attacks on image classification models.
In this survey, we present a detailed examination of both adversarial robustness (AR) and corruption robustness (CR) in neural network assessment.
arXiv Detail & Related papers (2024-04-12T07:19:16Z)
- Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly robust instance-reweighted adversarial training framework.
Our importance weights are obtained by optimizing a KL-divergence-regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance (a sketch of the reweighting idea follows this entry).
arXiv Detail & Related papers (2023-08-01T06:16:18Z)
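
For the instance-reweighted entry above, here is a hedged sketch of one generic KL-regularized weighting scheme, not necessarily the paper's exact objective: maximizing sum_i w_i * loss_i - tau * KL(w || uniform) over the probability simplex has the closed-form solution w = softmax(loss / tau), so harder examples receive larger weights.

```python
# Hedged sketch of generic KL-regularized instance reweighting; the paper's
# exact objective and optimization may differ.
import torch

def kl_regularized_weights(per_example_loss: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    # Closed-form maximizer of sum_i w_i * l_i - tau * KL(w || uniform) over the simplex.
    # detach(): the weights modulate the loss but are not differentiated through.
    return torch.softmax(per_example_loss.detach() / tau, dim=0)

def reweighted_loss(per_example_loss: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    w = kl_regularized_weights(per_example_loss, tau)
    return (w * per_example_loss).sum()
```

As tau grows the weights revert to uniform averaging; a small tau focuses training on the hardest (typically adversarial) examples.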
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking [54.89987482509155]
The robustness of deep neural networks is usually lacking under adversarial examples, common corruptions, and distribution shifts.
We establish a comprehensive robustness benchmark called ARES-Bench on the image classification task.
By designing the training settings accordingly, we achieve the new state-of-the-art adversarial robustness (a generic adversarial-training sketch follows this entry).
arXiv Detail & Related papers (2023-02-28T04:26:20Z)
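
The ARES-Bench entry above credits its gains to carefully designed training settings. As a hedged reference point, the sketch below shows standard l_inf PGD adversarial training (Madry et al., 2018), the family of recipes such benchmarks tune; the hyperparameters are common CIFAR-10 defaults, not ARES-Bench's exact recipe.

```python
# Hedged sketch of standard l_inf PGD adversarial training; eps = 8/255,
# alpha = 2/255, 10 steps are common defaults, not the benchmark's settings.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient descent from a random start in the eps-ball."""
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
    return (x + delta).clamp(0, 1)

def adversarial_train_step(model, optimizer, x, y):
    model.train()
    x_adv = pgd_attack(model, x, y)          # inner maximization
    loss = F.cross_entropy(model(x_adv), y)  # outer minimization
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```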
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses (see the sketch after this entry).
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
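
A hedged sketch of the meta-attack idea above: replace PGD's fixed sign(gradient) step with an update produced by a small recurrent network. The coordinate-wise LSTM parameterization and the attack loop below are illustrative assumptions, not the paper's exact architecture or meta-training procedure.

```python
# Hedged sketch: a learned attack optimizer. An LSTM maps the current loss
# gradient to a perturbation update, replacing PGD's fixed sign(grad) step.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedAttackOptimizer(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        # Coordinate-wise: every pixel's gradient is treated as its own
        # sequence, so one small LSTM handles inputs of any shape.
        self.cell = nn.LSTMCell(1, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, grad, state=None):
        g = grad.reshape(-1, 1)                # (num_coords, 1)
        h, c = self.cell(g, state)
        step = self.head(h).reshape(grad.shape)
        return step, (h, c)

def learned_attack(model, opt_net, x, y, eps=8/255, steps=10):
    """PGD-style loop whose step direction comes from the learned optimizer."""
    delta, state = torch.zeros_like(x), None
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        grad, = torch.autograd.grad(loss, delta)
        step, state = opt_net(grad, state)
        delta = (delta + step).clamp(-eps, eps).detach()
    return (x + delta).clamp(0, 1)
```

Meta-training the optimizer (updating `opt_net` so the attack succeeds across defenses) would additionally require differentiating through the loop; only attack inference is sketched here.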
- Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge limiting the widespread adoption of deep learning has been the fragility of deep neural networks to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results on image classification demonstrate the effectiveness of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z)
- A Comprehensive Evaluation Framework for Deep Model Robustness [44.20580847861682]
Deep neural networks (DNNs) have achieved remarkable performance across a wide range of applications.
However, they are vulnerable to adversarial examples, which motivates research on adversarial defenses.
This paper presents a model evaluation framework containing a comprehensive, rigorous, and coherent set of evaluation metrics.
arXiv Detail & Related papers (2021-01-24T01:04:25Z)
- SoK: Certified Robustness for Deep Neural Networks [13.10665264010575]
Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial attacks.
In this paper, we systematize certifiably robust approaches and related practical and theoretical implications.
We also provide the first comprehensive benchmark of existing robustness verification and training approaches on different datasets (a sketch of one canonical certified approach follows this entry).
arXiv Detail & Related papers (2020-09-09T07:00:55Z)
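
As a hedged illustration of the certified approaches such a SoK benchmarks, below is a minimal Monte-Carlo version of randomized smoothing (Cohen et al., 2019): the smoothed classifier takes a majority vote under Gaussian noise, and a lower bound p_A > 1/2 on the top-class probability certifies an l2 radius of sigma * Phi^{-1}(p_A). For rigor, p_A must be a confidence lower bound (e.g., Clopper-Pearson); the raw frequency used here is a simplification.

```python
# Hedged sketch of randomized smoothing certification (Cohen et al., 2019).
# The raw Monte-Carlo frequency below is NOT a rigorous bound; the real
# procedure replaces it with a Clopper-Pearson confidence lower bound.
import torch
from scipy.stats import norm

@torch.no_grad()
def certify(model, x, sigma, n=1000, num_classes=10):
    """Majority vote under N(0, sigma^2) noise and the implied l2 radius."""
    noisy = x.unsqueeze(0) + sigma * torch.randn(n, *x.shape)
    votes = torch.bincount(model(noisy).argmax(dim=1), minlength=num_classes)
    top = votes.argmax().item()
    p_a = min(votes[top].item() / n, 1 - 1e-6)   # avoid Phi^{-1}(1) = inf
    radius = sigma * norm.ppf(p_a) if p_a > 0.5 else 0.0
    return top, radius   # radius 0 means abstain: no class clears 1/2
```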