Adversarial Attacks on ML Defense Models Competition
- URL: http://arxiv.org/abs/2110.08042v1
- Date: Fri, 15 Oct 2021 12:12:41 GMT
- Title: Adversarial Attacks on ML Defense Models Competition
- Authors: Yinpeng Dong, Qi-An Fu, Xiao Yang, Wenzhao Xiang, Tianyu Pang, Hang
Su, Jun Zhu, Jiayu Tang, Yuefeng Chen, XiaoFeng Mao, Yuan He, Hui Xue, Chao
Li, Ye Liu, Qilong Zhang, Lianli Gao, Yunrui Yu, Xitong Gao, Zhe Zhao, Daquan
Lin, Jiadong Lin, Chuanbiao Song, Zihao Wang, Zhennan Wu, Yang Guo, Jiequan
Cui, Xiaogang Xu, Pengguang Chen
- Abstract summary: The TSAIL group at Tsinghua University and the Alibaba Security group organized this competition.
The purpose of this competition is to motivate novel attack algorithms to evaluate adversarial robustness.
- Score: 82.37504118766452
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Due to the vulnerability of deep neural networks (DNNs) to adversarial
examples, a large number of defense techniques have been proposed to alleviate
this problem in recent years. However, progress toward building more robust
models is often hampered by incomplete or incorrect robustness evaluation. To
accelerate research on the reliable evaluation of the adversarial robustness of
current defense models in image classification, the TSAIL
group at Tsinghua University and the Alibaba Security group organized this
competition along with a CVPR 2021 workshop on adversarial machine learning
(https://aisecure-workshop.github.io/amlcvpr2021/). The purpose of this
competition is to motivate novel attack algorithms to evaluate adversarial
robustness more effectively and reliably. The participants were encouraged to
develop stronger white-box attack algorithms to find the worst-case robustness
of different defenses. The competition was conducted on the adversarial
robustness evaluation platform ARES (https://github.com/thu-ml/ares) and was
held on the TianChi platform
(https://tianchi.aliyun.com/competition/entrance/531847/introduction) as part of
the AI Security Challengers Program series. After the competition, we
summarized the results and established a new adversarial robustness benchmark
at https://ml.cs.tsinghua.edu.cn/ares-bench/, which allows users to upload
adversarial attack algorithms and defense models for evaluation.
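As a rough illustration of the kind of white-box attack algorithm the competition targets, the sketch below implements a basic L-infinity PGD attack in PyTorch. It is only a minimal baseline under assumed settings: the epsilon, step size, and iteration count are illustrative, and the code is not part of the ARES API or the official competition configuration.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Minimal L-infinity PGD sketch: ascend the cross-entropy loss and
    project back into the eps-ball around the clean inputs x in [0, 1]."""
    x_adv = x.clone().detach()
    # Random start inside the eps-ball (a common PGD variant).
    x_adv = x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)
    x_adv = torch.clamp(x_adv, 0.0, 1.0)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Signed-gradient ascent step, then projection onto the eps-ball.
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.clamp(x_adv, x - eps, x + eps)
            x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()
```

Stronger entries typically extend such a baseline with adaptive step sizes, multiple restarts, alternative loss functions, or ensembles of attacks, since a single fixed-step PGD run can substantially overestimate a defense's robustness.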
Related papers
- Perturbation-Invariant Adversarial Training for Neural Ranking Models:
Improving the Effectiveness-Robustness Trade-Off [107.35833747750446]
Adversarial examples can be crafted by adding imperceptible perturbations to legitimate documents.
This vulnerability raises significant concerns about the reliability of neural ranking models (NRMs) and hinders their widespread deployment.
In this study, we establish theoretical guarantees regarding the effectiveness-robustness trade-off in NRMs.
arXiv Detail & Related papers (2023-12-16T05:38:39Z)
- Adversarial Robustness Unhardening via Backdoor Attacks in Federated
Learning [13.12397828096428]
Adversarial Robustness Unhardening (ARU) is employed by a subset of adversaries to intentionally undermine model robustness during decentralized training.
We present empirical experiments evaluating ARU's impact on adversarial training and existing robust aggregation defenses against poisoning and backdoor attacks.
arXiv Detail & Related papers (2023-10-17T21:38:41Z)
- Increasing Confidence in Adversarial Robustness Evaluations [53.2174171468716]
We propose a test to identify weak attacks and thus weak defense evaluations.
Our test slightly modifies a neural network to guarantee the existence of an adversarial example for every sample.
For eleven out of thirteen previously published defenses, the original evaluation of the defense fails our test, while stronger attacks that break these defenses pass it.
arXiv Detail & Related papers (2022-06-28T13:28:13Z)
- Based-CE white-box adversarial attack will not work using super-fitting [10.34121642283309]
Deep neural networks (DNNs) are widely used in various fields due to their powerful performance.
Recent studies have shown that deep learning models are vulnerable to adversarial attacks.
This paper proposes a new defense method based on the model's super-fitting status.
arXiv Detail & Related papers (2022-05-04T09:23:00Z)
- An Overview of Backdoor Attacks Against Deep Neural Networks and
Possible Defences [33.415612094924654]
The goal of this paper is to review the different types of attacks and defences proposed so far.
In a backdoor attack, the attacker corrupts the training data so as to induce an erroneous behaviour at test time.
Test time errors are activated only in the presence of a triggering event corresponding to a properly crafted input sample.
arXiv Detail & Related papers (2021-11-16T13:06:31Z)
- Towards Evaluating the Robustness of Neural Networks Learned by
Transduction [44.189248766285345]
Greedy Model Space Attack (GMSA) is an attack framework that can serve as a new baseline for evaluating transductive-learning based defenses.
We show that GMSA, even with weak instantiations, can break previous transductive-learning based defenses.
arXiv Detail & Related papers (2021-10-27T19:39:50Z)
- Unrestricted Adversarial Attacks on ImageNet Competition [70.8952435964555]
Unrestricted adversarial attacks are a popular and practical direction but have not been studied thoroughly.
We organize this competition with the purpose of exploring more effective unrestricted adversarial attack algorithms.
arXiv Detail & Related papers (2021-10-17T04:27:15Z)
- Universal Adversarial Training with Class-Wise Perturbations [78.05383266222285]
Adversarial training is the most widely used method for defending against adversarial attacks.
In this work, we find that a universal adversarial perturbation (UAP) does not attack all classes equally.
We improve state-of-the-art universal adversarial training (UAT) by proposing to utilize class-wise UAPs during adversarial training.
arXiv Detail & Related papers (2021-04-07T09:05:49Z)
- Reliable evaluation of adversarial robustness with an ensemble of
diverse parameter-free attacks [65.20660287833537]
In this paper, we propose two extensions of the PGD attack that overcome failures due to suboptimal step sizes and problems with the objective function.
We then combine our novel attacks with two complementary existing ones to form a parameter-free, computationally affordable and user-independent ensemble of attacks to test adversarial robustness.
arXiv Detail & Related papers (2020-03-03T18:15:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.