Unrestricted Adversarial Attacks on ImageNet Competition
- URL: http://arxiv.org/abs/2110.09903v1
- Date: Sun, 17 Oct 2021 04:27:15 GMT
- Title: Unrestricted Adversarial Attacks on ImageNet Competition
- Authors: Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Yinpeng Dong,
Qi-An Fu, Xiao Yang, Wenzhao Xiang, Tianyu Pang, Hang Su, Jun Zhu, Fangcheng
Liu, Chao Zhang, Hongyang Zhang, Yichi Zhang, Shilong Liu, Chang Liu, Wenzhao
Xiang, Yajie Wang, Huipeng Zhou, Haoran Lyu, Yidan Xu, Zixuan Xu, Taoyu Zhu,
Wenjun Li, Xianfeng Gao, Guoqiu Wang, Huanqian Yan, Ying Guo, Chaoning Zhang,
Zheng Fang, Yang Wang, Bingyang Fu, Yunfei Zheng, Yekui Wang, Haorong Luo and
Zhen Yang
- Abstract summary: Unrestricted adversarial attack is a popular and practical direction but has not been studied thoroughly.
We organize this competition with the purpose of exploring more effective unrestricted adversarial attack algorithms.
- Score: 70.8952435964555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many works have investigated adversarial attacks or defenses under
settings where a bounded and imperceptible perturbation can be added to the
input. However, in the real world the attacker does not need to comply with
this restriction. In fact, more threats to deep models come from unrestricted
adversarial examples, in which the attacker makes large and visible
modifications to the image that cause the model to misclassify it yet do not
affect normal human perception. Unrestricted adversarial attack is a popular
and practical direction but has not been studied thoroughly. We organize this
competition to explore more effective unrestricted adversarial attack
algorithms and thereby accelerate academic research on model robustness under
stronger unbounded attacks. The competition is held on the TianChi platform
(\url{https://tianchi.aliyun.com/competition/entrance/531853/introduction}) as
part of the AI Security Challengers Program series.
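
To make the bounded-versus-unrestricted distinction concrete, here is a minimal PyTorch sketch (illustrative only, not from the paper; `model`, the 8/255 budget, and the color-transform parameterization are assumptions). The first function is a classic norm-bounded FGSM step; the second optimizes a large but natural-looking global color transform with no pixel-wise budget at all.

```python
import torch
import torch.nn.functional as F

def fgsm_bounded(model, x, y, eps=8/255):
    """Classic norm-bounded attack: perturbation capped at eps per pixel."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()  # small, imperceptible additive noise
    return x_adv.clamp(0, 1).detach()

def color_shift_unrestricted(model, x, y, steps=50, lr=0.05):
    """Unrestricted attack sketch: optimize a global per-channel color
    transform. The change can be large and clearly visible, yet the image
    still looks like a naturally recolored photo to a human."""
    scale = torch.ones(1, 3, 1, 1, requires_grad=True)   # per-channel gain
    shift = torch.zeros(1, 3, 1, 1, requires_grad=True)  # per-channel bias
    opt = torch.optim.Adam([scale, shift], lr=lr)
    for _ in range(steps):
        x_adv = (scale * x + shift).clamp(0, 1)
        loss = -F.cross_entropy(model(x_adv), y)  # maximize classification loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (scale * x + shift).clamp(0, 1).detach()
```

The second attack has no eps constraint; in the competition's setting, the only requirement is that a human observer still perceives the image normally.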
Related papers
- Improving behavior based authentication against adversarial attack using XAI [3.340314613771868]
We propose an eXplainable AI (XAI) based defense strategy against adversarial attacks on behavior-based authentication.
A feature selector, trained with our method, can be used as a filter in front of the original authenticator.
We demonstrate that our XAI-based defense strategy is effective against adversarial attacks and outperforms other defense strategies.
arXiv Detail & Related papers (2024-02-26T09:29:05Z)
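
A minimal sketch of how such a filter-then-authenticate pipeline could look (hypothetical: the module names, the soft gate, and the deployment thresholding are all assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn

class FilteredAuthenticator(nn.Module):
    """Hypothetical sketch: a learned feature selector gates out features
    flagged as adversarially fragile before the original behavior-based
    authenticator scores the input."""
    def __init__(self, authenticator: nn.Module, num_features: int):
        super().__init__()
        self.authenticator = authenticator
        # Learnable per-feature gate; at deployment one would threshold
        # it into a hard 0/1 feature-selection mask.
        self.gate = nn.Parameter(torch.ones(num_features))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Soft gate during training so gradients flow to the selector.
        return self.authenticator(features * torch.sigmoid(self.gate))
```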
- On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy that provides satisfactory robustness across different backdoor attacks.
arXiv Detail & Related papers (2022-02-22T02:24:46Z)
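
For context, "adversarial training" here means training on adversarial examples generated on the fly. Below is a minimal sketch of one generic PGD-style training step, not the paper's hybrid strategy; `model`, `opt`, and the hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_adv_train_step(model, opt, x, y, eps=8/255, alpha=2/255, steps=7):
    """One adversarial-training step: build a PGD example, then train on it."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)  # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()          # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)     # project to eps ball
        x_adv = x_adv.clamp(0, 1)
    opt.zero_grad()
    F.cross_entropy(model(x_adv.detach()), y).backward()
    opt.step()
```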
- Adversarial Attacks on ML Defense Models Competition [82.37504118766452]
The TSAIL group at Tsinghua University and the Alibaba Security group organized this competition.
The purpose of this competition is to motivate novel attack algorithms to evaluate adversarial robustness.
arXiv Detail & Related papers (2021-10-15T12:12:41Z)
- Widen The Backdoor To Let More Attackers In [24.540853975732922]
We investigate the scenario of a multi-agent backdoor attack, where multiple non-colluding attackers craft and insert triggered samples in a shared dataset.
We discover a clear backfiring phenomenon: increasing the number of attackers shrinks each attacker's attack success rate (ASR).
We then exploit this phenomenon to minimize the collective ASR of attackers and maximize the defender's robustness accuracy.
arXiv Detail & Related papers (2021-10-09T13:53:57Z)
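
For concreteness, a triggered sample is typically constructed as in the following generic BadNets-style sketch (the corner-patch trigger and function name are illustrative assumptions, not the paper's exact setup); in the multi-agent setting, each non-colluding attacker stamps its own trigger and target label onto its share of the data.

```python
import torch

def poison(x, y, trigger, target_class):
    """Stamp a small trigger patch into the image corner and flip the label.

    x: (C, H, W) image in [0, 1]; trigger: (C, h, w) patch.
    A model trained on enough such samples learns to predict
    `target_class` whenever the patch is present (a backdoor).
    """
    x = x.clone()
    _, h, w = trigger.shape
    x[:, -h:, -w:] = trigger  # bottom-right corner patch
    return x, target_class

# Multi-agent setting: each attacker i uses its own (trigger_i, target_i);
# the paper observes that adding attackers shrinks each one's success rate.
```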
- On Success and Simplicity: A Second Look at Transferable Targeted Attacks [6.276791657895803]
We show that transferable targeted attacks converge slowly to optimal transferability and improve considerably when given more iterations.
An attack that simply maximizes the target logit performs surprisingly well, surpassing more complex losses and even achieving performance comparable to the state of the art.
arXiv Detail & Related papers (2020-12-21T09:41:29Z)
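
The logit-maximizing objective mentioned above is simple enough to sketch directly. In this minimal version (step size, budget, and iteration count are illustrative assumptions), the usual targeted cross-entropy loss is replaced by the raw target-class logit, and the large iteration budget reflects the paper's finding that targeted transferability keeps improving with more iterations.

```python
import torch

def targeted_logit_attack(model, x, target, eps=16/255, alpha=2/255, steps=300):
    """Targeted transfer attack that maximizes the target-class logit."""
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        logits = model(x_adv)
        loss = logits.gather(1, target.unsqueeze(1)).sum()  # raw target logit
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()       # ascend the target logit
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # stay inside the eps ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```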
- MultAV: Multiplicative Adversarial Videos [71.94264837503135]
We propose a novel attack method against video recognition models, Multiplicative Adversarial Videos (MultAV).
MultAV imposes perturbations on video data by multiplication instead of addition.
Experimental results show that a model adversarially trained against additive attacks is less robust to MultAV.
arXiv Detail & Related papers (2020-09-17T04:34:39Z)
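
A minimal single-step sketch of the multiplicative idea (tensor shapes, `eps`, and `alpha` are illustrative assumptions, not the paper's exact method): instead of adding a bounded noise term, each pixel is scaled by a factor kept close to 1.

```python
import torch
import torch.nn.functional as F

def multav_step(model, video, y, eps=0.1, alpha=0.02):
    """One step of a multiplicative attack: x_adv = m * x with m near 1.

    video: (N, C, T, H, W) clip in [0, 1]."""
    m = torch.ones_like(video, requires_grad=True)  # multiplicative mask
    loss = F.cross_entropy(model(m * video), y)
    loss.backward()
    m_adv = (m + alpha * m.grad.sign()).clamp(1 - eps, 1 + eps)
    return (m_adv * video).clamp(0, 1).detach()
```

Because the perturbation rescales rather than shifts pixel values, a model hardened only against additive noise has no particular reason to resist it, consistent with the result quoted above.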
- AdvMind: Inferring Adversary Intent of Black-Box Attacks [66.19339307119232]
We present AdvMind, a new class of estimation models that infer the adversary intent of black-box adversarial attacks in a robust manner.
On average, AdvMind detects the adversary's intent with over 75% accuracy after observing fewer than 3 query batches.
arXiv Detail & Related papers (2020-06-16T22:04:31Z)
- Deflecting Adversarial Attacks [94.85315681223702]
We present a new approach towards ending this cycle where we "deflect" adversarial attacks by causing the attacker to produce an input that resembles the attack's target class.
We first propose a stronger defense based on Capsule Networks that combines three detection mechanisms to achieve state-of-the-art detection performance.
arXiv Detail & Related papers (2020-02-18T06:59:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.