Adversarial Patch Attacks and Defences in Vision-Based Tasks: A Survey
- URL: http://arxiv.org/abs/2206.08304v1
- Date: Thu, 16 Jun 2022 17:06:47 GMT
- Title: Adversarial Patch Attacks and Defences in Vision-Based Tasks: A Survey
- Authors: Abhijith Sharma, Yijun Bian, Phil Munz, Apurva Narayan
- Abstract summary: Adversarial attacks on deep learning models, especially in safety-critical systems, have attracted increasing attention in recent years due to a lack of trust in the security and robustness of AI models.
More primitive adversarial attacks, however, can be physically infeasible or require hard-to-access resources such as the training data, which motivated the emergence of patch attacks.
In this survey, we provide a comprehensive overview of existing techniques for adversarial patch attacks, aiming to help interested researchers quickly catch up with progress in this field.
- Score: 1.0323063834827415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks on deep learning models, especially in safety-critical systems, have attracted increasing attention in recent years due to a lack of trust in the security and robustness of AI models. More primitive adversarial attacks, however, can be physically infeasible or require hard-to-access resources such as the training data, which motivated the emergence of patch attacks. In this survey, we provide a comprehensive overview of existing techniques for adversarial patch attacks, aiming to help interested researchers quickly catch up with progress in this field. We also discuss existing techniques for detecting and defending against adversarial patches, aiming to help the community better understand this field and its real-world applications.
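To make the threat model concrete, below is a minimal sketch of how an adversarial patch is typically trained: a small trainable image region is optimized to push a frozen classifier toward an attacker-chosen label while being pasted at random locations, a crude form of expectation over transformation. The victim model, patch size, target label, and stand-in data are illustrative assumptions, not details from the survey.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Frozen victim model; any pretrained classifier would do for this sketch.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

# The patch is the only trainable object; its pixels live in [0, 1].
patch = torch.rand(3, 50, 50, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.01)
target = 859  # attacker-chosen label (859 is "toaster" in ImageNet)

def apply_patch(images, patch):
    """Paste the patch at a random location in each image of the batch."""
    out = images.clone()
    _, _, H, W = images.shape
    ph, pw = patch.shape[1:]
    for i in range(images.size(0)):
        y = torch.randint(0, H - ph + 1, (1,)).item()
        x = torch.randint(0, W - pw + 1, (1,)).item()
        out[i, :, y:y + ph, x:x + pw] = patch.clamp(0, 1)
    return out

# Stand-in data: random tensors in place of a real DataLoader.
loader = [(torch.rand(8, 3, 224, 224), None) for _ in range(10)]

for images, _ in loader:
    logits = model(apply_patch(images, patch))
    labels = torch.full((images.size(0),), target, dtype=torch.long)
    # Minimizing cross-entropy against the target label pushes every
    # prediction toward the attacker's class, updating only the patch.
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A physical attacker would additionally randomize scale, rotation, and lighting during training so that a printed patch survives real-world viewing conditions.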
Related papers
- Adversarial Attacks of Vision Tasks in the Past 10 Years: A Survey [21.4046846701173]
Adversarial attacks pose significant security threats during machine learning inference.
Existing reviews often focus on attack classifications and lack comprehensive, in-depth analysis.
This article addresses these gaps by offering a thorough summary of traditional and large vision-language model (LVLM) adversarial attacks.
arXiv Detail & Related papers (2024-10-31T07:22:51Z)
- Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI [52.138044013005]
As generative AI, particularly large language models (LLMs), becomes increasingly integrated into production applications, new attack surfaces and vulnerabilities emerge, putting the focus on adversarial threats in natural language and multi-modal systems.
Red-teaming has gained importance in proactively identifying weaknesses in these systems, while blue-teaming works to protect against such adversarial attacks.
This work aims to bridge the gap between academic insights and practical security measures for the protection of generative AI systems.
arXiv Detail & Related papers (2024-09-23T10:18:10Z)
- Exploring Vulnerabilities and Protections in Large Language Models: A Survey [1.6179784294541053]
This survey examines the security challenges of Large Language Models (LLMs).
It focuses on two main areas: Prompt Hacking and Adversarial Attacks.
By detailing these security issues, the survey contributes to the broader discussion on creating resilient AI systems.
arXiv Detail & Related papers (2024-06-01T00:11:09Z)
- A Comprehensive Survey of Attack Techniques, Implementation, and Mitigation Strategies in Large Language Models [0.0]
This article explores two attack categories: attacks on models themselves and attacks on model applications.
The former requires expertise, access to model data, and significant implementation time.
The latter is more accessible to attackers and has seen increased attention.
arXiv Detail & Related papers (2023-12-18T07:07:32Z)
- A Survey of Robustness and Safety of 2D and 3D Deep Learning Models Against Adversarial Attacks [22.054275309336]
Deep learning models are not trustworthy enough because of their limited robustness against adversarial attacks.
We first construct a general threat model from different perspectives and then comprehensively review the latest progress of both 2D and 3D adversarial attacks.
We are the first to systematically investigate adversarial attacks for 3D models, a flourishing field with many real-world applications.
arXiv Detail & Related papers (2023-10-01T10:16:33Z)
- Physical Adversarial Attacks for Surveillance: A Survey [40.81031907691243]
This paper reviews recent attempts and findings in learning and designing physical adversarial attacks for surveillance applications.
In particular, we propose a framework to analyze physical adversarial attacks and provide a comprehensive survey of physical adversarial attacks on four key surveillance tasks.
The insights in this paper present an important step in building resilience within surveillance systems to physical adversarial attacks.
arXiv Detail & Related papers (2023-05-01T20:19:59Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Certified Defenses for Adversarial Patches [72.65524549598126]
Adversarial patch attacks are among the most practical threat models against real-world computer vision systems.
This paper studies certified and empirical defenses against patch attacks.
arXiv Detail & Related papers (2020-03-14T19:57:31Z)
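As a companion illustration for the certified-defense entry above, the sketch below shows one well-known certification strategy against patch attacks, (de)randomized column smoothing: the model votes over ablated copies of the image that each keep only a narrow band of columns, and the prediction is certified when the vote margin exceeds twice the number of bands any single patch can touch. This illustrates a generic idea from the literature, not the specific method of that paper; the band width, patch width, and class count are assumptions.

```python
import torch

def certified_patch_classify(model, image, band_width=20, patch_width=50,
                             num_classes=1000):
    """Classify `image` (C, H, W) by majority vote over column-band
    ablations, and report whether the vote is certifiably robust to any
    adversarial patch at most `patch_width` pixels wide."""
    C, H, W = image.shape
    votes = torch.zeros(num_classes, dtype=torch.long)
    with torch.no_grad():
        for start in range(W):
            # Keep only `band_width` columns, wrapping around the edge.
            cols = [(start + k) % W for k in range(band_width)]
            masked = torch.zeros_like(image)
            masked[:, :, cols] = image[:, :, cols]
            votes[model(masked.unsqueeze(0)).argmax(dim=1).item()] += 1
    top = votes.topk(2)
    n1, n2 = top.values.tolist()
    c1 = top.indices[0].item()
    # A patch spanning `patch_width` columns intersects at most
    # patch_width + band_width - 1 bands, so it can flip at most that
    # many votes; the prediction is certified when the margin between
    # the top two classes exceeds twice this bound.
    max_flips = patch_width + band_width - 1
    return c1, (n1 - n2) > 2 * max_flips
```

In practice the base classifier is trained on such ablated inputs, and ties between classes need careful tie-breaking; both are glossed over here.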
- Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning [48.49658986576776]
Deep Reinforcement Learning (DRL) has numerous applications in the real world thanks to its outstanding ability in adapting to the surrounding environments.
Despite its great advantages, DRL is susceptible to adversarial attacks, which precludes its use in real-life critical systems and applications.
This paper presents emerging attacks in DRL-based systems and the potential countermeasures to defend against these attacks.
arXiv Detail & Related papers (2020-01-27T10:53:11Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behaviour analytics (UEBA) for cyber-security.
In this paper, we present a solution that effectively mitigates such attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)