A Comprehensive Review of Adversarial Attacks on Machine Learning
- URL: http://arxiv.org/abs/2412.11384v1
- Date: Mon, 16 Dec 2024 02:27:54 GMT
- Title: A Comprehensive Review of Adversarial Attacks on Machine Learning
- Authors: Syed Quiser Ahmed, Bharathi Vokkaliga Ganesh, Sathyanarayana Sampath Kumar, Prakhar Mishra, Ravi Anand, Bhanuteja Akurathi
- Abstract summary: This research provides a comprehensive overview of adversarial attacks on AI and ML models, exploring various attack types, techniques, and their potential harms.
To gain practical insights, we employ the Adversarial Robustness Toolbox (ART) library to simulate these attacks on real-world use cases, such as self-driving cars.
- Score: 0.5104264623877593
- License:
- Abstract: This research provides a comprehensive overview of adversarial attacks on AI and ML models, exploring various attack types, techniques, and their potential harms. We also delve into the business implications, mitigation strategies, and future research directions. To gain practical insights, we employ the Adversarial Robustness Toolbox (ART) [1] library to simulate these attacks on real-world use cases, such as self-driving cars. Our goal is to inform practitioners and researchers about the challenges and opportunities in defending AI systems against adversarial threats. By providing a comprehensive comparison of different attack methods, we aim to contribute to the development of more robust and secure AI systems.
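The abstract notes that the authors use the Adversarial Robustness Toolbox (ART) library to simulate attacks on real-world use cases. As a minimal sketch of what such a simulation could look like, the snippet below crafts FGSM adversarial examples against a toy PyTorch classifier with ART; the model architecture, epsilon value, and random placeholder inputs are illustrative assumptions, not the paper's actual experimental setup.
```python
# Minimal sketch: crafting FGSM adversarial examples with ART.
# The toy model, epsilon, and random inputs are illustrative assumptions,
# not the experimental setup used in the paper.
import numpy as np
import torch
import torch.nn as nn

from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# A toy convolutional classifier standing in for a real perception model.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Wrap the model so ART attacks can query gradients through it.
classifier = PyTorchClassifier(
    model=model,
    loss=criterion,
    optimizer=optimizer,
    input_shape=(3, 32, 32),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Fast Gradient Sign Method: one gradient-sign step bounded by eps.
attack = FastGradientMethod(estimator=classifier, eps=0.05)

x = np.random.rand(4, 3, 32, 32).astype(np.float32)  # placeholder inputs
x_adv = attack.generate(x=x)

# Compare predictions on clean vs. perturbed inputs.
print(classifier.predict(x).argmax(axis=1))
print(classifier.predict(x_adv).argmax(axis=1))
```
In practice the same wrapped classifier can be passed to other ART evasion attacks (e.g., PGD or Carlini-Wagner) to compare attack methods side by side, which is the kind of comparison the paper describes.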
Related papers
- A Survey on Adversarial Robustness of LiDAR-based Machine Learning Perception in Autonomous Vehicles [0.0]
This survey focuses on the intersection of Adversarial Machine Learning (AML) and autonomous systems.
We comprehensively explore the threat landscape, encompassing cyber-attacks on sensors and adversarial perturbations.
This paper endeavors to present a concise overview of the challenges and advances in securing autonomous driving systems against adversarial threats.
arXiv Detail & Related papers (2024-11-21T01:26:52Z) - A Survey on Offensive AI Within Cybersecurity [1.8206461789819075]
This survey paper on offensive AI will comprehensively cover various aspects related to attacks against and using AI systems.
It will delve into the impact of offensive AI practices on different domains, including consumer, enterprise, and public digital infrastructure.
The paper will explore adversarial machine learning, attacks against AI models, infrastructure, and interfaces, along with offensive techniques like information gathering, social engineering, and weaponized AI.
arXiv Detail & Related papers (2024-09-26T17:36:22Z) - Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI [52.138044013005]
As generative AI, particularly large language models (LLMs), becomes increasingly integrated into production applications, new attack surfaces and vulnerabilities emerge, putting a focus on adversarial threats in natural language and multi-modal systems.
Red-teaming has gained importance in proactively identifying weaknesses in these systems, while blue-teaming works to protect against such adversarial attacks.
This work aims to bridge the gap between academic insights and practical security measures for the protection of generative AI systems.
arXiv Detail & Related papers (2024-09-23T10:18:10Z) - Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z) - A Novel Approach to Guard from Adversarial Attacks using Stable Diffusion [0.0]
Our proposal suggests a different approach to the AI Guardian framework.
Instead of including adversarial examples in the training process, we propose training the AI system without them.
This aims to create a system that is inherently resilient to a wider range of attacks.
arXiv Detail & Related papers (2024-05-03T04:08:15Z) - Towards more Practical Threat Models in Artificial Intelligence Security [66.67624011455423]
Recent works have identified a gap between research and practice in artificial intelligence security.
We revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice.
arXiv Detail & Related papers (2023-11-16T16:09:44Z) - Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent can be used to generate realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z) - Deep Learning-Based Autonomous Driving Systems: A Survey of Attacks and Defenses [13.161104978510943]
This survey provides a thorough analysis of different attacks that may jeopardize autonomous driving systems.
It covers adversarial attacks on various deep learning models and attacks in both physical and cyber contexts.
Some promising research directions are suggested in order to improve deep learning-based autonomous driving safety.
arXiv Detail & Related papers (2021-04-05T06:31:47Z) - Security and Privacy for Artificial Intelligence: Opportunities and Challenges [11.368470074697747]
In recent years, most AI models have been shown to be vulnerable to advanced and sophisticated hacking techniques.
This challenge has motivated concerted research efforts into adversarial AI.
We present a holistic cyber security review that demonstrates adversarial attacks against AI applications.
arXiv Detail & Related papers (2021-02-09T06:06:13Z) - Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z) - Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning [48.49658986576776]
Deep Reinforcement Learning (DRL) has numerous applications in the real world thanks to its outstanding ability to adapt to surrounding environments.
Despite its great advantages, DRL is susceptible to adversarial attacks, which precludes its use in real-life critical systems and applications.
This paper presents emerging attacks in DRL-based systems and the potential countermeasures to defend against these attacks.
arXiv Detail & Related papers (2020-01-27T10:53:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.