Evaluating the Vulnerabilities in ML systems in terms of adversarial
attacks
- URL: http://arxiv.org/abs/2308.12918v1
- Date: Thu, 24 Aug 2023 16:46:01 GMT
- Title: Evaluating the Vulnerabilities in ML systems in terms of adversarial
attacks
- Authors: John Harshith, Mantej Singh Gill, Madhan Jothimani
- Abstract summary: New adversarial attacks methods may pose challenges to current deep learning cyber defense systems.
Authors explore the consequences of vulnerabilities in AI systems.
It is important to train the AI systems appropriately when they are in testing phase and getting them ready for broader use.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There have been recent adversarial attacks that are difficult to find. These
new adversarial attacks methods may pose challenges to current deep learning
cyber defense systems and could influence the future defense of cyberattacks.
The authors focus on this domain in this research paper. They explore the
consequences of vulnerabilities in AI systems. This includes discussing how
they might arise, differences between randomized and adversarial examples and
also potential ethical implications of vulnerabilities. Moreover, it is
important to train the AI systems appropriately when they are in testing phase
and getting them ready for broader use.
Related papers
- A Review of the Duality of Adversarial Learning in Network Intrusion: Attacks and Countermeasures [0.0]
Adversarial attacks, particularly those targeting vulnerabilities in deep learning models, present a nuanced and substantial threat to cybersecurity.
Our study delves into adversarial learning threats such as Data Poisoning, Test Time Evasion, and Reverse Engineering.
Our research lays the groundwork for strengthening defense mechanisms to address the potential breaches in network security and privacy posed by adversarial attacks.
arXiv Detail & Related papers (2024-12-18T14:21:46Z) - A Comprehensive Review of Adversarial Attacks on Machine Learning [0.5104264623877593]
This research provides a comprehensive overview of adversarial attacks on AI and ML models, exploring various attack types, techniques, and their potential harms.
To gain practical insights, we employ the Adversarial Robustness Toolbox (ART) library to simulate these attacks on real-world use cases, such as self-driving cars.
arXiv Detail & Related papers (2024-12-16T02:27:54Z) - Considerations Influencing Offense-Defense Dynamics From Artificial Intelligence [0.0]
AI can enhance defensive capabilities but also presents avenues for malicious exploitation and large-scale societal harm.
This paper proposes a taxonomy to map and examine the key factors that influence whether AI systems predominantly pose threats or offer protective benefits to society.
arXiv Detail & Related papers (2024-12-05T10:05:53Z) - Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI [52.138044013005]
generative AI, particularly large language models (LLMs), become increasingly integrated into production applications.
New attack surfaces and vulnerabilities emerge and put a focus on adversarial threats in natural language and multi-modal systems.
Red-teaming has gained importance in proactively identifying weaknesses in these systems, while blue-teaming works to protect against such adversarial attacks.
This work aims to bridge the gap between academic insights and practical security measures for the protection of generative AI systems.
arXiv Detail & Related papers (2024-09-23T10:18:10Z) - Counter Denial of Service for Next-Generation Networks within the Artificial Intelligence and Post-Quantum Era [2.156208381257605]
DoS attacks are becoming increasingly sophisticated and easily executable.
State-of-the-art systematization efforts have limitations such as isolated DoS countermeasures.
The emergence of quantum computers is a game changer for DoS from attack and defense perspectives.
arXiv Detail & Related papers (2024-08-08T18:47:31Z) - Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z) - Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the
Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - Measurement-driven Security Analysis of Imperceptible Impersonation
Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age, impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z) - Adversarial Machine Learning Attacks and Defense Methods in the Cyber
Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.