Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain
- URL: http://arxiv.org/abs/2007.02407v3
- Date: Sat, 13 Mar 2021 19:31:59 GMT
- Title: Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain
- Authors: Ishai Rosenberg and Asaf Shabtai and Yuval Elovici and Lior Rokach
- Abstract summary: This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
- Score: 58.30296637276011
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, machine learning algorithms, and more specifically deep
learning algorithms, have been widely used in many fields, including cyber
security. However, machine learning systems are vulnerable to adversarial
attacks, and this limits the application of machine learning, especially in
non-stationary, adversarial environments, such as the cyber security domain,
where actual adversaries (e.g., malware developers) exist. This paper
comprehensively summarizes the latest research on adversarial attacks against
security solutions based on machine learning techniques and illuminates the
risks they pose. First, the adversarial attack methods are characterized based
on their stage of occurrence, and the attacker's goals and capabilities. Then,
we categorize the applications of adversarial attack and defense methods in the
cyber security domain. Finally, we highlight some characteristics identified in
recent research and discuss the impact of recent advancements in other
adversarial learning domains on future research directions in the cyber
security domain. This paper is the first to discuss the unique challenges of
implementing end-to-end adversarial attacks in the cyber security domain, map
them in a unified taxonomy, and use the taxonomy to highlight future research
directions.
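To make the taxonomy dimensions above concrete, here is a minimal, hypothetical Python sketch encoding the stage of occurrence, the attacker's goal, and the attacker's knowledge as plain enums. The specific category names are assumptions drawn from common adversarial ML terminology, not the paper's exact labels.
```python
# Hypothetical sketch: encoding the taxonomy dimensions from the abstract
# (attack stage, attacker goal, attacker knowledge) as Python enums.
# The concrete values are illustrative assumptions, not the paper's exact names.
from dataclasses import dataclass
from enum import Enum


class AttackStage(Enum):
    TRAINING = "training"      # e.g., poisoning the training set
    INFERENCE = "inference"    # e.g., evading a deployed classifier


class AttackerGoal(Enum):
    EVASION = "evasion"        # cause misclassification of a malicious input
    POISONING = "poisoning"    # corrupt the learned model
    PRIVACY = "privacy"        # extract training data or model details


class AttackerKnowledge(Enum):
    WHITE_BOX = "white-box"    # full access to model parameters and gradients
    GRAY_BOX = "gray-box"      # partial knowledge (e.g., features, architecture)
    BLACK_BOX = "black-box"    # query access only


@dataclass
class AdversarialAttackProfile:
    """One point in the taxonomy: where, why, and with what knowledge."""
    stage: AttackStage
    goal: AttackerGoal
    knowledge: AttackerKnowledge


# Example: a malware author evading a deployed detector with query access only.
example = AdversarialAttackProfile(
    stage=AttackStage.INFERENCE,
    goal=AttackerGoal.EVASION,
    knowledge=AttackerKnowledge.BLACK_BOX,
)
print(example)
```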
Related papers
- Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI [52.138044013005]
As generative AI, particularly large language models (LLMs), becomes increasingly integrated into production applications, new attack surfaces and vulnerabilities emerge, putting a focus on adversarial threats in natural language and multi-modal systems.
Red-teaming has gained importance in proactively identifying weaknesses in these systems, while blue-teaming works to protect against such adversarial attacks.
This work aims to bridge the gap between academic insights and practical security measures for the protection of generative AI systems.
arXiv Detail & Related papers (2024-09-23T10:18:10Z)
- Evaluating the Vulnerabilities in ML systems in terms of adversarial attacks [0.0]
New adversarial attack methods may pose challenges to current deep learning cyber defense systems.
The authors explore the consequences of vulnerabilities in AI systems.
It is important to train AI systems appropriately during the testing phase and to prepare them for broader use.
arXiv Detail & Related papers (2023-08-24T16:46:01Z)
- Graph Mining for Cybersecurity: A Survey [61.505995908021525]
The explosive growth of cyber attacks such as malware, spam, and intrusions has had severe consequences for society.
Traditional Machine Learning (ML) based methods are extensively used in detecting cyber threats, but they rarely model the correlations between real-world cyber entities.
With the proliferation of graph mining techniques, many researchers have investigated them for capturing correlations between cyber entities and achieving high detection performance.
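As a rough, hypothetical illustration of the graph-mining idea, the sketch below uses networkx to build a toy graph of cyber entities (hosts, domains, files) and their observed correlations; the entity and relation types are assumptions for illustration, not taken from the survey.
```python
# Toy sketch (assumed entity/relation types): modeling correlations between
# cyber entities as a graph, the kind of structure graph-mining methods
# operate on. Requires the networkx package.
import networkx as nx

G = nx.Graph()

# Nodes are typed cyber entities.
G.add_node("host:10.0.0.5", kind="host")
G.add_node("domain:evil.example", kind="domain")
G.add_node("file:deadbeef", kind="file")   # placeholder file hash, illustrative only
G.add_node("host:10.0.0.7", kind="host")

# Edges encode observed correlations (who resolved what, who executed what).
G.add_edge("host:10.0.0.5", "domain:evil.example", relation="resolved")
G.add_edge("host:10.0.0.5", "file:deadbeef", relation="executed")
G.add_edge("host:10.0.0.7", "domain:evil.example", relation="resolved")

# A graph-based detector can exploit structure that per-sample ML misses,
# e.g., flag every entity within two hops of a known-bad domain.
suspicious = nx.single_source_shortest_path_length(G, "domain:evil.example", cutoff=2)
print(sorted(suspicious))
```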
arXiv Detail & Related papers (2023-04-02T08:43:03Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- Deep Reinforcement Learning for Cybersecurity Threat Detection and Protection: A Review [1.933681537640272]
Machine learning and deep learning-based solutions have been used in threat detection and protection.
Deep Reinforcement Learning has shown great promise in developing AI-based solutions for areas that had earlier required advanced human cognizance.
Unlike supervised machine learning and deep learning, deep reinforcement learning is applied in more diverse ways and is enabling many innovative applications in the threat defense landscape.
arXiv Detail & Related papers (2022-06-06T16:42:00Z)
- Adversarial Machine Learning for Cybersecurity and Computer Vision: Current Developments and Challenges [2.132096006921048]
Research in adversarial machine learning addresses a significant threat to the wide application of machine learning techniques.
We first discuss three main categories of attacks against machine learning techniques -- poisoning attacks, evasion attacks, and privacy attacks.
We note that adversarial samples in cybersecurity and computer vision are fundamentally different.
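To make the evasion-attack category concrete, below is a minimal numpy sketch of the fast gradient sign method (FGSM) against a toy logistic-regression detector. The model, data, and epsilon are invented for illustration, and a feature-space perturbation like this deliberately ignores the problem-space constraints (e.g., keeping malware functional) that distinguish cyber security from computer vision.
```python
# Minimal FGSM evasion sketch against a toy logistic-regression "detector".
# Everything here (weights, features, epsilon) is synthetic and illustrative;
# in the cyber security domain a perturbed feature vector must also map back
# to a valid, functional artifact (e.g., malware that still runs), which is
# the end-to-end difficulty the surveyed work emphasizes.
import numpy as np

rng = np.random.default_rng(0)


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


# Toy linear detector: p(malicious) = sigmoid(w @ x + b).
w = rng.normal(size=20)
b = 0.0

# A "malicious" feature vector the detector confidently flags (true label y = 1).
x = 0.5 * np.sign(w)
y = 1.0
print("score before attack:", sigmoid(w @ x + b))

# FGSM: one step in the direction that maximizes the detector's loss on the
# true label. For logistic regression, d(cross-entropy)/dx = (p - y) * w.
eps = 1.0
p = sigmoid(w @ x + b)
grad_x = (p - y) * w
x_adv = x + eps * np.sign(grad_x)

print("score after attack: ", sigmoid(w @ x_adv + b))  # pushed toward benign
```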
arXiv Detail & Related papers (2021-06-30T03:05:58Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Review: Deep Learning Methods for Cybersecurity and Intrusion Detection Systems [6.459380657702644]
Artificial Intelligence (AI) and Machine Learning (ML) can be leveraged as key enabling technologies for cyber-defense.
In this paper, we are concerned with the investigation of the various deep learning techniques employed for network intrusion detection.
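As a hypothetical shape-of-the-pipeline sketch (not the paper's setup), the snippet below trains a small multilayer perceptron on synthetic network-flow feature vectors; actual studies in this literature evaluate on intrusion detection benchmarks (e.g., NSL-KDD) and use deeper architectures.
```python
# Toy sketch of a deep-learning intrusion detector: a small MLP classifying
# synthetic "network flow" feature vectors as benign (0) or attack (1).
# Data, features, and hyperparameters are invented for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic flows: 10 numeric features (duration, bytes, packet counts, ...).
n = 2000
X_benign = rng.normal(loc=0.0, scale=1.0, size=(n, 10))
X_attack = rng.normal(loc=1.0, scale=1.2, size=(n, 10))  # shifted distribution
X = np.vstack([X_benign, X_attack])
y = np.array([0] * n + [1] * n)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

scaler = StandardScaler().fit(X_train)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

print("test accuracy:", clf.score(scaler.transform(X_test), y_test))
```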
arXiv Detail & Related papers (2020-12-04T23:09:35Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
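One pitfall frequently discussed in this line of work is a form of data snooping: evaluating with a random train/test split on time-ordered security data, which leaks future samples into training and inflates results. The sketch below contrasts a random split with a temporal split on synthetic drifting data; all details are hypothetical.
```python
# Sketch of a commonly cited evaluation pitfall (temporal/data snooping):
# random train/test splits on time-ordered security data mix "future" samples
# into training. Synthetic data with concept drift, illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Time-ordered samples whose class boundary drifts over time
# (e.g., evolving malware families).
n = 4000
t = np.linspace(0, 1, n)
X = rng.normal(size=(n, 5))
drift = 2.0 * t                                  # boundary moves as time passes
y = (X[:, 0] + drift + rng.normal(scale=0.5, size=n) > 1.0).astype(int)


def accuracy(train_idx, test_idx):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    return clf.score(X[test_idx], y[test_idx])


# Random split: past and future samples mixed into training (the pitfall).
tr, te = train_test_split(np.arange(n), test_size=0.25, random_state=0)
print("random split accuracy:  ", accuracy(tr, te))

# Temporal split: train strictly on the past, test on the future.
cut = int(0.75 * n)
print("temporal split accuracy:", accuracy(np.arange(cut), np.arange(cut, n)))
```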
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey [4.164845768197488]
Applications of machine learning in network security face a disproportionately greater threat of active adversarial attacks than other domains.
In this survey, we first provide a taxonomy of machine learning techniques, tasks, and depth.
We examine various adversarial attacks against machine learning in network security and introduce two classification approaches for adversarial attacks in network security.
arXiv Detail & Related papers (2019-11-06T20:29:56Z)