The State-of-the-Art in AI-Based Malware Detection Techniques: A Review
- URL: http://arxiv.org/abs/2210.11239v1
- Date: Wed, 12 Oct 2022 16:44:52 GMT
- Title: The State-of-the-Art in AI-Based Malware Detection Techniques: A Review
- Authors: Adam Wolsey
- Abstract summary: This review aims to outline the state-of-the-art AI techniques used in malware detection and prevention.
The algorithms investigated consist of Shallow Learning, Deep Learning and Bio-Inspired Computing.
The survey also touches on the rapid adoption of AI by cybercriminals as a means to create ever more advanced malware.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Intelligence techniques have evolved rapidly in recent years,
revolutionising the approaches used to fight against cybercriminals. But as the
cyber security field has progressed, so has malware development, making it an
economic imperative to strengthen businesses' defensive capability against
malware attacks. This review aims to outline the state-of-the-art AI techniques
used in malware detection and prevention, providing an in-depth analysis of the
latest studies in this field. The algorithms investigated consist of Shallow
Learning, Deep Learning and Bio-Inspired Computing, applied to a variety of
platforms, such as PC, cloud, Android and IoT. This survey also touches on the
rapid adoption of AI by cybercriminals as a means to create ever more advanced
malware and exploit the AI algorithms designed to defend against them.
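To make the Shallow Learning category concrete, below is a minimal sketch of the kind of classical detector the review surveys: a random-forest classifier over static file features. The feature set (file size, byte entropy, import count, section count), the synthetic data, and the scikit-learn setup are illustrative assumptions for this summary, not the review's own experiments.

```python
# Minimal sketch of a Shallow Learning malware detector (illustrative only).
# The static features and synthetic data are assumptions for demonstration;
# they do not reproduce any experiment from the review.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 2000

# Hypothetical static features per sample:
# [file size (KB), mean byte entropy, number of imports, number of sections]
benign = np.column_stack([
    rng.normal(800, 300, n),   # benign files: moderate size
    rng.normal(5.5, 0.8, n),   # lower entropy (little packing/encryption)
    rng.normal(120, 40, n),    # richer import tables
    rng.normal(5, 1, n),
])
malicious = np.column_stack([
    rng.normal(400, 200, n),
    rng.normal(7.2, 0.6, n),   # packed/encrypted payloads raise entropy
    rng.normal(30, 15, n),     # stripped or obfuscated imports
    rng.normal(7, 2, n),
])

X = np.vstack([benign, malicious])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = benign, 1 = malware

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["benign", "malware"]))
```

The same pipeline shape (static feature extraction followed by a shallow classifier) recurs across the PC, cloud, Android and IoT studies the review covers; the Deep Learning and Bio-Inspired approaches differ mainly in the model that replaces the random forest.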
Related papers
- Explainable Malware Analysis: Concepts, Approaches and Challenges [0.0]
We review the current state-of-the-art ML-based malware detection techniques and popular XAI approaches.
We discuss research implementations and the challenges of explainable malware analysis.
This theoretical survey serves as an entry point for researchers interested in XAI applications in malware detection.
arXiv Detail & Related papers (2024-09-09T08:19:33Z) - Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks [0.0]
This paper delves into the escalating threat posed by the misuse of AI, specifically through the use of Large Language Models (LLMs).
Through a series of controlled experiments, the paper demonstrates how these models can be manipulated to bypass ethical and privacy safeguards to effectively generate cyber attacks.
We also introduce Occupy AI, a customized, fine-tuned LLM specifically engineered to automate and execute cyberattacks.
arXiv Detail & Related papers (2024-08-23T02:56:13Z) - Review of Generative AI Methods in Cybersecurity [0.6990493129893112]
This paper provides a comprehensive overview of the current state-of-the-art deployments of Generative AI (GenAI) in cybersecurity.
It covers attacks such as jailbreaking and the application of prompt injection and reverse psychology.
It also provides the various applications of GenAI in cybercrimes, such as automated hacking, phishing emails, social engineering, reverse cryptography, creating attack payloads, and creating malware.
arXiv Detail & Related papers (2024-03-13T17:05:05Z) - Towards more Practical Threat Models in Artificial Intelligence Security [66.67624011455423]
Recent works have identified a gap between research and practice in artificial intelligence security.
We revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice.
arXiv Detail & Related papers (2023-11-16T16:09:44Z) - Graph Mining for Cybersecurity: A Survey [61.505995908021525]
The explosive growth of cyber attacks such as malware, spam, and intrusions has caused severe consequences for society.
Traditional Machine Learning (ML) based methods are extensively used to detect cyber threats, but they rarely model the correlations between real-world cyber entities.
With the proliferation of graph mining techniques, many researchers have investigated these techniques for capturing such correlations and achieving high detection performance.
arXiv Detail & Related papers (2023-04-02T08:43:03Z) - Explainable Artificial Intelligence and Cybersecurity: A Systematic Literature Review [0.799536002595393]
XAI aims to make the operation of AI algorithms more interpretable for their users and developers.
This work seeks to investigate the current research scenario on XAI applied to cybersecurity.
arXiv Detail & Related papers (2023-02-27T17:47:56Z) - Malware Detection and Prevention using Artificial Intelligence Techniques [7.583480439784955]
Security has become a major issue due to the increase in malware activity.
In this study, we emphasize Artificial Intelligence (AI) based techniques for detecting and preventing malware activity.
arXiv Detail & Related papers (2022-06-26T02:41:46Z) - Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data; utilizing it effectively is beyond human capability.
arXiv Detail & Related papers (2022-02-28T18:27:41Z) - Adversarial Attacks against Windows PE Malware Detection: A Survey of the State-of-the-Art [44.975088044180374]
This paper focuses on malware with the file format of portable executable (PE) in the family of Windows operating systems, namely Windows PE malware.
We first outline the general learning framework of Windows PE malware detection based on ML/DL.
We then highlight three unique challenges of performing adversarial attacks in the context of PE malware.
arXiv Detail & Related papers (2021-12-23T02:12:43Z) - Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z) - Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (User and Entity Behaviour Analytics, UEBA) for cyber-security.
In this paper, we present a solution that effectively mitigates these attacks (deception, poisoning and concept drift) by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.