Harnessing the Speed and Accuracy of Machine Learning to Advance Cybersecurity
- URL: http://arxiv.org/abs/2302.12415v3
- Date: Sat, 2 Mar 2024 10:47:19 GMT
- Title: Harnessing the Speed and Accuracy of Machine Learning to Advance Cybersecurity
- Authors: Khatoon Mohammed
- Abstract summary: Traditional signature-based methods of malware detection have limitations in detecting complex threats.
In recent years, machine learning has emerged as a promising solution to detect malware effectively.
ML algorithms are capable of analyzing large datasets and identifying patterns that are difficult for humans to identify.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As cyber attacks continue to increase in frequency and sophistication, detecting malware has become a critical task for maintaining the security of computer systems. Traditional signature-based methods of malware detection have limitations in detecting complex and evolving threats. In recent years, machine learning (ML) has emerged as a promising solution to detect malware effectively. ML algorithms are capable of analyzing large datasets and identifying patterns that are difficult for humans to identify. This paper presents a comprehensive review of the state-of-the-art ML techniques used in malware detection, including supervised and unsupervised learning, deep learning, and reinforcement learning. We also examine the challenges and limitations of ML-based malware detection, such as the potential for adversarial attacks and the need for large amounts of labeled data. Furthermore, we discuss future directions in ML-based malware detection, including the integration of multiple ML algorithms and the use of explainable AI techniques to enhance the interpretability of ML-based detection systems. Our research highlights the potential of ML-based techniques to improve the speed and accuracy of malware detection, and contribute to enhancing cybersecurity.
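To make the supervised approach surveyed above concrete, here is a minimal, illustrative sketch (not from the paper): a scikit-learn classifier trained on hypothetical static file features with synthetic data standing in for a labeled malware corpus.

    # Minimal sketch of supervised ML-based malware detection (illustrative only).
    # The feature names and the synthetic samples below are hypothetical placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(0)

    # Hypothetical static features per file: entropy, size (KB), imported API count, packed flag.
    X_benign = rng.normal([5.0, 300, 120, 0.1], [1.0, 150, 40, 0.1], size=(500, 4))
    X_malware = rng.normal([7.2, 180, 60, 0.8], [0.8, 100, 30, 0.2], size=(500, 4))
    X = np.vstack([X_benign, X_malware])
    y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malware

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
    clf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test), target_names=["benign", "malware"]))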
Related papers
- Explainable Malware Analysis: Concepts, Approaches and Challenges [0.0]
We review the current state-of-the-art ML-based malware detection techniques and popular XAI approaches.
We discuss research implementations and the challenges of explainable malware analysis.
This theoretical survey serves as an entry point for researchers interested in XAI applications in malware detection.
arXiv Detail & Related papers (2024-09-09T08:19:33Z)
- Transfer Learning in Pre-Trained Large Language Models for Malware Detection Based on System Calls [3.5698678013121334]
This work presents a novel framework leveraging large language models (LLMs) to classify malware based on system call data.
Experiments with a dataset of over 1TB of system calls demonstrate that models with larger context sizes, such as BigBird and Longformer, achieve superior accuracy and F1-Score of approximately 0.86.
This approach shows significant potential for real-time detection in high-stakes environments, offering a robust solution to evolving cyber threats.
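A rough sketch of that idea is shown below, assuming a long-context Hugging Face backbone such as BigBird; the model name, the space-joined trace string, and the benign/malicious label mapping are placeholders, and the actual framework fine-tunes such a model on labeled system-call data.

    # Sketch: classifying a system-call trace with a long-context transformer.
    # "google/bigbird-roberta-base" is only an example backbone; the classification head
    # is randomly initialized here and would need fine-tuning on labeled traces.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    model_name = "google/bigbird-roberta-base"  # Longformer would be used analogously
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

    trace = "openat read mmap mprotect write socket connect sendto"  # space-joined syscall names
    inputs = tokenizer(trace, truncation=True, max_length=4096, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    pred = logits.argmax(dim=-1).item()  # 0 = benign, 1 = malicious (meaningful only after fine-tuning)
    print("predicted class:", pred)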
arXiv Detail & Related papers (2024-05-15T13:19:43Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in Internet-of-Things (IoT)-based smart grids.
Adversarial distortion injected into the power signal can severely disrupt the system's normal control and operation.
It is therefore imperative to conduct vulnerability assessments of ML-based smart grid applications (MLsgAPPs) deployed in safety-critical power systems.
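An illustrative, synthetic example of such an adversarial distortion against a simple learned detector (not a model from the surveyed papers); the FGSM-style direction is exact for logistic regression because the input gradient is proportional to the weight vector.

    # Sketch: an FGSM-style adversarial distortion against a toy detector of grid anomalies.
    # The signal windows and the "fault" label are synthetic stand-ins.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X = rng.normal(0.0, 1.0, size=(1000, 64))        # 64-sample windows of a measurement signal
    y = (X.mean(axis=1) > 0.1).astype(int)            # toy "fault" label
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    # Adding eps * sign(w) pushes every window toward the "fault" decision region,
    # flipping the detector's output on borderline windows.
    eps = 0.25
    X_adv = X + eps * np.sign(clf.coef_)
    flip_rate = np.mean(clf.predict(X_adv) != clf.predict(X))
    print(f"fraction of predictions flipped by an eps={eps} distortion: {flip_rate:.2f}")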
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- A survey on hardware-based malware detection approaches [45.24207460381396]
Hardware-based malware detection approaches leverage hardware performance counters and machine learning prowess.
We meticulously analyze the approach, unraveling the most common methods, algorithms, tools, and datasets that shape its contours.
The discussion extends to crafting mixed hardware and software approaches for collaborative efficacy, essential enhancements in hardware monitoring units, and a better understanding of the correlation between hardware events and malware applications.
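As a toy illustration of the HPC-plus-ML pipeline described above: the counter names and values below are hypothetical, whereas real work samples hardware performance counters (e.g., via perf) while programs execute and labels the resulting traces.

    # Sketch: malware detection from hardware performance counter (HPC) features.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    # Hypothetical per-interval counters: instructions, branch-misses, cache-misses, LLC-loads.
    benign  = rng.normal([1e6, 2e3, 5e3, 8e4], [2e5, 5e2, 1e3, 2e4], size=(400, 4))
    malware = rng.normal([7e5, 6e3, 2e4, 5e4], [2e5, 1e3, 5e3, 2e4], size=(400, 4))
    X = np.vstack([benign, malware])
    y = np.array([0] * 400 + [1] * 400)

    clf = GradientBoostingClassifier(random_state=0)
    scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
    print("5-fold F1 on synthetic HPC features:", scores.mean().round(3))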
arXiv Detail & Related papers (2023-03-22T13:00:41Z)
- Robustness Evaluation of Deep Unsupervised Learning Algorithms for Intrusion Detection Systems [0.0]
This paper evaluates the robustness of six recent deep learning algorithms for intrusion detection on contaminated data.
Our experiments suggest that the state-of-the-art algorithms used in this study are sensitive to data contamination and reveal the importance of self-defense against data perturbation.
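A synthetic illustration of the contamination effect, using IsolationForest as a simple stand-in for the deep unsupervised detectors evaluated in the paper; the data and contamination levels are arbitrary.

    # Sketch: how training-data contamination degrades an unsupervised intrusion detector.
    import numpy as np
    from sklearn.ensemble import IsolationForest
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    normal  = rng.normal(0, 1, size=(2000, 10))
    attacks = rng.normal(4, 1, size=(400, 10))

    X_test = np.vstack([rng.normal(0, 1, size=(500, 10)), rng.normal(4, 1, size=(100, 10))])
    y_test = np.array([0] * 500 + [1] * 100)

    for contamination_frac in (0.0, 0.05, 0.15):
        n_bad = int(contamination_frac * len(normal))
        X_train = np.vstack([normal, attacks[:n_bad]])    # attack samples leak into "clean" training data
        model = IsolationForest(random_state=0).fit(X_train)
        scores = -model.score_samples(X_test)             # higher score = more anomalous
        print(f"training contamination={contamination_frac:.0%}  AUC={roc_auc_score(y_test, scores):.3f}")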
arXiv Detail & Related papers (2022-06-25T02:28:39Z)
- Mate! Are You Really Aware? An Explainability-Guided Testing Framework for Robustness of Malware Detectors [49.34155921877441]
We propose an explainability-guided and model-agnostic testing framework for robustness of malware detectors.
We then use this framework to test several state-of-the-art malware detectors' abilities to detect manipulated malware.
Our findings shed light on the limitations of current malware detectors, as well as how they can be improved.
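A rough, model-agnostic sketch of the idea: use an explanation to find the features a detector relies on, then manipulate only those features and measure evasion. Permutation importance stands in for the paper's explainability step, and the features and data are synthetic.

    # Sketch: explanation-guided robustness probing of a malware classifier.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(4)
    X = rng.normal(size=(1000, 20))
    y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)       # the classifier mostly relies on features 3 and 7
    clf = RandomForestClassifier(random_state=0).fit(X, y)

    imp = permutation_importance(clf, X, y, n_repeats=5, random_state=0)
    top = np.argsort(imp.importances_mean)[::-1][:2]     # most influential features, per the explanation
    print("explanation highlights features:", top)

    # Manipulate only the highlighted features of "malicious" samples and measure evasion.
    mal = X[y == 1].copy()
    mal[:, top] -= 2.0
    evasion_rate = np.mean(clf.predict(mal) == 0)
    print(f"evasion rate after perturbing the explained features: {evasion_rate:.2f}")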
arXiv Detail & Related papers (2021-11-19T08:02:38Z)
- ML-based IoT Malware Detection Under Adversarial Settings: A Systematic Evaluation [9.143713488498513]
This work systematically examines state-of-the-art malware detection approaches that utilize various representation and learning techniques.
We show that software mutations with functionality-preserving operations, such as stripping and padding, significantly deteriorate the accuracy of such detectors.
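An illustrative sketch of the padding mutation against a toy byte-histogram detector; the sample bytes and the model are synthetic stand-ins, the point being that bytes appended past the end of a binary preserve its functionality while shifting the detector's features.

    # Sketch: a functionality-preserving "padding" mutation and its effect on a byte-histogram detector.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def byte_histogram(data: bytes) -> np.ndarray:
        counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
        return counts / max(len(data), 1)                 # normalized byte-frequency features

    rng = np.random.default_rng(5)
    benign  = [rng.integers(0, 256, size=4096, dtype=np.uint8).tobytes() for _ in range(200)]
    malware = [rng.integers(0, 128, size=4096, dtype=np.uint8).tobytes() for _ in range(200)]
    X = np.array([byte_histogram(b) for b in benign + malware])
    y = np.array([0] * 200 + [1] * 200)
    clf = LogisticRegression(max_iter=2000).fit(X, y)

    sample = malware[0]
    padded = sample + bytes([255]) * len(sample) * 3      # append benign-looking padding, 3x the original size
    print("original malware score:", clf.predict_proba([byte_histogram(sample)])[0, 1].round(3))
    print("padded malware score:  ", clf.predict_proba([byte_histogram(padded)])[0, 1].round(3))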
arXiv Detail & Related papers (2021-08-30T16:54:07Z)
- Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
It reviews new models and training techniques that reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
The survey's organization maps state-of-the-art ML techniques to safety strategies in order to enhance the dependability of ML algorithms from different aspects.
arXiv Detail & Related papers (2021-06-09T05:56:42Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
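One such pitfall, temporal snooping, illustrated on synthetic data below; the drift mechanism and the numbers are only illustrative, the point is the evaluation protocol.

    # Sketch: evaluating a detector with a random split when the data has temporal structure
    # ("temporal snooping") inflates measured performance compared with a time-aware split.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(6)
    n = 4000
    t = np.sort(rng.uniform(0, 1, n))                     # pseudo-timestamps
    drift = np.where(t > 0.7, 3.0, 0.0)                   # feature distribution shifts late in the period
    y = rng.integers(0, 2, n)
    X = rng.normal(0, 1, (n, 10)) + y[:, None] + drift[:, None]

    clf = RandomForestClassifier(random_state=0)

    # Pitfall: a random split mixes past and future samples.
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    rand_f1 = f1_score(yte, clf.fit(Xtr, ytr).predict(Xte))

    # Sounder protocol: train on the earlier 70% of the timeline, test on the later 30%.
    cut = int(0.7 * n)
    temp_f1 = f1_score(y[cut:], clf.fit(X[:cut], y[:cut]).predict(X[cut:]))
    print(f"random-split F1: {rand_f1:.3f}   temporal-split F1: {temp_f1:.3f}")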
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Interpreting Machine Learning Malware Detectors Which Leverage N-gram Analysis [2.6397379133308214]
Cybersecurity analysts consistently prefer solutions that are as interpretable and understandable as rule-based or signature-based detection.
The objective of this paper is to evaluate current state-of-the-art ML interpretability techniques when applied to ML-based malware detectors.
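A small sketch of why n-gram-based linear detectors lend themselves to interpretation: the learned weight of each n-gram can be read off directly. The opcode corpus and labels below are hypothetical.

    # Sketch: n-gram features with an interpretable linear model.
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical opcode sequences (space-separated), labeled benign (0) / malicious (1).
    docs = [
        "push mov call ret push mov add",        # benign-looking
        "mov add sub ret push call mov",         # benign-looking
        "xor dec jmp int80 xor dec jmp",         # malicious-looking
        "int80 xor jmp dec int80 xor jmp",       # malicious-looking
    ] * 25
    labels = np.array([0, 0, 1, 1] * 25)

    vec = CountVectorizer(ngram_range=(2, 3), analyzer="word")
    X = vec.fit_transform(docs)
    clf = LogisticRegression(max_iter=1000).fit(X, labels)

    # Interpretation: n-grams with the largest positive weights push the decision toward "malicious".
    weights = clf.coef_[0]
    top = np.argsort(weights)[::-1][:5]
    for i in top:
        print(f"{vec.get_feature_names_out()[i]:20s}  weight={weights[i]:+.3f}")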
arXiv Detail & Related papers (2020-01-27T19:10:50Z)