Interpreting Machine Learning Malware Detectors Which Leverage N-gram
Analysis
- URL: http://arxiv.org/abs/2001.10916v1
- Date: Mon, 27 Jan 2020 19:10:50 GMT
- Title: Interpreting Machine Learning Malware Detectors Which Leverage N-gram
Analysis
- Authors: William Briguglio and Sherif Saad
- Abstract summary: Cybersecurity analysts prefer solutions that are as interpretable and understandable as rule-based or signature-based detection. The objective of this paper is to evaluate current state-of-the-art ML model interpretability techniques when applied to ML-based malware detectors.
- Score: 2.6397379133308214
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In cyberattack detection and prevention systems, cybersecurity analysts
always prefer solutions that are as interpretable and understandable as
rule-based or signature-based detection. This is because of the need to tune
and optimize these solutions to mitigate and control the effect of false
positives and false negatives. Interpreting machine learning models is a new
and open challenge. However, it is expected that an interpretable machine
learning solution will be domain-specific. For instance, interpretable
solutions for machine learning models in healthcare are different from
solutions in malware detection. This is because the models are complex, and
most of them work as a black-box. Recently, the increased ability for malware
authors to bypass antimalware systems has forced security specialists to look
to machine learning for creating robust detection systems. If these systems are
to be relied on in the industry, then, among other challenges, they must also
explain their predictions. The objective of this paper is to evaluate the
current state-of-the-art ML model interpretability techniques when applied to
ML-based malware detectors. We demonstrate interpretability techniques in
practice and evaluate the effectiveness of existing interpretability techniques
in the malware analysis domain.
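The paper's setting can be illustrated with a minimal sketch (not the authors' code): extract byte n-gram counts from samples, fit a simple linear model, and read the learned weights directly as an explanation of which n-grams push the model toward a malware verdict. The byte values, samples, and training setup below are all hypothetical; a real detector would use far larger corpora and n-gram vocabularies.

```python
# Hedged sketch of an interpretable n-gram malware detector: byte n-gram
# features plus a hand-rolled logistic-regression-style linear model whose
# per-feature weights are directly inspectable. All data here is toy data.
import math
from collections import Counter

def ngrams(data: bytes, n: int = 2):
    """Count overlapping byte n-grams in a sample."""
    return Counter(data[i:i + n] for i in range(len(data) - n + 1))

# Toy corpus: the hypothetical "malicious" samples share the 2-gram b"\x90\x90".
malicious = [b"\x90\x90\x01\x02", b"\x00\x90\x90\x03"]
benign = [b"\x01\x02\x03\x04", b"\x04\x03\x02\x01"]

vocab = sorted(set().union(*(ngrams(s) for s in malicious + benign)))
index = {g: i for i, g in enumerate(vocab)}

def featurize(sample: bytes):
    counts = ngrams(sample)
    return [float(counts.get(g, 0)) for g in vocab]

X = [featurize(s) for s in malicious + benign]
y = [1.0] * len(malicious) + [0.0] * len(benign)

# Plain SGD logistic regression; the weight vector stays human-readable.
w = [0.0] * len(vocab)
b = 0.0
for _ in range(500):
    for xi, yi in zip(X, y):
        z = sum(wj * xj for wj, xj in zip(w, xi)) + b
        p = 1.0 / (1.0 + math.exp(-z))        # predicted malware probability
        g = p - yi                            # logistic-loss gradient
        w = [wj - 0.1 * g * xj for wj, xj in zip(w, xi)]
        b -= 0.1 * g

# "Explanation": the n-gram with the largest positive weight is the feature
# contributing most toward a malware verdict.
top_weight, top_gram = max(zip(w, vocab))
print(top_gram)
```

In this toy run the top-weighted feature is the n-gram shared only by the malicious samples, which is the kind of per-feature attribution the paper evaluates against black-box explanation techniques.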
Related papers
- Explainable Malware Analysis: Concepts, Approaches and Challenges [0.0]
We review the current state-of-the-art ML-based malware detection techniques and popular XAI approaches.
We discuss research implementations and the challenges of explainable malware analysis.
This theoretical survey serves as an entry point for researchers interested in XAI applications in malware detection.
arXiv Detail & Related papers (2024-09-09T08:19:33Z) - Verification of Machine Unlearning is Fragile [48.71651033308842]
We introduce two novel adversarial unlearning processes capable of circumventing both types of verification strategies.
This study highlights the vulnerabilities and limitations in machine unlearning verification, paving the way for further research into the safety of machine unlearning.
arXiv Detail & Related papers (2024-08-01T21:37:10Z) - Comprehensive evaluation of Mal-API-2019 dataset by machine learning in malware detection [0.5475886285082937]
This study conducts a thorough examination of malware detection using machine learning techniques.
The aim is to advance cybersecurity capabilities by identifying and mitigating threats more effectively.
arXiv Detail & Related papers (2024-03-04T17:22:43Z) - Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive characterization of adversarial inputs through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how such perturbations can affect the safety of a given DRL system.
arXiv Detail & Related papers (2024-02-07T21:58:40Z) - Harnessing the Speed and Accuracy of Machine Learning to Advance Cybersecurity [0.0]
Traditional signature-based methods of malware detection have limitations in detecting complex threats.
In recent years, machine learning has emerged as a promising solution to detect malware effectively.
ML algorithms are capable of analyzing large datasets and identifying patterns that are difficult for humans to identify.
arXiv Detail & Related papers (2023-02-24T02:42:38Z) - ML-based IoT Malware Detection Under Adversarial Settings: A Systematic
Evaluation [9.143713488498513]
This work systematically examines the state-of-the-art malware detection approaches that utilize various representation and learning techniques.
We show that software mutations with functionality-preserving operations, such as stripping and padding, significantly deteriorate the accuracy of such detectors.
arXiv Detail & Related papers (2021-08-30T16:54:07Z) - Inspect, Understand, Overcome: A Survey of Practical Methods for AI
Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z) - Individual Explanations in Machine Learning Models: A Survey for
Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them because their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial amount of methods for providing interpretable explanations to machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z) - Towards interpreting ML-based automated malware detection models: a
survey [4.721069729610892]
Most existing machine learning models are black boxes, which makes their prediction results hard to trust.
This paper aims to examine and categorize existing research on the interpretability of ML-based malware detectors.
arXiv Detail & Related papers (2021-01-15T17:34:40Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z) - Adversarial Attacks on Machine Learning Systems for High-Frequency
Trading [55.30403936506338]
We study valuation models for algorithmic trading from the perspective of adversarial machine learning.
We introduce new attacks specific to this domain with size constraints that minimize attack costs.
We discuss how these attacks can be used as an analysis tool to study and evaluate the robustness properties of financial models.
arXiv Detail & Related papers (2020-02-21T22:04:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.