DeepMAL -- Deep Learning Models for Malware Traffic Detection and
Classification
- URL: http://arxiv.org/abs/2003.04079v2
- Date: Tue, 10 Mar 2020 16:56:46 GMT
- Title: DeepMAL -- Deep Learning Models for Malware Traffic Detection and
Classification
- Authors: Gonzalo Marín, Pedro Casas, Germán Capdehourat
- Abstract summary: We introduce DeepMAL, a DL model which is able to capture the underlying statistics of malicious traffic.
We show that DeepMAL can detect and classify malware flows with high accuracy, outperforming traditional, shallow-like models.
- Score: 4.187494796512101
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robust network security systems are essential to prevent and mitigate the
harming effects of the ever-growing occurrence of network attacks. In recent
years, machine learning-based systems have gained popularity for network security
applications, usually considering the application of shallow models, which rely
on the careful engineering of expert, handcrafted input features. The main
limitation of this approach is that handcrafted features can fail to perform
well under different scenarios and types of attacks. Deep Learning (DL) models
can solve this limitation using their ability to learn feature representations
from raw, non-processed data. In this paper we explore the power of DL models
on the specific problem of detection and classification of malware network
traffic. As a major advantage with respect to the state of the art, we consider
raw measurements coming directly from the stream of monitored bytes as input to
the proposed models, and evaluate different raw-traffic feature
representations, including packet and flow-level ones. We introduce DeepMAL, a
DL model which is able to capture the underlying statistics of malicious
traffic, without any sort of expert handcrafted features. Using publicly
available traffic traces containing different families of malware traffic, we
show that DeepMAL can detect and classify malware flows with high accuracy,
outperforming traditional, shallow-like models.
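As a minimal illustration of the raw-traffic input representation the abstract describes (a hypothetical preprocessing step for illustration, not the paper's exact pipeline), a flow's monitored byte stream can be truncated or zero-padded to a fixed length and scaled so it can feed a neural network directly, with no handcrafted features:

```python
def bytes_to_input(raw_bytes: bytes, n: int = 100) -> list:
    """Truncate or zero-pad a flow's raw byte stream to a fixed length n
    and scale each byte to [0, 1] -- a sketch of the kind of raw,
    non-processed input a DL model such as DeepMAL could consume."""
    buf = raw_bytes[:n].ljust(n, b"\x00")  # right-pad with zero bytes
    return [b / 255.0 for b in buf]

# Example: the first 5 bytes of a flow become a fixed 100-dim vector.
vec = bytes_to_input(b"\x16\x03\x01\x00\x05", n=100)
```

The same idea extends to packet-level representations by applying the truncation per packet instead of per flow.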
Related papers
- Revolutionizing Payload Inspection: A Self-Supervised Journey to Precision with Few Shots [0.0]
Traditional security measures are inadequate against the sophistication of modern cyber attacks.
Deep Packet Inspection (DPI) has been pivotal in enhancing network security.
The integration of advanced deep learning techniques with DPI has introduced modern methodologies into malware detection.
arXiv Detail & Related papers (2024-09-26T18:55:52Z)
- Towards Novel Malicious Packet Recognition: A Few-Shot Learning Approach [0.0]
Deep Packet Inspection (DPI) has emerged as a key technology in strengthening network security.
This study proposes a novel approach that leverages a large language model (LLM) and few-shot learning.
Our approach shows promising results with an average accuracy of 86.35% and F1-Score of 86.40% on different malware types.
arXiv Detail & Related papers (2024-09-17T15:02:32Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z) - Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z) - Identifying and Mitigating Model Failures through Few-shot CLIP-aided
Diffusion Generation [65.268245109828]
We propose an end-to-end framework to generate text descriptions of failure modes associated with spurious correlations.
These descriptions can be used to generate synthetic data using generative models, such as diffusion models.
Our experiments have shown remarkable improvements in accuracy (~21%) on hard sub-populations.
arXiv Detail & Related papers (2023-12-09T04:43:49Z) - When a RF Beats a CNN and GRU, Together -- A Comparison of Deep Learning
and Classical Machine Learning Approaches for Encrypted Malware Traffic
Classification [4.495583520377878]
We show that in the case of malicious traffic classification, state-of-the-art DL-based solutions do not necessarily outperform the classical ML-based ones.
We exemplify this finding using two well-known datasets on a varied set of tasks, such as malware detection, malware family classification, detection of zero-day attacks, and classification of an iteratively growing dataset.
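The classical ML baselines compared above typically operate on handcrafted flow-level statistics rather than raw bytes. A minimal sketch of such a feature extractor (a hypothetical feature set chosen for illustration, not the exact features used in these papers) from a flow's packet sizes and inter-arrival times:

```python
import statistics

def flow_features(pkt_sizes, inter_arrival):
    """Hand-crafted flow statistics of the kind shallow models
    (e.g. a Random Forest) consume. The feature set here is a
    common illustrative choice, not any specific paper's."""
    return {
        "n_packets": len(pkt_sizes),
        "total_bytes": sum(pkt_sizes),
        "mean_size": statistics.mean(pkt_sizes),
        "std_size": statistics.pstdev(pkt_sizes),
        "mean_iat": statistics.mean(inter_arrival) if inter_arrival else 0.0,
    }

# Example: a 4-packet flow with three inter-arrival gaps (seconds).
feats = flow_features([60, 1500, 1500, 40], [0.01, 0.02, 0.10])
```

Features like these are exactly what DL approaches try to avoid engineering by hand, which is why the RF-vs-DL comparison above is informative.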
arXiv Detail & Related papers (2022-06-16T08:59:53Z) - A Review of Confidentiality Threats Against Embedded Neural Network
Models [0.0]
This review focuses on attacks targeting the confidentiality of embedded Deep Neural Network (DNN) models.
We highlight the fact that Side-Channel Analysis (SCA) is a relatively unexplored means by which a model's confidentiality can be compromised.
arXiv Detail & Related papers (2021-05-04T10:27:20Z) - Deep Learning and Traffic Classification: Lessons learned from a
commercial-grade dataset with hundreds of encrypted and zero-day applications [72.02908263225919]
We share our experience on a commercial-grade DL traffic classification engine.
We identify known applications from encrypted traffic, as well as unknown zero-day applications.
We propose a novel technique, tailored for DL models, that is significantly more accurate and light-weight than the state of the art.
arXiv Detail & Related papers (2021-04-07T15:21:22Z) - Explainable Adversarial Attacks in Deep Neural Networks Using Activation
Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z) - NF-GNN: Network Flow Graph Neural Networks for Malware Detection and
Classification [11.624780336645006]
Malicious software (malware) poses an increasing threat to the security of communication systems.
We present three variants of our base model, which all support malware detection and classification in supervised and unsupervised settings.
Experiments on four different prediction tasks consistently demonstrate the advantages of our approach and show that our graph neural network model can boost detection performance by a significant margin.
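A graph neural network like the one described above consumes a graph built from observed flows. As a simplified stand-in (this construction is illustrative; the paper's actual graph definition may differ), hosts can be treated as nodes and each observed (src, dst) flow as a directed edge:

```python
from collections import defaultdict

def build_flow_graph(flows):
    """Build a directed adjacency list where hosts are nodes and each
    observed (src, dst) flow adds an edge -- a simplified sketch of the
    network flow graphs a GNN could take as input."""
    adj = defaultdict(set)
    for src, dst in flows:
        adj[src].add(dst)
    # Sort neighbors for a deterministic representation.
    return {node: sorted(nbrs) for node, nbrs in adj.items()}

g = build_flow_graph([
    ("10.0.0.1", "8.8.8.8"),
    ("10.0.0.1", "1.1.1.1"),
    ("10.0.0.2", "8.8.8.8"),
])
```

In a real pipeline each node and edge would additionally carry feature vectors (e.g. byte counts, port numbers) for the GNN's message passing.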
arXiv Detail & Related papers (2021-03-05T20:54:38Z) - Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.