Integrating Explainable AI for Effective Malware Detection in Encrypted Network Traffic
- URL: http://arxiv.org/abs/2501.05387v1
- Date: Thu, 09 Jan 2025 17:21:00 GMT
- Title: Integrating Explainable AI for Effective Malware Detection in Encrypted Network Traffic
- Authors: Sileshi Nibret Zeleke, Amsalu Fentie Jember, Mario Bochicchio
- Abstract: Encrypted network communication ensures confidentiality, integrity, and privacy between endpoints. However, attackers increasingly exploit encryption to conceal malicious behavior, and detecting unknown encrypted malicious traffic without decrypting the payloads remains a significant challenge. In this study, we investigate the integration of explainable artificial intelligence (XAI) techniques to detect malicious network traffic. We employ ensemble learning models to identify malicious activity using multi-view features extracted from various aspects of encrypted communication. To effectively represent malicious communication, we compiled a robust dataset of 1,127 unique connections, more than any other available open-source dataset, spanning 54 malware families. Our models were benchmarked against the CTU-13 dataset, achieving over 99% accuracy, precision, and F1-score. Additionally, the eXtreme Gradient Boosting (XGB) model achieved 99.32% accuracy, 99.53% precision, and a 99.43% F1-score on our custom dataset. By leveraging Shapley Additive Explanations (SHAP), we identified the maximum packet size, the mean inter-arrival time of packets, and the transport layer security version used as the most critical features in the global model explanation. Furthermore, key features were identified as important for local explanations of individual traffic samples across both datasets. These insights provide a deeper understanding of the model's decision-making process, enhancing the transparency and reliability of detecting malicious encrypted traffic.
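The SHAP attributions described in the abstract can be illustrated with a minimal, self-contained sketch: exact Shapley values computed by coalition enumeration for a toy three-feature traffic scorer. All feature names, thresholds, weights, and baseline values below are hypothetical, chosen only to mirror the three features the paper highlights; the actual study would use the `shap` library against a trained XGBoost model rather than this brute-force enumeration.

```python
from itertools import combinations
from math import factorial

# Toy malicious-traffic scorer over three flow features (hypothetical
# thresholds and weights): higher score = more suspicious.
def score(max_pkt, mean_iat, tls_ver):
    s = 0.0
    if max_pkt > 1200:   # unusually large packets
        s += 0.5
    if mean_iat < 0.05:  # rapid beaconing (seconds between packets)
        s += 0.3
    if tls_ver < 1.2:    # outdated TLS version
        s += 0.2
    return s

FEATURES = ["max_pkt", "mean_iat", "tls_ver"]
BASELINE = {"max_pkt": 600, "mean_iat": 0.5, "tls_ver": 1.3}  # benign background

def eval_coalition(x, present):
    """Score the flow with absent features replaced by baseline values."""
    z = {f: (x[f] if f in present else BASELINE[f]) for f in FEATURES}
    return score(z["max_pkt"], z["mean_iat"], z["tls_ver"])

def shapley(x):
    """Exact Shapley value per feature via the classic coalition formula."""
    n = len(FEATURES)
    phi = {}
    for f in FEATURES:
        others = [g for g in FEATURES if g != f]
        val = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Weight of a coalition of size k in the Shapley formula.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                val += w * (eval_coalition(x, set(S) | {f})
                            - eval_coalition(x, set(S)))
        phi[f] = val
    return phi

suspicious = {"max_pkt": 1400, "mean_iat": 0.01, "tls_ver": 1.0}
phi = shapley(suspicious)
# phi ≈ {'max_pkt': 0.5, 'mean_iat': 0.3, 'tls_ver': 0.2}
# Efficiency property: attributions sum to f(x) - f(baseline).
```

Because the toy scorer is additive, each feature's Shapley value equals its own term's weight; a real tree ensemble has interactions, which is why SHAP's per-sample (local) and averaged (global) attributions are informative there.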
Related papers
- Hiding in Plain Sight: An IoT Traffic Camouflage Framework for Enhanced Privacy [2.0257616108612373]
Existing single-technique obfuscation methods, such as packet padding, often fall short in dynamic environments like smart homes.
This paper introduces a multi-technique obfuscation framework designed to enhance privacy by disrupting traffic analysis.
arXiv Detail & Related papers (2025-01-26T04:33:44Z) - MIETT: Multi-Instance Encrypted Traffic Transformer for Encrypted Traffic Classification [59.96233305733875]
Classifying traffic is essential for detecting security threats and optimizing network management.
We propose a Multi-Instance Encrypted Traffic Transformer (MIETT) to capture both token-level and packet-level relationships.
MIETT achieves strong results across five datasets, demonstrating its effectiveness in classifying encrypted traffic and understanding complex network behaviors.
arXiv Detail & Related papers (2024-12-19T12:52:53Z) - PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control of the local training process leaves the global model susceptible to malicious manipulations on model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z) - KiNETGAN: Enabling Distributed Network Intrusion Detection through Knowledge-Infused Synthetic Data Generation [0.0]
We propose a knowledge-infused Generative Adversarial Network for generating synthetic network activity data (KiNETGAN)
Our approach enhances the resilience of distributed intrusion detection while addressing privacy concerns.
arXiv Detail & Related papers (2024-05-26T08:02:02Z) - A Transformer-Based Framework for Payload Malware Detection and Classification [0.0]
Techniques such as Deep Packet Inspection (DPI) have been introduced to allow IDSs to analyze the content of network packets.
In this paper, we propose a revolutionary DPI algorithm based on transformers adapted for the purpose of detecting malicious traffic.
arXiv Detail & Related papers (2024-03-27T03:25:45Z) - Privacy-Preserving Intrusion Detection in Software-defined VANET using Federated Learning with BERT [0.0]
The present study introduces a novel approach for intrusion detection using Federated Learning (FL) capabilities.
FL-BERT has yielded promising results, opening avenues for further investigation in this particular area of research.
Our results suggest that FL-BERT is a promising technique for enhancing attack detection.
arXiv Detail & Related papers (2024-01-14T18:32:25Z) - Machine Learning for Encrypted Malicious Traffic Detection: Approaches, Datasets and Comparative Study [6.267890584151111]
In the post-COVID-19 environment, malicious traffic encryption is growing rapidly.
We formulate a universal framework of machine learning based encrypted malicious traffic detection techniques.
We implement and compare 10 encrypted malicious traffic detection algorithms.
arXiv Detail & Related papers (2022-03-17T14:00:55Z) - Active Learning of Neural Collision Handler for Complex 3D Mesh Deformations [68.0524382279567]
We present a robust learning algorithm to detect and handle collisions in 3D deforming meshes.
Our approach outperforms supervised learning methods and achieves 93.8-98.1% accuracy.
arXiv Detail & Related papers (2021-10-08T04:08:31Z) - Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z) - Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z) - CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.