Interpretable Anomaly Detection in Encrypted Traffic Using SHAP with Machine Learning Models
- URL: http://arxiv.org/abs/2505.16261v1
- Date: Thu, 22 May 2025 05:50:39 GMT
- Title: Interpretable Anomaly Detection in Encrypted Traffic Using SHAP with Machine Learning Models
- Authors: Kalindi Singh, Aayush Kashyap, Aswani Kumar Cherukuri
- Abstract summary: This study aims to develop an interpretable machine learning-based framework for anomaly detection in encrypted network traffic. Models are trained and evaluated on three benchmark encrypted traffic datasets. SHAP visualizations successfully revealed the most influential traffic features contributing to anomaly predictions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The widespread adoption of encrypted communication protocols such as HTTPS and TLS has enhanced data privacy but also rendered traditional anomaly detection techniques less effective, as they often rely on inspecting unencrypted payloads. This study develops an interpretable machine learning-based framework for anomaly detection in encrypted network traffic. It proposes a model-agnostic framework that integrates multiple machine learning classifiers with SHapley Additive exPlanations (SHAP) to ensure post-hoc model interpretability. The models are trained and evaluated on three benchmark encrypted traffic datasets. Performance is assessed using standard classification metrics, and SHAP is used to explain model predictions by attributing importance to individual input features. SHAP visualizations successfully revealed the most influential traffic features contributing to anomaly predictions, enhancing the transparency and trustworthiness of the models. Unlike conventional approaches that treat machine learning as a black box, this work combines robust classification techniques with explainability through SHAP, offering a novel interpretable anomaly detection system tailored for encrypted traffic environments. While the framework is generalizable, real-time deployment and performance under adversarial conditions require further investigation. Future work may explore adaptive models and real-time interpretability in operational network environments. This interpretable anomaly detection framework can be integrated into modern security operations for encrypted environments, allowing analysts not only to detect anomalies with high precision but also to understand why a model made a particular decision, a crucial capability in compliance-driven and mission-critical settings.
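The abstract's core idea, attributing a classifier's anomaly score to individual traffic features via Shapley values, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the flow features (duration, bytes sent, packet rate) and the synthetic data are invented for the example, and exact Shapley values are computed by brute-force coalition enumeration (feasible only for a handful of features) rather than with the `shap` library the paper uses.

```python
# Sketch: model-agnostic Shapley attribution for an anomaly classifier.
# Feature names and data are synthetic stand-ins for encrypted-flow features.
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic flows: columns are duration, bytes_sent, pkt_rate.
n = 400
X_norm = rng.normal([2.0, 500.0, 10.0], [0.5, 100.0, 2.0], size=(n, 3))
X_anom = rng.normal([8.0, 5000.0, 60.0], [1.0, 500.0, 10.0], size=(n, 3))
X = np.vstack([X_norm, X_anom])
y = np.r_[np.zeros(n), np.ones(n)]  # 1 = anomalous

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
f = lambda Z: clf.predict_proba(Z)[:, 1]  # anomaly probability


def exact_shap(f, x, background):
    """Exact Shapley values for f(x); absent features take the background mean."""
    d = len(x)
    base = background.mean(axis=0)

    def value(S):
        # Evaluate f with only the features in coalition S set to x.
        z = base.copy()
        z[list(S)] = x[list(S)]
        return f(z[None, :])[0]

    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in combinations(others, k):
                w = factorial(k) * factorial(d - k - 1) / factorial(d)
                phi[i] += w * (value(S + (i,)) - value(S))
    return phi


x = X_anom[0]
phi = exact_shap(f, x, X)
# Efficiency property: attributions sum to f(x) minus the baseline prediction.
assert abs(phi.sum() - (f(x[None, :])[0] - f(X.mean(axis=0)[None, :])[0])) < 1e-6
```

In the paper's setting the resulting per-feature attributions would be rendered as SHAP summary or force plots, letting an analyst see which flow statistics drove a given anomaly verdict.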
Related papers
- Exploiting Edge Features for Transferable Adversarial Attacks in Distributed Machine Learning [54.26807397329468]
This work explores a previously overlooked vulnerability in distributed deep learning systems: an adversary who intercepts the intermediate features transmitted between them can still pose a serious threat. We propose an exploitation strategy specifically designed for distributed settings.
arXiv Detail & Related papers (2025-07-09T20:09:00Z)
- Self-Supervised Transformer-based Contrastive Learning for Intrusion Detection Systems [1.1265248232450553]
This paper proposes a self-supervised contrastive learning approach for generalizable intrusion detection on raw packet sequences. Our framework exhibits better performance than existing NetFlow self-supervised methods, and our model provides a strong baseline for supervised intrusion detection with limited labeled data.
arXiv Detail & Related papers (2025-05-12T13:42:00Z)
- Research on Cloud Platform Network Traffic Monitoring and Anomaly Detection System based on Large Language Models [5.524069089627854]
This paper introduces a large language model (LLM)-based network traffic monitoring and anomaly detection system. A pre-trained large language model analyzes and predicts the probable network traffic, and an anomaly detection layer considers temporality and context. Results show that the designed model outperforms traditional methods in detection accuracy and computational efficiency.
arXiv Detail & Related papers (2025-04-22T07:42:07Z)
- Counterfactual Explanation for Auto-Encoder Based Time-Series Anomaly Detection [0.3199881502576702]
Auto-Encoders exhibit inherent opacity in their decision-making processes, hindering their practical implementation at scale. In this work, we employ a feature selector to select features and counterfactual explanations to give context to the model output. Our experimental findings illustrate that the proposed counterfactual approach can offer meaningful and valuable insights into the model's decision-making process.
arXiv Detail & Related papers (2025-01-03T19:30:11Z)
- Unsupervised Model Diagnosis [49.36194740479798]
This paper proposes Unsupervised Model Diagnosis (UMO) to produce semantic counterfactual explanations without any user guidance.
Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources.
arXiv Detail & Related papers (2024-10-08T17:59:03Z)
- Open-Set Deepfake Detection: A Parameter-Efficient Adaptation Method with Forgery Style Mixture [58.60915132222421]
We introduce an approach that is both general and parameter-efficient for face forgery detection.
We design a forgery-style mixture formulation that augments the diversity of forgery source domains.
We show that the designed model achieves state-of-the-art generalizability with significantly reduced trainable parameters.
arXiv Detail & Related papers (2024-08-23T01:53:36Z)
- DECIDER: Leveraging Foundation Model Priors for Improved Model Failure Detection and Explanation [18.77296551727931]
We propose DECIDER, a novel approach that leverages priors from large language models (LLMs) and vision-language models (VLMs) to detect failures in image models.
DECIDER consistently achieves state-of-the-art failure detection performance, significantly outperforming baselines in terms of the overall Matthews correlation coefficient.
arXiv Detail & Related papers (2024-08-01T07:08:11Z)
- Enhancing Intrusion Detection In Internet Of Vehicles Through Federated Learning [0.0]
Federated learning allows multiple parties to collaborate and learn a shared model without sharing their raw data.
Our paper proposes a federated learning framework for intrusion detection in Internet of Vehicles (IOVs) using the CIC-IDS 2017 dataset.
arXiv Detail & Related papers (2023-11-23T04:04:20Z)
- IoTGeM: Generalizable Models for Behaviour-Based IoT Attack Detection [3.3772986620114387]
We present an approach for modelling IoT network attacks that focuses on generalizability while also improving detection performance.
First, we present an improved rolling window approach for feature extraction, and introduce a multi-step feature selection process that reduces overfitting.
Second, we build and test models using isolated train and test datasets, thereby avoiding common data leaks.
Third, we rigorously evaluate our methodology using a diverse portfolio of machine learning models, evaluation metrics and datasets.
arXiv Detail & Related papers (2023-10-17T21:46:43Z)
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data can stem from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Leveraging a Probabilistic PCA Model to Understand the Multivariate Statistical Network Monitoring Framework for Network Security Anomaly Detection [64.1680666036655]
We revisit anomaly detection techniques based on PCA from a probabilistic generative model point of view.
We have evaluated the mathematical model using two different datasets.
arXiv Detail & Related papers (2023-02-02T13:41:18Z)
- CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks [58.29502185344086]
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks.
It is important to provide provable guarantees for deep learning models against semantically meaningful input transformations.
We propose a new universal probabilistic certification approach based on Chernoff-Cramer bounds.
arXiv Detail & Related papers (2021-09-22T12:46:04Z)