A Study on the Importance of Features in Detecting Advanced Persistent Threats Using Machine Learning
- URL: http://arxiv.org/abs/2502.07207v1
- Date: Tue, 11 Feb 2025 03:06:03 GMT
- Title: A Study on the Importance of Features in Detecting Advanced Persistent Threats Using Machine Learning
- Authors: Ehsan Hallaji, Roozbeh Razavi-Far, Mehrdad Saif
- Abstract summary: Advanced Persistent Threats (APTs) pose a significant security risk to organizations and industries.
Mitigating these sophisticated attacks is highly challenging due to the stealthy and persistent nature of APTs.
This paper aims to analyze measurements considered when recording network traffic and conclude which features contribute more to detecting APT samples.
- Score: 6.144680854063938
- License:
- Abstract: Advanced Persistent Threats (APTs) pose a significant security risk to organizations and industries. These attacks often lead to severe data breaches and compromise the system for a long time. Mitigating these sophisticated attacks is highly challenging due to the stealthy and persistent nature of APTs. Machine learning models are often employed to tackle this challenge by bringing automation and scalability to APT detection. Nevertheless, these intelligent methods are data-driven, and thus, highly affected by the quality and relevance of input data. This paper aims to analyze measurements considered when recording network traffic and conclude which features contribute more to detecting APT samples. To do this, we study the features associated with various APT cases and determine their importance using a machine learning framework. To ensure the generalization of our findings, several feature selection techniques are employed and paired with different classifiers to evaluate their effectiveness. Our findings provide insights into how APT detection can be enhanced in real-world scenarios.
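As a rough illustration of the pipeline described in the abstract (not the authors' exact setup), the sketch below pairs two common feature selection techniques with two classifiers and counts how often each feature survives selection across pairings. The synthetic dataset, the specific selectors and classifiers, and the F1 scoring are illustrative assumptions; a real study would use labeled APT network-traffic features.

```python
# Minimal sketch: pair feature selection techniques with classifiers and rank
# features by how consistently they are selected. Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder for APT network-traffic features (e.g., flow duration, packet counts);
# the class imbalance mimics the rarity of APT samples.
X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)

selectors = {
    "anova_f": SelectKBest(f_classif, k=10),
    "mutual_info": SelectKBest(mutual_info_classif, k=10),
}
classifiers = {
    "logreg": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

feature_votes = np.zeros(X.shape[1])
for sel_name, selector in selectors.items():
    for clf_name, clf in classifiers.items():
        pipe = Pipeline([("scale", StandardScaler()),
                         ("select", selector),
                         ("clf", clf)])
        # F1 is used because the positive (APT) class is a small minority.
        score = cross_val_score(pipe, X, y, cv=5, scoring="f1").mean()
        pipe.fit(X, y)
        feature_votes += pipe.named_steps["select"].get_support().astype(int)
        print(f"{sel_name} + {clf_name}: F1 = {score:.3f}")

# Features selected by more selector/classifier pairings are treated as more important.
ranking = np.argsort(feature_votes)[::-1]
print("Most consistently selected feature indices:", ranking[:10])
```

In practice, the classifier scores indicate how effective each selected subset is, while the vote counts give a simple cross-method view of feature importance.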
Related papers
- Exploring Feature Importance and Explainability Towards Enhanced ML-Based DoS Detection in AI Systems [3.3150909292716477]
Denial of Service (DoS) attacks pose a significant threat in the realm of AI systems security.
Statistical and machine learning (ML)-based DoS classification and detection approaches utilize a broad range of feature selection mechanisms to select a feature subset from networking traffic datasets.
In this paper, we investigate the importance of feature selection in improving ML-based detection of DoS attacks.
arXiv Detail & Related papers (2024-11-04T19:51:08Z) - CICAPT-IIOT: A provenance-based APT attack dataset for IIoT environment [1.841560106836332]
Industrial Internet of Things (IIoT) is a transformative paradigm that integrates smart sensors, advanced analytics, and robust connectivity within industrial processes.
Advanced Persistent Threats (APTs) pose a particularly grave concern due to their stealthy, prolonged, and targeted nature.
The CICAPT-IIoT dataset provides a foundation for developing holistic cybersecurity measures.
arXiv Detail & Related papers (2024-07-15T23:08:34Z) - A Federated Learning Approach for Multi-stage Threat Analysis in Advanced Persistent Threat Campaigns [25.97800399318373]
Multi-stage threats like advanced persistent threats (APT) pose severe risks by stealing data and destroying infrastructure.
APTs use novel attack vectors and evade signature-based detection by obfuscating their network presence.
This paper proposes a novel 3-phase unsupervised federated learning (FL) framework to detect APTs.
arXiv Detail & Related papers (2024-06-19T03:34:41Z) - RAPID: Robust APT Detection and Investigation Using Context-Aware Deep Learning [26.083244046813512]
We introduce a novel deep learning-based method for robust APT detection and investigation.
By utilizing self-supervised sequence learning and iteratively learned embeddings, our approach effectively adapts to dynamic system behavior.
Our evaluation demonstrates RAPID's effectiveness and computational efficiency in real-world scenarios.
arXiv Detail & Related papers (2024-06-08T05:39:24Z) - Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z) - Progressing from Anomaly Detection to Automated Log Labeling and Pioneering Root Cause Analysis [53.24804865821692]
This study introduces a taxonomy for log anomalies and explores automated data labeling to mitigate labeling challenges.
The study envisions a future where root cause analysis follows anomaly detection, unraveling the underlying triggers of anomalies.
arXiv Detail & Related papers (2023-12-22T15:04:20Z) - Text generation for dataset augmentation in security classification tasks [55.70844429868403]
This study evaluates the application of natural language text generators to fill this data gap in multiple security-related text classification tasks.
We find substantial benefits for GPT-3 data augmentation strategies in situations with severe limitations on known positive-class samples.
arXiv Detail & Related papers (2023-10-22T22:25:14Z) - ECS -- an Interactive Tool for Data Quality Assurance [63.379471124899915]
We present a novel approach for assuring data quality.
For this purpose, the mathematical foundations are first discussed, and the approach is then illustrated with multiple examples.
This enables the detection of data points whose properties are potentially harmful for use in safety-critical systems.
arXiv Detail & Related papers (2023-07-10T06:49:18Z) - On the Universal Adversarial Perturbations for Efficient Data-free Adversarial Detection [55.73320979733527]
We propose a data-agnostic adversarial detection framework, which induces different responses to universal adversarial perturbations (UAPs) in normal versus adversarial samples.
Experimental results show that our method achieves competitive detection performance on various text classification tasks.
arXiv Detail & Related papers (2023-06-27T02:54:07Z) - It Is All About Data: A Survey on the Effects of Data on Adversarial Robustness [4.1310970179750015]
Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to confuse the model into making a mistake.
To address this problem, the area of adversarial robustness investigates mechanisms behind adversarial attacks and defenses against these attacks.
arXiv Detail & Related papers (2023-03-17T04:18:03Z) - Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms through the application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.