A Federated Learning Approach for Multi-stage Threat Analysis in Advanced Persistent Threat Campaigns
- URL: http://arxiv.org/abs/2406.13186v1
- Date: Wed, 19 Jun 2024 03:34:41 GMT
- Title: A Federated Learning Approach for Multi-stage Threat Analysis in Advanced Persistent Threat Campaigns
- Authors: Florian Nelles, Abbas Yazdinejad, Ali Dehghantanha, Reza M. Parizi, Gautam Srivastava
- Abstract summary: Multi-stage threats like advanced persistent threats (APTs) pose severe risks by stealing data and destroying infrastructure.
APTs use novel attack vectors and evade signature-based detection by obfuscating their network presence.
This paper proposes a novel 3-phase unsupervised federated learning (FL) framework to detect APTs.
- Score: 25.97800399318373
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-stage threats like advanced persistent threats (APTs) pose severe risks by stealing data and destroying infrastructure, and they are difficult to detect. APTs use novel attack vectors and evade signature-based detection by obfuscating their network presence, often going unnoticed because of their novelty. Although machine learning models offer high accuracy, they still struggle to identify true APT behavior and overwhelm analysts with excessive data. Effective detection requires training on multiple datasets from various clients, which introduces privacy issues under regulations like GDPR. To address these challenges, this paper proposes a novel 3-phase unsupervised federated learning (FL) framework to detect APTs. It identifies unique log event types, extracts suspicious patterns from related log events, and orders them by complexity and frequency. The framework ensures privacy through a federated approach and enhances security using Paillier's partially homomorphic encryption. Tested on the SoTM 34 dataset, our framework compares favorably against traditional methods, demonstrating efficient pattern extraction and analysis from log files, reducing analyst workload, and maintaining stringent data privacy. This approach addresses significant gaps in current methodologies, offering a robust solution to APT detection in compliance with privacy laws.
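The privacy guarantee above hinges on the additive homomorphism of Paillier encryption: multiplying ciphertexts yields an encryption of the sum of the plaintexts, so a server can aggregate clients' contributions without decrypting any of them. The following is a minimal Python sketch of that property with toy parameters; the key size, the aggregated values, and all helper names are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of Paillier additive homomorphism as used for secure
# federated aggregation (toy parameters, NOT the paper's code).
import math
import random

def l_func(x, n):
    return (x - 1) // n

def keygen(p, q):
    """Generate a toy Paillier key pair from two primes p and q."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                      # standard simplification g = n + 1
    n_sq = n * n
    mu = pow(l_func(pow(g, lam, n_sq), n), -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n_sq = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    n_sq = n * n
    return (l_func(pow(c, lam, n_sq), n) * mu) % n

# Each client encrypts its local statistic; the server multiplies the
# ciphertexts, which corresponds to adding the plaintexts (additive HE).
pub, priv = keygen(1009, 1013)          # toy primes; real keys are ~2048-bit
client_updates = [17, 42, 5]            # hypothetical local statistics
ciphertexts = [encrypt(pub, m) for m in client_updates]
aggregate = 1
for c in ciphertexts:
    aggregate = (aggregate * c) % (pub[0] ** 2)
assert decrypt(pub, priv, aggregate) == sum(client_updates)
```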
Related papers
- APT-LLM: Embedding-Based Anomaly Detection of Cyber Advanced Persistent Threats Using Large Language Models [4.956245032674048]
APTs pose a major cybersecurity challenge due to their stealth and ability to mimic normal system behavior.
This paper introduces APT-LLM, a novel embedding-based anomaly detection framework.
It integrates large language models (LLMs) with autoencoder architectures to detect APTs.
arXiv Detail & Related papers (2025-02-13T15:01:18Z)
- A Study on the Importance of Features in Detecting Advanced Persistent Threats Using Machine Learning [6.144680854063938]
Advanced Persistent Threats (APTs) pose a significant security risk to organizations and industries.
Mitigating these sophisticated attacks is highly challenging due to the stealthy and persistent nature of APTs.
This paper analyzes the measurements taken when recording network traffic and identifies which features contribute most to detecting APT samples.
arXiv Detail & Related papers (2025-02-11T03:06:03Z)
- CONTINUUM: Detecting APT Attacks through Spatial-Temporal Graph Neural Networks [0.9553673944187253]
Advanced Persistent Threats (APTs) represent a significant challenge in cybersecurity.
Traditional Intrusion Detection Systems (IDS) often fall short in detecting these multi-stage attacks.
arXiv Detail & Related papers (2025-01-06T12:43:59Z)
- Hack Me If You Can: Aggregating AutoEncoders for Countering Persistent Access Threats Within Highly Imbalanced Data [4.619717316983648]
Advanced Persistent Threats (APTs) are sophisticated, targeted cyberattacks designed to gain unauthorized access to systems and remain undetected for extended periods.
We present AE-APT, a deep learning-based tool for APT detection that features a family of AutoEncoder methods ranging from a basic one to a Transformer-based one.
The outcomes showed that AE-APT has significantly higher detection rates compared to its competitors, indicating superior performance in detecting and ranking anomalies.
arXiv Detail & Related papers (2024-06-27T14:45:38Z)
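As background for the AE-APT entry above, here is a minimal PyTorch sketch of reconstruction-error anomaly scoring: an autoencoder is trained on (mostly) benign records, and test records are ranked by how poorly they are reconstructed. The architecture, hyperparameters, and random placeholder data are assumptions for illustration, not the authors' configuration.

```python
# Reconstruction-error anomaly ranking with a small autoencoder
# (illustrative sketch only).
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(),
            nn.Linear(32, n_features),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_and_score(benign: torch.Tensor, test: torch.Tensor, epochs: int = 50):
    """Train on (mostly) benign records, then score test records by
    reconstruction error: higher error = more anomalous."""
    model = AutoEncoder(benign.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(benign), benign)
        loss.backward()
        opt.step()
    with torch.no_grad():
        errors = ((model(test) - test) ** 2).mean(dim=1)
    return errors

# Hypothetical usage with random data standing in for feature vectors.
benign = torch.randn(512, 20)
test = torch.randn(64, 20)
scores = train_and_score(benign, test)
ranking = torch.argsort(scores, descending=True)   # most anomalous first
```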
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods train directly on centralized data.
The paper proposes a novel federated face forgery detection learning framework with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- Model Stealing Attack against Graph Classification with Authenticity, Uncertainty and Diversity [80.16488817177182]
GNNs are vulnerable to model stealing attacks, in which an adversary duplicates the target model through query access.
We introduce three model stealing attacks to adapt to different actual scenarios.
arXiv Detail & Related papers (2023-12-18T05:42:31Z)
- Text generation for dataset augmentation in security classification tasks [55.70844429868403]
This study evaluates the application of natural language text generators to fill data gaps in multiple security-related text classification tasks.
We find substantial benefits for GPT-3 data augmentation strategies in situations with severe limitations on known positive-class samples.
arXiv Detail & Related papers (2023-10-22T22:25:14Z)
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- On the Universal Adversarial Perturbations for Efficient Data-free Adversarial Detection [55.73320979733527]
We propose a data-agnostic adversarial detection framework that induces different responses to UAPs from normal and adversarial samples.
Experimental results show that our method achieves competitive detection performance on various text classification tasks.
arXiv Detail & Related papers (2023-06-27T02:54:07Z)
- Exploring Memorization in Adversarial Training [58.38336773082818]
We investigate the memorization effect in adversarial training (AT) for promoting a deeper understanding of capacity, convergence, generalization, and especially robust overfitting.
We propose a new mitigation algorithm motivated by detailed memorization analyses.
arXiv Detail & Related papers (2021-06-03T05:39:57Z)
- RANK: AI-assisted End-to-End Architecture for Detecting Persistent Attacks in Enterprise Networks [2.294014185517203]
We present an end-to-end AI-assisted architecture for detecting Advanced Persistent Threats (APTs).
The architecture is composed of four consecutive steps: 1) alert templating and merging, 2) alert graph construction, 3) alert graph partitioning into incidents, and 4) incident scoring and ordering.
Extensive results show a three-order-of-magnitude reduction in the amount of data to be reviewed by the analyst, innovative extraction of incidents, and security-wise scoring of the extracted incidents.
arXiv Detail & Related papers (2021-01-06T15:59:51Z)
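To make steps 2-4 of the RANK pipeline above concrete, the toy Python sketch below links alerts that share an entity into a graph, partitions the graph into incidents via connected components, and orders incidents by size. The entity-sharing heuristic, the networkx dependency, and the size-based scoring are illustrative assumptions rather than the paper's actual method.

```python
# Toy alert-graph pipeline: construction, partitioning, and scoring.
import networkx as nx

# Hypothetical merged alerts: (alert_id, {entities such as hosts/users/IPs}).
alerts = [
    ("a1", {"host-7", "user-alice"}),
    ("a2", {"host-7", "10.0.0.5"}),
    ("a3", {"10.0.0.5"}),
    ("a4", {"host-9"}),
]

# Step 2: alert graph construction - connect alerts sharing an entity.
G = nx.Graph()
G.add_nodes_from(aid for aid, _ in alerts)
for i, (aid_i, ents_i) in enumerate(alerts):
    for aid_j, ents_j in alerts[i + 1:]:
        if ents_i & ents_j:
            G.add_edge(aid_i, aid_j)

# Step 3: partition into incidents - here, simply the connected components.
incidents = [sorted(c) for c in nx.connected_components(G)]

# Step 4: score and order incidents; a real system would use richer
# features than component size.
incidents.sort(key=len, reverse=True)
for rank, incident in enumerate(incidents, start=1):
    print(f"incident #{rank}: {incident}")
```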