RAPID: Robust APT Detection and Investigation Using Context-Aware Deep Learning
- URL: http://arxiv.org/abs/2406.05362v1
- Date: Sat, 8 Jun 2024 05:39:24 GMT
- Title: RAPID: Robust APT Detection and Investigation Using Context-Aware Deep Learning
- Authors: Yonatan Amaru, Prasanna Wudali, Yuval Elovici, Asaf Shabtai
- Abstract summary: We introduce a novel deep learning-based method for robust APT detection and investigation.
By utilizing self-supervised sequence learning and iteratively learned embeddings, our approach effectively adapts to dynamic system behavior.
Our evaluation demonstrates RAPID's effectiveness and computational efficiency in real-world scenarios.
- Score: 26.083244046813512
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advanced persistent threats (APTs) pose significant challenges for organizations, leading to data breaches, financial losses, and reputational damage. Existing provenance-based approaches for APT detection often struggle with high false positive rates, a lack of interpretability, and an inability to adapt to evolving system behavior. We introduce RAPID, a novel deep learning-based method for robust APT detection and investigation, leveraging context-aware anomaly detection and alert tracing. By utilizing self-supervised sequence learning and iteratively learned embeddings, our approach effectively adapts to dynamic system behavior. The use of provenance tracing both enriches the alerts and enhances the detection capabilities of our approach. Our extensive evaluation demonstrates RAPID's effectiveness and computational efficiency in real-world scenarios. In addition, RAPID achieves higher precision and recall than state-of-the-art methods, significantly reducing false positives. RAPID integrates contextual information and facilitates a smooth transition from detection to investigation, providing security teams with detailed insights to efficiently address APT threats.
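The listing does not include the paper's implementation; purely as a minimal sketch of the self-supervised sequence-learning idea described in the abstract (train a model to predict the next system event in benign traces, then treat poorly predicted events as anomalous), the snippet below uses an invented model and synthetic data. The vocabulary size, architecture, and scoring are assumptions, not RAPID's actual design.
```python
# Illustrative sketch only -- not RAPID's code. A small next-event predictor
# trained self-supervised on benign event sequences; events the model assigns
# low probability are treated as anomalous. All names and sizes are invented.
import torch
import torch.nn as nn

VOCAB, EMBED, HIDDEN = 64, 32, 64   # hypothetical event-type vocabulary and sizes

class NextEventModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMBED)
        self.rnn = nn.GRU(EMBED, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, VOCAB)

    def forward(self, seq):                      # seq: (batch, time) of event IDs
        hidden, _ = self.rnn(self.embed(seq))
        return self.head(hidden)                 # logits for the next event at each step

model = NextEventModel()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(reduction="none")

# Self-supervised training: predict event t+1 from events up to t (synthetic data here).
benign = torch.randint(0, VOCAB, (256, 20))
for _ in range(5):
    logits = model(benign[:, :-1])
    loss = loss_fn(logits.reshape(-1, VOCAB), benign[:, 1:].reshape(-1)).mean()
    optim.zero_grad(); loss.backward(); optim.step()

# Scoring: per-event negative log-likelihood; high values flag behaviour the
# model has not learned to expect from benign activity.
with torch.no_grad():
    trace = torch.randint(0, VOCAB, (1, 20))
    scores = loss_fn(model(trace[:, :-1]).reshape(-1, VOCAB), trace[:, 1:].reshape(-1))
    print("per-event anomaly scores:", [round(s, 2) for s in scores.tolist()])
```
In a real deployment the event IDs would come from parsed audit or provenance records, and high-scoring events would be the candidates enriched by the provenance tracing the abstract describes.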
Related papers
- Attention Tracker: Detecting Prompt Injection Attacks in LLMs [62.247841717696765]
Large Language Models (LLMs) have revolutionized various domains but remain vulnerable to prompt injection attacks.
We introduce the concept of the distraction effect, where specific attention heads shift focus from the original instruction to the injected instruction.
We propose Attention Tracker, a training-free detection method that tracks attention patterns on the instruction to detect prompt injection attacks.
arXiv Detail & Related papers (2024-11-01T04:05:59Z)
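As a toy illustration of the kind of signal the Attention Tracker entry above relies on (how much attention the most recent position pays to the original instruction), the sketch below uses a synthetic attention tensor. The focus metric, token span, and threshold are made-up stand-ins, not the paper's actual detector.
```python
# Toy illustration of an attention-based "instruction focus" signal, in the
# spirit of the Attention Tracker summary above; the real method's choice of
# heads and metric differs. The attention tensor here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
layers, heads, seq_len = 4, 8, 40
# attn[l, h, q, k]: attention that query position q pays to key position k.
attn = rng.dirichlet(np.ones(seq_len), size=(layers, heads, seq_len))

instruction = slice(0, 10)        # hypothetical token span of the original instruction
query_pos = seq_len - 1           # the most recent (generation) position

# Fraction of the last position's attention mass that lands on the instruction,
# averaged over all heads and layers. A low value hints that attention has been
# "distracted" away from the instruction, e.g. toward injected text.
focus = attn[:, :, query_pos, instruction].sum(axis=-1).mean()
THRESHOLD = 0.25                  # invented decision threshold
print(f"instruction focus: {focus:.3f}")
print("possible prompt injection" if focus < THRESHOLD else "attention looks normal")
```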
- Slot: Provenance-Driven APT Detection through Graph Reinforcement Learning [26.403625710805418]
Advanced Persistent Threats (APTs) represent sophisticated cyberattacks characterized by their ability to remain undetected for extended periods.
We propose Slot, an advanced APT detection approach based on provenance graphs and graph reinforcement learning.
We show Slot's outstanding accuracy, efficiency, adaptability, and robustness in APT detection, with most metrics surpassing state-of-the-art methods.
arXiv Detail & Related papers (2024-10-23T14:28:32Z)
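Slot's combination of provenance graphs and graph reinforcement learning is specified in the paper; the snippet below is only a toy flavour of reinforcement learning on a provenance graph, not Slot's algorithm: tabular Q-learning over a tiny, invented process/file graph learns which edges lead toward a node already flagged as malicious.
```python
# Toy flavour of reinforcement learning on a provenance graph -- NOT Slot's
# algorithm. Tabular Q-learning learns which edges of a tiny, invented
# process/file graph lead toward a node already flagged as malicious.
import random
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("bash", "curl"), ("curl", "/tmp/payload"), ("/tmp/payload", "implant"),
    ("bash", "vim"), ("vim", "notes.txt"),
])
malicious = {"implant"}                        # hypothetical confirmed alert

q = {edge: 0.0 for edge in g.edges}            # one Q-value per provenance edge
alpha, gamma = 0.5, 0.9

for _ in range(200):                           # random-walk episodes from the root
    node = "bash"
    while True:
        successors = list(g.successors(node))
        if not successors:
            break
        nxt = random.choice(successors)        # pure exploration, for brevity
        reward = 1.0 if nxt in malicious else 0.0
        future = max((q[(nxt, w)] for w in g.successors(nxt)), default=0.0)
        q[(node, nxt)] += alpha * (reward + gamma * future - q[(node, nxt)])
        node = nxt

# High-Q edges trace a suspicious path from the root toward the alert.
for (src, dst), value in sorted(q.items(), key=lambda kv: -kv[1]):
    print(f"{src:>14} -> {dst:<14} Q={value:.2f}")
```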
- LTRDetector: Exploring Long-Term Relationship for Advanced Persistent Threats Detection [20.360010908574303]
Advanced persistent threats (APTs) are challenging to detect due to their prolonged duration, infrequent occurrence, and adept concealment techniques.
Existing approaches primarily concentrate on the observable traits of attack behaviors, neglecting the intricate relationships formed throughout the persistent attack lifecycle.
We present LTRDetector, an innovative APT detection framework that operates in an end-to-end, holistic manner.
arXiv Detail & Related papers (2024-04-04T02:30:51Z)
- AI-Based Energy Transportation Safety: Pipeline Radial Threat Estimation Using Intelligent Sensing System [52.93806509364342]
This paper proposes a radial threat estimation method for energy pipelines based on distributed optical fiber sensing technology.
We introduce a continuous multi-view and multi-domain feature fusion methodology to extract comprehensive signal features.
We incorporate the concept of transfer learning through a pre-trained model, enhancing both recognition accuracy and training efficiency.
arXiv Detail & Related papers (2023-12-18T12:37:35Z)
- CrossDF: Improving Cross-Domain Deepfake Detection with Deep Information Decomposition [53.860796916196634]
We propose a Deep Information Decomposition (DID) framework to enhance the performance of Cross-dataset Deepfake Detection (CrossDF).
Unlike most existing deepfake detection methods, our framework prioritizes high-level semantic features over specific visual artifacts.
It adaptively decomposes facial features into deepfake-related and irrelevant information, only using the intrinsic deepfake-related information for real/fake discrimination.
arXiv Detail & Related papers (2023-09-30T12:30:25Z)
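The DID framework's exact architecture is in the CrossDF paper above; the sketch below only illustrates the general pattern it describes, decomposing a feature vector into a task-relevant part (used for real/fake classification) and an irrelevant part, with a simple orthogonality penalty keeping the two apart. Module names, sizes, and the penalty weight are assumptions.
```python
# Rough sketch of decomposing features into task-relevant and task-irrelevant
# parts (not the paper's DID architecture). An orthogonality penalty
# discourages the two parts from sharing information.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Decomposer(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.relevant = nn.Linear(dim, dim)      # "deepfake-related" branch
        self.irrelevant = nn.Linear(dim, dim)    # everything else
        self.classifier = nn.Linear(dim, 2)      # real / fake

    def forward(self, feats):
        r, i = self.relevant(feats), self.irrelevant(feats)
        logits = self.classifier(r)              # classify from the relevant part only
        # cosine similarity between the branches, pushed toward zero
        ortho = F.cosine_similarity(r, i, dim=-1).abs().mean()
        return logits, ortho

model = Decomposer()
feats = torch.randn(8, 128)                      # stand-in for backbone face features
labels = torch.randint(0, 2, (8,))
logits, ortho = model(feats)
loss = F.cross_entropy(logits, labels) + 0.1 * ortho
loss.backward()
print(f"loss={loss.item():.3f}  orthogonality penalty={ortho.item():.3f}")
```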
- Combating Advanced Persistent Threats: Challenges and Solutions [20.81151411772311]
The rise of advanced persistent threats (APTs) has marked a significant cybersecurity challenge.
Provenance graph-based kernel-level auditing has emerged as a promising approach to enhance visibility and traceability.
This paper proposes an efficient and robust APT defense scheme leveraging provenance graphs, including a network-level distributed audit model for cost-effective lateral attack reconstruction.
arXiv Detail & Related papers (2023-09-18T05:46:11Z)
- TBDetector: Transformer-Based Detector for Advanced Persistent Threats with Provenance Graph [17.518551273453888]
We propose TBDetector, a transformer-based detection method for APT attacks.
Provenance graphs provide rich historical information and a powerful ability to correlate historical attack activity.
To evaluate the effectiveness of the proposed method, we have conducted experiments on five public datasets.
arXiv Detail & Related papers (2023-04-06T03:08:09Z)
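TBDetector's model is described in the paper above; as a generic, hedged sketch of the broader pattern of using a transformer encoder to classify a sequence of provenance-derived feature vectors as benign or attack, the snippet below uses random data and an invented configuration.
```python
# Generic sketch of transformer-based sequence classification over
# provenance-derived features -- not TBDetector's actual architecture.
# Feature dimensions, depth, and data are invented.
import torch
import torch.nn as nn

class SeqClassifier(nn.Module):
    def __init__(self, feat_dim=32, d_model=64, n_classes=2):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                 # x: (batch, time, feat_dim)
        h = self.encoder(self.proj(x))
        return self.head(h.mean(dim=1))   # pool over time, then classify benign/attack

model = SeqClassifier()
sequences = torch.randn(8, 50, 32)        # stand-in for per-event provenance features
labels = torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(sequences), labels)
loss.backward()
print(f"training loss on random data: {loss.item():.3f}")
```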
- PULL: Reactive Log Anomaly Detection Based On Iterative PU Learning [58.85063149619348]
We propose PULL, an iterative log analysis method for reactive anomaly detection based on estimated failure time windows.
Our evaluation shows that PULL consistently outperforms ten benchmark baselines across three different datasets.
arXiv Detail & Related papers (2023-01-25T16:34:43Z)
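PULL's estimation of failure time windows and its exact procedure are in the paper above; the sketch below only shows the basic iterative positive-unlabeled (PU) learning loop such methods build on: start from a handful of known positives, treat the rest as unlabeled/negative, and repeatedly fold confidently scored samples into the positive set. The features, classifier, and threshold are synthetic stand-ins.
```python
# Simplified flavour of iterative positive-unlabeled (PU) learning for log
# anomaly detection -- not PULL's exact procedure. Features and labels are
# synthetic; in practice they would come from parsed log templates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (500, 5)), rng.normal(2, 1, (50, 5))])
positive = np.zeros(len(X), dtype=bool)
positive[-10:] = True                      # few logs known to fall in a failure window

labels = positive.astype(int)              # start: unlabeled logs treated as negative
for _ in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    scores = clf.predict_proba(X)[:, 1]
    # Iteration step: pull clearly anomalous unlabeled logs into the positive set.
    labels = (positive | (scores > 0.9)).astype(int)

print("positives after iteration:", labels.sum())
```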
- Detecting Irregular Network Activity with Adversarial Learning and Expert Feedback [14.188603782159372]
CAAD employs contrastive learning in an adversarial setup to learn effective representations of normal and anomalous behavior in wireless networks.
We conduct rigorous performance comparisons of CAAD with several state-of-the-art anomaly detection techniques and verify that CAAD yields a mean performance improvement of 92.84%.
arXiv Detail & Related papers (2022-10-01T20:44:14Z)
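CAAD's adversarial setup and expert-feedback loop are beyond a short snippet; the sketch below only illustrates the contrastive-learning ingredient on synthetic "normal traffic" feature vectors, using a standard NT-Xent-style loss. Network sizes, augmentations, and data are invented.
```python
# Bare-bones contrastive representation learning on synthetic network-traffic
# features, illustrating only the contrastive ingredient of an approach like
# CAAD; the adversarial setup and expert feedback are omitted.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
encoder = torch.nn.Sequential(
    torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 8))
optim = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent-style loss: (z1[i], z2[i]) are positives, all other rows negatives."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = (z @ z.t()) / tau
    sim = sim.masked_fill(torch.eye(z.size(0), dtype=torch.bool), float("-inf"))
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

normal = torch.randn(64, 16)                           # stand-in for benign traffic features
for _ in range(20):
    view1 = normal + 0.05 * torch.randn_like(normal)   # two noisy "views" per sample
    view2 = normal + 0.05 * torch.randn_like(normal)
    loss = nt_xent(encoder(view1), encoder(view2))
    optim.zero_grad(); loss.backward(); optim.step()
print(f"final contrastive loss: {loss.item():.3f}")
```
At detection time, representations of new traffic that sit far from the learned normal cluster would typically be flagged as anomalous.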
- Surveillance Evasion Through Bayesian Reinforcement Learning [78.79938727251594]
We consider a 2D continuous path planning problem with a completely unknown intensity of random termination.
The observers' surveillance intensity is a priori unknown and must be learned through repeated path planning.
arXiv Detail & Related papers (2021-09-30T02:29:21Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behaviour analytics (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate attacks against such systems by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)