Enabling Efficient Attack Investigation via Human-in-the-Loop Security Analysis
- URL: http://arxiv.org/abs/2211.05403v2
- Date: Tue, 03 Dec 2024 05:18:59 GMT
- Title: Enabling Efficient Attack Investigation via Human-in-the-Loop Security Analysis
- Authors: Xinyu Yang, Haoyuan Liu, Saimon Amanuel Tsegai, Peng Gao
- Abstract summary: Raptor is a defense system that enables human analysts to effectively analyze large-scale system provenance. ProvQL offers essential primitives for various types of attack analyses. Raptor provides an optimized execution engine for efficient language execution.
- Score: 19.805667450941403
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: System auditing is a vital technique for collecting system call events as system provenance and investigating complex multi-step attacks such as Advanced Persistent Threats. However, existing attack investigation methods struggle to uncover long attack sequences due to the massive volume of system provenance data and their inability to focus on attack-relevant parts. In this paper, we present Raptor, a defense system that enables human analysts to effectively analyze large-scale system provenance to reveal multi-step attack sequences. Raptor introduces an expressive domain-specific language, ProvQL, that offers essential primitives for various types of attack analyses (e.g., attack pattern search, attack dependency tracking) with user-defined constraints, enabling analysts to focus on attack-relevant parts and iteratively sift through the large provenance data. Moreover, Raptor provides an optimized execution engine for efficient language execution. Our extensive evaluations on a wide range of attack scenarios demonstrate the practical effectiveness of Raptor in facilitating timely attack investigation.
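The abstract names attack dependency tracking as one of ProvQL's analysis primitives but does not show its syntax. As a rough illustration of what that primitive computes, the sketch below implements backward dependency tracking over a toy provenance graph in plain Python; the event tuples and entity names are hypothetical and this is not Raptor's actual implementation.

```python
# Illustrative sketch (not Raptor's implementation): backward dependency
# tracking over a toy provenance graph. Each event is an edge
# (source entity, operation, target entity) derived from a system call;
# starting from a point-of-interest entity, we walk edges backward to
# collect every entity it transitively depends on.
from collections import defaultdict

def backward_track(events, poi):
    """Return all entities the point-of-interest transitively depends on."""
    parents = defaultdict(set)
    for src, op, dst in events:
        parents[dst].add(src)          # dst causally depends on src
    seen, stack = set(), [poi]
    while stack:
        node = stack.pop()
        for p in parents[node]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

# Hypothetical audit events: (source entity, operation, target entity)
events = [
    ("firefox", "write", "/tmp/dropper"),
    ("/tmp/dropper", "exec", "malware.exe"),
    ("malware.exe", "write", "/etc/passwd"),
    ("vim", "write", "/home/notes.txt"),    # unrelated benign activity
]

print(sorted(backward_track(events, "/etc/passwd")))
# → ['/tmp/dropper', 'firefox', 'malware.exe']
```

Note how the unrelated `vim` event is excluded automatically: focusing the traversal on attack-relevant edges, with user-defined constraints pruning the rest, is the kind of filtering the paper describes ProvQL enabling at scale.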
Related papers
- TopicAttack: An Indirect Prompt Injection Attack via Topic Transition [71.81906608221038]
Large language models (LLMs) are vulnerable to indirect prompt injection attacks.
We propose TopicAttack, which prompts the LLM to generate a fabricated transition prompt that gradually shifts the topic toward the injected instruction.
We find that a higher injected-to-original attention ratio leads to a greater success probability, and our method achieves a much higher ratio than the baseline methods.
arXiv Detail & Related papers (2025-07-18T06:23:31Z)
- CLIProv: A Contrastive Log-to-Intelligence Multimodal Approach for Threat Detection and Provenance Analysis [6.680853786327484]
This paper introduces CLIProv, a novel approach for detecting threat behaviors in a host system.
By leveraging attack pattern information in threat intelligence, CLIProv identifies TTPs and generates complete and concise attack scenarios.
Compared to state-of-the-art methods, CLIProv achieves higher precision and significantly improved detection efficiency.
arXiv Detail & Related papers (2025-07-12T04:20:00Z)
- A Survey on Model Extraction Attacks and Defenses for Large Language Models [55.60375624503877]
Model extraction attacks pose significant security threats to deployed language models.
This survey provides a comprehensive taxonomy of extraction attacks and defenses, categorizing attacks into functionality extraction, training data extraction, and prompt-targeted attacks.
We examine defense mechanisms organized into model protection, data privacy protection, and prompt-targeted strategies, evaluating their effectiveness across different deployment scenarios.
arXiv Detail & Related papers (2025-06-26T22:02:01Z)
- An In-kernel Forensics Engine for Investigating Evasive Attacks [0.28894038270224864]
This paper introduces LASE, an open-source Low-Artifact Forensics Engine for performing threat analysis and forensics in the Windows operating system.
LASE augments current analysis tools by providing detailed, system-wide monitoring capabilities while minimizing detectable artifacts.
arXiv Detail & Related papers (2025-05-10T03:40:17Z)
- Exploring Answer Set Programming for Provenance Graph-Based Cyber Threat Detection: A Novel Approach [4.302577059401172]
Provenance graphs are useful tools for representing system-level activities in cybersecurity.
This paper presents a novel approach using ASP to model and analyze provenance graphs.
arXiv Detail & Related papers (2025-01-24T14:57:27Z)
- HijackRAG: Hijacking Attacks against Retrieval-Augmented Large Language Models [18.301965456681764]
We reveal a novel vulnerability, the retrieval prompt hijack attack (HijackRAG).
HijackRAG enables attackers to manipulate the retrieval mechanisms of RAG systems by injecting malicious texts into the knowledge database.
We propose both black-box and white-box attack strategies tailored to different levels of the attacker's knowledge.
arXiv Detail & Related papers (2024-10-30T09:15:51Z)
- Slot: Provenance-Driven APT Detection through Graph Reinforcement Learning [24.84110719035862]
Advanced Persistent Threats (APTs) represent sophisticated cyberattacks characterized by their ability to remain undetected for extended periods.
We propose Slot, an advanced APT detection approach based on provenance graphs and graph reinforcement learning.
We show Slot's outstanding accuracy, efficiency, adaptability, and robustness in APT detection, with most metrics surpassing state-of-the-art methods.
arXiv Detail & Related papers (2024-10-23T14:28:32Z)
- AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with a >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
arXiv Detail & Related papers (2024-08-04T09:53:50Z)
- Corpus Poisoning via Approximate Greedy Gradient Descent [48.5847914481222]
We propose Approximate Greedy Gradient Descent, a new attack on dense retrieval systems based on the widely used HotFlip method for generating adversarial passages.
We show that our method achieves a high attack success rate on several datasets and using several retrievers, and can generalize to unseen queries and new domains.
arXiv Detail & Related papers (2024-06-07T17:02:35Z)
- It Is Time To Steer: A Scalable Framework for Analysis-driven Attack Graph Generation [50.06412862964449]
Attack Graphs (AGs) represent the best-suited solution for supporting cyber risk assessment of multi-step attacks on computer networks.
Current solutions propose to address the generation problem from the algorithmic perspective and postulate the analysis only after the generation is complete.
This paper rethinks the classic AG analysis through a novel workflow in which the analyst can query the system anytime.
arXiv Detail & Related papers (2023-12-27T10:44:58Z)
- A Hierarchical Security Events Correlation Model for Real-time Cyber Threat Detection and Response [0.0]
We develop a novel hierarchical event correlation model that promises to reduce the number of alerts issued by an Intrusion Detection System.
The proposed model takes the best of features from similarity and graph-based correlation techniques to deliver an ensemble capability not possible by either approach separately.
The model is implemented as a proof of concept, with experiments run on the DARPA 1999 intrusion detection dataset.
arXiv Detail & Related papers (2023-12-02T20:07:40Z)
- Token-Level Adversarial Prompt Detection Based on Perplexity Measures and Contextual Information [67.78183175605761]
Large Language Models are susceptible to adversarial prompt attacks.
This vulnerability underscores a significant concern regarding the robustness and reliability of LLMs.
We introduce a novel approach to detecting adversarial prompts at a token level.
arXiv Detail & Related papers (2023-11-20T03:17:21Z)
- Investigative Pattern Detection Framework for Counterterrorism [0.09999629695552192]
Automated tools are required to extract information to respond to queries from analysts, continually scan new information, integrate it with past events, and then alert about emerging threats.
We address challenges in investigative pattern detection and develop an Investigative Pattern Detection Framework for Counterterrorism (INSPECT).
The framework integrates numerous computing tools that include machine learning techniques to identify behavioral indicators and graph pattern matching techniques to detect risk profiles/groups.
arXiv Detail & Related papers (2023-10-30T00:45:05Z)
- Streamlining Attack Tree Generation: A Fragment-Based Approach [39.157069600312774]
We present a novel fragment-based attack graph generation approach that utilizes information from publicly available information security databases.
We also propose a domain-specific language for attack modeling, which we employ in the proposed attack graph generation approach.
arXiv Detail & Related papers (2023-10-01T12:41:38Z)
- Kairos: Practical Intrusion Detection and Investigation using Whole-system Provenance [4.101641763092759]
Provenance graphs are structured audit logs that describe the history of a system's execution.
We identify four common dimensions that drive the development of provenance-based intrusion detection systems (PIDSes).
We present KAIROS, the first PIDS that simultaneously satisfies the desiderata in all four dimensions.
arXiv Detail & Related papers (2023-08-09T16:04:55Z)
- Unveiling Vulnerabilities in Interpretable Deep Learning Systems with Query-Efficient Black-box Attacks [16.13790238416691]
Interpretable Deep Learning Systems (IDLSes) are designed to make the system more transparent and explainable.
We propose a novel microbial genetic algorithm-based black-box attack against IDLSes that requires no prior knowledge of the target model and its interpretation model.
arXiv Detail & Related papers (2023-07-21T21:09:54Z)
- Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks for object detection include targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors to fabricate extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
arXiv Detail & Related papers (2022-03-29T04:33:06Z)
- Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier based on the crafted data.
arXiv Detail & Related papers (2021-12-02T04:12:24Z)
- Unsupervised Anomaly Detectors to Detect Intrusions in the Current Threat Landscape [0.11470070927586014]
We show that Isolation Forests, One-Class Support Vector Machines and Self-Organizing Maps are more effective than their counterparts for intrusion detection.
We detail how attacks with unstable, distributed or non-repeatable behavior as Fuzzing, Worms and Botnets are more difficult to detect.
arXiv Detail & Related papers (2020-12-21T14:06:58Z)
- No Need to Know Physics: Resilience of Process-based Model-free Anomaly Detection for Industrial Control Systems [95.54151664013011]
We present a novel framework to generate adversarial spoofing signals that violate physical properties of the system.
We analyze four anomaly detectors published at top security conferences.
arXiv Detail & Related papers (2020-12-07T11:02:44Z)
- Enabling Efficient Cyber Threat Hunting With Cyber Threat Intelligence [94.94833077653998]
ThreatRaptor is a system that facilitates threat hunting in computer systems using open-source Cyber Threat Intelligence (OSCTI).
It extracts structured threat behaviors from unstructured OSCTI text and uses a concise and expressive domain-specific query language, TBQL, to hunt for malicious system activities.
Evaluations on a broad set of attack cases demonstrate the accuracy and efficiency of ThreatRaptor in practical threat hunting.
arXiv Detail & Related papers (2020-10-26T14:54:01Z)
- Investigating Robustness of Adversarial Samples Detection for Automatic Speaker Verification [78.51092318750102]
This work proposes to defend ASV systems against adversarial attacks with a separate detection network.
A VGG-like binary classification detector is introduced and demonstrated to be effective on detecting adversarial samples.
arXiv Detail & Related papers (2020-06-11T04:31:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.