An In-kernel Forensics Engine for Investigating Evasive Attacks
- URL: http://arxiv.org/abs/2505.06498v2
- Date: Sun, 18 May 2025 16:54:13 GMT
- Title: An In-kernel Forensics Engine for Investigating Evasive Attacks
- Authors: Javad Zandi, Lalchandra Rampersaud, Amin Kharraz
- Abstract summary: This paper introduces LASE, an open-source Low-Artifact Forensics Engine to perform threat analysis and forensics in the Windows operating system. LASE augments current analysis tools by providing detailed, system-wide monitoring capabilities while minimizing detectable artifacts.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Over the years, adversarial attempts against critical services have become more effective and sophisticated in launching low-profile attacks. This trend has always been concerning. However, an even more alarming trend is the increasing difficulty of collecting relevant evidence about these attacks and the involved threat actors in the early stages before significant damage is done. This issue puts defenders at a significant disadvantage, as it becomes exceedingly difficult to understand the attack details and formulate an appropriate response. Developing robust forensics tools to collect evidence about modern threats has never been easy. One main challenge is to provide a robust trade-off between achieving sufficient visibility while leaving minimal detectable artifacts. This paper introduces LASE, an open-source Low-Artifact Forensics Engine to perform threat analysis and forensics in the Windows operating system. LASE augments current analysis tools by providing detailed, system-wide monitoring capabilities while minimizing detectable artifacts. We designed multiple deployment scenarios, showing LASE's potential in evidence gathering and threat reasoning in a real-world setting. By making LASE and its execution trace data available to the broader research community, this work encourages further exploration in the field by reducing the engineering costs for threat analysis and building a longitudinal behavioral analysis catalog for diverse security domains.
Related papers
- From Alerts to Intelligence: A Novel LLM-Aided Framework for Host-based Intrusion Detection [16.59938864299474]
Large Language Models (LLMs) have great potential to advance the state of host-based intrusion detection systems (HIDS).
LLMs have extensive knowledge of attack techniques and the ability to detect anomalies through semantic analysis.
In this work, we explore the direction of building a customized LLM pipeline for HIDS and develop a system named SHIELD.
arXiv Detail & Related papers (2025-07-15T00:24:53Z)
- Benchmarking Misuse Mitigation Against Covert Adversaries [80.74502950627736]
Existing language model safety evaluations focus on overt attacks and low-stakes tasks.
We develop Benchmarks for Stateful Defenses (BSD), a data generation pipeline that automates evaluations of covert attacks and corresponding defenses.
Our evaluations indicate that decomposition attacks are effective misuse enablers, and highlight stateful defenses as a countermeasure.
arXiv Detail & Related papers (2025-06-06T17:33:33Z)
- Lazarus Group Targets Crypto-Wallets and Financial Data while employing new Tradecrafts [0.0]
This report presents a comprehensive analysis of a malicious software sample, detailing its architecture, behavioral characteristics, and underlying intent.
The malware's core functionalities, including persistence mechanisms, command-and-control communication, and data exfiltration routines, are identified.
This malware analysis report not only reconstructs past adversary actions but also establishes a robust foundation for anticipating and mitigating future attacks.
arXiv Detail & Related papers (2025-05-27T20:13:29Z)
- SHIELD: APT Detection and Intelligent Explanation Using LLM [22.944352324963546]
Advanced persistent threats (APTs) are sophisticated cyber attacks that can remain undetected for extended periods.
Existing provenance-based attack detection methods often lack interpretability and suffer from high false positive rates.
We introduce SHIELD, a novel approach that combines statistical anomaly detection and graph-based analysis with the contextual analysis capabilities of large language models.
arXiv Detail & Related papers (2025-02-04T14:20:51Z)
- Illusions of Relevance: Using Content Injection Attacks to Deceive Retrievers, Rerankers, and LLM Judges [52.96987928118327]
We find that embedding models for retrieval, rerankers, and large language model (LLM) relevance judges are vulnerable to content injection attacks.
We identify two primary threats: (1) inserting unrelated or harmful content within passages that still appear deceptively "relevant", and (2) inserting entire queries or key query terms into passages to boost their perceived relevance.
Our study systematically examines the factors that influence an attack's success, such as the placement of injected content and the balance between relevant and non-relevant material.
arXiv Detail & Related papers (2025-01-30T18:02:15Z)
- Robust Synthetic Data-Driven Detection of Living-Off-the-Land Reverse Shells [14.710331873072146]
Living-off-the-land (LOTL) techniques pose a significant challenge to security operations.
We present a robust augmentation framework for cyber defense systems such as Security Information and Event Management (SIEM) solutions.
arXiv Detail & Related papers (2024-02-28T13:49:23Z)
- Towards more Practical Threat Models in Artificial Intelligence Security [66.67624011455423]
Recent works have identified a gap between research and practice in artificial intelligence security.
We revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice.
arXiv Detail & Related papers (2023-11-16T16:09:44Z)
- Investigative Pattern Detection Framework for Counterterrorism [0.09999629695552192]
Automated tools are required to extract information to respond to queries from analysts, continually scan new information, integrate it with past events, and then alert analysts about emerging threats.
We address challenges in investigative pattern detection and develop an Investigative Pattern Detection Framework for Counterterrorism (INSPECT).
The framework integrates numerous computing tools that include machine learning techniques to identify behavioral indicators and graph pattern matching techniques to detect risk profiles/groups.
arXiv Detail & Related papers (2023-10-30T00:45:05Z)
- On the Security Risks of Knowledge Graph Reasoning [71.64027889145261]
We systematize the security threats to KGR according to the adversary's objectives, knowledge, and attack vectors.
We present ROAR, a new class of attacks that instantiate a variety of such threats.
We explore potential countermeasures against ROAR, including filtering of potentially poisoning knowledge and training with adversarially augmented queries.
arXiv Detail & Related papers (2023-05-03T18:47:42Z)
- Enabling Efficient Attack Investigation via Human-in-the-Loop Security Analysis [19.805667450941403]
Raptor is a defense system that enables human analysts to effectively analyze large-scale system provenance.
ProvQL offers essential primitives for various types of attack analyses.
Raptor provides an optimized execution engine for efficient language execution.
arXiv Detail & Related papers (2022-11-10T08:13:19Z)
- Fact-Saboteurs: A Taxonomy of Evidence Manipulation Attacks against Fact-Verification Systems [80.3811072650087]
We show that it is possible to subtly modify claim-salient snippets in the evidence and generate diverse and claim-aligned evidence.
The attacks are also robust against post-hoc modifications of the claim.
These attacks can have harmful implications for inspectable and human-in-the-loop usage scenarios.
arXiv Detail & Related papers (2022-09-07T13:39:24Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Agile Approach for IT Forensics Management [0.0]
This paper presents the novel flower model, which uses agile methods and forms a new forensic management approach.
In the forensic investigation of such attacks, big data problems have to be solved due to the amount of data that needs to be analyzed.
arXiv Detail & Related papers (2020-07-08T13:48:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.