RASP for LSASS: Preventing Mimikatz-Related Attacks
- URL: http://arxiv.org/abs/2401.00316v1
- Date: Sat, 30 Dec 2023 20:37:37 GMT
- Title: RASP for LSASS: Preventing Mimikatz-Related Attacks
- Authors: Anna Revazova, Igor Korkin,
- Abstract summary: The Windows authentication infrastructure relies on the Local Security Authority system, with its integral component being lsass.exe.
This framework is not impervious, presenting vulnerabilities that attract threat actors with malicious intent.
By exploiting documented vulnerabilities sourced from the CVE database or leveraging sophisticated tools such as mimikatz, adversaries can successfully compromise user password-address information.
- Score: 2.5782420501870296
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Windows authentication infrastructure relies on the Local Security Authority (LSA) system, with its integral component being lsass.exe. Regrettably, this framework is not impervious, presenting vulnerabilities that attract threat actors with malicious intent. By exploiting documented vulnerabilities sourced from the CVE database or leveraging sophisticated tools such as mimikatz, adversaries can successfully compromise user password-address information. In this comprehensive analysis, we delve into proactive measures aimed at fortifying the local authentication subsystem against potential threats. Moreover, we present empirical evidence derived from practical assessments of various defensive methodologies, including those articulated previously. This examination not only underscores the importance of proactive security measures but also assesses the practical efficacy of these strategies in real-world contexts.
Related papers
- Commercial LLM Agents Are Already Vulnerable to Simple Yet Dangerous Attacks [88.84977282952602]
A high volume of recent ML security literature focuses on attacks against aligned large language models (LLMs)
In this paper, we analyze security and privacy vulnerabilities that are unique to LLM agents.
We conduct a series of illustrative attacks on popular open-source and commercial agents, demonstrating the immediate practical implications of their vulnerabilities.
arXiv Detail & Related papers (2025-02-12T17:19:36Z) - Autonomous Identity-Based Threat Segmentation in Zero Trust Architectures [4.169915659794567]
Zero Trust Architectures (ZTA) fundamentally redefine network security by adopting a "trust nothing, verify everything" approach.
This research applies the proposed AI-driven, autonomous, identity-based threat segmentation in ZTA.
arXiv Detail & Related papers (2025-01-10T15:35:02Z) - Securing Legacy Communication Networks via Authenticated Cyclic Redundancy Integrity Check [98.34702864029796]
We propose Authenticated Cyclic Redundancy Integrity Check (ACRIC)
ACRIC preserves backward compatibility without requiring additional hardware and is protocol agnostic.
We show that ACRIC offers robust security with minimal transmission overhead ( 1 ms)
arXiv Detail & Related papers (2024-11-21T18:26:05Z) - Excavating Vulnerabilities Lurking in Multi-Factor Authentication Protocols: A Systematic Security Analysis [2.729532849571912]
Single-factor authentication (SFA) protocols are often bypassed by side-channel and other attack techniques.
To alleviate this problem, multi-factor authentication (MFA) protocols have been widely adopted recently.
arXiv Detail & Related papers (2024-07-29T23:37:38Z) - Rethinking the Vulnerabilities of Face Recognition Systems:From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning)
arXiv Detail & Related papers (2024-05-21T13:34:23Z) - The Art of Defending: A Systematic Evaluation and Analysis of LLM
Defense Strategies on Safety and Over-Defensiveness [56.174255970895466]
Large Language Models (LLMs) play an increasingly pivotal role in natural language processing applications.
This paper presents Safety and Over-Defensiveness Evaluation (SODE) benchmark.
arXiv Detail & Related papers (2023-12-30T17:37:06Z) - Benchmarking and Defending Against Indirect Prompt Injection Attacks on Large Language Models [79.0183835295533]
We introduce the first benchmark for indirect prompt injection attacks, named BIPIA, to assess the risk of such vulnerabilities.
Our analysis identifies two key factors contributing to their success: LLMs' inability to distinguish between informational context and actionable instructions, and their lack of awareness in avoiding the execution of instructions within external content.
We propose two novel defense mechanisms-boundary awareness and explicit reminder-to address these vulnerabilities in both black-box and white-box settings.
arXiv Detail & Related papers (2023-12-21T01:08:39Z) - Blockchain-based Zero Trust on the Edge [5.323279718522213]
This paper proposes a novel approach based on Zero Trust Architecture (ZTA) extended with blockchain to further enhance security.
The blockchain component serves as an immutable database for storing users' requests and is used to verify trustworthiness by analyzing and identifying potentially malicious user activities.
We discuss the framework, processes of the approach, and the experiments carried out on a testbed to validate its feasibility and applicability in the smart city context.
arXiv Detail & Related papers (2023-11-28T12:43:21Z) - Automated Security Assessment for the Internet of Things [6.690766107366799]
We propose an automated security assessment framework for IoT networks.
Our framework first leverages machine learning and natural language processing to analyze vulnerability descriptions.
This security model automatically assesses the security of the IoT network by capturing potential attack paths.
arXiv Detail & Related papers (2021-09-09T04:42:24Z) - Adversarial EXEmples: A Survey and Experimental Evaluation of Practical
Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that does not only encompass and generalize previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.