Design and Development of an Intelligent LLM-based LDAP Honeypot
- URL: http://arxiv.org/abs/2509.16682v1
- Date: Sat, 20 Sep 2025 13:16:07 GMT
- Title: Design and Development of an Intelligent LLM-based LDAP Honeypot
- Authors: Javier Jiménez-Román, Florina Almenares-Mendoza, Alfonso Sánchez-Macián
- Abstract summary: Honeypots have proven their value, although they have traditionally been limited by rigidity and configuration complexity. The proposed solution aims to provide a flexible and realistic tool capable of convincingly interacting with attackers.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cybersecurity threats continue to increase, with a growing number of previously unknown attacks each year targeting both large corporations and smaller entities. This scenario demands the implementation of advanced security measures, not only to mitigate damage but also to anticipate emerging attack trends. In this context, deception tools have become a key strategy, enabling the detection, deterrence, and deception of potential attackers while facilitating the collection of information about their tactics and methods. Among these tools, honeypots have proven their value, although they have traditionally been limited by rigidity and configuration complexity, hindering their adaptability to dynamic scenarios. The rise of artificial intelligence, and particularly general-purpose Large Language Models (LLMs), is driving the development of new deception solutions capable of offering greater adaptability and ease of use. This work proposes the design and implementation of an LLM-based honeypot to simulate an LDAP server, a critical protocol present in most organizations due to its central role in identity and access management. The proposed solution aims to provide a flexible and realistic tool capable of convincingly interacting with attackers, thereby contributing to early detection and threat analysis while enhancing the defensive capabilities of infrastructures against intrusions targeting this service.
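The core idea of the abstract can be illustrated with a minimal sketch: a parsed LDAP search request is rendered into a prompt, the model fabricates a plausible directory entry, and the interaction is logged for threat analysis. Everything here (`query_llm`, `BASE_DN`, the prompt wording) is an illustrative assumption, not the paper's implementation; the LLM call is stubbed so the sketch runs offline, and a real honeypot would additionally have to speak the LDAP wire protocol (ASN.1/BER framing), e.g. via a library such as ldaptor.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ldap-honeypot")

BASE_DN = "dc=example,dc=org"  # assumed deployment suffix

def query_llm(prompt: str) -> str:
    """Placeholder for an LLM API call. Returns canned JSON so the
    sketch runs offline; a real deployment swaps in an actual client."""
    return json.dumps({
        "dn": f"uid=jdoe,ou=people,{BASE_DN}",
        "attributes": {"uid": "jdoe", "cn": "John Doe",
                       "mail": "jdoe@example.org"},
    })

def handle_search(search_filter: str, attributes: list[str]) -> dict:
    """Turn one parsed LDAP search into a fabricated-but-plausible entry."""
    # Logging every request is what makes the honeypot useful for analysis.
    log.info("search filter=%r attrs=%r", search_filter, attributes)
    prompt = (
        "You are an LDAP server for a mid-size company.\n"
        f"Base DN: {BASE_DN}\n"
        f"Search filter: {search_filter}\n"
        f"Requested attributes: {', '.join(attributes)}\n"
        "Reply with one matching entry as JSON {dn, attributes}."
    )
    entry = json.loads(query_llm(prompt))
    # Return only the attributes the client asked for, as a real server would.
    entry["attributes"] = {k: v for k, v in entry["attributes"].items()
                           if k in attributes}
    return entry

entry = handle_search("(uid=jdoe)", ["uid", "mail"])
print(entry["dn"])
```

The attribute-projection step at the end matters for realism: answering with attributes the client never requested is exactly the kind of protocol-level tell that lets an attacker fingerprint a honeypot.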
Related papers
- Anticipating Adversary Behavior in DevSecOps Scenarios through Large Language Models [2.2192966937452376]
This work proposes integrating the Security Chaos Engineering (SCE) methodology with a new LLM-based flow to automate the creation of attack defense trees. This will enable teams to stay one step ahead of attackers and implement previously unconsidered defenses. Further detailed information about the experiment performed, along with the steps to replicate it, can be found in the following repository.
arXiv Detail & Related papers (2026-02-15T11:43:04Z)
- Techniques of Modern Attacks [51.56484100374058]
Advanced Persistent Threats (APTs) represent a complex method of attack aimed at specific targets. I will investigate both the attack life cycle and cutting-edge detection and defense strategies proposed in recent academic research. I aim to highlight the strengths and limitations of each approach and propose more adaptive APT mitigation strategies.
arXiv Detail & Related papers (2026-01-19T22:15:25Z)
- Adversarial Attack-Defense Co-Evolution for LLM Safety Alignment via Tree-Group Dual-Aware Search and Optimization [51.12422886183246]
Large Language Models (LLMs) have developed rapidly in web services, delivering unprecedented capabilities while amplifying societal risks. Existing works tend to focus on either isolated jailbreak attacks or static defenses, neglecting the dynamic interplay between evolving threats and safeguards in real-world web contexts. We propose ACE-Safety, a novel framework that jointly optimizes attack and defense models by seamlessly integrating two key innovative procedures.
arXiv Detail & Related papers (2025-11-24T15:23:41Z)
- Exploiting Web Search Tools of AI Agents for Data Exfiltration [0.46664938579243564]
Large language models (LLMs) are now routinely used to execute complex tasks, from natural language processing to dynamic tasks such as web searches. The use of tool-calling and Retrieval Augmented Generation (RAG) allows LLMs to process and retrieve sensitive corporate data, amplifying both their functionality and their vulnerability to abuse. We analyze how susceptible current LLMs are to indirect prompt injection attacks, which parameters, including model size and manufacturer, shape their vulnerability, and which attack methods remain most effective.
arXiv Detail & Related papers (2025-10-10T07:39:01Z)
- Securing AI Agents: Implementing Role-Based Access Control for Industrial Applications [0.0]
In industrial settings, AI agents are transforming operations by enhancing decision-making, predictive maintenance, and process optimization. Despite these advancements, AI agents remain vulnerable to security threats, including prompt injection attacks. This paper proposes a framework for integrating Role-Based Access Control (RBAC) into AI agents, providing a robust security guardrail.
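The RBAC idea summarized above can be sketched in a few lines: every tool call an agent attempts is checked against the permissions of the role bound to its session before it executes. The role and tool names below are purely illustrative, not drawn from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Role:
    """An RBAC role: a name plus the set of tools it may invoke."""
    name: str
    allowed_tools: frozenset

# Hypothetical industrial roles for the sketch.
OPERATOR = Role("operator", frozenset({"read_sensor", "schedule_maintenance"}))
VIEWER = Role("viewer", frozenset({"read_sensor"}))

def authorize(role: Role, tool: str) -> bool:
    """True iff the session's role grants access to this tool."""
    return tool in role.allowed_tools

def call_tool(role: Role, tool: str, **kwargs) -> dict:
    """Guardrail wrapper: deny before dispatch, so a prompt-injected
    request for a privileged tool never reaches the tool itself."""
    if not authorize(role, tool):
        return {"status": "denied", "tool": tool}
    # Dispatch to the real tool implementation here; stubbed for the sketch.
    return {"status": "ok", "tool": tool}

print(call_tool(VIEWER, "schedule_maintenance"))
```

Because the check sits outside the model, it holds even when a prompt injection convinces the agent to attempt an out-of-role action.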
arXiv Detail & Related papers (2025-09-14T20:58:08Z)
- A Systematic Survey of Model Extraction Attacks and Defenses: State-of-the-Art and Perspectives [65.3369988566853]
Recent studies have demonstrated that adversaries can replicate a target model's functionality. Model Extraction Attacks (MEAs) pose threats to intellectual property, privacy, and system security. We propose a novel taxonomy that classifies MEAs according to attack mechanisms, defense approaches, and computing environments.
arXiv Detail & Related papers (2025-08-20T19:49:59Z)
- Searching for Privacy Risks in LLM Agents via Simulation [61.229785851581504]
We present a search-based framework that alternates between improving attack and defense strategies through the simulation of privacy-critical agent interactions. We find that attack strategies escalate from direct requests to sophisticated tactics, such as impersonation and consent forgery. The discovered attacks and defenses transfer across diverse scenarios and backbone models, demonstrating strong practical utility for building privacy-aware agents.
arXiv Detail & Related papers (2025-08-14T17:49:09Z)
- A Survey on Autonomy-Induced Security Risks in Large Model-Based Agents [45.53643260046778]
Recent advances in large language models (LLMs) have catalyzed the rise of autonomous AI agents. These large-model agents mark a paradigm shift from static inference systems to interactive, memory-augmented entities.
arXiv Detail & Related papers (2025-06-30T13:34:34Z)
- Progent: Programmable Privilege Control for LLM Agents [46.31581986508561]
We introduce Progent, the first privilege control framework to secure Large Language Model agents. Progent enforces security at the tool level by restricting agents to performing tool calls necessary for user tasks while blocking potentially malicious ones. Thanks to our modular design, integrating Progent does not alter agent internals and only requires minimal changes to the existing agent implementation.
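Tool-level privilege control of the kind described here can be approximated with a per-task policy that whitelists the tools a task needs and optionally constrains their arguments, so out-of-scope calls are blocked without touching the agent's internals. This is an illustrative sketch, not Progent's actual API; the task and tool names are invented for the example.

```python
from typing import Callable

# A policy maps each permitted tool name to a predicate over its arguments.
Policy = dict[str, Callable[[dict], bool]]

def make_email_task_policy() -> Policy:
    # Hypothetical task "summarize my unread email": read-only tools suffice.
    return {
        "list_messages": lambda args: True,
        "read_message": lambda args: isinstance(args.get("msg_id"), str),
    }

def guard_tool_call(policy: Policy, tool: str, args: dict) -> bool:
    """Allow a call only if the tool is whitelisted AND its arguments
    satisfy the policy's predicate; everything else is blocked."""
    check = policy.get(tool)
    allowed = check is not None and check(args)
    if not allowed:
        print(f"blocked: {tool}({args})")
    return allowed

policy = make_email_task_policy()
ok = guard_tool_call(policy, "read_message", {"msg_id": "42"})
blocked = guard_tool_call(policy, "send_message", {"to": "x@example.org"})
```

Sitting between the agent and its tool dispatcher, such a guard needs no changes to the agent itself, which mirrors the modularity claim in the blurb.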
arXiv Detail & Related papers (2025-04-16T01:58:40Z)
- Intelligent IoT Attack Detection Design via ODLLM with Feature Ranking-based Knowledge Base [0.964942474860411]
Internet of Things (IoT) devices have introduced significant cybersecurity challenges. Traditional machine learning (ML) techniques often fall short in detecting such attacks due to the complexity of blended and evolving patterns. We propose a novel framework leveraging On-Device Large Language Models (ODLLMs) augmented with fine-tuning and knowledge base (KB) integration for intelligent IoT network attack detection.
arXiv Detail & Related papers (2025-03-27T16:41:57Z)
- Cyber Defense Reinvented: Large Language Models as Threat Intelligence Copilots [36.809323735351825]
CYLENS is a cyber threat intelligence copilot powered by large language models (LLMs), designed to assist security professionals throughout the entire threat management lifecycle. It supports threat attribution, contextualization, detection, correlation, prioritization, and remediation.
arXiv Detail & Related papers (2025-02-28T07:16:09Z)
- Why Are Web AI Agents More Vulnerable Than Standalone LLMs? A Security Analysis [47.34614558636679]
This study investigates the underlying factors that contribute to the increased vulnerability of Web AI agents. We identify three critical factors that amplify the vulnerability of Web AI agents: (1) embedding user goals into the system prompt, (2) multi-step action generation, and (3) observational capabilities.
arXiv Detail & Related papers (2025-02-27T18:56:26Z)
- Commercial LLM Agents Are Already Vulnerable to Simple Yet Dangerous Attacks [88.84977282952602]
A high volume of recent ML security literature focuses on attacks against aligned large language models (LLMs). In this paper, we analyze security and privacy vulnerabilities that are unique to LLM agents. We conduct a series of illustrative attacks on popular open-source and commercial agents, demonstrating the immediate practical implications of their vulnerabilities.
arXiv Detail & Related papers (2025-02-12T17:19:36Z)
- AI-based Attacker Models for Enhancing Multi-Stage Cyberattack Simulations in Smart Grids Using Co-Simulation Environments [1.4563527353943984]
The transition to smart grids has increased the vulnerability of electrical power systems to advanced cyber threats. We propose a co-simulation framework that employs an autonomous agent to execute modular cyberattacks. Our approach offers a flexible, versatile source for data generation, aiding in faster prototyping and reducing development resources and time.
arXiv Detail & Related papers (2024-12-05T08:56:38Z)
- RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content [62.685566387625975]
Current mitigation strategies, while effective, are not resilient under adversarial attacks.
This paper introduces Resilient Guardrails for Large Language Models (RigorLLM), a novel framework designed to efficiently moderate harmful and unsafe inputs.
arXiv Detail & Related papers (2024-03-19T07:25:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.