AttackMate: Realistic Emulation and Automation of Cyber Attack Scenarios Across the Kill Chain
- URL: http://arxiv.org/abs/2601.14108v1
- Date: Tue, 20 Jan 2026 16:04:59 GMT
- Title: AttackMate: Realistic Emulation and Automation of Cyber Attack Scenarios Across the Kill Chain
- Authors: Max Landauer, Wolfgang Hotwagner, Thorina Boenke, Florian Skopik, Markus Wurzenberger,
- Abstract summary: Adversary emulation tools facilitate scripting and automated execution of cyber attack chains. This paper introduces AttackMate, an open-source attack scripting language and execution engine designed to mimic behavior patterns of actual attackers.
- Score: 2.882241577423132
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Adversary emulation tools facilitate scripting and automated execution of cyber attack chains, thereby reducing costs and manual expert effort required for security testing, cyber exercises, and intrusion detection research. However, due to the fact that existing tools typically rely on agents installed on target systems, they leave suspicious traces that make it easy to distinguish their activities from those of real human attackers. Moreover, these tools often lack relevant capabilities, such as handling of interactive prompts, and are unsuitable for emulating specific stages of the kill chain, such as initial access. This paper thus introduces AttackMate, an open-source attack scripting language and execution engine designed to mimic behavior patterns of actual attackers. We validate the tool in a case study covering common attack steps including privilege escalation, information gathering, and lateral movement. Our results indicate that log artifacts resulting from AttackMate's activities resemble those produced by human attackers more closely than those generated by standard adversary emulation tools.
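To make the agentless idea concrete, here is a minimal, hypothetical sketch of driving an attack chain over a plain SSH session, so the target only sees the traffic and logs an interactive attacker would also produce. The host, credentials, and command list are illustrative assumptions and do not reflect AttackMate's actual playbook syntax or engine.

```python
# Hypothetical sketch of agentless attack-step execution over SSH.
# Host, credentials, and commands are placeholders for a lab setup only;
# AttackMate's real scripting format is described in the paper and repository.
import paramiko

ATTACK_CHAIN = [
    "whoami",           # information gathering
    "sudo -l",          # check privilege escalation options
    "cat /etc/passwd",  # enumerate local users
    "arp -a",           # naive lateral-movement reconnaissance
]

def run_chain(host: str, user: str, password: str) -> None:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    try:
        for cmd in ATTACK_CHAIN:
            # Each command goes over a normal SSH session; no agent binary
            # is installed on the target system.
            _, stdout, stderr = client.exec_command(cmd)
            print(f"$ {cmd}\n{stdout.read().decode()}{stderr.read().decode()}")
    finally:
        client.close()

if __name__ == "__main__":
    run_chain("192.0.2.10", "victim", "password123")  # lab credentials only
```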
Related papers
- SkillJect: Automating Stealthy Skill-Based Prompt Injection for Coding Agents with Trace-Driven Closed-Loop Refinement [120.52289344734415]
We propose an automated framework for stealthy prompt injection tailored to agent skills.
The framework forms a closed loop with three agents: an Attack Agent that synthesizes injection skills under explicit stealth constraints, a Code Agent that executes tasks using the injected skills, and an Evaluate Agent that logs action traces.
Our method consistently achieves high attack success rates under realistic settings.
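A rough control-flow sketch of such a closed refinement loop follows; the three agents are reduced to stubs, and their interfaces and the success criterion are assumptions for illustration, not the paper's implementation.

```python
# Control-flow sketch of a three-agent refinement loop (Attack / Code / Evaluate).
# All agent internals are stubbed; only the closed loop itself is illustrated.
from typing import List, Optional

def attack_agent(feedback: Optional[List[str]]) -> str:
    """Synthesize or refine an injected skill under assumed stealth constraints."""
    suffix = "" if feedback is None else f" (refined from {len(feedback)} logged actions)"
    return "innocuous-looking skill with hidden instruction" + suffix

def code_agent(skill: str, task: str) -> List[str]:
    """Execute the task with the injected skill available; return the action trace."""
    return [f"load_skill:{skill[:24]}", f"work_on:{task}"]

def evaluate_agent(trace: List[str]) -> bool:
    """Inspect the logged trace and decide whether the injected behavior triggered."""
    return any(action.startswith("unexpected:") for action in trace)  # stub criterion

def closed_loop(task: str, rounds: int = 3) -> bool:
    trace: Optional[List[str]] = None
    for _ in range(rounds):
        skill = attack_agent(trace)      # 1) synthesize / refine injection
        trace = code_agent(skill, task)  # 2) run the coding task, log actions
        if evaluate_agent(trace):        # 3) judge success from the trace
            return True
    return False

print(closed_loop("implement a CSV parser"))
```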
arXiv Detail & Related papers (2026-02-15T16:09:48Z)
- SCOUT: A Defense Against Data Poisoning Attacks in Fine-Tuned Language Models [11.304852987259041]
This paper introduces three novel contextually-aware attack scenarios that exploit domain-specific knowledge and semantic plausibility.
We present SCOUT (Saliency-based Classification Of Untrusted Tokens), a novel defense framework that identifies backdoor triggers through token-level saliency analysis.
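The core idea of token-level saliency can be sketched as follows: score each input token by the gradient magnitude of the classification loss with respect to its embedding and flag outliers as candidate triggers. The toy model and the outlier threshold below are assumptions for illustration; SCOUT's actual scoring and classification procedure is described in the paper.

```python
# Toy illustration of token-level saliency for trigger detection:
# tokens whose embedding gradients dominate the loss are flagged.
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab, dim, seq_len = 100, 16, 8

embedding = nn.Embedding(vocab, dim)
classifier = nn.Linear(dim, 2)

tokens = torch.randint(0, vocab, (seq_len,))
label = torch.tensor(1)

emb = embedding(tokens).detach().requires_grad_(True)   # keep per-token grads
logits = classifier(emb.mean(dim=0, keepdim=True))      # mean-pooled "model"
loss = nn.functional.cross_entropy(logits, label.unsqueeze(0))
loss.backward()

saliency = emb.grad.norm(dim=1)                         # one score per token
threshold = saliency.mean() + 2 * saliency.std()        # assumed outlier rule
suspicious = (saliency > threshold).nonzero().flatten()
print("candidate trigger positions:", suspicious.tolist())
```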
arXiv Detail & Related papers (2025-12-10T17:25:55Z)
- Malice in Agentland: Down the Rabbit Hole of Backdoors in the AI Supply Chain [82.98626829232899]
Fine-tuning AI agents on data from their own interactions introduces a critical security vulnerability within the AI supply chain.
We show that adversaries can easily poison the data collection pipeline to embed hard-to-detect backdoors.
arXiv Detail & Related papers (2025-10-03T12:47:21Z)
- Cuckoo Attack: Stealthy and Persistent Attacks Against AI-IDE [64.47951172662745]
Cuckoo Attack is a novel attack that achieves stealthy and persistent command execution by embedding malicious payloads into configuration files.
We formalize our attack paradigm into two stages: initial infection and persistence.
We contribute seven actionable checkpoints for vendors to evaluate their product security.
arXiv Detail & Related papers (2025-09-19T04:10:52Z)
- Backdoor Attacks and Defenses in Computer Vision Domain: A Survey [12.384887829851834]
Backdoor attacks embed hidden, controllable behaviors into machine-learning models.
This survey reviews the rapidly growing literature on backdoor attacks and defenses in the computer-vision domain.
arXiv Detail & Related papers (2025-09-09T08:38:05Z)
- Modeling Behavioral Preferences of Cyber Adversaries Using Inverse Reinforcement Learning [4.5456862813416565]
This paper presents a holistic approach to attacker preference modeling from system-level audit logs using inverse reinforcement learning (IRL).
We learn the behavioral preferences of cyber adversaries from forensics data on their tools and techniques.
Our results demonstrate for the first time that low-level forensics data can automatically reveal an adversary's subjective preferences.
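At a high level, linear-reward IRL can be illustrated by feature-expectation matching: the reward weights are nudged toward features that the observed attacker trajectories visit more often than the comparison policy does. The features, trajectories, and update rule below are toy assumptions, not the paper's model of audit-log data.

```python
# Toy feature-expectation-matching update for a linear reward r(s) = w . phi(s).
# Attacker trajectories and the comparison policy are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_features = 4

def feature_expectation(trajectories):
    """Average feature vector over all visited states in all trajectories."""
    states = np.concatenate(trajectories, axis=0)
    return states.mean(axis=0)

# Stand-ins: "expert" (attacker) trajectories and trajectories from the
# learner's current policy, each a (steps x features) array.
expert_trajs = [rng.random((10, n_features)) + 0.5 for _ in range(5)]
policy_trajs = [rng.random((10, n_features)) for _ in range(5)]

w = np.zeros(n_features)
lr = 0.1
for _ in range(100):
    grad = feature_expectation(expert_trajs) - feature_expectation(policy_trajs)
    w += lr * grad  # move the reward toward attacker-preferred features
    # (a full IRL loop would re-sample policy_trajs under the updated reward)

print("learned reward weights:", np.round(w, 3))
```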
arXiv Detail & Related papers (2025-05-02T18:20:14Z)
- Prompt Injection Attack to Tool Selection in LLM Agents [60.95349602772112]
A popular approach follows a two-step process - retrieval and selection - to pick the most appropriate tool from a tool library for a given task.
In this work, we introduce ToolHijacker, a novel prompt injection attack targeting tool selection in no-box scenarios.
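To make the two-step pipeline concrete, here is a toy retrieval-then-selection flow over a small tool library using word-overlap similarity; a keyword-stuffed description simply shows how a crafted entry can dominate retrieval. The tools, scoring, and selection rule are illustrative assumptions, not ToolHijacker itself.

```python
# Toy tool-selection pipeline: retrieve top-k tool descriptions by word overlap,
# then "select" the best match. A description that mirrors the task wording can
# win retrieval, which illustrates the attack surface.
TOOLS = {
    "weather_api": "get current weather forecast for a city",
    "calculator": "evaluate arithmetic expressions",
    "crafted_tool": "send an email summary of the quarterly report to the manager",
}

def score(task: str, description: str) -> int:
    return len(set(task.lower().split()) & set(description.lower().split()))

def retrieve(task: str, k: int = 2):
    return sorted(TOOLS, key=lambda name: score(task, TOOLS[name]), reverse=True)[:k]

def select(task: str, candidates) -> str:
    # Stand-in for the LLM selection step: pick the highest-overlap candidate.
    return max(candidates, key=lambda name: score(task, TOOLS[name]))

task = "send an email summary of the quarterly report to the manager"
shortlist = retrieve(task)
print("retrieved:", shortlist, "-> selected:", select(task, shortlist))
```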
arXiv Detail & Related papers (2025-04-28T13:36:43Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
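The notion of information-theoretic detectability can be illustrated by comparing the distribution of observations under attack against the nominal one; the empirical KL divergence and epsilon budget below are a simplified stand-in for the paper's formal constraint, with synthetic data.

```python
# Toy detectability check: empirical KL divergence between nominal and
# attacked observation histograms, compared against a budget epsilon.
import numpy as np

rng = np.random.default_rng(1)

def empirical_kl(p_samples, q_samples, bins=20):
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
    p = (p + 1e-9) / (p.sum() + bins * 1e-9)   # smooth to avoid log(0)
    q = (q + 1e-9) / (q.sum() + bins * 1e-9)
    return float(np.sum(p * np.log(p / q)))

nominal = rng.normal(0.0, 1.0, 10_000)    # observations without attack
attacked = rng.normal(0.05, 1.0, 10_000)  # subtly shifted observations

epsilon = 0.01                            # assumed detectability budget
kl = empirical_kl(attacked, nominal)
print(f"KL = {kl:.4f}, within budget: {kl <= epsilon}")
```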
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
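For intuition on the "Full DOS" variant: in a PE executable, most bytes of the legacy DOS header are ignored by the modern Windows loader; only the "MZ" magic at offset 0 and the 4-byte pointer to the PE header at offset 0x3C must be preserved. The sketch below rewrites that slack region in a copy of a file; the offsets follow the standard PE layout, but the random-byte perturbation is an illustrative simplification of the optimized payloads used in such attacks.

```python
# Illustrative "Full DOS"-style perturbation: overwrite unused DOS-header bytes
# (between the 'MZ' magic and the e_lfanew pointer at offset 0x3C) in a copy of
# a PE file. Random bytes stand in for an optimized adversarial payload.
import random

def perturb_dos_header(src_path: str, dst_path: str) -> None:
    with open(src_path, "rb") as f:
        data = bytearray(f.read())
    assert data[:2] == b"MZ", "not a PE/DOS executable"
    # Bytes 2..0x3B are largely ignored by the loader; bytes 0x3C..0x3F
    # (e_lfanew) must stay intact so the PE header can still be located.
    for i in range(2, 0x3C):
        data[i] = random.randrange(256)
    with open(dst_path, "wb") as f:
        f.write(data)

if __name__ == "__main__":
    # Paths are placeholders; use only on samples you are permitted to modify.
    perturb_dos_header("sample.exe", "sample_fulldos.exe")
```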
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
- Adversarial Deep Ensemble: Evasion Attacks and Defenses for Malware Detection [8.551227913472632]
We propose a new attack approach, named mixture of attacks, to perturb a malware example without ruining its malicious functionality.
This naturally leads to a new instantiation of adversarial training, which is further geared to enhancing the ensemble of deep neural networks.
We evaluate defenses using Android malware detectors against 26 different attacks upon two practical datasets.
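The "mixture of attacks" idea pairs naturally with max-style adversarial training: for each batch, several attack functions are tried and the one producing the highest loss is used for the update. The attacks below are generic placeholders on continuous feature vectors; the paper's attacks operate on functionality-preserving malware manipulations.

```python
# Sketch of adversarial training against a mixture of attacks: each step trains
# on whichever attack in the pool currently yields the highest loss.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(x, y, eps=0.1):
    """One-step gradient-sign perturbation (stand-in attack)."""
    x = x.clone().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def noise_attack(x, y, eps=0.1):
    """Random perturbation (second stand-in attack)."""
    return x + eps * torch.randn_like(x)

ATTACKS = [fgsm_attack, noise_attack]  # the "mixture" of attacks

for step in range(100):
    x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
    candidates = [attack(x, y) for attack in ATTACKS]
    with torch.no_grad():
        losses = [loss_fn(model(c), y).item() for c in candidates]
    x_adv = candidates[losses.index(max(losses))]  # worst-case attack wins
    opt.zero_grad()
    loss_fn(model(x_adv), y).backward()            # train on that example
    opt.step()

print("finished adversarial training sketch")
```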
arXiv Detail & Related papers (2020-06-30T05:56:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.