Getting pwn'd by AI: Penetration Testing with Large Language Models
- URL: http://arxiv.org/abs/2308.00121v3
- Date: Thu, 17 Aug 2023 12:26:40 GMT
- Title: Getting pwn'd by AI: Penetration Testing with Large Language Models
- Authors: Andreas Happe, Jürgen Cito
- Abstract summary: This paper explores the potential use of large language models, such as GPT-3.5, to augment penetration testers with AI sparring partners.
We explore the feasibility of supplementing penetration testers with AI models for two distinct use cases: high-level task planning for security testing assignments and low-level vulnerability hunting within a vulnerable virtual machine.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The field of software security testing, more specifically penetration
testing, is an activity that requires high levels of expertise and involves
many manual testing and analysis steps. This paper explores the potential use
of large language models, such as GPT-3.5, to augment penetration testers with
AI sparring partners. We explore the feasibility of supplementing penetration
testers with AI models for two distinct use cases: high-level task planning for
security testing assignments and low-level vulnerability hunting within a
vulnerable virtual machine. For the latter, we implemented a closed feedback
loop between LLM-generated low-level actions and a vulnerable virtual machine
(connected through SSH), allowing the LLM to analyze the machine state for
vulnerabilities and suggest concrete attack vectors which were automatically
executed within the virtual machine. We discuss promising initial results,
detail avenues for improvement, and close by deliberating on the ethics of
providing AI-based sparring partners.
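
To make the closed feedback loop concrete, the following is a minimal, hypothetical Python sketch of the idea (not the authors' actual implementation): the LLM proposes the next shell command, the command is executed on the vulnerable virtual machine over SSH via paramiko, and the output is fed back into the transcript for the next round. The host, credentials, and the query_llm() helper are placeholders to be wired to a lab VM and an LLM API such as GPT-3.5.

import paramiko

def query_llm(transcript: str) -> str:
    """Placeholder: send the transcript to an LLM (e.g. GPT-3.5) and return
    the single next Linux command it suggests. Wire this to your LLM API."""
    raise NotImplementedError

def feedback_loop(host: str, user: str, password: str, rounds: int = 5) -> None:
    # Connect to the (intentionally) vulnerable lab VM over SSH.
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, username=user, password=password)

    transcript = "You are assisting a penetration test of an authorized lab VM.\n"
    try:
        for _ in range(rounds):
            # Ask the LLM for a concrete low-level action.
            command = query_llm(transcript).strip()
            _, stdout, stderr = ssh.exec_command(command, timeout=30)
            output = stdout.read().decode() + stderr.read().decode()
            # Feed the command and its result back so the LLM can analyze the
            # new machine state and suggest the next attack vector.
            transcript += f"\n$ {command}\n{output}"
    finally:
        ssh.close()

Such a sketch should only ever be pointed at machines one is explicitly authorized to test, which is part of the ethical discussion the paper closes with.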
Related papers
- Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI.
We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts with jailbreak attempts.
We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
arXiv Detail & Related papers (2025-02-18T02:26:50Z) - VulnBot: Autonomous Penetration Testing for A Multi-Agent Collaborative Framework [4.802551205178858]
Existing large language model (LLM)-assisted or automated penetration testing approaches often suffer from inefficiencies.
VulnBot decomposes complex tasks into three specialized phases: reconnaissance, scanning, and exploitation.
Key design features include role specialization, penetration path planning, inter-agent communication, and generative penetration behavior.
arXiv Detail & Related papers (2025-01-23T06:33:05Z) - The BrowserGym Ecosystem for Web Agent Research [151.90034093362343]
The BrowserGym ecosystem addresses the growing need for efficient evaluation and benchmarking of web agents.
We conduct the first large-scale, multi-benchmark web agent experiment.
Results highlight a large discrepancy between OpenAI's and Anthropic's latest models.
arXiv Detail & Related papers (2024-12-06T23:43:59Z) - PentestAgent: Incorporating LLM Agents to Automated Penetration Testing [6.815381197173165]
Manual penetration testing is time-consuming and expensive.
Recent advancements in large language models (LLMs) offer new opportunities for enhancing penetration testing.
We propose PentestAgent, a novel LLM-based automated penetration testing framework.
arXiv Detail & Related papers (2024-11-07T21:10:39Z) - AutoPT: How Far Are We from the End2End Automated Web Penetration Testing? [54.65079443902714]
We introduce AutoPT, an automated penetration testing agent based on the principle of PSM driven by LLMs.
Our results show that AutoPT outperforms the baseline framework ReAct on the GPT-4o mini model.
arXiv Detail & Related papers (2024-11-02T13:24:30Z) - CIPHER: Cybersecurity Intelligent Penetration-testing Helper for Ethical Researcher [1.6652242654250329]
We develop CIPHER (Cybersecurity Intelligent Penetration-testing Helper for Ethical Researchers), a large language model specifically trained to assist in penetration testing tasks.
We trained CIPHER using over 300 high-quality write-ups of vulnerable machines, hacking techniques, and documentation of open-source penetration testing tools.
We introduce the Findings, Action, Reasoning, and Results (FARR) Flow augmentation, a novel method to augment penetration testing write-ups to establish a fully automated pentesting simulation benchmark.
arXiv Detail & Related papers (2024-08-21T14:24:04Z) - Test Oracle Automation in the era of LLMs [52.69509240442899]
Large Language Models (LLMs) have demonstrated remarkable proficiency in tackling diverse software testing tasks.
This paper aims to enable discussions on the potential of using LLMs for test oracle automation, along with the challenges that may emerge during the generation of various types of oracles.
arXiv Detail & Related papers (2024-05-21T13:19:10Z) - A Preliminary Study on Using Large Language Models in Software Pentesting [2.0551676463612636]
Large language models (LLMs) are perceived to offer promising potential for automating security tasks.
We investigate the use of LLMs in software pentesting, where the main task is to automatically identify software security vulnerabilities in source code.
arXiv Detail & Related papers (2024-01-30T21:42:59Z) - PentestGPT: An LLM-empowered Automatic Penetration Testing Tool [20.449761406790415]
Large Language Models (LLMs) have shown significant advancements in various domains.
We evaluate the performance of LLMs on real-world penetration testing tasks using a robust benchmark created from test machines hosted on practice platforms.
We introduce PentestGPT, an LLM-empowered automatic penetration testing tool.
arXiv Detail & Related papers (2023-08-13T14:35:50Z) - Can AI-Generated Text be Reliably Detected? [50.95804851595018]
Large Language Models (LLMs) perform impressively well in various applications.
The potential for misuse of these models in activities such as plagiarism, generating fake news, and spamming has raised concern about their responsible use.
We stress-test the robustness of these AI text detectors in the presence of an attacker.
arXiv Detail & Related papers (2023-03-17T17:53:19Z) - Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.