Offensive AI: Enhancing Directory Brute-forcing Attack with the Use of Language Models
- URL: http://arxiv.org/abs/2404.14138v1
- Date: Mon, 22 Apr 2024 12:40:38 GMT
- Title: Offensive AI: Enhancing Directory Brute-forcing Attack with the Use of Language Models
- Authors: Alberto Castagnaro, Mauro Conti, Luca Pajola
- Abstract summary: Offensive AI is a paradigm that integrates AI-based technologies in cyber attacks.
In this work, we explore whether AI can enhance the directory enumeration process and propose a novel Language Model-based framework.
Our experiments -- conducted in a testbed consisting of 1 million URLs from different web application domains -- demonstrate the superiority of the LM-based attack, with an average performance increase of 969%.
- Score: 16.89878267176532
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Web Vulnerability Assessment and Penetration Testing (Web VAPT) is a comprehensive cybersecurity process that uncovers a range of vulnerabilities which, if exploited, could compromise the integrity of web applications. In a VAPT, it is common to perform a Directory brute-forcing Attack, aiming at the identification of accessible directories of a target website. Current commercial solutions are inefficient as they are based on brute-forcing strategies that use wordlists, resulting in enormous quantities of trials for a small amount of success. Offensive AI is a recent paradigm that integrates AI-based technologies in cyber attacks. In this work, we explore whether AI can enhance the directory enumeration process and propose a novel Language Model-based framework. Our experiments -- conducted in a testbed consisting of 1 million URLs from different web application domains (universities, hospitals, government, companies) -- demonstrate the superiority of the LM-based attack, with an average performance increase of 969%.
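To make the idea concrete, here is a minimal, illustrative sketch of LM-guided directory enumeration (not the authors' framework): a tiny bigram model over path segments, fit on directories already discovered, re-ranks candidate paths so likely ones are probed first instead of exhausting a static wordlist. The target URL, seed paths, and wordlist below are hypothetical placeholders.

```python
"""Toy sketch: rank directory candidates with a bigram path model before probing."""
import math
import urllib.error
import urllib.request
from collections import Counter, defaultdict

BASE_URL = "http://target.example"  # hypothetical target, for illustration only
SEED_PATHS = ["admin/login", "admin/config", "static/js", "static/css"]  # already discovered
WORDLIST = ["admin", "backup", "login", "js", "css", "old", "tmp", "config", "static"]

def train_bigram(paths):
    """Count segment-to-segment transitions over '/'-separated path segments."""
    counts, totals = defaultdict(Counter), Counter()
    for p in paths:
        toks = ["<s>"] + p.strip("/").split("/")
        for a, b in zip(toks, toks[1:]):
            counts[a][b] += 1
            totals[a] += 1
    return counts, totals

def score(candidate, counts, totals):
    """Add-one-smoothed log-probability of a candidate path under the bigram model."""
    toks = ["<s>"] + candidate.strip("/").split("/")
    vocab = len(totals) + 1
    return sum(math.log((counts[a][b] + 1) / (totals[a] + vocab))
               for a, b in zip(toks, toks[1:]))

def probe(path):
    """Request the candidate path; any status other than 404 counts as a hit."""
    try:
        with urllib.request.urlopen(f"{BASE_URL}/{path}", timeout=5) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code
    except urllib.error.URLError:
        return None

if __name__ == "__main__":
    counts, totals = train_bigram(SEED_PATHS)
    # Rank two-segment candidates by model score instead of probing blindly.
    candidates = [f"{a}/{b}" for a in WORDLIST for b in WORDLIST if a != b]
    for cand in sorted(candidates, key=lambda c: score(c, counts, totals), reverse=True)[:20]:
        status = probe(cand)
        if status and status != 404:
            print(f"[+] {cand} -> {status}")
```

The paper's framework uses a proper language model rather than this bigram stand-in; the sketch only illustrates the underlying idea of ranking candidates by learned likelihood so that fewer, smarter probes replace exhaustive wordlist trials.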
Related papers
- EIA: Environmental Injection Attack on Generalist Web Agents for Privacy Leakage [40.82238259404402]
We conduct the first study on the privacy risks of generalist web agents in adversarial environments.
First, we present a realistic threat model for attacks on the website, where we consider two adversarial targets: stealing users' specific PII or the entire user request.
We collect 177 action steps that involve diverse PII categories on realistic websites from the Mind2Web dataset, and conduct experiments using one of the most capable generalist web agent frameworks to date.
arXiv Detail & Related papers (2024-09-17T15:49:44Z) - WILBUR: Adaptive In-Context Learning for Robust and Accurate Web Agents [1.9352015147920767]
We introduce Wilbur, an approach that uses a differentiable ranking model and a novel instruction synthesis technique.
We show that our ranking model can be trained on data from a generative auto-curriculum which samples representative goals.
Wilbur achieves state-of-the-art results on the WebVoyager benchmark, beating text-only models by 8% overall, and up to 36% on certain websites.
arXiv Detail & Related papers (2024-04-08T23:10:47Z) - Towards more Practical Threat Models in Artificial Intelligence Security [66.67624011455423]
Recent works have identified a gap between research and practice in artificial intelligence security.
We revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice.
arXiv Detail & Related papers (2023-11-16T16:09:44Z) - Baseline Defenses for Adversarial Attacks Against Aligned Language Models [109.75753454188705]
Recent work shows that adversarial prompt optimizers can produce jailbreaking prompts that bypass moderation and alignment defenses.
We look at three types of defenses: detection (perplexity based), input preprocessing (paraphrase and retokenization), and adversarial training; a minimal perplexity-filter sketch is included after this related-papers list.
We find that the weakness of existing discrete optimizers for text, combined with the relatively high costs of optimization, makes standard adaptive attacks more challenging for LLMs.
arXiv Detail & Related papers (2023-09-01T17:59:44Z) - Getting pwn'd by AI: Penetration Testing with Large Language Models [0.0]
This paper explores the potential usage of large language models, such as GPT3.5, to augment penetration testers with AI sparring partners.
We explore the feasibility of supplementing penetration testers with AI models for two distinct use cases: high-level task planning for security testing assignments and low-level vulnerability hunting within a vulnerable virtual machine.
arXiv Detail & Related papers (2023-07-24T19:59:22Z) - A LLM Assisted Exploitation of AI-Guardian [57.572998144258705]
We evaluate the robustness of AI-Guardian, a recent defense to adversarial examples published at IEEE S&P 2023.
We write none of the code to attack this model, and instead prompt GPT-4 to implement all attack algorithms following our instructions and guidance.
This process was surprisingly effective and efficient, with the language model at times producing code from ambiguous instructions faster than the author of this paper could have done.
arXiv Detail & Related papers (2023-07-20T17:33:25Z) - DDoD: Dual Denial of Decision Attacks on Human-AI Teams [29.584936458736813]
We propose Dual Denial of Decision (DDoD) attacks against collaborative Human-AI teams.
We discuss how such attacks aim to deplete both computational and human resources, and significantly impair decision-making capabilities.
arXiv Detail & Related papers (2022-12-07T22:30:17Z) - Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., single sample attack (SSA) and triggered samples attack (TSA).
arXiv Detail & Related papers (2022-07-25T03:24:58Z) - Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data; utilizing it effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z) - Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks [5.231607386266116]
We study the realistic threat of deployment-stage backdoor attacks on deep learning models.
We propose the first gray-box and physically realizable weights attack algorithm for backdoor injection.
Our results suggest the effectiveness and practicality of the proposed attack algorithm.
arXiv Detail & Related papers (2021-11-25T08:25:27Z) - Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
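The perplexity-based detection mentioned in the Baseline Defenses entry above can be illustrated with a short sketch (my own illustration, not that paper's code): a prompt whose perplexity under a small reference language model exceeds a calibrated threshold is flagged, since optimizer-generated adversarial suffixes tend to look like high-perplexity gibberish. The model name and threshold below are assumptions chosen for illustration.

```python
"""Toy sketch: flag prompts with unusually high perplexity under a small LM."""
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # any small causal LM works for this sketch
THRESHOLD = 1000.0    # illustrative; in practice calibrated on benign prompts

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the reference LM (exp of mean token NLL)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over tokens
    return float(torch.exp(loss))

def is_suspicious(prompt: str) -> bool:
    """Flag prompts whose perplexity exceeds the calibrated threshold."""
    return perplexity(prompt) > THRESHOLD

if __name__ == "__main__":
    print(is_suspicious("Please summarize this article for me."))        # likely False
    print(is_suspicious("x!! describing.-- ;) similarlyNow oppositeley"))  # gibberish-style suffix, likely True
```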