On the Ethics of Using LLMs for Offensive Security
- URL: http://arxiv.org/abs/2506.08693v1
- Date: Tue, 10 Jun 2025 11:11:55 GMT
- Title: On the Ethics of Using LLMs for Offensive Security
- Authors: Andreas Happe, Jürgen Cito
- Abstract summary: Large Language Models (LLMs) have rapidly evolved over the past few years and are currently evaluated for their efficacy within the domain of offensive cyber-security. This paper analyzes a set of papers that leverage LLMs for offensive security, focusing on how ethical considerations are expressed and justified in their work.
- Score: 3.11537581064266
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) have rapidly evolved over the past few years and are currently evaluated for their efficacy within the domain of offensive cyber-security. While initial forays showcase the potential of LLMs to enhance security research, they also raise critical ethical concerns regarding the dual-use of offensive security tooling. This paper analyzes a set of papers that leverage LLMs for offensive security, focusing on how ethical considerations are expressed and justified in their work. The goal is to assess the culture of AI in offensive security research regarding ethics communication, highlighting trends, best practices, and gaps in current discourse. We provide insights into how the academic community navigates the fine line between innovation and ethical responsibility. In particular, our results show that 13 of 15 reviewed prototypes (86.6%) mentioned ethical considerations and are thus aware of the potential dual-use of their research. The main motivations given for the research were allowing broader access to penetration testing and preparing defenders for AI-guided attackers.
Related papers
- LLM Can be a Dangerous Persuader: Empirical Study of Persuasion Safety in Large Language Models [47.27098710953806]
We introduce PersuSafety, the first comprehensive framework for the assessment of persuasion safety. PersuSafety covers 6 diverse unethical persuasion topics and 15 common unethical strategies. Our study calls for more attention to improve safety alignment in progressive and goal-driven conversations such as persuasion.
arXiv Detail & Related papers (2025-04-14T17:20:34Z)
- The Only Way is Ethics: A Guide to Ethical Research with Large Language Models [53.316174782223115]
The 'LLM Ethics Whitepaper' is an open resource for NLP practitioners and those tasked with evaluating the ethical implications of others' work. Our goal is to translate ethics literature into concrete recommendations and provocations for thinking, with clear first steps. The 'LLM Ethics Whitepaper' distils a thorough literature review into clear Do's and Don'ts, which we also present in this paper.
arXiv Detail & Related papers (2024-12-20T16:14:43Z)
- Transparency, Security, and Workplace Training & Awareness in the Age of Generative AI [0.0]
As AI technologies advance, ethical considerations, transparency, data privacy, and their impact on human labor intersect with the drive for innovation and efficiency. Our research explores publicly accessible large language models (LLMs) that often operate on the periphery, away from mainstream scrutiny. Specifically, we examine Gab AI, a platform that centers around unrestricted communication and privacy, allowing users to interact freely without censorship.
arXiv Detail & Related papers (2024-12-19T17:40:58Z)
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- AI-Enhanced Ethical Hacking: A Linux-Focused Experiment [2.3020018305241337]
The study evaluates GenAI's effectiveness across the key stages of penetration testing on Linux-based target machines.
The report critically examines potential risks such as misuse, data biases, hallucination, and over-reliance on AI.
arXiv Detail & Related papers (2024-10-07T15:02:47Z)
- Beyond principlism: Practical strategies for ethical AI use in research practices [0.0]
The rapid adoption of generative artificial intelligence in scientific research has outpaced the development of ethical guidelines.
Existing approaches offer little practical guidance for addressing ethical challenges of AI in scientific research practices.
I propose a user-centered, realism-inspired approach to bridge the gap between abstract principles and day-to-day research practices.
arXiv Detail & Related papers (2024-01-27T03:53:25Z)
- The Ethics of Interaction: Mitigating Security Threats in LLMs [1.407080246204282]
The paper delves into the nuanced ethical repercussions of such security threats on society and individual privacy.
We scrutinize five major threats -- prompt injection, jailbreaking, Personally Identifiable Information (PII) exposure, sexually explicit content, and hate-based content -- to assess their critical ethical consequences and the urgency they create for robust defensive strategies.
arXiv Detail & Related papers (2024-01-22T17:11:37Z)
- Towards more Practical Threat Models in Artificial Intelligence Security [66.67624011455423]
Recent works have identified a gap between research and practice in artificial intelligence security.
We revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice.
arXiv Detail & Related papers (2023-11-16T16:09:44Z)
- Applying Standards to Advance Upstream & Downstream Ethics in Large Language Models [0.0]
This paper explores how AI-owners can develop safeguards for AI-generated content.
It draws from established codes of conduct and ethical standards in other content-creation industries.
arXiv Detail & Related papers (2023-06-06T08:47:42Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- On the Morality of Artificial Intelligence [154.69452301122175]
We propose conceptual and practical principles and guidelines for Machine Learning research and deployment.
We insist on concrete actions that can be taken by practitioners to pursue a more ethical and moral practice of ML aimed at using AI for social good.
arXiv Detail & Related papers (2019-12-26T23:06:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.