A Guide to Stakeholder Analysis for Cybersecurity Researchers
- URL: http://arxiv.org/abs/2508.14796v1
- Date: Wed, 20 Aug 2025 15:48:19 GMT
- Title: A Guide to Stakeholder Analysis for Cybersecurity Researchers
- Authors: James C Davis, Sophie Chen, Huiyun Peng, Paschal C Amusuo, Kelechi G Kalu, et al.
- Abstract summary: Stakeholder-based ethics analysis is now a formal requirement for submissions to top cybersecurity research venues. This guide provides practical support for that requirement by enumerating stakeholder types and mapping them to common empirical research methods. Our goal is to help research teams meet new ethics mandates with confidence and clarity, not confusion.
- Score: 2.7325338323814328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stakeholder-based ethics analysis is now a formal requirement for submissions to top cybersecurity research venues. This requirement reflects a growing consensus that cybersecurity researchers must go beyond providing capabilities to anticipating and mitigating the potential harms thereof. However, many cybersecurity researchers may be uncertain about how to proceed in an ethics analysis. In this guide, we provide practical support for that requirement by enumerating stakeholder types and mapping them to common empirical research methods. We also offer worked examples to demonstrate how researchers can identify likely stakeholder exposures in real-world projects. Our goal is to help research teams meet new ethics mandates with confidence and clarity, not confusion.
Related papers
- Mirror: A Multi-Agent System for AI-Assisted Ethics Review [104.3684024153469]
Mirror is an agentic framework for AI-assisted ethical review. It integrates ethical reasoning, structured rule interpretation, and multi-agent deliberation within a unified architecture.
arXiv Detail & Related papers (2026-02-09T03:38:55Z)
- Leveraging Large Language Models for Cybersecurity Risk Assessment -- A Case from Forestry Cyber-Physical Systems [2.767743577831705]
In safety-critical software systems, cybersecurity activities become essential. In many software teams, cybersecurity experts are either entirely absent or represented by only a small number of specialists. This creates a need for a tool to support cybersecurity experts and engineers in evaluating vulnerabilities and threats.
arXiv Detail & Related papers (2025-10-07T18:07:16Z)
- Identity Theft in AI Conference Peer Review [50.18240135317708]
We discuss newly uncovered cases of identity theft in the scientific peer-review process within artificial intelligence (AI) research. We detail how dishonest researchers exploit the peer-review system by creating fraudulent reviewer profiles to manipulate paper evaluations.
arXiv Detail & Related papers (2025-08-06T02:36:52Z)
- On the Ethics of Using LLMs for Offensive Security [3.11537581064266]
Large Language Models (LLMs) have rapidly evolved over the past few years and are currently evaluated for their efficacy within the domain of offensive cybersecurity. This paper analyzes a set of papers that leverage LLMs for offensive security, focusing on how ethical considerations are expressed and justified in their work.
arXiv Detail & Related papers (2025-06-10T11:11:55Z)
- Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives [47.17703009473386]
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in general-purpose AI systems (GPAIS).
arXiv Detail & Related papers (2024-07-02T07:49:48Z)
- A Survey of Privacy-Preserving Model Explanations: Privacy Risks, Attacks, and Countermeasures [50.987594546912725]
Despite a growing corpus of research in AI privacy and explainability, there is little attention on privacy-preserving model explanations.
This article presents the first thorough survey about privacy attacks on model explanations and their countermeasures.
arXiv Detail & Related papers (2024-03-31T12:44:48Z)
- A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting such research or releasing their findings will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z)
- Risks of AI Scientists: Prioritizing Safeguarding Over Autonomy [65.77763092833348]
This perspective examines vulnerabilities in AI scientists, shedding light on potential risks associated with their misuse. We take into account user intent, the specific scientific domain, and their potential impact on the external environment. We propose a triadic framework involving human regulation, agent alignment, and an understanding of environmental feedback.
arXiv Detail & Related papers (2024-02-06T18:54:07Z)
- Practical Cybersecurity Ethics: Mapping CyBOK to Ethical Concerns [13.075370397377078]
We use ongoing work on the Cyber Security Body of Knowledge (CyBOK) to help elicit and document the responsibilities and ethics of the profession.
Based on a literature review of the ethics of cybersecurity, we use CyBOK to frame the exploration of ethical challenges in the cybersecurity profession.
Our findings indicate that there are broad ethical challenges across the whole of cybersecurity, but also that different areas of cybersecurity can face specific ethical considerations.
arXiv Detail & Related papers (2023-11-16T19:44:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.