Multi-Agent LLMs as Ethics Advocates for AI-Based Systems
- URL: http://arxiv.org/abs/2507.08392v3
- Date: Tue, 26 Aug 2025 05:43:26 GMT
- Title: Multi-Agent LLMs as Ethics Advocates for AI-Based Systems
- Authors: Asma Yamani, Malak Baslyman, Moataz Ahmed
- Abstract summary: This study proposes a framework for generating ethics requirements by introducing an ethics advocate agent in a multi-agent LLM setting. This agent critiques and provides input on ethical issues based on the system description. We believe this work can facilitate the broader adoption of ethics in the requirements engineering process, ultimately leading to more ethically aligned products.
- Score: 2.1665689529884697
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Incorporating ethics into the requirements elicitation process is essential for creating ethically aligned systems. Although manually eliciting ethics requirements is effective, it requires diverse input from multiple stakeholders, which can be challenging due to time and resource constraints. Moreover, ethics is often given a low priority in the requirements elicitation process. This study proposes a framework for generating ethics requirements drafts by introducing an ethics advocate agent in a multi-agent LLM setting. This agent critiques and provides input on ethical issues based on the system description. The proposed framework is evaluated through two case studies from different contexts, demonstrating that it captures the majority of ethics requirements identified by researchers during 30-minute interviews and introduces several additional relevant requirements. However, it also highlights reliability issues in generating ethics requirements, emphasizing the need for human feedback in this sensitive domain. We believe this work can facilitate the broader adoption of ethics in the requirements engineering process, ultimately leading to more ethically aligned products.
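The interaction the abstract describes (a requirements-drafting agent whose output is critiqued by a dedicated ethics advocate agent, then revised) can be sketched as a simple two-agent loop. The agent roles, prompts, and the `call_llm` stub below are illustrative assumptions, not the paper's actual implementation; a real system would replace the stub with calls to an LLM API.

```python
# Illustrative sketch of an "ethics advocate" critique loop.
# The role names, prompts, and canned responses are hypothetical;
# call_llm stands in for a real LLM client.

def call_llm(role: str, prompt: str) -> str:
    """Stub standing in for a real LLM call."""
    if role == "requirements_engineer":
        return "R1: The system shall log all user actions."
    # ethics_advocate: critique the draft from an ethical standpoint
    return ("Ethics concern: logging all user actions may violate "
            "privacy; add a data-minimization requirement.")

def elicit_with_ethics_advocate(system_description: str, rounds: int = 1) -> list:
    """Draft requirements, then alternate critique and revision."""
    transcript = []
    draft = call_llm("requirements_engineer",
                     f"Draft requirements for: {system_description}")
    transcript.append(draft)
    for _ in range(rounds):
        critique = call_llm("ethics_advocate",
                            f"Critique ethical issues in: {draft}")
        transcript.append(critique)
        draft = call_llm("requirements_engineer",
                         f"Revise '{draft}' given: {critique}")
        transcript.append(draft)
    return transcript

log = elicit_with_ethics_advocate("a telehealth triage chatbot")
```

The transcript keeps the critique alongside each draft, which matches the paper's emphasis that the advocate's output is a draft requiring human review rather than a final requirement.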
Related papers
- Mirror: A Multi-Agent System for AI-Assisted Ethics Review [104.3684024153469]
Mirror is an agentic framework for AI-assisted ethical review. It integrates ethical reasoning, structured rule interpretation, and multi-agent deliberation within a unified architecture.
arXiv Detail & Related papers (2026-02-09T03:38:55Z)
- Fuzzy Representation of Norms [1.0098114696565863]
This paper proposes a logical representation of SLEEC rules and presents a methodology to embed these ethical requirements using test-score semantics and fuzzy logic. The use of fuzzy logic is motivated by the view of ethics as a domain of possibilities, which allows the resolution of ethical dilemmas that AI systems may encounter.
arXiv Detail & Related papers (2026-01-06T12:51:18Z)
- From Values to Frameworks: A Qualitative Study of Ethical Reasoning in Agentic AI Practitioners [0.0]
Agentic artificial intelligence systems are autonomous technologies capable of pursuing complex goals with minimal human oversight. While these systems promise major gains in productivity, they also raise new ethical challenges. This paper investigates the ethical reasoning of AI practitioners through qualitative interviews centered on structured dilemmas in agentic AI deployment.
arXiv Detail & Related papers (2025-12-24T00:58:41Z)
- The AI Imperative: Scaling High-Quality Peer Review in Machine Learning [49.87236114682497]
We argue that AI-assisted peer review must become an urgent research and infrastructure priority. We propose specific roles for AI in enhancing factual verification, guiding reviewer performance, assisting authors in quality improvement, and supporting ACs in decision-making.
arXiv Detail & Related papers (2025-06-09T18:37:14Z)
- The Only Way is Ethics: A Guide to Ethical Research with Large Language Models [53.316174782223115]
The 'LLM Ethics Whitepaper' is an open resource for NLP practitioners and those tasked with evaluating the ethical implications of others' work. Our goal is to translate the ethics literature into concrete recommendations and provocations for thinking, with clear first steps. The 'LLM Ethics Whitepaper' distils a thorough literature review into clear Do's and Don'ts, which we also present in this paper.
arXiv Detail & Related papers (2024-12-20T16:14:43Z)
- Can We Trust AI Agents? A Case Study of an LLM-Based Multi-Agent System for Ethical AI [10.084913433923566]
AI-based systems impact millions by supporting diverse tasks but face issues like misinformation, bias, and misuse. This study examines the use of Large Language Models (LLMs) for AI ethics in practice. We design a prototype in which agents engage in structured discussions on real-world AI ethics issues from the AI Incident Database.
arXiv Detail & Related papers (2024-10-25T20:17:59Z)
- Informed AI Regulation: Comparing the Ethical Frameworks of Leading LLM Chatbots Using an Ethics-Based Audit to Assess Moral Reasoning and Normative Values [0.0]
Ethics-based audits play a pivotal role in the rapidly growing fields of AI safety and regulation.
This paper undertakes an ethics-based audit to probe eight leading commercial and open-source Large Language Models, including GPT-4.
arXiv Detail & Related papers (2024-01-09T14:57:30Z)
- Unpacking the Ethical Value Alignment in Big Models [46.560886177083084]
This paper provides an overview of the risks and challenges associated with big models, surveys existing AI ethics guidelines, and examines the ethical implications arising from the limitations of these models.
We introduce a novel conceptual paradigm for aligning the ethical values of big models and discuss promising research directions for alignment criteria, evaluation, and method.
arXiv Detail & Related papers (2023-10-26T16:45:40Z)
- EALM: Introducing Multidimensional Ethical Alignment in Conversational Information Retrieval [43.72331337131317]
We introduce a workflow that integrates ethical alignment with an initial ethical judgment stage for efficient data screening.
We present the QA-ETHICS dataset adapted from the ETHICS benchmark, which serves as an evaluation tool by unifying scenarios and label meanings.
In addition, we suggest a new approach that achieves top performance in both binary and multi-label ethical judgment tasks.
arXiv Detail & Related papers (2023-10-02T08:22:34Z)
- Implementing AI Ethics: Making Sense of the Ethical Requirements [6.244518754129957]
We use the Ethics Guidelines for Trustworthy AI as our reference for ethical requirements and an Agile portfolio management framework to analyze implementation.
Our findings reveal that the privacy and data governance ethical requirements are generally treated as legal requirements, with no other ethical requirements identified for consideration.
The findings also show a practicable consideration of ethical requirements: technical robustness and safety are implemented as risk requirements, and societal and environmental well-being as sustainability requirements.
arXiv Detail & Related papers (2023-06-11T19:13:36Z)
- Ethics in conversation: Building an ethics assurance case for autonomous AI-enabled voice agents in healthcare [1.8964739087256175]
The principles-based ethics assurance argument pattern is one proposal in the AI ethics landscape.
This paper presents the interim findings of a case study applying this ethics assurance framework to the use of Dora, an AI-based telemedicine system.
arXiv Detail & Related papers (2023-05-23T16:04:59Z)
- AiSocrates: Towards Answering Ethical Quandary Questions [51.53350252548668]
AiSocrates is a system for the deliberative exchange of different perspectives on an ethical quandary.
We show that AiSocrates generates promising answers to ethical quandary questions with multiple perspectives.
We argue that AiSocrates is a promising step toward developing an NLP system that explicitly incorporates human values via prompt instructions.
arXiv Detail & Related papers (2022-05-12T09:52:59Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Case Study: Deontological Ethics in NLP [119.53038547411062]
We study one ethical theory, namely deontological ethics, from the perspective of NLP.
In particular, we focus on the generalization principle and the respect for autonomy through informed consent.
We provide four case studies to demonstrate how these principles can be used with NLP systems.
arXiv Detail & Related papers (2020-10-09T16:04:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.