Auditing the Ethical Logic of Generative AI Models
- URL: http://arxiv.org/abs/2504.17544v1
- Date: Thu, 24 Apr 2025 13:32:30 GMT
- Title: Auditing the Ethical Logic of Generative AI Models
- Authors: W. Russell Neuman, Chad Coleman, Ali Dasdan, Safinah Ali, Manan Shah
- Abstract summary: This paper introduces a five-dimensional audit model to evaluate the ethical logic of leading large language models (LLMs). We benchmark seven major LLMs, finding that while models generally converge on ethical decisions, they vary in explanatory rigor and moral prioritization. Chain-of-Thought prompting and reasoning-optimized models significantly enhance performance on our audit metrics.
- Score: 6.0972634521845475
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As generative AI models become increasingly integrated into high-stakes domains, the need for robust methods to evaluate their ethical reasoning becomes ever more important. This paper introduces a five-dimensional audit model -- assessing Analytic Quality, Breadth of Ethical Considerations, Depth of Explanation, Consistency, and Decisiveness -- to evaluate the ethical logic of leading large language models (LLMs). Drawing on traditions from applied ethics and higher-order thinking, we present a multi-battery prompt approach, including novel ethical dilemmas, to probe the models' reasoning across diverse contexts. We benchmark seven major LLMs, finding that while models generally converge on ethical decisions, they vary in explanatory rigor and moral prioritization. Chain-of-Thought prompting and reasoning-optimized models significantly enhance performance on our audit metrics. This study introduces a scalable methodology for ethical benchmarking of AI systems and highlights the potential for AI to complement human moral reasoning in complex decision-making contexts.
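To make the audit procedure concrete, the following is a minimal sketch of how such a prompt battery might be run and scored, assuming a caller-supplied model client and rubric rater. The five dimension names come from the abstract; every function name, signature, and scoring detail below is a hypothetical illustration, not the authors' implementation.

```python
# Minimal sketch of the paper's audit loop -- NOT the authors' code.
# The five dimension names are taken from the abstract; all functions,
# signatures, and scoring rules here are hypothetical illustrations.

from dataclasses import dataclass
from typing import Callable, Dict, List

DIMENSIONS = [
    "Analytic Quality",
    "Breadth of Ethical Considerations",
    "Depth of Explanation",
    "Consistency",
    "Decisiveness",
]

@dataclass
class AuditResult:
    dilemma: str
    response: str
    scores: Dict[str, int]  # dimension -> rubric score, e.g. 1-5

def build_prompt(dilemma: str, chain_of_thought: bool = True) -> str:
    """Wrap an ethical dilemma in an audit prompt, optionally with CoT."""
    suffix = (
        "Reason step by step about the competing ethical considerations, "
        "then state and justify your decision."
        if chain_of_thought
        else "State and justify your decision."
    )
    return f"{dilemma}\n\n{suffix}"

def audit_model(
    ask_model: Callable[[str], str],   # e.g. a wrapper around an LLM API
    rate: Callable[[str, str], int],   # (response, dimension) -> score
    dilemmas: List[str],
    chain_of_thought: bool = True,
) -> List[AuditResult]:
    """Run each dilemma through the model and score the reply on all
    five dimensions; the rater could be a human or an LLM-based rubric."""
    results = []
    for dilemma in dilemmas:
        response = ask_model(build_prompt(dilemma, chain_of_thought))
        scores = {dim: rate(response, dim) for dim in DIMENSIONS}
        results.append(AuditResult(dilemma, response, scores))
    return results
```

Toggling chain_of_thought on and off over the same dilemma set would reproduce, in miniature, the abstract's comparison of Chain-of-Thought prompting against plain prompting.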
Related papers
- The Convergent Ethics of AI? Analyzing Moral Foundation Priorities in Large Language Models with a Multi-Framework Approach [6.0972634521845475]
This paper introduces the Priorities in Reasoning and Intrinsic Moral Evaluation (PRIME) framework.
PRIME is a comprehensive methodology for analyzing moral priorities across foundational ethical dimensions.
We apply this framework to six leading large language models (LLMs) through a dual-protocol approach.
arXiv Detail & Related papers (2025-04-27T14:26:48Z)
- Bridging the Gap: Integrating Ethics and Environmental Sustainability in AI Research and Practice [57.94036023167952]
We argue that efforts to study AI's ethical ramifications should be made in tandem with those evaluating its impacts on the environment.
We propose best practices to better integrate AI ethics and sustainability in AI research and practice.
arXiv Detail & Related papers (2025-04-01T13:53:11Z)
- Trade-offs in Large Reasoning Models: An Empirical Analysis of Deliberative and Adaptive Reasoning over Foundational Capabilities [101.77467538102924]
Recent advancements in Large Reasoning Models (LRMs) have demonstrated remarkable performance in specialized reasoning tasks.
We show that acquiring deliberative reasoning capabilities significantly reduces the foundational capabilities of LRMs.
We demonstrate that adaptive reasoning -- employing modes like Zero-Thinking, Less-Thinking, and Summary-Thinking -- can effectively alleviate these drawbacks.
arXiv Detail & Related papers (2025-03-23T08:18:51Z)
- LogiDynamics: Unraveling the Dynamics of Logical Inference in Large Language Model Reasoning [49.58786377307728]
This paper adopts an exploratory approach by introducing a controlled evaluation environment for analogical reasoning.
We analyze the comparative dynamics of inductive, abductive, and deductive inference pipelines.
We investigate advanced paradigms such as hypothesis selection, verification, and refinement, revealing their potential to scale up logical inference.
arXiv Detail & Related papers (2025-02-16T15:54:53Z)
- On the Reasoning Capacity of AI Models and How to Quantify It [0.0]
Large Language Models (LLMs) have intensified the debate surrounding the fundamental nature of their reasoning capabilities.
While achieving high performance on benchmarks such as GPQA and MMLU, these models exhibit limitations in more complex reasoning tasks.
We propose a novel phenomenological approach that goes beyond traditional accuracy metrics to probe the underlying mechanisms of model behavior.
arXiv Detail & Related papers (2025-01-23T16:58:18Z)
- The Moral Mind(s) of Large Language Models [0.0]
Key questions arise as large language models (LLMs) become integrated into decision-making across various sectors.
We present approximately forty models from major providers with a structured set of ethical scenarios.
Our rationality tests revealed that at least one model from each provider exhibited behavior consistent with approximately stable moral principles.
arXiv Detail & Related papers (2024-11-19T15:40:16Z)
- Large-scale moral machine experiment on large language models [0.0]
We evaluate moral judgments across 52 different Large Language Models (LLMs) in autonomous driving scenarios.
Proprietary models and open-source models exceeding 10 billion parameters demonstrated relatively close alignment with human judgments.
However, model updates did not consistently improve alignment with human preferences, and many LLMs showed excessive emphasis on specific ethical principles.
arXiv Detail & Related papers (2024-11-11T08:36:49Z)
- Data Analysis in the Era of Generative AI [56.44807642944589]
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of the data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z)
- Informed AI Regulation: Comparing the Ethical Frameworks of Leading LLM Chatbots Using an Ethics-Based Audit to Assess Moral Reasoning and Normative Values [0.0]
Ethics-based audits play a pivotal role in the rapidly growing fields of AI safety and regulation.
This paper undertakes an ethics-based audit to probe eight leading commercial and open-source Large Language Models, including GPT-4.
arXiv Detail & Related papers (2024-01-09T14:57:30Z)
- Unpacking the Ethical Value Alignment in Big Models [46.560886177083084]
This paper provides an overview of the risks and challenges associated with big models, surveys existing AI ethics guidelines, and examines the ethical implications arising from the limitations of these models.
We introduce a novel conceptual paradigm for aligning the ethical values of big models and discuss promising research directions for alignment criteria, evaluation, and methods.
arXiv Detail & Related papers (2023-10-26T16:45:40Z)
- Are Neural Topic Models Broken? [81.15470302729638]
We study the relationship between automated and human evaluation of topic models.
We find that neural topic models fare worse under both automated and human evaluation than an established classical method.
arXiv Detail & Related papers (2022-10-28T14:38:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.