Informed AI Regulation: Comparing the Ethical Frameworks of Leading LLM
Chatbots Using an Ethics-Based Audit to Assess Moral Reasoning and Normative
Values
- URL: http://arxiv.org/abs/2402.01651v1
- Date: Tue, 9 Jan 2024 14:57:30 GMT
- Title: Informed AI Regulation: Comparing the Ethical Frameworks of Leading LLM
Chatbots Using an Ethics-Based Audit to Assess Moral Reasoning and Normative
Values
- Authors: Jon Chun and Katherine Elkins
- Abstract summary: Ethics-based audits play a pivotal role in the rapidly growing fields of AI safety and regulation.
This paper undertakes an ethics-based audit to probe the 8 leading commercial and open-source Large Language Models including GPT-4.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rise of individual and collaborative networks of autonomous agents,
AI is deployed in more key reasoning and decision-making roles. For this
reason, ethics-based audits play a pivotal role in the rapidly growing fields
of AI safety and regulation. This paper undertakes an ethics-based audit to
probe the 8 leading commercial and open-source Large Language Models including
GPT-4. We assess explicability and trustworthiness by a) establishing how well
different models engage in moral reasoning and b) comparing normative values
underlying models as ethical frameworks. We employ an experimental,
evidence-based approach that challenges the models with ethical dilemmas in
order to probe human-AI alignment. The ethical scenarios are designed to
require a decision in which the particulars of the situation may or may not
necessitate deviating from normative ethical principles. A sophisticated
ethical framework was consistently elicited in one model, GPT-4. Nonetheless,
troubling findings include underlying normative frameworks with clear bias
towards particular cultural norms. Many models also exhibit disturbing
authoritarian tendencies. Code is available at
https://github.com/jonchun/llm-sota-chatbots-ethics-based-audit.
Related papers
- Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications [0.0]
This paper introduces a framework ensuring that AI must be ethical, controllable, viable, and desirable.
Different case studies validate this framework by integrating AI in both academic and practical environments.
arXiv Detail & Related papers (2024-09-25T12:39:28Z) - Exploring and steering the moral compass of Large Language Models [55.2480439325792]
Large Language Models (LLMs) have become central to advancing automation and decision-making across various sectors.
This study proposes a comprehensive comparative analysis of the most advanced LLMs to assess their moral profiles.
arXiv Detail & Related papers (2024-05-27T16:49:22Z) - Ethical-Lens: Curbing Malicious Usages of Open-Source Text-to-Image Models [51.69735366140249]
We introduce Ethical-Lens, a framework designed to facilitate the value-aligned usage of text-to-image tools.
Ethical-Lens ensures value alignment in text-to-image models across toxicity and bias dimensions.
Our experiments reveal that Ethical-Lens enhances alignment capabilities to levels comparable with or superior to commercial models.
arXiv Detail & Related papers (2024-04-18T11:38:25Z) - Towards Responsible AI in Banking: Addressing Bias for Fair
Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Unpacking the Ethical Value Alignment in Big Models [46.560886177083084]
This paper provides an overview of the risks and challenges associated with big models, surveys existing AI ethics guidelines, and examines the ethical implications arising from the limitations of these models.
We introduce a novel conceptual paradigm for aligning the ethical values of big models and discuss promising research directions for alignment criteria, evaluation, and method.
arXiv Detail & Related papers (2023-10-26T16:45:40Z) - Ethical Reasoning over Moral Alignment: A Case and Framework for
In-Context Ethical Policies in LLMs [19.675262411557235]
We argue that instead of morally aligning LLMs to specific set of ethical principles, we should infuse generic ethical reasoning capabilities into them.
We develop a framework that integrates moral dilemmas with moral principles pertaining to different foramlisms of normative ethics.
arXiv Detail & Related papers (2023-10-11T07:27:34Z) - Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, which uses a large set of annotated data to train models based on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z) - AI-Ethics by Design. Evaluating Public Perception on the Importance of
Ethical Design Principles of AI [0.0]
We investigate how ethical principles are weighted in comparison to each other.
We show that different preference models for ethically designed systems exist among the German population.
arXiv Detail & Related papers (2021-06-01T09:01:14Z) - Ethics-Based Auditing to Develop Trustworthy AI [0.0]
We argue that ethics-based auditing can improve the quality of decision making, increase user satisfaction, unlock growth potential, enable law-making, and relieve human suffering.
To be feasible and effective, ethics-based auditing should take the form of a continuous and constructive process, approach ethical alignment from a system perspective, and be aligned with public policies and incentives for ethically desirable behaviour.
arXiv Detail & Related papers (2021-04-30T11:39:40Z) - Case Study: Deontological Ethics in NLP [119.53038547411062]
We study one ethical theory, namely deontological ethics, from the perspective of NLP.
In particular, we focus on the generalization principle and the respect for autonomy through informed consent.
We provide four case studies to demonstrate how these principles can be used with NLP systems.
arXiv Detail & Related papers (2020-10-09T16:04:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.