A Principles-based Ethics Assurance Argument Pattern for AI and
Autonomous Systems
- URL: http://arxiv.org/abs/2203.15370v4
- Date: Tue, 6 Jun 2023 14:04:04 GMT
- Title: A Principles-based Ethics Assurance Argument Pattern for AI and
Autonomous Systems
- Authors: Zoe Porter, Ibrahim Habli, John McDermid, Marten Kaas
- Abstract summary: An emerging proposition within the trustworthy AI and autonomous systems (AI/AS) research community is to use assurance cases to instil justified confidence.
This paper substantially develops the proposition and makes it concrete.
It brings together the assurance case methodology with a set of ethical principles to structure a principles-based ethics assurance argument pattern.
- Score: 5.45210704757922
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An assurance case is a structured argument, typically produced by safety
engineers, to communicate confidence that a critical or complex system, such as
an aircraft, will be acceptably safe within its intended context. Assurance
cases often inform third party approval of a system. One emerging proposition
within the trustworthy AI and autonomous systems (AI/AS) research community is
to use assurance cases to instil justified confidence that specific AI/AS will
be ethically acceptable when operational in well-defined contexts. This paper
substantially develops the proposition and makes it concrete. It brings
together the assurance case methodology with a set of ethical principles to
structure a principles-based ethics assurance argument pattern. The principles
are justice, beneficence, non-maleficence, and respect for human autonomy, with
the principle of transparency playing a supporting role. The argument pattern,
shortened to the acronym PRAISE, is described. The objective of the proposed
PRAISE argument pattern is to provide a reusable template for individual ethics
assurance cases, by which engineers, developers, operators, or regulators could
justify, communicate, or challenge a claim about the overall ethical
acceptability of the use of a specific AI/AS in a given socio-technical
context. We apply the pattern to the hypothetical use case of an autonomous
robo-taxi service in a city centre.
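The paper presents the PRAISE pattern in an assurance-case notation rather than in code, but to make the shape of the argument concrete, the following is a minimal illustrative sketch in Python of how a top-level claim of ethical acceptability might be decomposed into sub-claims for the four core principles, with transparency in a supporting role. The claim wordings, class names, and evidence items are hypothetical and are not taken from the paper.
```python
# A minimal, illustrative sketch of a principles-based ethics assurance argument,
# expressed as a simple claim tree. This is NOT the paper's formal PRAISE pattern
# (which is given as a graphical argument pattern); the claims and evidence items
# below are hypothetical placeholders for the robo-taxi use case.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Claim:
    """A claim in the argument, optionally supported by sub-claims and evidence."""
    statement: str
    sub_claims: List["Claim"] = field(default_factory=list)
    evidence: List[str] = field(default_factory=list)


# Hypothetical top-level claim for the robo-taxi use case discussed in the paper.
robo_taxi_case = Claim(
    statement="Use of the autonomous robo-taxi service in the city centre is ethically acceptable",
    sub_claims=[
        Claim("Justice: benefits and risks are distributed fairly across affected groups",
              evidence=["stakeholder impact assessment (hypothetical)"]),
        Claim("Beneficence: the service provides tangible benefit to users and the public",
              evidence=["mobility and accessibility analysis (hypothetical)"]),
        Claim("Non-maleficence: residual risk of harm is reduced as far as reasonably practicable",
              evidence=["safety case for the driving function (hypothetical)"]),
        Claim("Respect for human autonomy: users and other road users retain meaningful agency",
              evidence=["consent and override design review (hypothetical)"]),
    ],
    # Transparency plays a supporting role across the four core principles.
    evidence=["transparency documentation supporting all sub-claims (hypothetical)"],
)


def print_argument(claim: Claim, depth: int = 0) -> None:
    """Print the claim tree with indentation to show the argument structure."""
    indent = "  " * depth
    print(f"{indent}- {claim.statement}")
    for item in claim.evidence:
        print(f"{indent}  evidence: {item}")
    for sub in claim.sub_claims:
        print_argument(sub, depth + 1)


if __name__ == "__main__":
    print_argument(robo_taxi_case)
```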
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications [0.0]
This paper introduces a framework to ensure that AI is ethical, controllable, viable, and desirable.
Different case studies validate this framework by integrating AI in both academic and practical environments.
arXiv Detail & Related papers (2024-09-25T12:39:28Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the three core components of a GS AI system, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Informed AI Regulation: Comparing the Ethical Frameworks of Leading LLM Chatbots Using an Ethics-Based Audit to Assess Moral Reasoning and Normative Values [0.0]
Ethics-based audits play a pivotal role in the rapidly growing fields of AI safety and regulation.
This paper undertakes an ethics-based audit of eight leading commercial and open-source Large Language Models, including GPT-4.
arXiv Detail & Related papers (2024-01-09T14:57:30Z)
- Unpacking the Ethical Value Alignment in Big Models [46.560886177083084]
This paper provides an overview of the risks and challenges associated with big models, surveys existing AI ethics guidelines, and examines the ethical implications arising from the limitations of these models.
We introduce a novel conceptual paradigm for aligning the ethical values of big models and discuss promising research directions for alignment criteria, evaluation, and methods.
arXiv Detail & Related papers (2023-10-26T16:45:40Z)
- Ethics in conversation: Building an ethics assurance case for autonomous AI-enabled voice agents in healthcare [1.8964739087256175]
The principles-based ethics assurance argument pattern is one proposal in the AI ethics landscape.
This paper presents the interim findings of a case study applying this ethics assurance framework to the use of Dora, an AI-based telemedicine system.
arXiv Detail & Related papers (2023-05-23T16:04:59Z)
- Trustworthy AI: From Principles to Practices [44.67324097900778]
Many current AI systems have been found to be vulnerable to imperceptible attacks, biased against underrepresented groups, and lacking in user privacy protection, among other issues.
In this review, we strive to provide AI practitioners with a comprehensive guide to building trustworthy AI systems.
To unify the current fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems.
arXiv Detail & Related papers (2021-10-04T03:20:39Z)
- AI-Ethics by Design. Evaluating Public Perception on the Importance of Ethical Design Principles of AI [0.0]
We investigate how ethical principles are weighted in comparison to each other.
We show that different preference models for ethically designed systems exist among the German population.
arXiv Detail & Related papers (2021-06-01T09:01:14Z)
- Taking Principles Seriously: A Hybrid Approach to Value Alignment [7.75406296593749]
We propose that designers of value alignment (VA) systems incorporate ethics by utilizing a hybrid approach.
We show how principles derived from deontological ethics imply particular "test propositions" for any given action plan in an AI rule base.
This permits empirical VA to integrate seamlessly with independently justified ethical principles.
arXiv Detail & Related papers (2020-12-21T22:05:07Z)
- Learning from Learning Machines: Optimisation, Rules, and Social Norms [91.3755431537592]
It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
arXiv Detail & Related papers (2019-12-29T17:42:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.