Ethics in conversation: Building an ethics assurance case for autonomous
AI-enabled voice agents in healthcare
- URL: http://arxiv.org/abs/2305.14182v1
- Date: Tue, 23 May 2023 16:04:59 GMT
- Title: Ethics in conversation: Building an ethics assurance case for autonomous
AI-enabled voice agents in healthcare
- Authors: Marten H. L. Kaas, Zoe Porter, Ernest Lim, Aisling Higham, Sarah
Khavandi and Ibrahim Habli
- Abstract summary: The principles-based ethics assurance argument pattern is one proposal in the AI ethics landscape.
This paper presents the interim findings of a case study applying this ethics assurance framework to the use of Dora, an AI-based telemedicine system.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The deployment and use of AI systems should be both safe and broadly
ethically acceptable. The principles-based ethics assurance argument pattern is
one proposal in the AI ethics landscape that seeks to support and achieve that
aim. The purpose of this argument pattern or framework is to structure
reasoning about, and to communicate and foster confidence in, the ethical
acceptability of uses of specific real-world AI systems in complex
socio-technical contexts. This paper presents the interim findings of a case
study applying this ethics assurance framework to the use of Dora, an AI-based
telemedicine system, to assess its viability and usefulness as an approach. The
case study process to date has revealed some of the positive ethical impacts of
the Dora platform, as well as unexpected insights and areas to prioritise for
evaluation, such as risks to the frontline clinician, particularly in respect
of clinician autonomy. The ethics assurance argument pattern offers a practical
framework not just for identifying issues to be addressed, but also to start to
construct solutions in the form of adjustments to the distribution of benefits,
risks and constraints on human autonomy that could reduce ethical disparities
across affected stakeholders. Though many challenges remain, this research
represents a step towards the development and use of safe and ethically
acceptable AI systems and, ideally, a shift towards more comprehensive and
inclusive evaluations of AI systems in general.
Related papers
- Fair by design: A sociotechnical approach to justifying the fairness of AI-enabled systems across the lifecycle [0.8164978442203773]
Fairness is one of the most commonly identified ethical principles in existing AI guidelines.
The development of fair AI-enabled systems is required by new and emerging AI regulation.
arXiv Detail & Related papers (2024-06-13T12:03:29Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical importance of addressing bias within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Investigating Responsible AI for Scientific Research: An Empirical Study [4.597781832707524]
The push for Responsible AI (RAI) in research institutions underscores the increasing emphasis on integrating ethical considerations into AI design and development.
This paper aims to assess the awareness and preparedness regarding the ethical risks inherent in AI design and development.
Our results have revealed certain knowledge gaps concerning ethical, responsible, and inclusive AI, with limitations in awareness of the available AI ethics frameworks.
arXiv Detail & Related papers (2023-12-15T06:40:27Z)
- Towards A Unified Utilitarian Ethics Framework for Healthcare Artificial Intelligence [0.08192907805418582]
This study attempts to identify the major ethical principles influencing the utility performance of AI at different technological levels.
Justice, privacy, bias, lack of regulations, risks, and interpretability are the most important principles to consider for ethical AI.
We propose a new utilitarian ethics-based theoretical framework for designing ethical AI for the healthcare domain.
arXiv Detail & Related papers (2023-09-26T02:10:58Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has neither a set of benchmarks nor a commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- A Principles-based Ethics Assurance Argument Pattern for AI and Autonomous Systems [5.45210704757922]
An emerging proposition within the trustworthy AI and autonomous systems (AI/AS) research community is to use assurance cases to instil justified confidence.
This paper substantially develops the proposition and makes it concrete.
It brings together the assurance case methodology with a set of ethical principles to structure a principles-based ethics assurance argument pattern.
arXiv Detail & Related papers (2022-03-29T09:08:03Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- An Ecosystem Approach to Ethical AI and Data Use: Experimental Reflections [0.0]
This paper offers a methodology to identify the needs of AI practitioners when it comes to confronting and resolving ethical challenges.
We offer a grassroots approach to operational ethics based on dialog and mutualised responsibility.
arXiv Detail & Related papers (2020-12-27T07:41:26Z)
- Case Study: Deontological Ethics in NLP [119.53038547411062]
We study one ethical theory, namely deontological ethics, from the perspective of NLP.
In particular, we focus on the generalization principle and the respect for autonomy through informed consent.
We provide four case studies to demonstrate how these principles can be used with NLP systems.
arXiv Detail & Related papers (2020-10-09T16:04:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.