Fair by design: A sociotechnical approach to justifying the fairness of AI-enabled systems across the lifecycle
- URL: http://arxiv.org/abs/2406.09029v1
- Date: Thu, 13 Jun 2024 12:03:29 GMT
- Title: Fair by design: A sociotechnical approach to justifying the fairness of AI-enabled systems across the lifecycle
- Authors: Marten H. L. Kaas, Christopher Burr, Zoe Porter, Berk Ozturk, Philippa Ryan, Michael Katell, Nuala Polo, Kalle Westerling, Ibrahim Habli
- Abstract summary: Fairness is one of the most commonly identified ethical principles in existing AI guidelines.
The development of fair AI-enabled systems is required by new and emerging AI regulation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fairness is one of the most commonly identified ethical principles in existing AI guidelines, and the development of fair AI-enabled systems is required by new and emerging AI regulation. But most approaches to addressing the fairness of AI-enabled systems are limited in scope in two significant ways: their substantive content focuses on statistical measures of fairness, and they do not emphasize the need to identify and address fairness considerations across the whole AI lifecycle. Our contribution is to present an assurance framework and tool that can enable a practical and transparent method for widening the scope of fairness considerations across the AI lifecycle and move the discussion beyond mere statistical notions of fairness to consider a richer analysis in a practical and context-dependent manner. To illustrate this approach, we first describe and then apply the framework of Trustworthy and Ethical Assurance (TEA) to an AI-enabled clinical diagnostic support system (CDSS) whose purpose is to help clinicians predict the risk of developing hypertension in patients with Type 2 diabetes, a context in which several fairness considerations arise (e.g., discrimination against patient subgroups). This is supplemented by an open-source tool and a fairness considerations map to help facilitate reasoning about the fairness of AI-enabled systems in a participatory way. In short, by using a shared framework for identifying, documenting and justifying fairness considerations, and then using this deliberative exercise to structure an assurance case, research on AI fairness becomes reusable and generalizable for others in the ethical AI community and for sharing best practices for achieving fairness and equity in digital health and healthcare in particular.
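The abstract contrasts narrow statistical measures of fairness with a richer, lifecycle-wide analysis. For background, here is a minimal sketch (not from the paper; the predictions, labels, and group assignments are invented for illustration) of two such statistical measures, which capture the kind of subgroup disparity the CDSS example is concerned with:

```python
# Two common statistical fairness measures for a binary classifier.
# Illustrative sketch only: the data below is invented, not from the paper.

def demographic_parity_difference(preds, groups):
    """Absolute gap in positive-prediction rates between two subgroups."""
    rate = {}
    for g in set(groups):
        selected = [p for p, gg in zip(preds, groups) if gg == g]
        rate[g] = sum(selected) / len(selected)
    a, b = sorted(rate)
    return abs(rate[a] - rate[b])

def equal_opportunity_difference(preds, labels, groups):
    """Absolute gap in true-positive rates (recall) between two subgroups."""
    tpr = {}
    for g in set(groups):
        pos = [p for p, y, gg in zip(preds, labels, groups) if gg == g and y == 1]
        tpr[g] = sum(pos) / len(pos)
    a, b = sorted(tpr)
    return abs(tpr[a] - tpr[b])

# Hypothetical predictions from a hypertension-risk classifier for
# two patient subgroups, "A" and "B".
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 1, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_difference(preds, groups))          # 0.5
print(equal_opportunity_difference(preds, labels, groups))   # 0.5
```

Metrics like these are easy to compute but, as the paper argues, they say nothing about fairness considerations elsewhere in the lifecycle (problem framing, data collection, deployment context), which is the gap the TEA framework aims to fill.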
Related papers
- Towards Clinical AI Fairness: Filling Gaps in the Puzzle [15.543248260582217]
This review systematically pinpoints several deficiencies concerning both healthcare data and the provided AI fairness solutions.
We highlight the scarcity of research on AI fairness in many medical domains where AI technology is increasingly utilized.
To bridge these gaps, our review advances actionable strategies for both the healthcare and AI research communities.
arXiv Detail & Related papers (2024-05-28T07:42:55Z)
- The Impossibility of Fair LLMs [59.424918263776284]
The need for fair AI is increasingly clear in the era of large language models (LLMs).
We review the technical frameworks that machine learning researchers have used to evaluate fairness.
We develop guidelines for the more realistic goal of achieving fairness in particular use cases.
arXiv Detail & Related papers (2024-05-28T04:36:15Z)
- AI Fairness in Practice [0.46671368497079174]
There is a broad spectrum of views across society on what the concept of fairness means and how it should be put to practice.
This workbook explores how a context-based approach to understanding AI Fairness can help project teams better identify, mitigate, and manage the many ways that unfair bias and discrimination can crop up across the AI project workflow.
arXiv Detail & Related papers (2024-02-19T23:02:56Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical importance of addressing biases as part of corporate development culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Towards A Unified Utilitarian Ethics Framework for Healthcare Artificial Intelligence [0.08192907805418582]
This study attempts to identify the major ethical principles influencing the utility performance of AI at different technological levels.
Justice, privacy, bias, lack of regulations, risks, and interpretability are the most important principles to consider for ethical AI.
We propose a new utilitarian ethics-based theoretical framework for designing ethical AI for the healthcare domain.
arXiv Detail & Related papers (2023-09-26T02:10:58Z)
- FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare [73.78776682247187]
Concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI.
This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.
arXiv Detail & Related papers (2023-08-11T10:49:05Z)
- Ethics in conversation: Building an ethics assurance case for autonomous AI-enabled voice agents in healthcare [1.8964739087256175]
The principles-based ethics assurance argument pattern is one proposal in the AI ethics landscape.
This paper presents the interim findings of a case study applying this ethics assurance framework to the use of Dora, an AI-based telemedicine system.
arXiv Detail & Related papers (2023-05-23T16:04:59Z)
- Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Getting Fairness Right: Towards a Toolbox for Practitioners [2.4364387374267427]
The potential risk of AI systems unintentionally embedding and reproducing bias has attracted the attention of machine learning practitioners and society at large.
This paper proposes to draft a toolbox which helps practitioners to ensure fair AI practices.
arXiv Detail & Related papers (2020-03-15T20:53:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.