Revisiting Technical Bias Mitigation Strategies
- URL: http://arxiv.org/abs/2410.17433v1
- Date: Tue, 22 Oct 2024 21:17:19 GMT
- Title: Revisiting Technical Bias Mitigation Strategies
- Authors: Abdoul Jalil Djiberou Mahamadou, Artem A. Trotsyuk, et al.
- Abstract summary: Efforts to mitigate bias and enhance fairness in the artificial intelligence (AI) community have predominantly focused on technical solutions.
While numerous reviews have addressed bias in AI, this review uniquely focuses on the practical limitations of technical solutions in healthcare settings.
We illustrate each limitation with empirical studies focusing on healthcare and biomedical applications.
- Score: 0.11510009152620666
- Abstract: Efforts to mitigate bias and enhance fairness in the artificial intelligence (AI) community have predominantly focused on technical solutions. While numerous reviews have addressed bias in AI, this review uniquely focuses on the practical limitations of technical solutions in healthcare settings, providing a structured analysis across five key dimensions affecting their real-world implementation: who defines bias and fairness; which mitigation strategy to use and prioritize among dozens that are inconsistent and incompatible; when in the AI development stages the solutions are most effective; for which populations; and the context in which the solutions are designed. We illustrate each limitation with empirical studies focusing on healthcare and biomedical applications. Moreover, we discuss how value-sensitive AI, a framework derived from technology design, can engage stakeholders and ensure that their values are embodied in bias and fairness mitigation solutions. Finally, we discuss areas that require further investigation and provide practical recommendations to address the limitations covered in the study.
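To make the question of which mitigation strategy to prioritize concrete, the sketch below evaluates two widely used group-fairness criteria, demographic parity and equalized odds, on the same set of predictions. It is a minimal hypothetical illustration rather than material from the paper: the data, the protected-group variable, and the score generator are all assumptions.

```python
# Minimal, hypothetical sketch (not an analysis from the paper): two common
# group-fairness criteria computed on the same predictions, illustrating how
# mitigation targets can pull in different directions.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)                          # protected attribute (0/1), assumed
y_true = rng.integers(0, 2, size=1000)                         # observed outcome, assumed
y_pred = (rng.random(1000) < 0.5 + 0.1 * group).astype(int)    # deliberately group-skewed predictions

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Largest between-group difference in false-positive and true-positive rates."""
    gaps = []
    for label in (0, 1):   # label 0 gives the FPR gap, label 1 the TPR gap
        mask = y_true == label
        rate_0 = y_pred[mask & (group == 0)].mean()
        rate_1 = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(rate_0 - rate_1))
    return max(gaps)

print("demographic parity gap:", demographic_parity_gap(y_pred, group))
print("equalized odds gap:    ", equalized_odds_gap(y_true, y_pred, group))
# When base rates differ across groups, no informative predictor can satisfy
# both criteria at once, which is one concrete form of the incompatibility
# the review discusses.
```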
Related papers
- An evidence-based methodology for human rights impact assessment (HRIA) in the development of AI data-intensive systems [49.1574468325115]
We show that human rights already underpin the decisions in the field of data use.
This work presents a methodology and a model for a Human Rights Impact Assessment (HRIA)
The proposed methodology is tested in concrete case-studies to prove its feasibility and effectiveness.
arXiv Detail & Related papers (2024-07-30T16:27:52Z) - Open Problems in Technical AI Governance [93.89102632003996]
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI.
This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z) - AI Alignment through Reinforcement Learning from Human Feedback? Contradictions and Limitations [0.2106667480549292]
We show the shortcomings of the broadly pursued alignment goals of honesty, harmlessness, and helpfulness.
We highlight tensions and contradictions inherent in the goals of RLxF.
We conclude by urging researchers and practitioners alike to critically assess the sociotechnical ramifications of RLxF.
arXiv Detail & Related papers (2024-06-26T13:42:13Z) - Fair by design: A sociotechnical approach to justifying the fairness of AI-enabled systems across the lifecycle [0.8164978442203773]
Fairness is one of the most commonly identified ethical principles in existing AI guidelines.
The development of fair AI-enabled systems is required by new and emerging AI regulation.
arXiv Detail & Related papers (2024-06-13T12:03:29Z) - Responsible AI Considerations in Text Summarization Research: A Review of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z) - Predictable Artificial Intelligence [77.1127726638209]
This paper introduces the ideas and challenges of Predictable AI.
It explores the ways in which we can anticipate key validity indicators of present and future AI ecosystems.
We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
arXiv Detail & Related papers (2023-10-09T21:36:21Z) - Tensions Between the Proxies of Human Values in AI [20.303537771118048]
We argue that the AI community needs to consider all the consequences of choosing certain formulations of these pillars.
We point towards sociotechnical research for frameworks for the latter, but push for broader efforts to implement these in practice.
arXiv Detail & Related papers (2022-12-14T21:13:48Z) - Reinforcement Learning with Stepwise Fairness Constraints [50.538878453547966]
We introduce the study of reinforcement learning with stepwise fairness constraints.
We provide learning algorithms with strong theoretical guarantees in regard to policy optimality and fairness violation.
arXiv Detail & Related papers (2022-11-08T04:06:23Z) - AI Fairness: from Principles to Practice [0.0]
This paper summarizes and evaluates various approaches, methods, and techniques for pursuing fairness in AI systems.
It proposes practical guidelines for defining, measuring, and preventing bias in AI.
arXiv Detail & Related papers (2022-07-20T11:37:46Z) - Alternative models: Critical examination of disability definitions in the development of artificial intelligence technologies [6.9884176767901005]
This article presents a framework for critically examining AI data analytics technologies through a disability lens.
We consider three conceptual models of disability: the medical model, the social model, and the relational model.
We show how AI technologies designed under each of these models differ so significantly as to be incompatible with and contradictory to one another.
arXiv Detail & Related papers (2022-06-16T16:41:23Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
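For context on the counterfactual-explanation entry above, the toy sketch below runs the simplest possible counterfactual search: greedily perturbing the input of a hypothetical linear classifier until its prediction flips. It is not the CEILS method; the weights, starting instance, and step size are assumptions, and the sketch deliberately ignores action feasibility, which is the gap that latent-space interventions are meant to address.

```python
# Toy sketch of a counterfactual explanation search, not the CEILS method:
# greedily nudge the features of a hypothetical linear classifier until its
# prediction flips. Weights, starting point, and step size are all assumed.
import numpy as np

W = np.array([1.5, -2.0, 0.5])   # toy model weights
B = -0.2                         # toy intercept

def predict(x):
    """Toy linear classifier: 1 if the score is positive, else 0."""
    return int(x @ W + B > 0)

def counterfactual(x, target=1, step=0.1, max_iter=200):
    """Return the first nearby point classified as `target`, or None."""
    direction = np.sign(W) if target == 1 else -np.sign(W)
    x_cf = x.copy()
    for _ in range(max_iter):
        if predict(x_cf) == target:
            return x_cf
        x_cf = x_cf + step * direction   # ignores whether the change is actionable
    return None

x = np.array([0.2, 0.9, 0.1])            # original instance, predicted 0
x_cf = counterfactual(x, target=1)
print("original:      ", x, "->", predict(x))
print("counterfactual:", x_cf, "->", predict(x_cf))
# CEILS instead generates counterfactuals as interventions in a latent causal
# representation so that suggested changes respect feasibility constraints,
# which this brute-force search makes no attempt to do.
```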