Fairness: from the ethical principle to the practice of Machine Learning development as an ongoing agreement with stakeholders
- URL: http://arxiv.org/abs/2304.06031v1
- Date: Wed, 22 Mar 2023 20:58:32 GMT
- Title: Fairness: from the ethical principle to the practice of Machine Learning development as an ongoing agreement with stakeholders
- Authors: Georgina Curto and Flavio Comim
- Abstract summary: This paper clarifies why bias cannot be completely mitigated in Machine Learning (ML) and proposes an end-to-end methodology to translate the ethical principle of justice and fairness into the practice of ML development.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper clarifies why bias cannot be completely mitigated in Machine
Learning (ML) and proposes an end-to-end methodology to translate the ethical
principle of justice and fairness into the practice of ML development as an
ongoing agreement with stakeholders. The pro-ethical iterative process
presented in the paper aims to challenge asymmetric power dynamics in the
fairness decision making within ML design and support ML development teams to
identify, mitigate and monitor bias at each step of ML systems development. The
process also provides guidance on how to explain the always imperfect
trade-offs in terms of bias to users.
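The "always imperfect trade-offs" the abstract refers to can be made concrete with a small sketch: two widely used group-fairness criteria, demographic parity and equal opportunity, can disagree on the same predictions, so a development team must choose which to prioritize and explain that choice to users. The functions and toy data below are invented for illustration and are not part of the paper's methodology.

```python
def demographic_parity_diff(preds, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rate = {}
    for g in set(groups):
        selected = [p for p, gg in zip(preds, groups) if gg == g]
        rate[g] = sum(selected) / len(selected)
    a, b = sorted(rate)
    return abs(rate[a] - rate[b])

def equal_opportunity_diff(preds, labels, groups):
    """Absolute difference in true-positive rates between two groups."""
    tpr = {}
    for g in set(groups):
        pos = [p for p, y, gg in zip(preds, labels, groups) if gg == g and y == 1]
        tpr[g] = sum(pos) / len(pos)
    a, b = sorted(tpr)
    return abs(tpr[a] - tpr[b])

# Toy predictions for two groups "A" and "B" (fabricated for the sketch)
preds  = [1, 1, 0, 0, 1, 0, 0, 0]
labels = [1, 0, 1, 0, 1, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_diff(preds, groups))        # 0.25 - parity violated
print(equal_opportunity_diff(preds, labels, groups)) # 0.0  - equal opportunity satisfied
```

Here the same classifier satisfies equal opportunity perfectly while violating demographic parity, which is the kind of tension the paper's iterative stakeholder agreement is meant to surface and resolve explicitly rather than hide.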
Related papers
- Safeguarding Autonomy: a Focus on Machine Learning Decision Systems [0.16385815610837165]
We focus on the different stages of the ML pipeline to identify the potential effects on ML end-users' autonomy.
We propose a related question for each detected impact, offering guidance for identifying possible focus points to respect ML end-users' autonomy in decision-making.
arXiv Detail & Related papers (2025-03-27T22:31:16Z) - Contextual Fairness-Aware Practices in ML: A Cost-Effective Empirical Evaluation [48.943054662940916]
We investigate fairness-aware practices from two perspectives: contextual and cost-effectiveness.
Our findings provide insights into how context influences the effectiveness of fairness-aware practices.
This research aims to guide SE practitioners in selecting practices that achieve fairness with minimal performance costs.
arXiv Detail & Related papers (2025-03-19T18:10:21Z) - Causality Is Key to Understand and Balance Multiple Goals in Trustworthy ML and Foundation Models [91.24296813969003]
This paper advocates integrating causal methods into machine learning to navigate the trade-offs among key principles of trustworthy ML.
We argue that a causal approach is essential for balancing multiple competing objectives in both trustworthy ML and foundation models.
arXiv Detail & Related papers (2025-02-28T14:57:33Z) - Perceived Fairness of the Machine Learning Development Process: Concept Scale Development [0.0]
Unfairness is triggered by bias in the data, the data curation process, erroneous assumptions, and implicit bias introduced during the development process.
We propose operational attributes of perceived fairness to be transparency, accountability, and representativeness.
The multidimensional framework for perceived fairness offers a comprehensive understanding of perceived fairness.
arXiv Detail & Related papers (2025-01-23T06:51:31Z) - Unbiasing on the Fly: Explanation-Guided Human Oversight of Machine Learning System Decisions [4.24106429730184]
We propose a novel framework for on-the-fly tracking and correction of discrimination in deployed ML systems.
The framework continuously monitors the predictions made by an ML system and flags discriminatory outcomes.
This human-in-the-loop approach empowers reviewers to accept or override the ML system decision.
arXiv Detail & Related papers (2024-06-25T19:40:55Z) - The Impossibility of Fair LLMs [59.424918263776284]
The need for fair AI is increasingly clear in the era of large language models (LLMs).
We review the technical frameworks that machine learning researchers have used to evaluate fairness.
We develop guidelines for the more realistic goal of achieving fairness in particular use cases.
arXiv Detail & Related papers (2024-05-28T04:36:15Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - LanguageMPC: Large Language Models as Decision Makers for Autonomous Driving [87.1164964709168]
This work employs Large Language Models (LLMs) as a decision-making component for complex autonomous driving scenarios.
Extensive experiments demonstrate that our proposed method not only consistently surpasses baseline approaches in single-vehicle tasks, but also helps handle complex driving behaviors, including multi-vehicle coordination.
arXiv Detail & Related papers (2023-10-04T17:59:49Z) - Implementing Responsible AI: Tensions and Trade-Offs Between Ethics Aspects [21.133468554780404]
We focus on two-sided interactions, drawing on support spread across a diverse literature.
This catalogue can be helpful in raising awareness of the possible interactions between aspects of ethics principles.
arXiv Detail & Related papers (2023-04-17T13:43:13Z) - Assessing Perceived Fairness from Machine Learning Developer's Perspective [0.0]
Unfairness is triggered by bias in the data, the curation process, erroneous assumptions, and implicit bias introduced within the algorithmic development process.
In particular, ML developers have not been the focus of research relating to perceived fairness.
This paper performs an exploratory pilot study to assess the attributes of this construct using a systematic focus group of developers.
arXiv Detail & Related papers (2023-04-07T17:30:37Z) - Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z) - BeFair: Addressing Fairness in the Banking Sector [54.08949958349055]
We present the initial results of an industrial open innovation project in the banking sector.
We propose a general roadmap for fairness in ML and the implementation of a toolkit called BeFair that helps to identify and mitigate bias.
arXiv Detail & Related papers (2021-02-03T16:37:10Z) - Towards Integrating Fairness Transparently in Industrial Applications [3.478469381434812]
We propose a systematic approach to integrate mechanized and human-in-the-loop components in bias detection, mitigation, and documentation of Machine Learning projects.
We present our structural primitives with an example real-world use case on how it can be used to identify potential biases and determine appropriate mitigation strategies.
arXiv Detail & Related papers (2020-06-10T21:54:27Z) - On the Morality of Artificial Intelligence [154.69452301122175]
We propose conceptual and practical principles and guidelines for Machine Learning research and deployment.
We insist on concrete actions that can be taken by practitioners to pursue a more ethical and moral practice of ML aimed at using AI for social good.
arXiv Detail & Related papers (2019-12-26T23:06:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.