A toolkit of dilemmas: Beyond debiasing and fairness formulas for
responsible AI/ML
- URL: http://arxiv.org/abs/2303.01930v1
- Date: Fri, 3 Mar 2023 13:58:24 GMT
- Title: A toolkit of dilemmas: Beyond debiasing and fairness formulas for
responsible AI/ML
- Authors: Andrés Domínguez Hernández and Vassilis Galanos
- Abstract summary: Approaches to fair and ethical AI have recently fallen under the scrutiny of the emerging field of critical data studies.
This paper advocates for a situated reasoning and creative engagement with the dilemmas surrounding responsible algorithmic/data-driven systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Approaches to fair and ethical AI have recently fallen under the
scrutiny of the emerging, chiefly qualitative, field of critical data studies,
which emphasizes such interventions' lack of sensitivity to context and to
complex social phenomena. We employ some of these lessons to introduce a
tripartite decision-making toolkit, informed by dilemmas encountered in the
pursuit of responsible AI/ML. These are: (a) the opportunity dilemma between
the availability of data shaping problem statements vs problem statements
shaping data; (b) the trade-off between scalability and contextualizability
(too much data versus too specific data); and (c) the epistemic positioning
between pragmatic technical objectivism and reflexive relativism in
acknowledging the social. This paper advocates for situated reasoning and
creative engagement with the dilemmas surrounding responsible
algorithmic/data-driven systems, going beyond the formulaic bias-elimination
and ethics-operationalization narratives found in the fair-AI literature.
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
- Advancing Fairness in Natural Language Processing: From Traditional Methods to Explainability [0.9065034043031668]
The thesis addresses the need for equity and transparency in NLP systems.
It introduces an innovative algorithm to mitigate biases in high-risk NLP applications.
It also presents a model-agnostic explainability method that identifies and ranks concepts in Transformer models.
arXiv Detail & Related papers (2024-10-16T12:38:58Z)
- FairAIED: Navigating Fairness, Bias, and Ethics in Educational AI Applications [2.612585751318055]
The integration of Artificial Intelligence into education has transformative potential, providing tailored learning experiences and creative instructional approaches.
However, the inherent biases in AI algorithms hinder this improvement by unintentionally perpetuating prejudice against specific demographics.
This survey delves deeply into the developing topic of algorithmic fairness in educational contexts.
It identifies the common forms of biases, such as data-related, algorithmic, and user-interaction, that fundamentally undermine the accomplishment of fairness in AI teaching aids.
arXiv Detail & Related papers (2024-07-26T13:59:20Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- On the meaning of uncertainty for ethical AI: philosophy and practice [10.591284030838146]
We argue that this is a significant way to bring ethical considerations into mathematical reasoning.
We demonstrate these ideas within the context of competing models used to advise the UK government on the spread of the Omicron variant of COVID-19 during December 2021.
arXiv Detail & Related papers (2023-09-11T15:13:36Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Competency Problems: On Finding and Removing Artifacts in Language Data [50.09608320112584]
We argue that for complex language understanding tasks, all simple feature correlations are spurious.
We theoretically analyze the difficulty of creating data for competency problems when human bias is taken into account.
arXiv Detail & Related papers (2021-04-17T21:34:10Z)
- No computation without representation: Avoiding data and algorithm biases through diversity [11.12971845021808]
We draw connections between the lack of diversity within academic and professional computing fields and the type and breadth of the biases encountered in datasets.
We use these lessons to develop recommendations that provide concrete steps for the computing community to increase diversity.
arXiv Detail & Related papers (2020-02-26T23:07:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.