The AI Fairness Myth: A Position Paper on Context-Aware Bias
- URL: http://arxiv.org/abs/2505.00965v1
- Date: Fri, 02 May 2025 02:47:32 GMT
- Title: The AI Fairness Myth: A Position Paper on Context-Aware Bias
- Authors: Kessia Nepomuceno, Fabio Petrillo
- Abstract summary: We argue that fairness sometimes requires deliberate, context-aware preferential treatment of historically marginalized groups. Rather than viewing bias solely as a flaw to eliminate, we propose a framework that embraces corrective, intentional biases.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Defining fairness in AI remains a persistent challenge, largely due to its deeply context-dependent nature and the lack of a universal definition. While numerous mathematical formulations of fairness exist, they sometimes conflict with one another and diverge from social, economic, and legal understandings of justice. Traditional quantitative definitions primarily focus on statistical comparisons, but they often fail to simultaneously satisfy multiple fairness constraints. Drawing on philosophical theories (Rawls' Difference Principle and Dworkin's theory of equality) and empirical evidence supporting affirmative action, we argue that fairness sometimes necessitates deliberate, context-aware preferential treatment of historically marginalized groups. Rather than viewing bias solely as a flaw to eliminate, we propose a framework that embraces corrective, intentional biases to promote genuine equality of opportunity. Our approach involves identifying unfairness, recognizing protected groups/individuals, applying corrective strategies, measuring impact, and iterating improvements. By bridging mathematical precision with ethical and contextual considerations, we advocate for an AI fairness paradigm that goes beyond neutrality to actively advance social justice.
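The five-step loop described in the abstract (identify unfairness, recognize protected groups, apply corrective strategies, measure impact, iterate) can be sketched in code. The sketch below is an illustration of that loop, not the authors' implementation: the corrective strategy shown (shifting the protected group's decision threshold until the selection-rate gap closes) and all names and data are assumptions chosen for the example.

```python
def selection_rates(scores, groups, thresholds):
    """Fraction of each group selected under group-specific thresholds."""
    rates = {}
    for g in set(groups):
        members = [s for s, gg in zip(scores, groups) if gg == g]
        rates[g] = sum(1 for s in members if s >= thresholds[g]) / len(members)
    return rates

def corrective_loop(scores, groups, protected, tol=0.05, step=0.01, max_iter=200):
    """Iteratively apply a corrective bias until the unfairness measure is small."""
    thresholds = {g: 0.5 for g in set(groups)}
    rates = selection_rates(scores, groups, thresholds)
    for _ in range(max_iter):
        rates = selection_rates(scores, groups, thresholds)    # measure impact
        others = [r for g, r in rates.items() if g != protected]
        gap = max(others) - rates[protected]                   # identify unfairness
        if gap <= tol:                                         # stop when the gap closes
            break
        thresholds[protected] -= step   # corrective, intentional bias for the protected group
    return thresholds, rates

# Toy data: the protected group "b" receives systematically lower scores.
scores = [0.9, 0.7, 0.6, 0.55, 0.45, 0.4, 0.35, 0.3]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
thresholds, rates = corrective_loop(scores, groups, protected="b")
```

Under a neutral shared threshold of 0.5, group "b" would be selected at rate 0; the loop instead lowers "b"'s threshold until the selection rates are near parity, which is the sense in which the paper treats a deliberate bias as corrective rather than as a flaw.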
Related papers
- Defining bias in AI-systems: Biased models are fair models [2.8360662552057327]
We argue that a precise conceptualization of bias is necessary to effectively address fairness concerns.
Rather than viewing bias as inherently negative or unfair, we highlight the importance of distinguishing between bias and discrimination.
arXiv Detail & Related papers (2025-02-25T10:28:16Z)
- Implementing Fairness in AI Classification: The Role of Explainability [0.0]
We argue that implementing fairness in AI classification involves more work than just operationalizing a fairness metric.
This involves making the training processes transparent, determining what outcomes the fairness criteria actually produce, and assessing their trade-offs.
We draw conclusions regarding how these explanatory steps can make an AI model trustworthy.
arXiv Detail & Related papers (2024-07-20T06:06:24Z)
- What's Distributive Justice Got to Do with It? Rethinking Algorithmic Fairness from the Perspective of Approximate Justice [1.8434042562191815]
We argue that in the context of imperfect decision-making systems, we should not only care about what the ideal distribution of benefits/harms among individuals would look like.
This requires us to rethink the way in which we, as algorithmic fairness researchers, view distributive justice and use fairness criteria.
arXiv Detail & Related papers (2024-07-17T11:13:23Z)
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy from a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Assessing Group Fairness with Social Welfare Optimization [0.9217021281095907]
This paper explores whether a broader conception of social justice, based on optimizing a social welfare function, can be useful for assessing various definitions of parity.
We show that it can justify demographic parity or equalized odds under certain conditions, but frequently requires a departure from these types of parity.
In addition, we find that predictive rate parity is of limited usefulness.
arXiv Detail & Related papers (2024-05-19T01:41:04Z)
- Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z)
- Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP [64.45845091719002]
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning.
arXiv Detail & Related papers (2023-02-11T14:54:00Z)
- Fairness for Image Generation with Uncertain Sensitive Attributes [97.81354305427871]
This work tackles the issue of fairness in the context of generative procedures, such as image super-resolution.
While traditional group fairness definitions are typically defined with respect to specified protected groups, we emphasize that there are no ground truth identities.
We show that the natural extension of demographic parity is strongly dependent on the grouping, and impossible to achieve obliviously.
arXiv Detail & Related papers (2021-06-23T06:17:17Z)
- Towards the Right Kind of Fairness in AI [3.723553383515688]
The "Fairness Compass" is a tool that makes identifying the most appropriate fairness metric for a given system a simple, straightforward procedure.
We argue that documenting the reasoning behind the respective decisions in the course of this process can help to build trust from the user.
arXiv Detail & Related papers (2021-02-16T21:12:30Z)
- Algorithmic Decision Making with Conditional Fairness [48.76267073341723]
We define conditional fairness as a more sound fairness metric by conditioning on the fairness variables.
We propose a Derivable Conditional Fairness Regularizer (DCFR) to track the trade-off between precision and fairness of algorithmic decision making.
arXiv Detail & Related papers (2020-06-18T12:56:28Z)
- Statistical Equity: A Fairness Classification Objective [6.174903055136084]
We propose a new fairness definition motivated by the principle of equity.
We formalize our definition of fairness, and motivate it with its appropriate contexts.
We perform multiple automatic and human evaluations to show the effectiveness of our definition.
arXiv Detail & Related papers (2020-05-14T23:19:38Z)
- On Consequentialism and Fairness [64.35872952140677]
We provide a consequentialist critique of common definitions of fairness within machine learning.
We conclude with a broader discussion of the issues of learning and randomization.
arXiv Detail & Related papers (2020-01-02T05:39:48Z)
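Several of the papers above weigh group fairness criteria against one another (demographic parity, equalized odds, predictive rate parity). As a point of reference, a minimal sketch of computing the first two for a binary classifier over two groups; the function and variable names are illustrative, and the data is a toy example.

```python
def _rate(num, den):
    """Safe ratio: 0.0 when the denominator is empty."""
    return num / den if den else 0.0

def group_fairness_gaps(y_true, y_pred, groups, a="a", b="b"):
    """Absolute gaps between groups a and b in selection rate (demographic
    parity) and in true/false positive rates (equalized odds)."""
    stats = {}
    for g in (a, b):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        pos = [i for i in idx if y_true[i] == 1]   # actual positives
        neg = [i for i in idx if y_true[i] == 0]   # actual negatives
        stats[g] = {
            "selection": _rate(sum(y_pred[i] for i in idx), len(idx)),
            "tpr": _rate(sum(y_pred[i] for i in pos), len(pos)),
            "fpr": _rate(sum(y_pred[i] for i in neg), len(neg)),
        }
    return {
        "demographic_parity_diff": abs(stats[a]["selection"] - stats[b]["selection"]),
        "tpr_diff": abs(stats[a]["tpr"] - stats[b]["tpr"]),
        "fpr_diff": abs(stats[a]["fpr"] - stats[b]["fpr"]),
    }

# Toy predictions: the classifier selects group "a" far more often.
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gaps = group_fairness_gaps(y_true, y_pred, groups)
```

Demographic parity compares selection rates regardless of the true labels, while equalized odds compares error rates conditioned on them; driving one gap to zero generally moves the others, which is the tension between competing fairness definitions that the papers above examine.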
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.