On the Moral Justification of Statistical Parity
- URL: http://arxiv.org/abs/2011.02079v2
- Date: Thu, 21 Jan 2021 12:39:36 GMT
- Title: On the Moral Justification of Statistical Parity
- Authors: Corinna Hertweck and Christoph Heitz and Michele Loi
- Abstract summary: A crucial but often neglected aspect of fairness is the question of how we justify enforcing a certain fairness metric from a moral perspective.
Our aim in this paper is to consider the moral aspects associated with the statistical fairness criterion of independence.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A crucial but often neglected aspect of algorithmic fairness is the question
of how we justify enforcing a certain fairness metric from a moral perspective.
When fairness metrics are proposed, they are typically argued for by
highlighting their mathematical properties. Rarely are the moral assumptions
beneath the metric explained. Our aim in this paper is to consider the moral
aspects associated with the statistical fairness criterion of independence
(statistical parity). To this end, we consider previous work, which discusses
the two worldviews "What You See Is What You Get" (WYSIWYG) and "We're All
Equal" (WAE) and by doing so provides some guidance for clarifying the possible
assumptions in the design of algorithms. We present an extension of this work,
which centers on morality. The most natural moral extension is that
independence needs to be fulfilled if and only if differences in predictive
features (e.g. high school grades and standardized test scores are predictive
of performance at university) between socio-demographic groups are caused by
unjust social disparities or measurement errors. Through two counterexamples,
we demonstrate that this extension is not universally true. This means that the
question of whether independence should be used or not cannot be satisfactorily
answered by only considering the justness of differences in the predictive
features.
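The independence criterion (statistical parity) discussed in the abstract requires that the rate of positive predictions be equal across socio-demographic groups, i.e. P(Ŷ=1 | A=a) is the same for every group a. A minimal sketch of how one might measure deviation from this criterion, with hypothetical data and function names (not from the paper):

```python
# Sketch of the independence (statistical parity) criterion: a predictor
# satisfies independence w.r.t. group attribute A if P(Y_hat = 1 | A = a)
# is identical for every group a. All names and data here are illustrative.

def positive_rate(y_hat, groups, group):
    """Fraction of positive predictions within one group."""
    members = [y for y, g in zip(y_hat, groups) if g == group]
    return sum(members) / len(members)

def statistical_parity_gap(y_hat, groups):
    """Largest difference in positive-prediction rates across groups.
    A gap of 0 means independence holds exactly."""
    rates = [positive_rate(y_hat, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Example: group "a" is accepted at rate 0.75, group "b" at rate 0.25.
y_hat = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(statistical_parity_gap(y_hat, groups))  # 0.75 - 0.25 -> 0.5
```

Whether a nonzero gap like this *should* be corrected is precisely the moral question the paper addresses: the counterexamples show it cannot be settled solely by asking whether group differences in the predictive features are just.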
Related papers
- Implementing Fairness: the view from a FairDream [0.0]
We train an AI model and develop our own fairness package FairDream to detect inequalities and then to correct for them.
Our experiments show that FairDream fulfills fairness objectives that are conditional on the ground truth.
arXiv Detail & Related papers (2024-07-20T06:06:24Z)
- What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning [52.51430732904994]
In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
arXiv Detail & Related papers (2024-04-16T22:47:59Z)
- What Makes it Ok to Set a Fire? Iterative Self-distillation of Contexts and Rationales for Disambiguating Defeasible Social and Moral Situations [48.686872351114964]
Moral or ethical judgments rely heavily on the specific contexts in which they occur.
We introduce defeasible moral reasoning: a task to provide grounded contexts that make an action more or less morally acceptable.
We distill a high-quality dataset of 1.2M entries of contextualizations and rationales for 115K defeasible moral actions.
arXiv Detail & Related papers (2023-10-24T00:51:29Z)
- Reconciling Predictive and Statistical Parity: A Causal Approach [68.59381759875734]
We propose a new causal decomposition formula for the fairness measures associated with predictive parity.
We show that the notions of statistical and predictive parity are not really mutually exclusive, but complementary and spanning a spectrum of fairness notions.
arXiv Detail & Related papers (2023-06-08T09:23:22Z)
- The Possibility of Fairness: Revisiting the Impossibility Theorem in Practice [5.175941513195566]
We show that it is possible to identify a large set of models that satisfy seemingly incompatible fairness constraints.
We offer tools and guidance for practitioners to understand when -- and to what degree -- fairness along multiple criteria can be achieved.
arXiv Detail & Related papers (2023-02-13T13:29:24Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Identifiability of Causal-based Fairness Notions: A State of the Art [4.157415305926584]
Machine learning algorithms can produce biased outcomes and predictions, typically against minorities and under-represented sub-populations.
This paper is a compilation of the major identifiability results which are of particular relevance for machine learning fairness.
arXiv Detail & Related papers (2022-03-11T13:10:32Z)
- Are There Exceptions to Goodhart's Law? On the Moral Justification of Fairness-Aware Machine Learning [14.428360876120333]
We argue that fairness measures are particularly sensitive to Goodhart's law.
We present a framework for moral reasoning about the justification of fairness metrics.
arXiv Detail & Related papers (2022-02-17T09:26:39Z)
- Fairness Through Counterfactual Utilities [0.0]
Group fairness definitions such as Demographic Parity and Equal Opportunity make assumptions about the underlying decision-problem that restrict them to classification problems.
We provide a generalized set of group fairness definitions that unambiguously extend to all machine learning environments.
arXiv Detail & Related papers (2021-08-11T16:51:27Z)
- A Weaker Faithfulness Assumption based on Triple Interactions [89.59955143854556]
We propose a weaker assumption that we call $2$-adjacency faithfulness.
We propose a sound orientation rule for causal discovery that applies under weaker assumptions.
arXiv Detail & Related papers (2020-10-27T13:04:08Z)
- Fairness in machine learning: against false positive rate equality as a measure of fairness [0.0]
Two popular fairness measures are calibration and equality of false positive rate.
I give an ethical framework for thinking about these measures and argue that false positive rate equality does not track anything about fairness.
arXiv Detail & Related papers (2020-07-06T17:03:58Z)
- A Philosophy of Data [91.3755431537592]
We work from the fundamental properties necessary for statistical computation to a definition of statistical data.
We argue that the need for useful data to be commensurable rules out an understanding of properties as fundamentally unique or equal.
With our increasing reliance on data and data technologies, these two characteristics of data affect our collective conception of reality.
arXiv Detail & Related papers (2020-04-15T14:47:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.