What's Sex Got To Do With Fair Machine Learning?
- URL: http://arxiv.org/abs/2006.01770v2
- Date: Thu, 4 Jun 2020 22:54:18 GMT
- Title: What's Sex Got To Do With Fair Machine Learning?
- Authors: Lily Hu and Issa Kohler-Hausmann
- Abstract summary: Many recent approaches to "fairness" require one to specify a causal model of the data generating process, which implicitly assumes that a sex or racial group is simply a collection of individuals who share a given trait.
We show this by exploring the formal assumption of modularity in causal models.
We argue that this ontological picture is false: many of the "effects" that sex purportedly "causes" are in fact constitutive features of sex as a social status.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Debate about fairness in machine learning has largely centered around
competing definitions of what fairness or nondiscrimination between groups
requires. However, little attention has been paid to what precisely a group is.
Many recent approaches to "fairness" require one to specify a causal model of
the data generating process. These exercises make an implicit ontological
assumption that a racial or sex group is simply a collection of individuals who
share a given trait. We show this by exploring the formal assumption of
modularity in causal models, which holds that the dependencies captured by one
causal pathway are invariant to interventions on any other pathways. Causal
models of sex propose two substantive claims: 1) There exists a feature,
sex-on-its-own, that is an inherent trait of an individual that causally brings
about social phenomena external to it in the world; and 2) the relations
between sex and its effects can be modified in whichever ways and the former
feature would still retain the meaning that sex has in our world. We argue that
this ontological picture is false. Many of the "effects" that sex purportedly
"causes" are in fact constitutive features of sex as a social status. They give
the social meaning of sex features, meanings that are precisely what make sex
discrimination a distinctively morally problematic type of action. Correcting
this conceptual error has a number of implications for how models can be used
to detect discrimination. Formal diagrams of constitutive relations present an
entirely different path toward reasoning about discrimination. Whereas causal
diagrams guide the construction of sophisticated modular counterfactuals,
constitutive diagrams identify a different kind of counterfactual as central to
an inquiry on discrimination: one that asks how the social meaning of a group
would be changed if its non-modular features were altered.
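To make the modularity assumption at issue concrete, here is a minimal sketch (not from the paper; the variables S, E, Y, their linear mechanisms, and the helper functions are hypothetical illustrations) of a toy structural causal model written as one mechanism per variable. An intervention do(E = e) replaces only E's mechanism and reuses the others unchanged, which is the invariance the authors argue fails when the purported "effects" are constitutive of sex as a social status.

```python
# Hypothetical sketch of the "modularity" assumption in a structural causal
# model (SCM): each variable gets its own mechanism, and an intervention
# swaps out exactly one mechanism while all the others are reused as-is.
# Variable names and functional forms are illustrative only.
import random

def make_scm():
    """Return the mechanisms of a toy SCM as a dict: variable name -> function."""
    return {
        # Exogenous "sex-on-its-own" trait, as the criticized picture assumes.
        "S": lambda u: u["S"],
        # A downstream variable (e.g., some credential) depending on S plus noise.
        "E": lambda u, s: 12 + 2 * s + u["E"],
        # Outcome depending on S and E plus noise.
        "Y": lambda u, s, e: 3 * s + 0.5 * e + u["Y"],
    }

def sample(scm, do=None, seed=0):
    """Draw one sample; `do` maps a variable name to a forced value (intervention)."""
    rng = random.Random(seed)
    u = {"S": rng.choice([0, 1]), "E": rng.gauss(0, 1), "Y": rng.gauss(0, 1)}
    do = do or {}
    s = do.get("S", scm["S"](u))
    e = do.get("E", scm["E"](u, s))
    y = do.get("Y", scm["Y"](u, s, e))
    return {"S": s, "E": e, "Y": y}

scm = make_scm()
print(sample(scm))                # observational draw
print(sample(scm, do={"E": 16}))  # do(E=16): only E's mechanism is replaced;
                                  # S's and Y's mechanisms are untouched, which
                                  # is exactly the modularity assumption.
```

On the constitutive picture the abstract defends, one could not rewire the S-to-E or S-to-Y dependencies "in whichever ways" and still have S mean what sex means in our world, so the counterfactuals such a model licenses are not the ones the authors take to be relevant to reasoning about discrimination.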
Related papers
- Distributional Semantics, Holism, and the Instability of Meaning [0.0]
A standard objection to meaning holism is the charge of instability.
In this article we examine whether the instability objection poses a problem for distributional models of meaning.
arXiv Detail & Related papers (2024-05-20T14:53:25Z) - The Causal Influence of Grammatical Gender on Distributional Semantics [87.8027818528463]
How much meaning influences gender assignment across languages is an active area of research in linguistics and cognitive science.
We offer a novel, causal graphical model that jointly represents the interactions between a noun's grammatical gender, its meaning, and adjective choice.
When we control for the meaning of the noun, the relationship between grammatical gender and adjective choice is near zero and insignificant.
arXiv Detail & Related papers (2023-11-30T13:58:13Z) - Fairness in AI Systems: Mitigating gender bias from language-vision
models [0.913755431537592]
We study the extent of the impact of gender bias in existing datasets.
We propose a methodology to mitigate its impact in caption-based language-vision models.
arXiv Detail & Related papers (2023-05-03T04:33:44Z) - Revealing Unfair Models by Mining Interpretable Evidence [50.48264727620845]
The popularity of machine learning has increased the risk of unfair models getting deployed in high-stakes applications.
In this paper, we tackle the novel task of revealing unfair models by mining interpretable evidence.
Our method finds highly interpretable and solid evidence to effectively reveal the unfairness of trained models.
arXiv Detail & Related papers (2022-07-12T20:03:08Z) - fairadapt: Causal Reasoning for Fair Data Pre-processing [2.1915057426589746]
This manuscript describes the R package fairadapt, which implements a causal inference pre-processing method.
We discuss appropriate relaxations which assume certain causal pathways from the sensitive attribute to the outcome are not discriminatory.
arXiv Detail & Related papers (2021-10-19T18:48:28Z) - Nested Counterfactual Identification from Arbitrary Surrogate
Experiments [95.48089725859298]
We study the identification of nested counterfactuals from an arbitrary combination of observations and experiments.
Specifically, we prove the counterfactual unnesting theorem (CUT), which allows one to map arbitrary nested counterfactuals to unnested ones.
arXiv Detail & Related papers (2021-07-07T12:51:04Z) - Fairness for Image Generation with Uncertain Sensitive Attributes [97.81354305427871]
This work tackles the issue of fairness in the context of generative procedures, such as image super-resolution.
While traditional group fairness definitions are typically defined with respect to specified protected groups, we emphasize that there are no ground truth identities.
We show that the natural extension of demographic parity is strongly dependent on the grouping, and impossible to achieve obliviously.
arXiv Detail & Related papers (2021-06-23T06:17:17Z) - The zoo of Fairness metrics in Machine Learning [62.997667081978825]
In recent years, the problem of addressing fairness in Machine Learning (ML) and automatic decision-making has attracted a lot of attention.
A plethora of different definitions of fairness in ML have been proposed that consider different notions of what a "fair decision" is in situations impacting individuals in the population.
In this work, we try to make some order out of this zoo of definitions.
arXiv Detail & Related papers (2021-06-01T13:19:30Z) - Aligning Faithful Interpretations with their Social Attribution [58.13152510843004]
We find that the requirement of model interpretations to be faithful is vague and incomplete.
We identify the problem as a misalignment between the causal chain of decisions (causal attribution) and the attribution of human behavior to the interpretation (social attribution).
arXiv Detail & Related papers (2020-06-01T16:45:38Z) - Can gender inequality be created without inter-group discrimination? [0.0]
We test whether a simple agent-based dynamic process could create gender inequality.
We simulate a population in which randomly selected pairs of agents interact and influence each other's esteem judgments of self and others.
Without prejudice, stereotypes, segregation, or categorization, our model produces inter-group inequality of self-esteem and status that is stable, consensual, and exhibits characteristics of glass ceiling effects.
arXiv Detail & Related papers (2020-05-05T07:33:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.