Modeling Discrimination with Causal Abstraction
- URL: http://arxiv.org/abs/2501.08429v1
- Date: Tue, 14 Jan 2025 20:42:57 GMT
- Title: Modeling Discrimination with Causal Abstraction
- Authors: Milan Mossé, Kara Schechtman, Frederick Eberhardt, Thomas Icard
- Abstract summary: A person is directly racially discriminated against only if her race caused her worse treatment.
This implies that race is an attribute sufficiently separable from other attributes to isolate its causal role.
But race is embedded in a nexus of social factors that resist isolated treatment.
- Score: 4.9277400520479455
- Abstract: A person is directly racially discriminated against only if her race caused her worse treatment. This implies that race is an attribute sufficiently separable from other attributes to isolate its causal role. But race is embedded in a nexus of social factors that resist isolated treatment. If race is socially constructed, in what sense can it cause worse treatment? Some propose that the perception of race, rather than race itself, causes worse treatment. Others suggest that since causal models require modularity, i.e. the ability to isolate causal effects, attempts to causally model discrimination are misguided. This paper addresses the problem differently. We introduce a framework for reasoning about discrimination, in which race is a high-level abstraction of lower-level features. In this framework, race can be modeled as itself causing worse treatment. Modularity is ensured by allowing assumptions about social construction to be precisely and explicitly stated, via an alignment between race and its constituents. Such assumptions can then be subjected to normative and empirical challenges, which lead to different views of when discrimination occurs. By distinguishing constitutive and causal relations, the abstraction framework pinpoints disagreements in the current literature on modeling discrimination, while preserving a precise causal account of discrimination.
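To make the abstraction idea concrete, the following is a minimal, hypothetical sketch (in Python, not taken from the paper): a low-level model over constituent features is aligned with a high-level model in which race appears as a single variable that itself causes treatment. The variable names, values, and the alignment map `tau` are illustrative assumptions; the consistency check at the end plays the role of the modularity requirement discussed in the abstract.

```python
# Toy illustration of causal abstraction: a low-level model over constituent
# features aligned with a high-level model where "race" is a single variable.
# All names, values, and the map tau are hypothetical assumptions for
# illustration, not the authors' formalization.

from itertools import product

def low_level_treatment(setting):
    """Toy low-level mechanism: treatment is determined by the constituents."""
    # Hypothetical: worse treatment (0) whenever phenotype == "B".
    return 0 if setting["phenotype"] == "B" else 1

def tau(setting):
    """Alignment map: abstract the constituents into the high-level variable 'race'.

    This encodes an explicit, contestable assumption about social construction:
    here, the high-level category is determined by phenotype alone.
    """
    return "B" if setting["phenotype"] == "B" else "A"

def high_level_treatment(race):
    """High-level mechanism: race itself causes treatment."""
    return 0 if race == "B" else 1

# Consistency check: abstracting and then applying the high-level mechanism
# agrees with the low-level mechanism on every low-level setting.
domains = {"ancestry": ["A", "B"], "phenotype": ["A", "B"], "self_id": ["A", "B"]}
for values in product(*domains.values()):
    setting = dict(zip(domains, values))
    assert high_level_treatment(tau(setting)) == low_level_treatment(setting)
print("Alignment is consistent: the high-level model abstracts the low-level one.")
```

Under this kind of alignment, challenging the assumption amounts to proposing a different `tau` (e.g., one that also depends on ancestry or self-identification), which is what allows normative and empirical disagreements to be stated precisely.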
Related papers
- The Devil is in the Neurons: Interpreting and Mitigating Social Biases in Pre-trained Language Models [78.69526166193236]
Pre-trained Language Models (PLMs) have been acknowledged to contain harmful information, such as social biases.
We propose Social Bias Neurons to accurately pinpoint units (i.e., neurons) in a language model that can be attributed to undesirable behavior, such as social bias.
As measured by prior metrics from StereoSet, our model achieves a higher degree of fairness while maintaining language modeling ability with low cost.
arXiv Detail & Related papers (2024-06-14T15:41:06Z) - Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy from a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z) - Racial/Ethnic Categories in AI and Algorithmic Fairness: Why They Matter and What They Represent [0.0]
We show how racial categories with unclear assumptions and little justification can lead to varying datasets that poorly represent groups.
We also develop a framework, CIRCSheets, for documenting the choices and assumptions in choosing racial categories and the process of racialization into these categories.
arXiv Detail & Related papers (2024-04-10T04:04:05Z) - A New Paradigm for Counterfactual Reasoning in Fairness and Recourse [12.119272303766056]
The traditional paradigm for counterfactual reasoning in this literature is the interventional counterfactual.
An inherent limitation of this paradigm is that some demographic interventions may not translate into the formalisms of interventional counterfactuals.
In this work, we explore a new paradigm based instead on the backtracking counterfactual.
arXiv Detail & Related papers (2024-01-25T04:28:39Z) - An Empirical Analysis of Racial Categories in the Algorithmic Fairness Literature [2.2713084727838115]
We analyze how race is conceptualized and formalized in algorithmic fairness frameworks.
We find that differing notions of race are adopted inconsistently, at times even within a single analysis.
We argue that the construction of racial categories is a value-laden process with significant social and political consequences.
arXiv Detail & Related papers (2023-09-12T21:23:29Z) - Revealing Unfair Models by Mining Interpretable Evidence [50.48264727620845]
The popularity of machine learning has increased the risk of unfair models getting deployed in high-stake applications.
In this paper, we tackle the novel task of revealing unfair models by mining interpretable evidence.
Our method finds highly interpretable and solid evidence to effectively reveal the unfairness of trained models.
arXiv Detail & Related papers (2022-07-12T20:03:08Z) - On Disentangled and Locally Fair Representations [95.6635227371479]
We study the problem of performing classification in a manner that is fair for sensitive groups, such as race and gender.
We learn a locally fair representation, such that, under the learned representation, the neighborhood of each sample is balanced in terms of the sensitive attribute.
arXiv Detail & Related papers (2022-05-05T14:26:50Z) - fairadapt: Causal Reasoning for Fair Data Pre-processing [2.1915057426589746]
This manuscript describes the R-package fairadapt, which implements a causal inference pre-processing method.
We discuss appropriate relaxations which assume certain causal pathways from the sensitive attribute to the outcome are not discriminatory.
arXiv Detail & Related papers (2021-10-19T18:48:28Z) - Fairness for Image Generation with Uncertain Sensitive Attributes [97.81354305427871]
This work tackles the issue of fairness in the context of generative procedures, such as image super-resolution.
While traditional group fairness definitions are typically defined with respect to specified protected groups, we emphasize that there are no ground truth identities.
We show that the natural extension of demographic parity is strongly dependent on the grouping, and impossible to achieve obliviously.
arXiv Detail & Related papers (2021-06-23T06:17:17Z) - What's Sex Got To Do With Fair Machine Learning? [0.0]
We argue that many approaches to "fairness" require one to specify a causal model of the data generating process.
We show this by exploring the formal assumption of modularity in causal models.
We argue that this ontological picture is false. Many of the "effects" that sex purportedly "causes" are in fact features of sex as a social status.
arXiv Detail & Related papers (2020-06-02T16:51:39Z) - Aligning Faithful Interpretations with their Social Attribution [58.13152510843004]
We find that the requirement of model interpretations to be faithful is vague and incomplete.
We identify that the problem is a misalignment between the causal chain of decisions (causal attribution) and the attribution of human behavior to the interpretation (social attribution).
arXiv Detail & Related papers (2020-06-01T16:45:38Z)