The Use and Misuse of Counterfactuals in Ethical Machine Learning
- URL: http://arxiv.org/abs/2102.05085v1
- Date: Tue, 9 Feb 2021 19:28:41 GMT
- Title: The Use and Misuse of Counterfactuals in Ethical Machine Learning
- Authors: Atoosa Kasirzadeh, Andrew Smart
- Abstract summary: We argue for more caution with the use of counterfactuals when the facts to be considered are social categories such as race or gender.
We conclude that the counterfactual approach in machine learning fairness and social explainability can require an incoherent theory of what social categories are.
- Score: 2.28438857884398
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The use of counterfactuals for considerations of algorithmic fairness and
explainability is gaining prominence within the machine learning community and
industry. This paper argues for more caution with the use of counterfactuals
when the facts to be considered are social categories such as race or gender.
We review a broad body of papers from philosophy and social sciences on social
ontology and the semantics of counterfactuals, and we conclude that the
counterfactual approach in machine learning fairness and social explainability
can require an incoherent theory of what social categories are. Our findings
suggest that most often the social categories may not admit counterfactual
manipulation, and hence may not appropriately satisfy the demands for
evaluating the truth or falsity of counterfactuals. This is important because
the widespread use of counterfactuals in machine learning can lead to
misleading results when applied in high-stakes domains. Accordingly, we argue
that even though counterfactuals play an essential part in some causal
inferences, their use for questions of algorithmic fairness and social
explanations can create more problems than they resolve. Our positive result is
a set of tenets about using counterfactuals for fairness and explanations in
machine learning.
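To make concrete the kind of counterfactual query the paper critiques, here is a minimal, self-contained sketch (the model weights, feature names, and scoring function are all hypothetical, not from the paper): it flips a protected attribute and compares predictions, which tacitly assumes the attribute admits counterfactual manipulation, the very assumption the paper questions.

```python
# Hypothetical toy model illustrating a counterfactual fairness check.
# Nothing here is the paper's method; it is the practice under critique.

def predict_score(applicant: dict) -> float:
    """Toy linear scoring model; weights are illustrative only."""
    score = 0.4 * applicant["income"] + 0.3 * applicant["credit_history"]
    # Direct dependence on a protected attribute, for illustration only.
    if applicant["gender"] == "female":
        score -= 0.2
    return score

def counterfactual_gap(applicant: dict, attribute: str, counterfactual_value) -> float:
    """Difference between the counterfactual and factual predictions.

    Assumes `attribute` can be manipulated in isolation, leaving every
    other feature fixed; the paper argues this assumption is incoherent
    for social categories such as race or gender.
    """
    factual = predict_score(applicant)
    cf = dict(applicant)
    cf[attribute] = counterfactual_value
    return predict_score(cf) - factual

applicant = {"income": 1.0, "credit_history": 0.8, "gender": "female"}
print(round(counterfactual_gap(applicant, "gender", "male"), 6))  # 0.2
```

A nonzero gap is typically read as evidence of counterfactual unfairness; the paper's point is that for social categories the counterfactual world this computation imagines may not be well defined.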
Related papers
- Longitudinal Counterfactuals: Constraints and Opportunities [59.11233767208572]
We propose using longitudinal data to assess and improve plausibility in counterfactuals.
We develop a metric that compares longitudinal differences to counterfactual differences, allowing us to evaluate how similar a counterfactual is to prior observed changes.
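The longitudinal-plausibility idea described above can be sketched as follows (a hedged illustration, not the authors' actual metric; the function name and toy data are assumptions): a counterfactual's feature change is scored by how closely it resembles changes actually observed between longitudinal visits.

```python
# Hypothetical sketch: plausibility of a counterfactual change, measured as
# its distance to the nearest change observed in longitudinal data.

def plausibility(delta_cf, observed_deltas):
    """Smallest Euclidean distance from the counterfactual feature change
    to any observed longitudinal change; smaller means more plausible."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(dist(delta_cf, d) for d in observed_deltas)

# Toy year-over-year feature changes observed for prior individuals.
observed = [(0.1, 0.0), (0.2, 0.1), (-0.1, 0.0)]
print(plausibility((0.15, 0.05), observed))  # small value: a plausible change
```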
arXiv Detail & Related papers (2024-02-29T20:17:08Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z)
- Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z)
- Identifiability of Causal-based Fairness Notions: A State of the Art [4.157415305926584]
Machine learning algorithms can produce biased outcomes and predictions, typically against minorities and under-represented sub-populations.
This paper is a compilation of the major identifiability results which are of particular relevance for machine learning fairness.
arXiv Detail & Related papers (2022-03-11T13:10:32Z)
- The Fairness Field Guide: Perspectives from Social and Formal Sciences [16.53498469585148]
There is a critical lack of literature that explains the interplay of fair machine learning with philosophy, sociology, and law.
We give the mathematical and algorithmic backgrounds of several popular statistical and causal-based fair machine learning methods.
We explore several criticisms of the current approaches to fair machine learning from sociological and philosophical viewpoints.
arXiv Detail & Related papers (2022-01-13T21:30:03Z)
- Individual Explanations in Machine Learning Models: A Survey for Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them, as their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial amount of methods for providing interpretable explanations to machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z)
- Fairness in Machine Learning [15.934879442202785]
We show how causal Bayesian networks can play an important role to reason about and deal with fairness.
We present a unified framework that encompasses methods that can deal with different settings and fairness criteria.
arXiv Detail & Related papers (2020-12-31T18:38:58Z)
- Counterfactual Explanations for Machine Learning: A Review [5.908471365011942]
We review and categorize research on counterfactual explanations in machine learning.
Modern approaches to counterfactual explainability in machine learning draw connections to the established legal doctrine in many countries.
arXiv Detail & Related papers (2020-10-20T20:08:42Z)
- Theory In, Theory Out: The uses of social theory in machine learning for social science [3.180013942295509]
We show how social theory can be used to answer the basic methodological and interpretive questions that arise at each stage of the machine learning pipeline.
We believe this paper can act as a guide for computer and social scientists alike to navigate the substantive questions involved in applying the tools of machine learning to social data.
arXiv Detail & Related papers (2020-01-09T20:04:25Z)
- On Consequentialism and Fairness [64.35872952140677]
We provide a consequentialist critique of common definitions of fairness within machine learning.
We conclude with a broader discussion of the issues of learning and randomization.
arXiv Detail & Related papers (2020-01-02T05:39:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.