A Sociotechnical View of Algorithmic Fairness
- URL: http://arxiv.org/abs/2110.09253v1
- Date: Mon, 27 Sep 2021 21:17:16 GMT
- Title: A Sociotechnical View of Algorithmic Fairness
- Authors: Mateusz Dolata and Stefan Feuerriegel and Gerhard Schwabe
- Abstract summary: Algorithmic fairness has been framed as a newly emerging technology that mitigates systemic discrimination in automated decision-making.
We argue that fairness is an inherently social concept and that technologies for algorithmic fairness should therefore be approached through a sociotechnical lens.
- Score: 16.184328505946763
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Algorithmic fairness has been framed as a newly emerging technology that
mitigates systemic discrimination in automated decision-making, providing
opportunities to improve fairness in information systems (IS). However, based
on a state-of-the-art literature review, we argue that fairness is an
inherently social concept and that technologies for algorithmic fairness should
therefore be approached through a sociotechnical lens. We advance the discourse
on algorithmic fairness as a sociotechnical phenomenon. Our research objective
is to embed algorithmic fairness (AF) in the sociotechnical view of IS. Specifically, we elaborate on
why the outcomes of a system that uses algorithmic means to assure fairness depend
on mutual influences between technical and social structures. This perspective
can generate new insights that integrate knowledge from both technical fields
and social studies. Further, it spurs new directions for IS debates. We
contribute as follows: First, we problematize fundamental assumptions in the
current discourse on algorithmic fairness based on a systematic analysis of 310
articles. Second, we respond to these assumptions by theorizing algorithmic
fairness as a sociotechnical construct. Third, we propose directions for IS
researchers to enhance their impacts by pursuing a unique understanding of
sociotechnical algorithmic fairness. We call for and undertake a holistic
approach to AF. A sociotechnical perspective on algorithmic fairness can yield
holistic solutions to systemic biases and discrimination.
Related papers
- Algorithmic Fairness: A Tolerance Perspective [31.882207568746168]
This survey delves into the existing literature on algorithmic fairness, specifically highlighting its multifaceted social consequences.
We introduce a novel taxonomy based on 'tolerance', a term we define as the degree to which variations in fairness outcomes are acceptable.
Our systematic review covers diverse industries, revealing critical insights into the balance between algorithmic decision making and social equity.
arXiv Detail & Related papers (2024-04-26T08:16:54Z)
- Algorithms as Social-Ecological-Technological Systems: an Environmental Justice Lens on Algorithmic Audits [0.5076419064097732]
This paper reframes algorithmic systems as intimately connected to and part of social and ecological systems.
We propose a first-of-its-kind methodology for environmental justice-oriented algorithmic audits.
arXiv Detail & Related papers (2023-05-09T19:25:25Z)
- Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z)
- Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is broad consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Fairness Deconstructed: A Sociotechnical View of 'Fair' Algorithms in Criminal Justice [0.0]
Machine learning researchers have developed methods for fairness, many of which rely on equalizing empirical metrics across protected attributes (a pattern sketched below).
I argue that much of fair ML fails to account for fairness issues in the underlying crime data.
Instead of building AI that reifies power imbalances, I ask whether data science can be used to understand the root causes of structural marginalization.
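As a rough illustration of the metric-equalization pattern this summary refers to, the sketch below computes a demographic parity gap over a hypothetical set of binary predictions; the data, function name, and group labels are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch, assuming hypothetical predictions and group labels:
# the demographic parity gap is one of the empirical metrics that fair
# ML methods typically try to equalize across protected attributes.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest difference in positive-prediction rates between groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Toy example with two protected groups, "a" and "b".
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_gap(y_pred, group))  # 0.5 -> far from parity
```

A method that "equalizes" this metric would constrain or post-process the model until the gap is near zero, which is precisely the practice the paper questions when the underlying crime data is itself biased.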
arXiv Detail & Related papers (2021-06-25T06:52:49Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome (see the sketch after this entry).
Current approaches rarely take into account the feasibility of the actions needed to realize the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
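To make the notion of a counterfactual explanation concrete, here is a naive gradient-based search against a logistic model with assumed weights; it is a generic illustration, not the CEILS method, and it deliberately ignores the feasibility constraints that CEILS addresses via causal interventions in latent space.

```python
# Minimal sketch, assuming a known logistic model (w, b): perturb an
# input until the predicted class flips, yielding the "features that
# need to be changed" for the desired outcome. Feasibility of the
# changes is ignored here, which is the gap CEILS targets.
import numpy as np

def counterfactual(x, w, b, target=1, lr=0.1, steps=200):
    """Gradient ascent on the target-class log-likelihood w.r.t. x."""
    x_cf = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w @ x_cf + b)))  # P(class 1 | x_cf)
        if (p > 0.5) == (target == 1):
            break                                   # decision flipped
        x_cf += lr * (target - p) * w               # log-likelihood gradient
    return x_cf

w, b = np.array([1.5, -2.0]), 0.0
x = np.array([-1.0, 0.5])               # currently classified as 0
print(counterfactual(x, w, b) - x)      # per-feature changes needed
```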
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Fair Representation Learning for Heterogeneous Information Networks [35.80367469624887]
We propose a comprehensive set of de-biasing methods for fair HIN representation learning.
We study the behavior of these algorithms, especially their ability to balance the trade-off between fairness and prediction accuracy (a trade-off illustrated below).
We evaluate the performance of the proposed methods in an automated career counseling application.
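The fairness/accuracy trade-off mentioned above can be seen even in a toy post-processing setting; the sketch below uses synthetic scores (an assumption for illustration, unrelated to the paper's HIN embeddings) and shows how shifting one group's decision threshold shrinks the parity gap at some cost in accuracy.

```python
# Minimal sketch with synthetic data: moving group 1's decision
# threshold trades a smaller demographic parity gap against accuracy.
import numpy as np

rng = np.random.default_rng(0)
g = rng.integers(0, 2, 1000)                 # protected attribute
score = rng.normal(0.2 * g, 1.0)             # model scores leak g
y = (score + rng.normal(0.0, 1.0, 1000) > 0).astype(int)

for t1 in (0.0, 0.1, 0.2):                   # threshold for group 1 only
    pred = np.where(g == 1, score > t1, score > 0.0).astype(int)
    acc = (pred == y).mean()
    gap = pred[g == 1].mean() - pred[g == 0].mean()
    print(f"t1={t1:.1f}  accuracy={acc:.3f}  parity gap={gap:+.3f}")
```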
arXiv Detail & Related papers (2021-04-18T08:28:18Z)
- The FairCeptron: A Framework for Measuring Human Perceptions of Algorithmic Fairness [1.4449464910072918]
The FairCeptron framework is an approach for studying perceptions of fairness in algorithmic decision making such as in ranking or classification.
The framework includes fairness scenario generation, fairness perception elicitation and fairness perception analysis.
An implementation of the FairCeptron framework is openly available, and it can easily be adapted to study perceptions of algorithmic fairness in other application contexts.
arXiv Detail & Related papers (2021-02-08T10:47:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.