Towards the Socio-Algorithmic Construction of Fairness: The Case of Automatic Price-Surging in Ride-Hailing
- URL: http://arxiv.org/abs/2408.04685v1
- Date: Thu, 8 Aug 2024 09:11:12 GMT
- Title: Towards the Socio-Algorithmic Construction of Fairness: The Case of Automatic Price-Surging in Ride-Hailing
- Authors: Mateusz Dolata, Gerhard Schwabe
- Abstract summary: We analyze the public discourse that emerged after a five-fold price surge following the Brooklyn Subway Shooting.
Our results indicate that algorithms, even if not explicitly addressed in the discourse, strongly shape the construction of fairness assessments and notions.
We claim that the process of constructing notions of fairness is no longer just social; it has become a socio-algorithmic process.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Algorithms make decisions that affect humans, and have been shown to perpetuate biases and discrimination. Decisions by algorithms are subject to different interpretations. Algorithms' behaviors are the basis for the construal of moral assessments and standards. Yet we lack an understanding of how algorithms impact social construction processes, and vice versa. Without such understanding, social construction processes may be disrupted and, eventually, may impede moral progress in society. We analyze the public discourse that emerged after a significant (five-fold) price surge following the Brooklyn Subway Shooting on April 12, 2022, in New York City. There was much controversy around the two ride-hailing firms' algorithms' decisions. The discussions revolved around various notions of fairness and the justifiability of the algorithms' decisions. Our results indicate that algorithms, even if not explicitly addressed in the discourse, strongly shape the construction of fairness assessments and notions. They initiate the exchange, form people's expectations, evoke people's solidarity with specific groups, and serve as a vehicle for moral crusading. However, they are also subject to adjustments based on social forces. We claim that the process of constructing notions of fairness is no longer just social; it has become a socio-algorithmic process. We propose a theory of socio-algorithmic construction as a mechanism for establishing notions of fairness and other ethical constructs.
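The "algorithmic decision" at the center of this discourse is the automatic surge multiplier applied to fares. As a purely illustrative, hypothetical sketch (the firms' actual pricing algorithms are proprietary; the function name, the demand/supply ratio formula, and the five-fold cap below are assumptions, not the firms' methods), such a decision can be pictured as:

```python
# Hypothetical sketch of a demand-based surge multiplier, for illustration only.
# The real ride-hailing pricing algorithms discussed in the paper are proprietary;
# the formula and the cap here are assumptions.

def surge_multiplier(open_requests: int, available_drivers: int,
                     base_multiplier: float = 1.0, cap: float = 5.0) -> float:
    """Scale the fare with the ratio of demand to supply, clipped to a cap."""
    if available_drivers <= 0:
        return cap  # no supply at all: charge the maximum allowed multiplier
    ratio = open_requests / available_drivers
    return max(base_multiplier, min(base_multiplier * ratio, cap))

if __name__ == "__main__":
    # A sudden spike in requests (e.g. during an emergency) with few drivers nearby
    # pushes the multiplier to its cap -- the kind of automatic decision whose
    # fairness the public discourse then contests.
    print(surge_multiplier(open_requests=500, available_drivers=100))  # 5.0
```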
Related papers
- A theory of appropriateness with applications to generative artificial intelligence [56.23261221948216]
We need to understand how appropriateness guides human decision making in order to properly evaluate AI decision making and improve it.
This paper presents a theory of appropriateness: how it functions in human society, how it may be implemented in the brain, and what it means for responsible deployment of generative AI technology.
arXiv Detail & Related papers (2024-12-26T00:54:03Z) - The Odyssey of Commonsense Causality: From Foundational Benchmarks to Cutting-Edge Reasoning [70.16523526957162]
Understanding commonsense causality helps people understand the principles of the real world better.
Despite its significance, a systematic exploration of this topic is notably lacking.
Our work aims to provide a systematic overview, update scholars on recent advancements, and provide a pragmatic guide for beginners.
arXiv Detail & Related papers (2024-06-27T16:30:50Z) - Why Algorithms Remain Unjust: Power Structures Surrounding Algorithmic Activity [0.0]
Reformists have failed to curtail algorithmic injustice because they ignore the power structure surrounding algorithms.
I argue that the reason Algorithmic Activity is unequal, undemocratic, and unsustainable is that the power structure shaping it is one of economic empowerment rather than social empowerment.
arXiv Detail & Related papers (2024-05-28T17:49:24Z) - Fair Enough? A map of the current limitations of the requirements to have fair algorithms [43.609606707879365]
We argue that there is a hiatus between what society demands from Automated Decision-Making systems and what this demand actually means in real-world scenarios.
We outline the key features of this hiatus and pinpoint a set of crucial open points that we as a society must address in order to give concrete meaning to the increasing demand for fairness in Automated Decision-Making systems.
arXiv Detail & Related papers (2023-11-21T08:44:38Z) - Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z) - Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z) - Designing Equitable Algorithms [1.9006392177894293]
Predictive algorithms are now used to help distribute a large share of our society's resources and sanctions.
These algorithms can improve the efficiency and equity of decision-making.
But they could entrench and exacerbate disparities, particularly along racial, ethnic, and gender lines.
arXiv Detail & Related papers (2023-02-17T22:00:44Z) - Algorithmic Fairness and Structural Injustice: Insights from Feminist Political Philosophy [2.28438857884398]
'Algorithmic fairness' aims to mitigate harmful biases in data-driven algorithms.
The perspectives of feminist political philosophers on social justice have been largely neglected.
This paper brings some key insights of feminist political philosophy to algorithmic fairness.
arXiv Detail & Related papers (2022-06-02T09:18:03Z) - A Sociotechnical View of Algorithmic Fairness [16.184328505946763]
Algorithmic fairness has been framed as a newly emerging technology that mitigates systemic discrimination in automated decision-making.
We argue that fairness is an inherently social concept and that technologies for algorithmic fairness should therefore be approached through a sociotechnical lens.
arXiv Detail & Related papers (2021-09-27T21:17:16Z) - Legal perspective on possible fairness measures - A legal discussion using the example of hiring decisions (preprint) [0.0]
We explain the different kinds of fairness concepts that might be applicable for the specific application of hiring decisions.
We analyze their pros and cons with regard to the respective fairness interpretation and evaluate them from a legal perspective.
arXiv Detail & Related papers (2021-08-16T06:41:39Z) - Fairness Deconstructed: A Sociotechnical View of 'Fair' Algorithms in Criminal Justice [0.0]
Machine learning researchers have developed methods for fairness, many of which rely on equalizing empirical metrics across protected attributes.
I argue that much of fair ML fails to account for fairness issues in the underlying crime data.
Instead of building AI that reifies power imbalances, I ask whether data science can be used to understand the root causes of structural marginalization.
arXiv Detail & Related papers (2021-06-25T06:52:49Z) - The zoo of Fairness metrics in Machine Learning [62.997667081978825]
In recent years, the problem of addressing fairness in Machine Learning (ML) and automatic decision-making has attracted a lot of attention.
A plethora of different definitions of fairness in ML have been proposed, reflecting different notions of what constitutes a "fair decision" in situations impacting individuals in the population.
In this work, we try to make some order out of this zoo of definitions.
arXiv Detail & Related papers (2021-06-01T13:19:30Z) - Learning from Learning Machines: Optimisation, Rules, and Social Norms [91.3755431537592]
It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
arXiv Detail & Related papers (2019-12-29T17:42:06Z)