Artificial Intelligence and Structural Injustice: Foundations for Equity, Values, and Responsibility
- URL: http://arxiv.org/abs/2205.02389v1
- Date: Thu, 5 May 2022 01:21:47 GMT
- Title: Artificial Intelligence and Structural Injustice: Foundations for Equity, Values, and Responsibility
- Authors: Johannes Himmelreich and Désirée Lim
- Abstract summary: This chapter argues for a structural injustice approach to the governance of AI.
The analytical component consists of structural explanations that are well-known in the social sciences.
The evaluative component is a theory of justice.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This chapter argues for a structural injustice approach to the governance of
AI. Structural injustice has an analytical and an evaluative component. The
analytical component consists of structural explanations that are well-known in
the social sciences. The evaluative component is a theory of justice.
Structural injustice is a powerful conceptual tool that allows researchers and
practitioners to identify, articulate, and perhaps even anticipate AI biases.
The chapter begins with an example of racial bias in AI that arises from
structural injustice. The chapter then presents the concept of structural
injustice as introduced by the philosopher Iris Marion Young. The chapter
moreover argues that structural injustice is well suited as an approach to the
governance of AI and compares this approach to alternative approaches that
start from analyses of harms and benefits or from value statements. The chapter
suggests that structural injustice provides methodological and normative
foundations for the values and concerns of Diversity, Equity, and Inclusion.
The chapter closes with an outlook on the idea of structure and on
responsibility. The idea of a structure is central to justice. An open
theoretical research question is to what extent AI is itself part of the
structure of society. Finally, the practice of responsibility is central to
structural injustice. Even if they cannot be held responsible for the existence
of structural injustice, every individual and every organization has some
responsibility to address structural injustice going forward.
Related papers
- Epistemic Scarcity: The Economics of Unresolvable Unknowns [0.0]
We argue that AI systems are incapable of performing the core functions of economic coordination.
We critique dominant ethical AI frameworks as extensions of constructivist rationalism.
arXiv Detail & Related papers (2025-07-02T08:46:24Z)
- Naming is framing: How cybersecurity's language problems are repeating in AI governance [0.0]
This paper argues that misnomers like cybersecurity and artificial intelligence (AI) are more than semantic quirks.
It argues that these misnomers carry significant governance risks by obscuring human agency, inflating expectations, and distorting accountability.
The paper advocates for a language-first approach to AI governance: one that interrogates dominant metaphors, foregrounds human roles, and co-develops a lexicon that is precise, inclusive, and reflexive.
arXiv Detail & Related papers (2025-04-16T20:58:26Z)
- Authoritarian Recursions: How Fiction, History, and AI Reinforce Control in Education, Warfare, and Discourse [0.0]
The article theorizes how AI systems consolidate institutional control across education, warfare, and digital discourse.
Case studies are analyzed alongside cultural imaginaries such as Orwell's Nineteen Eighty-Four, Skynet, and Black Mirror, which are used as tools to surface ethical blind spots.
arXiv Detail & Related papers (2025-04-12T01:01:26Z)
- Agency Is Frame-Dependent [94.91580596320331]
Agency is a system's capacity to steer outcomes toward a goal.
We argue that agency is fundamentally frame-dependent.
We conclude that any basic science of agency requires frame-dependence.
arXiv Detail & Related papers (2025-02-06T08:34:57Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Towards the Socio-Algorithmic Construction of Fairness: The Case of Automatic Price-Surging in Ride-Hailing [0.0]
We analyze the public discourse that emerged after a five-fold price surge following the Brooklyn Subway Shooting.
Our results indicate that algorithms, even if not explicitly addressed in the discourse, strongly shape how fairness assessments and notions are constructed.
We claim that the process of constructing notions of fairness is no longer just social; it has become a socio-algorithmic process.
arXiv Detail & Related papers (2024-08-08T09:11:12Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
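Bias On Demand is built around generating synthetic datasets with controllable, injected bias. The sketch below illustrates that idea in plain numpy; the names, parameters, and injection mechanism are assumptions for illustration, not the actual API of either package.

```python
# Minimal sketch of bias injection and measurement on synthetic data.
# Illustrative only: names and parameters are assumptions, not the
# actual API of the Bias On Demand or FairView packages.
import numpy as np

rng = np.random.default_rng(seed=0)
n = 10_000

# Protected attribute A (e.g., group membership) and a legitimate feature X.
a = rng.integers(0, 2, size=n)
x = rng.normal(loc=0.0, scale=1.0, size=n)

# Inject "historical bias": observed labels depend on A as well as X,
# with bias_strength controlling how strongly A shifts the outcome.
bias_strength = 0.8
logits = 1.5 * x - bias_strength * a
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# Measure the resulting disparity as the demographic parity difference,
# i.e. the gap in positive-outcome rates between the two groups.
rate_a0 = y[a == 0].mean()
rate_a1 = y[a == 1].mean()
print(f"P(Y=1 | A=0) = {rate_a0:.3f}")
print(f"P(Y=1 | A=1) = {rate_a1:.3f}")
print(f"Demographic parity difference = {rate_a0 - rate_a1:.3f}")
```

Varying bias_strength and re-measuring is the kind of controlled experiment that the thesis's three pillars (understanding, mitigating, and accounting for bias) presuppose.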
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Does Explainable AI Have Moral Value? [0.0]
Explainable AI (XAI) aims to bridge the gap between complex algorithmic systems and human stakeholders.
Current discourse often examines XAI in isolation as either a technological tool, user interface, or policy mechanism.
This paper proposes a unifying ethical framework grounded in moral duties and the concept of reciprocity.
arXiv Detail & Related papers (2023-11-05T15:59:27Z)
- Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
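As a toy illustration of that insight, the sketch below simulates a simple structural causal model in which a protected attribute affects the outcome both directly and through a mediator, then attributes the observed disparity to each pathway. The model and its parameters are assumptions for illustration, not the paper's formal definitions.

```python
# Toy illustration: link an observed disparity (total variation) to
# underlying causal mechanisms by simulating a structural causal model
# with a direct pathway X -> Y and a mediated pathway X -> W -> Y.
# The model and parameters are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000

def simulate(x, direct=True, mediated=True):
    """Sample Y from the SCM, optionally disabling a pathway."""
    w = 0.9 * x * mediated + rng.normal(size=n)      # mediator W
    logits = 0.7 * x * direct + 1.2 * w - 0.5
    return (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# Observed disparity: total variation in outcome rates between groups.
y_x1 = simulate(np.ones(n))
y_x0 = simulate(np.zeros(n))   # baseline; both pathways vanish at x = 0
tv = y_x1.mean() - y_x0.mean()

# Re-simulate with one mechanism shut off to estimate its contribution.
direct_only = simulate(np.ones(n), mediated=False).mean() - y_x0.mean()
mediated_only = simulate(np.ones(n), direct=False).mean() - y_x0.mean()

print(f"Total variation:        {tv:.3f}")
print(f"Direct pathway alone:   {direct_only:.3f}")
print(f"Mediated pathway alone: {mediated_only:.3f}")
# The pieces need not sum to the total: the logistic link is nonlinear.
```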
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Relational Artificial Intelligence [5.5586788751870175]
Even though AI is traditionally associated with rational decision making, understanding and shaping the societal impact of AI in all its facets requires a relational perspective.
A rational approach to AI, where computational algorithms drive decision making independent of human intervention, has been shown to result in bias and exclusion.
A relational approach, which focuses on the relational nature of things, is needed to deal with the ethical, legal, societal, cultural, and environmental implications of AI.
arXiv Detail & Related papers (2022-02-04T15:29:57Z)
- Toward a Theory of Justice for Artificial Intelligence [2.28438857884398]
It holds that the basic structure of society should be understood as a composite of socio-technical systems.
As a consequence, egalitarian norms of justice apply to the technology when it is deployed in these contexts.
arXiv Detail & Related papers (2021-10-27T13:23:38Z)
- Fairness Deconstructed: A Sociotechnical View of 'Fair' Algorithms in Criminal Justice [0.0]
Machine learning researchers have developed methods for fairness, many of which rely on equalizing empirical metrics across protected attributes.
I argue that much of fair ML fails to account for fairness issues in the underlying crime data.
Instead of building AI that reifies power imbalances, I ask whether data science can be used to understand the root causes of structural marginalization.
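The "equalizing empirical metrics" move that this entry critiques can be made concrete. The sketch below computes a false-positive-rate gap across a protected attribute on synthetic data; all names and data are assumptions for illustration. Because the check runs against observed labels, equal error rates say nothing about bias in how those labels were produced.

```python
# Sketch of the metric-equalization that much of fair ML targets:
# compare false positive rates across a protected attribute.
# Synthetic data; all names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=2)
n = 50_000

a = rng.integers(0, 2, size=n)        # protected attribute
y_obs = rng.integers(0, 2, size=n)    # observed (possibly biased) labels
scores = rng.random(n) + 0.05 * a     # a model's risk scores, skewed by A
y_hat = (scores > 0.5).astype(int)    # thresholded predictions

def false_positive_rate(y_true, y_pred, mask):
    """FPR within the sub-population selected by mask."""
    negatives = (y_true == 0) & mask
    return (y_pred[negatives] == 1).mean()

fpr_a0 = false_positive_rate(y_obs, y_hat, a == 0)
fpr_a1 = false_positive_rate(y_obs, y_hat, a == 1)
print(f"FPR (A=0): {fpr_a0:.3f}")
print(f"FPR (A=1): {fpr_a1:.3f}")
print(f"FPR gap:   {abs(fpr_a0 - fpr_a1):.3f}")
# Crucially, y_obs may itself reflect biased data collection (e.g.,
# arrests standing in for offending), which is exactly the limitation
# the paper presses against "fair" metrics.
```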
arXiv Detail & Related papers (2021-06-25T06:52:49Z)
- Aligning Faithful Interpretations with their Social Attribution [58.13152510843004]
We find that the requirement that model interpretations be faithful is vague and incomplete.
We identify the problem as a misalignment between the causal chain of decisions (causal attribution) and the attribution of human behavior to the interpretation (social attribution).
arXiv Detail & Related papers (2020-06-01T16:45:38Z)
- On Consequentialism and Fairness [64.35872952140677]
We provide a consequentialist critique of common definitions of fairness within machine learning.
We conclude with a broader discussion of the issues of learning and randomization.
arXiv Detail & Related papers (2020-01-02T05:39:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.