Secondary Bounded Rationality: A Theory of How Algorithms Reproduce Structural Inequality in AI Hiring
- URL: http://arxiv.org/abs/2507.09233v2
- Date: Tue, 22 Jul 2025 07:25:59 GMT
- Title: Secondary Bounded Rationality: A Theory of How Algorithms Reproduce Structural Inequality in AI Hiring
- Authors: Jia Xiao
- Abstract summary: The article argues that AI systems inherit and amplify human cognitive and structural biases through technical and sociopolitical constraints. It shows how algorithmic processes transform historical inequalities, such as elite credential privileging and network homophily, into ostensibly meritocratic outcomes, and it proposes mitigation strategies, including counterfactual fairness testing, capital-aware auditing, and regulatory interventions, to disrupt this self-reinforcing inequality.
- Score: 0.174048653626208
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI-driven recruitment systems, while promising efficiency and objectivity, often perpetuate systemic inequalities by encoding cultural and social capital disparities into algorithmic decision making. This article develops and defends a novel theory of secondary bounded rationality, arguing that AI systems, despite their computational power, inherit and amplify human cognitive and structural biases through technical and sociopolitical constraints. Analyzing multimodal recruitment frameworks, we demonstrate how algorithmic processes transform historical inequalities, such as elite credential privileging and network homophily, into ostensibly meritocratic outcomes. Using Bourdieusian capital theory and Simon's bounded rationality, we reveal a recursive cycle where AI entrenches exclusion by optimizing for legible yet biased proxies of competence. We propose mitigation strategies, including counterfactual fairness testing, capital-aware auditing, and regulatory interventions, to disrupt this self-reinforcing inequality.
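As a rough sketch of the counterfactual fairness testing the abstract proposes (the paper prescribes no implementation; the model, feature names, and toy data below are invented for illustration), one can flip a capital-laden proxy feature, such as an elite-credential flag, and measure how far a screening model's score moves:

```python
# Minimal sketch of counterfactual fairness testing for a hiring model.
# score_fn, the feature names, and the toy data are all hypothetical;
# the paper itself does not specify an implementation.
from typing import Callable, Dict, List

def counterfactual_gap(score_fn: Callable[[Dict], float],
                       candidates: List[Dict],
                       proxy: str) -> float:
    """Mean absolute score shift when a binary proxy feature is flipped."""
    gaps = []
    for x in candidates:
        x_cf = dict(x)
        x_cf[proxy] = 1 - x_cf[proxy]  # counterfactual: flip the proxy
        gaps.append(abs(score_fn(x) - score_fn(x_cf)))
    return sum(gaps) / len(gaps)

# Toy model that (improperly) rewards elite credentials.
model = lambda x: 0.5 * x["skill"] + 0.3 * x["elite_school"]
pool = [{"skill": 0.9, "elite_school": 0},
        {"skill": 0.7, "elite_school": 1}]
print(counterfactual_gap(model, pool, "elite_school"))  # 0.3: proxy drives scores
```

A gap near zero would suggest the proxy is inert; here the nonzero gap flags elite schooling as a channel through which inherited capital, rather than competence, moves the score.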
Related papers
- Resource Rational Contractualism Should Guide AI Alignment [69.07915246220985]
Contractualist alignment proposes grounding decisions in agreements that diverse stakeholders would endorse.
We propose Resource-Rationalism: a framework where AI systems approximate the agreements rational parties would form.
An RRC-aligned agent would not only operate efficiently, but also be equipped to dynamically adapt to and interpret the ever-changing human social world.
arXiv Detail & Related papers (2025-06-20T18:57:13Z)
- Authoritarian Recursions: How Fiction, History, and AI Reinforce Control in Education, Warfare, and Discourse [0.0]
The article theorizes how artificial intelligence systems consolidate institutional control across education, military operations, and digital discourse.
It analyses how intelligent systems normalize hierarchy under the guise of efficiency and neutrality.
Case studies include automated proctoring in education, autonomous targeting in warfare, and algorithmic curation on social platforms.
arXiv Detail & Related papers (2025-04-12T01:01:26Z)
- AI and the Transformation of Accountability and Discretion in Urban Governance [1.9152655229960793]
The study synthesizes insights to propose guiding principles for responsible AI integration in decision-making processes.
The analysis argues that AI does not simply restrict or enhance discretion but redistributes it across institutional levels.
It may simultaneously strengthen managerial oversight, enhance decision-making consistency, and improve operational efficiency.
arXiv Detail & Related papers (2025-02-18T18:11:39Z)
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
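To make the SCM framing of the entry above concrete, here is a minimal toy model; the variables and structural equations are illustrative assumptions, not the paper's actual framework. Intervening on the AI's action while holding the human's action fixed isolates the AI's causal contribution to the outcome:

```python
# Toy structural causal model for responsibility attribution in a
# human-AI decision; the equations are illustrative assumptions only.
def outcome(ai_flag: int, human_override: int) -> int:
    """Final decision: the human can veto the AI's recommendation."""
    return ai_flag if human_override == 0 else 1 - ai_flag

def causal_effect_of_ai(human_override: int) -> int:
    """do(ai_flag=1) vs do(ai_flag=0), holding the human's action fixed."""
    return outcome(1, human_override) - outcome(0, human_override)

# If the human never overrides, the AI's action fully determines the
# outcome; under an override, its contribution is inverted, not erased.
print(causal_effect_of_ai(human_override=0))  # 1
print(causal_effect_of_ai(human_override=1))  # -1
```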
- Exploiting the Margin: How Capitalism Fuels AI at the Expense of Minoritized Groups [0.0]
This paper explores the relationship between capitalism, racial injustice, and artificial intelligence (AI).
It argues that AI acts as a contemporary vehicle for age-old forms of exploitation.
The paper promotes an approach that integrates social justice and equity into the core of technological design and policy.
arXiv Detail & Related papers (2024-03-10T22:40:07Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
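A hedged sketch of that core idea, using a toy linear SCM assumed purely for illustration: an observed group disparity in scores decomposes into a direct effect of group membership and an effect mediated through access to credentials.

```python
# Toy linear SCM, assumed for illustration only: group Z affects the
# score Y directly and indirectly through a mediator M (credentials).
import random

random.seed(0)

def sample_score(z: int) -> float:
    m = 0.8 * z + random.gauss(0, 0.1)  # mediator pathway: Z -> M
    return 0.2 * z + 0.5 * m            # direct Z -> Y plus M -> Y

n = 10_000
observed_gap = (sum(sample_score(1) for _ in range(n)) / n
                - sum(sample_score(0) for _ in range(n)) / n)

# In this linear model the decomposition is known analytically.
direct, mediated = 0.2, 0.5 * 0.8
print(f"observed gap ~ {observed_gap:.2f} = direct {direct} + mediated {mediated}")
```

The point of such a decomposition is that the same observed gap calls for different remedies depending on which causal pathway produces it.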
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper presents a comprehensive analysis of existing concepts of intelligence drawn from different disciplines.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Distributive Justice and Fairness Metrics in Automated Decision-making: How Much Overlap Is There? [0.0]
We show that metrics implementing equality of opportunity only apply when resource allocations are based on deservingness, but fail when allocations should reflect concerns about egalitarianism, sufficiency, and priority.
We argue that by cleanly distinguishing between prediction tasks and decision tasks, research on fair machine learning could take better advantage of the rich literature on distributive justice.
arXiv Detail & Related papers (2021-05-04T12:09:26Z)
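For readers unfamiliar with the metric discussed in the entry above, here is a minimal sketch with synthetic labels and predictions invented for illustration: equality of opportunity asks that genuinely qualified candidates be accepted at the same rate across groups.

```python
from typing import List

def tpr(y_true: List[int], y_pred: List[int]) -> float:
    """True-positive rate among genuinely qualified candidates."""
    hits = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return hits / sum(y_true)

# Synthetic example: equality of opportunity compares TPR across groups.
y_true_a, y_pred_a = [1, 1, 1, 0], [1, 1, 0, 0]   # group A: TPR = 2/3
y_true_b, y_pred_b = [1, 1, 0, 0], [1, 0, 0, 0]   # group B: TPR = 1/2
gap = tpr(y_true_a, y_pred_a) - tpr(y_true_b, y_pred_b)
print(f"equal-opportunity gap: {gap:.2f}")  # nonzero -> metric flags unfairness
```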
- Fairness Perceptions of Algorithmic Decision-Making: A Systematic Review of the Empirical Literature [0.0]
Algorithmic decision-making (ADM) increasingly shapes people's daily lives.
A human-centric approach demanded by scholars and policymakers requires taking people's fairness perceptions into account.
We provide a comprehensive, systematic literature review of the existing empirical insights on perceptions of algorithmic fairness.
arXiv Detail & Related papers (2021-03-22T17:12:45Z)
- Conservative AI and social inequality: Conceptualizing alternatives to bias through social theory [0.0]
Societal issues can no longer be out of scope for AI and machine learning, given the impact of these systems on human lives.
Conservatism refers to dominant tendencies that reproduce and strengthen the status quo, while radical approaches work to disrupt systemic forms of inequality.
This requires engagement with a growing body of critical AI scholarship that goes beyond biased data to analyze structured ways of perpetuating inequality.
arXiv Detail & Related papers (2020-07-16T21:52:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.