Strategic Costs of Perceived Bias in Fair Selection
- URL: http://arxiv.org/abs/2510.20606v1
- Date: Thu, 23 Oct 2025 14:38:05 GMT
- Title: Strategic Costs of Perceived Bias in Fair Selection
- Authors: L. Elisa Celis, Lingxiao Huang, Milind Sohoni, Nisheeth K. Vishnoi
- Abstract summary: Meritocratic systems aim to impartially reward skill and effort. Yet persistent disparities across race, gender, and class challenge this ideal. We develop a game-theoretic model in which candidates from different socioeconomic groups differ in perceived post-selection value.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Meritocratic systems, from admissions to hiring, aim to impartially reward skill and effort. Yet persistent disparities across race, gender, and class challenge this ideal. Some attribute these gaps to structural inequality; others to individual choice. We develop a game-theoretic model in which candidates from different socioeconomic groups differ in their perceived post-selection value--shaped by social context and, increasingly, by AI-powered tools offering personalized career or salary guidance. Each candidate strategically chooses effort, balancing its cost against expected reward; effort translates into observable merit, and selection is based solely on merit. We characterize the unique Nash equilibrium in the large-agent limit and derive explicit formulas showing how valuation disparities and institutional selectivity jointly determine effort, representation, social welfare, and utility. We further propose a cost-sensitive optimization framework that quantifies how modifying selectivity or perceived value can reduce disparities without compromising institutional goals. Our analysis reveals a perception-driven bias: when perceptions of post-selection value differ across groups, these differences translate into rational differences in effort, propagating disparities backward through otherwise "fair" selection processes. While the model is static, it captures one stage of a broader feedback cycle linking perceptions, incentives, and outcomes--bridging rational-choice and structural explanations of inequality by showing how techno-social environments shape individual incentives in meritocratic systems.
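The abstract's core mechanism--candidates choosing effort to balance cost against value-weighted selection chances, with an institution selecting a fixed fraction by merit--can be illustrated numerically. The sketch below is not the paper's construction: the quadratic effort cost, the logistic noise around the selection threshold, and the grid-search best response are all simplifying assumptions made here to keep the fixed-point computation self-contained.

```python
import numpy as np

def best_effort(value, threshold, sigma=0.5,
                grid=np.linspace(0.0, 4.0, 4001)):
    """Effort maximizing value * P(selected) - effort^2 / 2 (grid search).

    Merit equals effort for simplicity; the selection chance is modeled
    with logistic noise around the institution's merit threshold.
    """
    p_selected = 1.0 / (1.0 + np.exp(-(grid - threshold) / sigma))
    utility = value * p_selected - grid ** 2 / 2.0
    return grid[np.argmax(utility)]

def equilibrium(values, weights, alpha, sigma=0.5, iters=400, step=0.2):
    """Fixed-point iteration: adjust the merit threshold until the mass of
    selected candidates equals the selectivity alpha, with each group
    playing its best-response effort against the current threshold."""
    t = 1.0
    for _ in range(iters):
        efforts = np.array([best_effort(v, t, sigma) for v in values])
        sel = 1.0 / (1.0 + np.exp(-(efforts - t) / sigma))
        t += step * (float(weights @ sel) - alpha)  # raise t if over-selected
    return t, efforts, sel

# Two equal-sized groups differing only in perceived post-selection value.
t, efforts, sel = equilibrium(values=np.array([2.0, 1.0]),
                              weights=np.array([0.5, 0.5]), alpha=0.3)
```

Even with identical costs and an identical selection rule, the lower-valuation group rationally exerts less effort and is selected less often, which is the perception-driven bias the abstract describes.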
Related papers
- Algorithmic Approaches to Opinion Selection for Online Deliberation: A Comparative Study [1.5267938856942276]
In online deliberation platforms, algorithmic selection is increasingly used to automate the selection process. It remains unclear how each approach influences desired democratic criteria such as proportional representation. We propose a novel algorithm that incorporates both diversity and a balanced notion of representation in the selection strategy.
arXiv Detail & Related papers (2026-02-17T09:03:26Z) - Fairness in Opinion Dynamics [0.7340017786387767]
We study how a state-of-the-art model discriminates certain minority groups and whether it is possible to reliably predict for whom it will perform worse. Our work explores how three classifier models (Demography-Based, Topology-Based, and Hybrid) perform when assessing for whom this algorithm will provide inaccurate predictions. We conclude that a multi-faceted approach, incorporating both individual attributes and network structures, is essential for reducing algorithmic bias.
arXiv Detail & Related papers (2026-01-07T12:15:02Z) - The AI Fairness Myth: A Position Paper on Context-Aware Bias [0.0]
We argue that fairness sometimes requires deliberate, context-aware preferential treatment of historically marginalized groups. Rather than viewing bias solely as a flaw to eliminate, we propose a framework that embraces corrective, intentional biases.
arXiv Detail & Related papers (2025-05-02T02:47:32Z) - Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z) - Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z) - Social Diversity Reduces the Complexity and Cost of Fostering Fairness [63.70639083665108]
We investigate the effects of interference mechanisms which assume incomplete information and flexible standards of fairness.
We quantify the role of diversity and show how it reduces the need for information gathering.
Our results indicate that diversity changes and opens up novel mechanisms available to institutions wishing to promote fairness.
arXiv Detail & Related papers (2022-11-18T21:58:35Z) - Fairness in Selection Problems with Strategic Candidates [9.4148805532663]
We study how the strategic aspect affects fairness in selection problems.
A population of rational candidates compete by choosing an effort level to increase their quality.
We characterize the (unique) equilibrium of this game in the different parameters' regimes.
arXiv Detail & Related papers (2022-05-24T17:03:32Z) - Addressing Strategic Manipulation Disparities in Fair Classification [15.032416453073086]
We show that individuals from minority groups often pay a higher cost to update their features.
We propose a constrained optimization framework that constructs classifiers that lower the strategic manipulation cost for minority groups.
Empirically, we show the efficacy of this approach over multiple real-world datasets.
arXiv Detail & Related papers (2022-05-22T14:59:40Z) - Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z) - The Price of Diversity [3.136861161060885]
Systemic bias with respect to gender, race and ethnicity, often unconscious, is prevalent in datasets involving choices among individuals.
We propose a novel optimization approach based on optimally flipping outcome labels and training classification models.
We present case studies on three real-world datasets consisting of parole, admissions to the bar and lending decisions.
arXiv Detail & Related papers (2021-07-03T02:23:27Z) - MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair)
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z) - End-to-End Learning and Intervention in Games [60.41921763076017]
We provide a unified framework for learning and intervention in games.
We propose two approaches, respectively based on explicit and implicit differentiation.
The analytical results are validated using several real-world problems.
arXiv Detail & Related papers (2020-10-26T18:39:32Z) - On Fair Selection in the Presence of Implicit Variance [17.517529275692322]
We argue that even in the absence of implicit bias, the estimates of candidates' quality from different groups may differ in another fundamental way, namely, in their variance.
We propose a simple model in which candidates have a true latent quality that is drawn from a group-independent normal distribution.
We show that the demographic parity mechanism always increases the selection utility, while any $\gamma$-rule weakly increases it.
arXiv Detail & Related papers (2020-06-24T13:08:31Z)
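The implicit-variance setup in the last entry is concrete enough to simulate: latent quality is drawn from a group-independent normal distribution, and only the noise variance of the quality estimates differs across groups. The sketch below is an illustration under assumptions chosen here (specific noise scales of 0.2 and 1.0, a 10% selection budget), not the paper's exact parameterization; it compares pooled top-k selection against a demographic parity rule that selects the top fraction within each group.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000      # candidates per group
alpha = 0.1      # overall selection budget (fraction selected)

# Group-independent latent quality; only the estimate noise differs.
quality_a = rng.normal(0.0, 1.0, n)
quality_b = rng.normal(0.0, 1.0, n)
est_a = quality_a + rng.normal(0.0, 0.2, n)   # low-variance estimates
est_b = quality_b + rng.normal(0.0, 1.0, n)   # high-variance estimates

k = int(alpha * 2 * n)

# Group-oblivious rule: select the top-k estimates pooled across groups.
pooled_est = np.concatenate([est_a, est_b])
pooled_quality = np.concatenate([quality_a, quality_b])
top = np.argsort(pooled_est)[-k:]
util_oblivious = pooled_quality[top].mean()

# Demographic parity rule: select the top alpha fraction within each group.
ka = k // 2
sel_a = np.argsort(est_a)[-ka:]
sel_b = np.argsort(est_b)[-(k - ka):]
util_parity = np.concatenate([quality_a[sel_a], quality_b[sel_b]]).mean()
```

The pooled rule over-selects the high-variance group, whose extreme estimates are inflated relative to true quality, so the within-group rule attains higher average selected quality--consistent with the paper's claim that demographic parity increases selection utility in this model.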