Fairness in Ranking under Uncertainty
- URL: http://arxiv.org/abs/2107.06720v1
- Date: Wed, 14 Jul 2021 14:10:16 GMT
- Title: Fairness in Ranking under Uncertainty
- Authors: Ashudeep Singh, David Kempe, Thorsten Joachims
- Abstract summary: Unfairness occurs when an agent with higher merit obtains a worse outcome than an agent with lower merit.
Our central point is that a primary cause of unfairness is uncertainty.
We show how to compute rankings that optimally trade off approximate fairness against utility to the principal.
- Score: 42.51950847766776
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness has emerged as an important consideration in algorithmic
decision-making. Unfairness occurs when an agent with higher merit obtains a
worse outcome than an agent with lower merit. Our central point is that a
primary cause of unfairness is uncertainty. A principal or algorithm making
decisions never has access to the agents' true merit, and instead uses proxy
features that only imperfectly predict merit (e.g., GPA, star ratings,
recommendation letters). None of these proxies ever fully captures an agent's
merit; yet existing approaches have mostly defined fairness notions directly in
terms of the observed features and outcomes.
Our primary point is that it is more principled to acknowledge and model the
uncertainty explicitly. The role of observed features is to give rise to a
posterior distribution of the agents' merits. We use this viewpoint to define a
notion of approximate fairness in ranking. We call an algorithm $\phi$-fair
(for $\phi \in [0,1]$) if it has the following property for all agents $x$ and
all $k$: if agent $x$ is among the top $k$ agents with respect to merit with
probability at least $\rho$ (according to the posterior merit distribution),
then the algorithm places the agent among the top $k$ agents in its ranking
with probability at least $\phi \rho$.
We show how to compute rankings that optimally trade off approximate fairness
against utility to the principal. In addition to the theoretical
characterization, we present an empirical analysis of the potential impact of
the approach in simulation studies. For real-world validation, we applied the
approach in the context of a paper recommendation system that we built and
fielded at a large conference.
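To make the $\phi$-fairness definition concrete, here is a minimal Monte Carlo sketch (an illustration under assumed interfaces, not the paper's implementation): given merit vectors sampled from the posterior and rankings sampled from a randomized ranking policy, it estimates the largest $\phi$ for which the policy is $\phi$-fair.

```python
import numpy as np

def estimate_phi(merit_samples, ranking_samples):
    """Estimate the largest phi for which a randomized ranking policy
    is phi-fair, from Monte Carlo samples.

    merit_samples:   (S, n) array; each row is one draw of the n agents'
                     merits from the posterior.
    ranking_samples: (T, n) array; each row is one ranking drawn from the
                     policy, listing agent indices from best to worst.
    """
    _, n = merit_samples.shape

    # merit_ranks[s, x] = rank of agent x by merit in draw s (0 = best).
    merit_ranks = np.argsort(np.argsort(-merit_samples, axis=1), axis=1)
    # rho[x, k] = P(agent x is among the top k+1 agents by merit).
    rho = np.stack([(merit_ranks <= k).mean(axis=0) for k in range(n)], axis=1)

    # positions[t, x] = position of agent x in the t-th sampled ranking.
    positions = np.argsort(ranking_samples, axis=1)
    # p[x, k] = P(the policy places agent x among its top k+1 positions).
    p = np.stack([(positions <= k).mean(axis=0) for k in range(n)], axis=1)

    # phi-fairness requires p[x, k] >= phi * rho[x, k] for all x and k, so
    # the best achievable phi is the smallest ratio p / rho, capped at 1.
    ratios = np.where(rho > 0, p / np.maximum(rho, 1e-12), np.inf)
    return float(min(1.0, ratios.min()))
```

On this view, a deterministic ranking gives $p = 0$ to every agent it leaves out of the top $k$, even when that agent has substantial posterior probability of belonging there, which forces $\phi$ toward zero; randomizing the ranking is what allows $\phi$ near 1 at some cost in utility to the principal.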
Related papers
- On the Hardness of Decentralized Multi-Agent Policy Evaluation under Byzantine Attacks [12.696705862929337]
We study a fully-decentralized multi-agent policy evaluation problem in the presence of up to $f$ faulty agents.
In particular, we focus on the so-called Byzantine fault model in the model-poisoning setting.
arXiv Detail & Related papers (2024-09-19T16:27:08Z)
- Mind the Gap: A Causal Perspective on Bias Amplification in Prediction & Decision-Making [58.06306331390586]
We introduce the notion of a margin complement, which measures how much a prediction score $S$ changes due to a thresholding operation.
We show that under suitable causal assumptions, the influences of $X$ on the prediction score $S$ are equal to the influences of $X$ on the true outcome $Y$.
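Based only on the summary above, one plausible reading of the margin complement is the change $\hat{Y} - S$ induced by thresholding a score $S$ into a decision $\hat{Y}$; the 0.5 threshold and variable names below are assumptions for illustration.

```python
import numpy as np

scores = np.array([0.20, 0.45, 0.55, 0.90])   # prediction scores S
decisions = (scores >= 0.5).astype(float)     # thresholded decisions Y_hat
margin_complement = decisions - scores        # how far thresholding moved S
print(margin_complement)                      # [-0.2  -0.45  0.45  0.1 ]
```

Scores nearest the threshold are moved the most, which is one way a thresholding operation can amplify small score differences into large outcome differences.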
arXiv Detail & Related papers (2024-05-24T11:22:19Z)
- Are Bounded Contracts Learnable and Approximately Optimal? [8.121834515103243]
This paper considers the hidden-action model of the principal-agent problem, in which a principal incentivizes an agent to work on a project using a contract.
We investigate whether contracts with bounded payments are learnable and approximately optimal.
arXiv Detail & Related papers (2024-02-22T12:19:19Z)
- Robust and Performance Incentivizing Algorithms for Multi-Armed Bandits with Strategic Agents [57.627352949446625]
We consider a variant of the multi-armed bandit problem.
Specifically, the arms are strategic agents who can improve their rewards or absorb them.
We identify a class of MAB algorithms that satisfy a collection of properties and show that they lead to mechanisms that incentivize top-level performance at equilibrium.
arXiv Detail & Related papers (2023-12-13T06:54:49Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- Fairness in Ranking under Disparate Uncertainty [24.401219403555814]
We argue that ranking can introduce unfairness if the uncertainty of the underlying relevance model differs between groups of options.
We propose Equal-Opportunity Ranking (EOR) as a new fairness criterion for ranking.
We show that EOR corresponds to a group-wise fair lottery among the relevant options even in the presence of disparate uncertainty.
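As a rough illustration of the group-wise fair lottery idea (a simplified reading for intuition, not the paper's exact EOR criterion), one can measure whether relevant options from each group reach the top $k$ at equal rates; the function and names below are assumptions.

```python
import numpy as np

def relevant_topk_rate_gap(groups, relevant, in_topk):
    """Gap across groups in P(placed in top k | relevant).

    groups:   per-option group labels
    relevant: boolean array, True if the option is relevant
    in_topk:  boolean array, True if the option was ranked in the top k
    A group-wise fair lottery among relevant options drives this gap to 0.
    """
    rates = [in_topk[(groups == g) & relevant].mean()
             for g in np.unique(groups)]
    return max(rates) - min(rates)
```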
arXiv Detail & Related papers (2023-09-04T13:49:48Z)
- Fairness in Federated Learning via Core-Stability [16.340526776021143]
Federated learning provides an effective paradigm for jointly optimizing a model that benefits from rich distributed data.
It is intuitively "unfair" for agents with high-quality data to sacrifice their performance due to other agents with low-quality data.
We propose CoreFed, an efficient federated learning protocol that optimizes a core-stable predictor.
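For background on the core-stability concept that CoreFed builds on, here is the textbook cooperative-game check (not CoreFed's predictor-space formulation): an allocation is in the core if no coalition of agents could secure more by deviating on its own.

```python
from itertools import combinations

def in_core(payoffs, coalition_value):
    """payoffs: one payoff per agent; coalition_value: maps a tuple of
    agent indices to the value that coalition can secure on its own.
    Core stability: every coalition already gets at least that value."""
    agents = range(len(payoffs))
    for size in range(1, len(payoffs) + 1):
        for coalition in combinations(agents, size):
            if sum(payoffs[i] for i in coalition) < coalition_value(coalition):
                return False  # this coalition would gain by deviating
    return True

# Toy 3-agent game: a coalition of size s can secure s**2 / 3 on its own.
print(in_core([3.0, 3.0, 3.0], lambda c: len(c) ** 2 / 3))  # True
```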
arXiv Detail & Related papers (2022-11-03T18:41:11Z)
- Provably Efficient Offline Reinforcement Learning with Trajectory-Wise Reward [66.81579829897392]
We propose a novel offline reinforcement learning algorithm called Pessimistic vAlue iteRaTion with rEward Decomposition (PARTED).
PARTED decomposes the trajectory return into per-step proxy rewards via least-squares-based reward redistribution, and then performs pessimistic value iteration based on the learned proxy rewards.
To the best of our knowledge, PARTED is the first offline RL algorithm that is provably efficient in general MDPs with trajectory-wise rewards.
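A minimal sketch of the least-squares reward-redistribution step in the spirit of PARTED (not the paper's algorithm; the linear feature model and names are assumptions): fit $\theta$ so that per-step proxy rewards $\phi(s,a)^\top \theta$ sum to each trajectory's return.

```python
import numpy as np

def redistribute_rewards(trajectories, returns):
    """trajectories: list of (T_i, d) arrays of per-step features phi(s, a).
    returns:         (N,) array of trajectory-level returns R_i.
    Solves min_theta sum_i (sum_t phi_t @ theta - R_i)^2; the per-step
    proxy reward is then features @ theta."""
    # Each trajectory contributes one equation: (sum of its features) @ theta = R_i.
    A = np.stack([traj.sum(axis=0) for traj in trajectories])
    theta, *_ = np.linalg.lstsq(A, returns, rcond=None)
    return theta

# Toy usage with 2-d features: proxy rewards sum to 3.0 and 2.0 as required.
trajs = [np.array([[1.0, 0.0], [0.0, 1.0]]), np.array([[2.0, 0.0]])]
theta = redistribute_rewards(trajs, np.array([3.0, 2.0]))
proxy = [t @ theta for t in trajs]
```

PARTED then performs pessimistic value iteration on the learned proxy rewards; the sketch covers only the redistribution step.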
arXiv Detail & Related papers (2022-06-13T19:11:22Z)
- Group Meritocratic Fairness in Linear Contextual Bandits [32.15680917495674]
We study the linear contextual bandit problem where an agent has to select one candidate from a pool and each candidate belongs to a sensitive group.
We propose a notion of fairness under which the agent's policy is fair when it selects the candidate with the highest relative rank.
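Under one natural reading of relative rank (the within-group quantile of a candidate's predicted reward; the function and example are assumptions, not the paper's exact construction), selection looks like this:

```python
import numpy as np

def select_by_relative_rank(predicted_rewards, groups):
    """Pick the candidate whose predicted reward sits at the highest
    quantile within their own sensitive group."""
    predicted_rewards = np.asarray(predicted_rewards, dtype=float)
    groups = np.asarray(groups)
    rel_rank = np.empty(len(predicted_rewards))
    for g in np.unique(groups):
        idx = np.flatnonzero(groups == g)
        vals = predicted_rewards[idx]
        # Fraction of same-group candidates with a strictly lower reward.
        rel_rank[idx] = [(vals < v).mean() for v in vals]
    return int(np.argmax(rel_rank))

# Candidate 4 wins: top of a 3-member group (relative rank 2/3), beating
# candidate 1, the top of a 2-member group (relative rank 1/2).
print(select_by_relative_rank([0.9, 0.95, 0.3, 0.6, 0.7], [0, 0, 1, 1, 1]))  # 4
```

Comparing candidates by within-group quantiles rather than raw predicted rewards avoids penalizing groups whose scores are on a different scale.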
arXiv Detail & Related papers (2022-06-07T09:54:38Z)
- Human-in-the-loop: Provably Efficient Preference-based Reinforcement Learning with General Function Approximation [107.54516740713969]
We study human-in-the-loop reinforcement learning (RL) with trajectory preferences.
Instead of receiving a numeric reward at each step, the agent only receives preferences over trajectory pairs from a human overseer.
We propose the first optimistic model-based algorithm for preference-based RL (PbRL) with general function approximation.
arXiv Detail & Related papers (2022-05-23T09:03:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.