Rawlsian Fairness in Online Bipartite Matching: Two-sided, Group, and
Individual
- URL: http://arxiv.org/abs/2201.06021v3
- Date: Mon, 5 Jun 2023 00:06:56 GMT
- Title: Rawlsian Fairness in Online Bipartite Matching: Two-sided, Group, and
Individual
- Authors: Seyed A. Esmaeili, Sharmila Duppala, Davidson Cheng, Vedant Nanda,
Aravind Srinivasan, and John P. Dickerson
- Score: 25.391491567929336
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online bipartite-matching platforms are ubiquitous and find applications in
important areas such as crowdsourcing and ridesharing. In the most general
form, the platform consists of three entities: two sides to be matched and a
platform operator that decides the matching. The design of algorithms for such
platforms has traditionally focused on the operator's (expected) profit. Since
fairness has become an important consideration that existing algorithms
ignored, a collection of online matching algorithms has been developed that
gives a fair-treatment guarantee to one side of the market at the expense of a
drop in the operator's profit. In this paper, we generalize the existing work
to offer fair-treatment guarantees to both sides of the market simultaneously,
at a calculated worst-case drop in operator profit. We consider group and
individual Rawlsian fairness criteria. Moreover, our algorithms come with
theoretical guarantees and adjustable parameters that can be tuned as desired
to balance the trade-off between the utilities of the three sides. We also
derive hardness results that give clear upper bounds on the performance of any
algorithm.
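The Rawlsian criterion maximizes the utility of the worst-off participant. A minimal greedy sketch can illustrate the objective; this is a hypothetical illustration, not the paper's algorithm, and the vertex names, the `weight` function, and the assumption that offline vertices can be matched repeatedly (as drivers in ridesharing can serve many rides) are all illustrative assumptions:

```python
# Sketch of a Rawlsian (max-min) greedy rule for online bipartite matching.
# Hypothetical illustration, NOT the paper's algorithm: each arriving online
# vertex is matched to the offline neighbor with the lowest accumulated
# utility, so the worst-off side of the market is served first. Offline
# vertices have unlimited capacity in this sketch.

def rawlsian_greedy(offline, arrivals, weight):
    """offline: list of offline vertex ids.
    arrivals: iterable of (online_id, neighbor_ids) in arrival order.
    weight(u, v): utility gained by offline v when matched to online u."""
    utility = {v: 0.0 for v in offline}  # accumulated utility per offline vertex
    matching = []
    for u, neighbors in arrivals:
        candidates = [v for v in neighbors if v in utility]
        if not candidates:
            continue  # drop u: no known offline neighbor
        v = min(candidates, key=lambda c: utility[c])  # worst-off neighbor
        utility[v] += weight(u, v)
        matching.append((u, v))
    # Return the matching and the Rawlsian objective (minimum utility).
    return matching, min(utility.values())
```

The tunable parameters described in the abstract would interpolate between this worst-off-first rule and a profit-maximizing one; that trade-off is not modeled in this sketch.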
Related papers
- Fair Allocation in Dynamic Mechanism Design [57.66441610380448]
We consider a problem where an auctioneer sells an indivisible good to groups of buyers in every round, for a total of $T$ rounds.
The auctioneer aims to maximize their discounted overall revenue while adhering to a fairness constraint that guarantees a minimum average allocation for each group.
arXiv Detail & Related papers (2024-05-31T19:26:05Z)
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model agnostic post-processing framework xOrder for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
- Sampling Individually-Fair Rankings that are Always Group Fair [9.333939443470944]
A fair ranking task asks to rank a set of items to maximize utility subject to satisfying group-fairness constraints.
Recent works identify uncertainty in the utilities of items as a primary cause of unfairness.
We give an efficient algorithm that samples rankings from an individually-fair distribution while ensuring that every output ranking is group fair.
arXiv Detail & Related papers (2023-06-21T01:26:34Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- Competition, Alignment, and Equilibria in Digital Marketplaces [97.03797129675951]
We study a duopoly market where platform actions are bandit algorithms and the two platforms compete for user participation.
Our main finding is that competition in this market does not perfectly align market outcomes with user utility.
arXiv Detail & Related papers (2022-08-30T17:43:58Z)
- Counterfactual Fairness Is Basically Demographic Parity [0.0]
Making fair decisions is crucial to ethically implementing machine learning algorithms in social settings.
We show that an algorithm which satisfies counterfactual fairness also satisfies demographic parity.
We formalize a concrete fairness goal: to preserve the order of individuals within protected groups.
arXiv Detail & Related papers (2022-08-07T23:38:59Z)
- Efficient Algorithms For Fair Clustering with a New Fairness Notion [5.21410307583181]
We revisit the problem of fair clustering, first introduced by Chierichetti et al.
Existing solutions to fair clustering are either not scalable or do not achieve an optimal trade-off between clustering objective and fairness.
We propose a new notion of fairness, which we call $\tau$-fairness, that strictly generalizes the balance property and enables a fine-grained efficiency vs. fairness trade-off.
arXiv Detail & Related papers (2021-09-02T04:52:49Z)
- Learning Equilibria in Matching Markets from Bandit Feedback [139.29934476625488]
We develop a framework and algorithms for learning stable market outcomes under uncertainty.
Our work takes a first step toward elucidating when and how stable matchings arise in large, data-driven marketplaces.
arXiv Detail & Related papers (2021-08-19T17:59:28Z)
- User Fairness, Item Fairness, and Diversity for Rankings in Two-Sided Markets [28.537935838669423]
We show that user fairness, item fairness and diversity are fundamentally different concepts.
We present the first ranking algorithm that explicitly enforces all three desiderata.
arXiv Detail & Related papers (2020-10-04T02:53:09Z)
- Controlling Fairness and Bias in Dynamic Learning-to-Rank [31.41843594914603]
We propose a learning algorithm that ensures notions of amortized group fairness, while simultaneously learning the ranking function from implicit feedback data.
The algorithm takes the form of a controller that integrates unbiased estimators for both fairness and utility.
In addition to its rigorous theoretical foundation and convergence guarantees, we find empirically that the algorithm is highly practical and robust.
arXiv Detail & Related papers (2020-05-29T17:57:56Z)
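Several of the related papers above (e.g., "Fairness in Matching under Uncertainty") meet fairness constraints by randomizing over allocations so that fairness holds in expectation. A toy sketch of that idea, with hypothetical merit values and a merit-proportional rule standing in for the papers' actual frameworks:

```python
# Toy sketch of a fair distribution over allocations. Hypothetical
# illustration: instead of deterministically giving one slot to the
# top candidate, randomize so each candidate's selection probability
# is proportional to their (uncertain) merit estimate.

def merit_proportional_distribution(merits):
    """merits: dict candidate -> nonnegative merit estimate.
    Returns dict candidate -> probability of receiving the slot."""
    total = sum(merits.values())
    return {c: m / total for c, m in merits.items()}

def expected_utility(probs, utilities):
    """Platform's expected utility under the randomized allocation."""
    return sum(probs[c] * utilities[c] for c in probs)
```

The cited papers solve richer versions of this problem (e.g., via linear programming over distributions on many allocations); this sketch only shows the single-slot special case.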
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.