Two-Sided Matching Meets Fair Division
- URL: http://arxiv.org/abs/2107.07404v1
- Date: Thu, 15 Jul 2021 15:45:17 GMT
- Title: Two-Sided Matching Meets Fair Division
- Authors: Rupert Freeman, Evi Micha and Nisarg Shah
- Abstract summary: We introduce a new model for two-sided matching which allows us to borrow popular fairness notions from the fair division literature.
In our model, each agent is matched to multiple agents on the other side over whom she has additive preferences.
We demand fairness for each side separately, giving rise to notions such as double envy-freeness up to one match (DEF1) and double maximin share guarantee (DMMS).
- Score: 35.674275834421195
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce a new model for two-sided matching which allows us to borrow
popular fairness notions from the fair division literature such as
envy-freeness up to one good and maximin share guarantee. In our model, each
agent is matched to multiple agents on the other side over whom she has
additive preferences. We demand fairness for each side separately, giving rise
to notions such as double envy-freeness up to one match (DEF1) and double
maximin share guarantee (DMMS). We show that (a slight strengthening of) DEF1
cannot always be achieved, but in the special case where both sides have
identical preferences, the round-robin algorithm with a carefully designed
agent ordering achieves it. In contrast, DMMS cannot be achieved even when both
sides have identical preferences.
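As a rough illustration of the model, here is a minimal sketch of plain round-robin picking in the many-to-many, two-sided setting: left-side agents take turns selecting their most-preferred right-side agent that still has capacity. The input format (value matrix, capacities, quotas) is an assumption for illustration, and the carefully designed agent ordering behind the paper's DEF1-type guarantee under identical preferences is not reproduced here; the `order` argument is only a placeholder.

```python
# A minimal sketch, not the paper's algorithm: plain round-robin in a many-to-many
# two-sided matching, where each left agent repeatedly picks its most-preferred
# right agent with remaining capacity. The picking order that yields the paper's
# DEF1-type guarantee under identical preferences is NOT reproduced here.

def round_robin_matching(left_vals, right_capacity, per_left_quota, order=None):
    """left_vals[i][j]: additive value of left agent i for right agent j.
    right_capacity[j]: how many left agents right agent j can be matched to.
    per_left_quota[i]: how many right agents left agent i should be matched to."""
    n_left, n_right = len(left_vals), len(right_capacity)
    order = list(range(n_left)) if order is None else list(order)
    remaining = list(right_capacity)
    matched = {i: [] for i in range(n_left)}

    # Cycle through the picking order until quotas are filled or no pick is possible.
    while any(len(matched[i]) < per_left_quota[i] for i in range(n_left)):
        progress = False
        for i in order:
            if len(matched[i]) >= per_left_quota[i]:
                continue
            candidates = [j for j in range(n_right)
                          if remaining[j] > 0 and j not in matched[i]]
            if not candidates:
                continue
            best = max(candidates, key=lambda j: left_vals[i][j])
            matched[i].append(best)
            remaining[best] -= 1
            progress = True
        if not progress:
            break  # remaining quotas cannot be filled with the given capacities
    return matched

# Example: two left agents with identical preferences over three right agents.
print(round_robin_matching([[3, 2, 1], [3, 2, 1]], [2, 2, 2], [2, 2]))
```

In one-sided fair division, round-robin is known to give EF1 for the picking side under additive valuations; the paper's contribution is an ordering that extends a DEF1-type guarantee to both sides when preferences are identical.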
Related papers
- Temporal Fair Division of Indivisible Items [61.235172150247614]
We study a fair division model where indivisible items arrive sequentially, and must be allocated immediately and irrevocably.
Previous work on online fair division has shown impossibility results in achieving approximate envy-freeness under these constraints.
We aim to ensure that the cumulative allocation at each round satisfies approximate temporal envy-freeness up to one item (TEF1).
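A minimal sketch of the TEF1 condition as summarized above, assuming it means that the cumulative allocation must satisfy EF1 after every arrival under additive valuations; the input format (`values`, `allocation_sequence`) is an illustrative assumption, not the paper's notation.

```python
# Hedged sketch: check envy-freeness up to one item (EF1) for every prefix of an
# online allocation, i.e., one reading of "temporal EF1" (TEF1).

def is_ef1(values, bundles):
    """values[i][g]: agent i's additive value for item g; bundles[i]: items of agent i."""
    for i, own in enumerate(bundles):
        v_own = sum(values[i][g] for g in own)
        for j, other in enumerate(bundles):
            if i == j:
                continue
            v_other = sum(values[i][g] for g in other)
            best = max((values[i][g] for g in other), default=0)
            if v_own < v_other - best:  # envy persists even after removing one item
                return False
    return True

def is_tef1(values, allocation_sequence, n_agents):
    """allocation_sequence: list of (item, agent) pairs in arrival order."""
    bundles = [set() for _ in range(n_agents)]
    for item, agent in allocation_sequence:
        bundles[agent].add(item)
        if not is_ef1(values, bundles):
            return False  # EF1 violated at this round
    return True
```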
arXiv Detail & Related papers (2024-10-18T16:43:36Z)
- Simultaneously Achieving Group Exposure Fairness and Within-Group Meritocracy in Stochastic Bandits [9.61116193836159]
We propose Bi-Level Fairness, which considers two levels of fairness.
At the first level, Bi-Level Fairness guarantees a certain minimum exposure to each group.
At the second level, we require meritocratic fairness, ensuring that each arm is pulled according to its merit within its group.
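An illustrative interpretation only, not the paper's algorithm: one simple way to combine the two levels is to first serve any group that has fallen below its exposure floor, and otherwise select within the chosen group by a UCB-style merit index. The group structure, `min_exposure` floors, and reward bookkeeping below are assumptions.

```python
import math
import random

# Hedged sketch of a two-level selection rule: level 1 enforces a minimum exposure
# rate per group, level 2 picks within the group by empirical merit plus exploration.

def select_arm(t, groups, pulls, rewards, min_exposure):
    """groups: dict group -> list of arm ids; pulls/rewards: dicts of per-arm counters;
    min_exposure: dict group -> required fraction of pulls so far."""
    total = max(1, sum(pulls.values()))
    starved = [g for g, arms in groups.items()
               if sum(pulls[a] for a in arms) / total < min_exposure[g]]
    group = random.choice(starved) if starved else random.choice(list(groups))

    def ucb(a):  # merit index: empirical mean reward plus an exploration bonus
        if pulls[a] == 0:
            return float("inf")
        return rewards[a] / pulls[a] + math.sqrt(2 * math.log(t + 1) / pulls[a])

    return max(groups[group], key=ucb)
```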
arXiv Detail & Related papers (2024-02-08T11:19:58Z)
- Intersectional Two-sided Fairness in Recommendation [41.96733939002468]
We propose a novel approach called Intersectional Two-sided Fairness Recommendation (ITFR).
Our method utilizes a sharpness-aware loss to perceive disadvantaged groups, and then uses collaborative loss balance to develop consistent distinguishing abilities for different intersectional groups.
Our proposed approach effectively alleviates the intersectional two-sided unfairness and consistently outperforms previous state-of-the-art methods.
arXiv Detail & Related papers (2024-02-05T08:56:24Z)
- A Minimaximalist Approach to Reinforcement Learning from Human Feedback [49.45285664482369]
We present Self-Play Preference Optimization (SPO), an algorithm for reinforcement learning from human feedback.
Our approach is minimalist in that it does not require training a reward model nor unstable adversarial training.
We demonstrate that on a suite of continuous control tasks, we are able to learn significantly more efficiently than reward-model based approaches.
arXiv Detail & Related papers (2024-01-08T17:55:02Z)
- First-Choice Maximality Meets Ex-ante and Ex-post Fairness [33.992491196317744]
We design randomized mechanisms that satisfy first-choice maximality (FCM), i.e., maximizing the number of agents assigned their first choices.
Our mechanisms also provide guarantees of ex-ante and ex-post fairness.
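As a small aside on the FCM property itself (the paper's randomized mechanisms are not reproduced here): with strict preferences and unit-capacity objects, each agent contributes a single first-choice edge, so the maximum number of agents who can simultaneously receive their top choice equals the number of distinct objects that are anyone's top choice. A minimal deterministic check under that assumption:

```python
# Hedged sketch: check first-choice maximality (FCM) of a deterministic assignment,
# assuming strict preferences and unit-capacity objects. Input format is illustrative.

def is_fcm(preferences, assignment):
    """preferences[i]: agent i's ranked list of objects (best first).
    assignment[i]: the object assigned to agent i."""
    achieved = sum(1 for i, prefs in enumerate(preferences) if assignment[i] == prefs[0])
    # Each agent has one first-choice edge, so the best achievable count is the
    # number of distinct objects that appear as some agent's first choice.
    maximum = len({prefs[0] for prefs in preferences})
    return achieved == maximum
```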
arXiv Detail & Related papers (2023-05-08T09:57:30Z)
- Equalised Odds is not Equal Individual Odds: Post-processing for Group and Individual Fairness [13.894631477590362]
Group fairness is achieved by equalising prediction distributions between protected sub-populations, whereas individual fairness requires treating similar individuals alike.
This procedure may give two similar individuals from the same protected group disparately different classification odds.
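A hand-constructed toy example (not from the paper) of the phenomenon described above: both groups satisfy equalised odds at a common threshold, yet two nearly identical individuals in the same protected group receive opposite classification odds.

```python
# Toy illustration with made-up numbers: group-level equalised odds holds, but two
# similar individuals in group A (scores 0.51 vs 0.49, both truly positive) get
# classification odds of 1 and 0 respectively.

def group_rates(records, threshold):
    """records: list of (score, true_label); returns (TPR, FPR) at the threshold."""
    preds = [(1 if s >= threshold else 0, y) for s, y in records]
    tpr = sum(p for p, y in preds if y == 1) / sum(1 for _, y in preds if y == 1)
    fpr = sum(p for p, y in preds if y == 0) / sum(1 for _, y in preds if y == 0)
    return tpr, fpr

group_a = [(0.51, 1), (0.49, 1), (0.30, 0), (0.60, 0)]
group_b = [(0.90, 1), (0.20, 1), (0.80, 0), (0.10, 0)]

print(group_rates(group_a, 0.5))  # (0.5, 0.5)
print(group_rates(group_b, 0.5))  # (0.5, 0.5) -> equalised odds across groups
```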
arXiv Detail & Related papers (2023-04-19T16:02:00Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Favoring Eagerness for Remaining Items: Achieving Efficient and Fair Assignments [34.62857280111384]
We propose novel efficiency properties based on a notion subtly different from favoring higher ranks: favoring agents' eagerness for the remaining items.
Specifically, we propose ex-post favoring-eagerness-for-remaining-items (ep-FERI) and ex-ante favoring-eagerness-for-remaining-items (ea-FERI).
arXiv Detail & Related papers (2021-09-18T07:00:04Z)
- Bandit Learning in Decentralized Matching Markets [82.39061186055775]
We study two-sided matching markets in which one side of the market (the players) does not have a priori knowledge about its preferences for the other side (the arms) and is required to learn its preferences from experience.
This model extends the standard multi-armed bandit framework to a decentralized multiple player setting with competition.
We show that the algorithm is incentive compatible whenever the arms' preferences are shared, but not necessarily so when preferences are fully general.
arXiv Detail & Related papers (2020-12-14T08:58:07Z)
- Necessarily Optimal One-Sided Matchings [49.0517583218216]
We study the classical problem of matching $n$ agents to $n$ objects, where the agents have ranked preferences over the objects.
Instead of asking the agents to report their complete preferences, our goal is to learn a desirable matching from partial preferences.
We design efficient algorithms to check whether a given matching is necessarily Pareto optimal (NPO) or necessarily rank-maximal (NRM), and to check whether such a matching exists given top-$k$ partial preferences.
arXiv Detail & Related papers (2020-07-17T16:01:34Z)