Fairness Maximization among Offline Agents in Online-Matching Markets
- URL: http://arxiv.org/abs/2109.08934v1
- Date: Sat, 18 Sep 2021 13:41:42 GMT
- Title: Fairness Maximization among Offline Agents in Online-Matching Markets
- Authors: Will Ma, Pan Xu, and Yifan Xu
- Abstract summary: Matching markets involve heterogeneous agents (typically from two parties) who are paired for mutual benefit.
There are two features distinguishing OMMs from traditional matching markets.
We propose online matching algorithms which optimize for either individual or group-level fairness.
- Score: 18.689388188794023
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Matching markets involve heterogeneous agents (typically from two parties)
who are paired for mutual benefit. During the last decade, matching markets
have emerged and grown rapidly through the medium of the Internet. They have
evolved into a new format, called Online Matching Markets (OMMs), with examples
ranging from crowdsourcing to online recommendations to ridesharing. There are
two features distinguishing OMMs from traditional matching markets. One is the
dynamic arrival of one side of the market: we refer to these as online agents
while the rest are offline agents. Examples of online and offline agents
include keywords (online) and sponsors (offline) in Google Advertising; workers
(online) and tasks (offline) in Amazon Mechanical Turk (AMT); riders (online)
and drivers (offline when restricted to a short time window) in ridesharing.
The second distinguishing feature of OMMs is the real-time decision-making
element. However, studies have shown that the algorithms making decisions in
these OMMs leave disparities in the match rates of offline agents. For example,
tasks in neighborhoods of low socioeconomic status rarely get matched to gig
workers, and drivers of certain races/genders get discriminated against in
matchmaking. In this paper, we propose online matching algorithms which
optimize for either individual or group-level fairness among offline agents in
OMMs. We present two linear-programming (LP) based sampling algorithms, which
achieve online competitive ratios of at least 0.725 for individual fairness
maximization (IFM) and 0.719 for group fairness maximization (GFM),
respectively. We conduct extensive numerical experiments, and the results show
that the boosted versions of our sampling algorithms are not only conceptually
easy to implement but also highly effective on practical instances of
fairness-maximization models.
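The abstract does not spell out the two LP-based sampling algorithms, but a common template for this style of result, under a known-i.i.d. arrival model, is: solve an offline benchmark LP that maximizes the minimum expected match rate over offline agents, then, when an online agent arrives, sample one of its offline neighbors proportionally to the LP mass and match it if it is still free. The sketch below illustrates that template only; it is not the paper's IFM/GFM algorithm (which attains the 0.725/0.719 ratios), and the function names, arrival model, and parameters are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import linprog


def solve_fairness_lp(rates, neighbors, n_offline):
    """Benchmark LP for max-min (individual) fairness.

        max  lam
        s.t. sum_i x[i, j] >= lam        for every offline agent j   (fairness)
             sum_j x[i, j] <= rates[i]   for every online type i     (arrival mass)
             sum_i x[i, j] <= 1          for every offline agent j   (capacity)
             x[i, j] >= 0, with only edge pairs (j in neighbors[i]) carrying mass

    rates[i]     -- expected number of arrivals of online type i over the horizon
    neighbors[i] -- set of offline agents that type i can be matched to
    """
    n_types = len(rates)
    n_vars = n_types * n_offline + 1              # every x[i, j] plus lam

    def idx(i, j):
        return i * n_offline + j

    c = np.zeros(n_vars)
    c[-1] = -1.0                                  # maximize lam == minimize -lam

    A_ub, b_ub = [], []
    for j in range(n_offline):                    # lam - sum_i x[i, j] <= 0
        row = np.zeros(n_vars)
        for i in range(n_types):
            if j in neighbors[i]:
                row[idx(i, j)] = -1.0
        row[-1] = 1.0
        A_ub.append(row)
        b_ub.append(0.0)
    for i in range(n_types):                      # sum_j x[i, j] <= rates[i]
        row = np.zeros(n_vars)
        for j in neighbors[i]:
            row[idx(i, j)] = 1.0
        A_ub.append(row)
        b_ub.append(rates[i])
    for j in range(n_offline):                    # sum_i x[i, j] <= 1
        row = np.zeros(n_vars)
        for i in range(n_types):
            if j in neighbors[i]:
                row[idx(i, j)] = 1.0
        A_ub.append(row)
        b_ub.append(1.0)

    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * n_vars, method="highs")
    return res.x[:-1].reshape(n_types, n_offline), res.x[-1]


def lp_sampling_run(rates, neighbors, n_offline, rng):
    """One simulated run of the LP-guided sampling policy (known-i.i.d. arrivals)."""
    x, _ = solve_fairness_lp(rates, neighbors, n_offline)
    horizon = int(round(sum(rates)))              # one arrival per round in expectation
    type_dist = np.array(rates) / sum(rates)
    matched = np.zeros(n_offline, dtype=bool)

    for _ in range(horizon):
        i = rng.choice(len(rates), p=type_dist)   # an online agent of type i arrives
        cand = sorted(neighbors[i])
        if not cand:
            continue
        # Sample neighbor j with probability x[i, j] / rates[i]; otherwise skip this arrival.
        w = np.array([x[i, j] for j in cand]) / rates[i]
        p = np.append(w, max(0.0, 1.0 - w.sum()))
        pick = rng.choice(len(cand) + 1, p=p / p.sum())
        if pick < len(cand) and not matched[cand[pick]]:
            matched[cand[pick]] = True            # irrevocable real-time match
    return matched


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rates = [2.0, 3.0]                   # hypothetical online types and their arrival rates
    neighbors = [{0, 1}, {1, 2}]         # admissible offline agents for each online type
    runs = 2000
    hits = sum(lp_sampling_run(rates, neighbors, 3, rng) for _ in range(runs))
    print("empirical per-agent match rates:", hits / runs)
```

A natural refinement, which may be what the abstract calls the "boosted" version, is to redistribute the sampling mass of already-matched neighbors onto the remaining free ones instead of discarding the arrival; the plain sampler above does not do this.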
Related papers
- Understanding the performance gap between online and offline alignment algorithms [63.137832242488926]
We show that offline algorithms train policy to become good at pairwise classification, while online algorithms are good at generations.
This hints at a unique interplay between discriminative and generative capabilities, which is greatly impacted by the sampling process.
Our study sheds light on the pivotal role of on-policy sampling in AI alignment, and hints at certain fundamental challenges of offline alignment algorithms.
arXiv Detail & Related papers (2024-05-14T09:12:30Z) - AlberDICE: Addressing Out-Of-Distribution Joint Actions in Offline
Multi-Agent RL via Alternating Stationary Distribution Correction Estimation [65.4532392602682]
One of the main challenges in offline Reinforcement Learning (RL) is the distribution shift that arises from the learned policy deviating from the data collection policy.
This is often addressed by avoiding out-of-distribution (OOD) actions during policy improvement as their presence can lead to substantial performance degradation.
We introduce AlberDICE, an offline MARL algorithm that performs centralized training of individual agents based on stationary distribution optimization.
arXiv Detail & Related papers (2023-11-03T18:56:48Z) - Learning for Edge-Weighted Online Bipartite Matching with Robustness
Guarantees [28.737193318136725]
We propose a novel approach to edge-weighted online bipartite matching with robustness guarantees (LOMAR)
LOMAR achieves both good average-case and worst-case performance.
arXiv Detail & Related papers (2023-05-31T20:41:42Z) - Preventing Discriminatory Decision-making in Evolving Data Streams [8.952662914331901]
Bias in machine learning has rightly received significant attention over the last decade.
Most fair machine learning (fair-ML) work to address bias in decision-making systems has focused solely on the offline setting.
Despite the wide prevalence of online systems in the real world, work on identifying and correcting bias in the online setting is severely lacking.
arXiv Detail & Related papers (2023-02-16T01:20:08Z) - Learning From Good Trajectories in Offline Multi-Agent Reinforcement
Learning [98.07495732562654]
offline multi-agent reinforcement learning (MARL) aims to learn effective multi-agent policies from pre-collected datasets.
One agent learned by offline MARL often inherits this random policy, jeopardizing the performance of the entire team.
We propose a novel framework called Shared Individual Trajectories (SIT) to address this problem.
arXiv Detail & Related papers (2022-11-28T18:11:26Z) - Rawlsian Fairness in Online Bipartite Matching: Two-sided, Group, and
Individual [25.391491567929336]
Online bipartite-matching platforms are ubiquitous and find applications in important areas such as crowdsourcing and ridesharing.
In this paper, we generalize the existing work to offer fair treatment guarantees to both sides of the market simultaneously, at a calculated worst case drop to operator profit.
Our algorithms have theoretical guarantees and have adjustable parameters that can be tuned as desired to balance the trade-off between the utilities of the three sides.
arXiv Detail & Related papers (2022-01-16T11:25:17Z) - A Competitive Analysis of Online Multi-Agent Path Finding [6.172744281261983]
We study online Multi-Agent Path Finding (MAPF), where new agents are constantly revealed over time and all agents must find collision-free paths to their given goal locations.
We classify online MAPF algorithms into different categories based on (1) controllability (the set of agents that they can plan paths for at each time) and (2) rationality (the quality of paths they plan) and study the relationships between them.
arXiv Detail & Related papers (2021-06-22T00:05:29Z) - A Cooperative-Competitive Multi-Agent Framework for Auto-bidding in
Online Advertising [53.636153252400945]
We propose a general Multi-Agent reinforcement learning framework for Auto-Bidding, namely MAAB, to learn the auto-bidding strategies.
Our approach outperforms several baseline methods in terms of social welfare and guarantees the ad platform's revenue.
arXiv Detail & Related papers (2021-06-11T08:07:14Z) - Subgroup Fairness in Two-Sided Markets [7.854010769859225]
We suggest a novel market-clearing mechanism for two-sided markets.
We show that a certain non-linear problem can be approximated to any subgroup in time.
On the example of driver-ride assignment in an Uber-like system, we demonstrate the efficacy and scalability of the approach.
arXiv Detail & Related papers (2021-06-04T20:26:16Z) - Multi-Stage Decentralized Matching Markets: Uncertain Preferences and
Strategic Behaviors [91.3755431537592]
This article develops a framework for learning optimal strategies in real-world matching markets.
We show that there exists a welfare-versus-fairness trade-off that is characterized by the uncertainty level of acceptance.
We prove that participants can be better off with multi-stage matching compared to single-stage matching.
arXiv Detail & Related papers (2021-02-13T19:25:52Z) - Optimal Robustness-Consistency Trade-offs for Learning-Augmented Online
Algorithms [85.97516436641533]
We study the problem of improving the performance of online algorithms by incorporating machine-learned predictions.
The goal is to design algorithms that are both consistent and robust.
We provide the first set of non-trivial lower bounds for competitive analysis using machine-learned predictions.
arXiv Detail & Related papers (2020-10-22T04:51:01Z)