Learn to Match with No Regret: Reinforcement Learning in Markov Matching Markets
- URL: http://arxiv.org/abs/2203.03684v1
- Date: Mon, 7 Mar 2022 19:51:25 GMT
- Title: Learn to Match with No Regret: Reinforcement Learning in Markov Matching Markets
- Authors: Yifei Min, Tianhao Wang, Ruitu Xu, Zhaoran Wang, Michael I. Jordan, Zhuoran Yang
- Abstract summary: We study a Markov matching market involving a planner and a set of strategic agents on the two sides of the market.
We propose a reinforcement learning framework that integrates optimistic value iteration with maximum weight matching.
We prove that the algorithm achieves sublinear regret.
- Score: 151.03738099494765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study a Markov matching market involving a planner and a set of strategic agents on the two sides of the market. At each step, the agents are presented with a dynamical context, and the contexts determine their utilities. The planner controls the transition of the contexts to maximize the cumulative social welfare, while the agents aim to find a myopic stable matching at each step. Such a setting captures a range of applications including ridesharing platforms. We formalize the problem by proposing a reinforcement learning framework that integrates optimistic value iteration with maximum weight matching. The proposed algorithm addresses the coupled challenges of sequential exploration, matching stability, and function approximation. We prove that the algorithm achieves sublinear regret.
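To make the framework concrete, here is a minimal, illustrative sketch of the kind of loop the abstract describes: optimistic (UCB-style) utility estimates feed a maximum weight matching at each step. This is not the paper's algorithm; the market size, bonus scale, and noise model below are assumptions, and scipy's linear_sum_assignment stands in for the matching subroutine.

```python
# Hypothetical sketch: optimistic utility estimation + max-weight matching.
# Market size, bonus scale, and noise model are illustrative assumptions,
# not the paper's actual algorithm or notation.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n, T = 4, 200                                    # agents per side, horizon
true_utility = rng.uniform(size=(n, n))          # unknown mean utility of pair (i, j)

sum_reward = np.zeros((n, n))
count = np.zeros((n, n))

for t in range(1, T + 1):
    mean = sum_reward / np.maximum(count, 1)
    bonus = np.sqrt(2 * np.log(t + 1) / np.maximum(count, 1))  # UCB exploration bonus
    optimistic = np.where(count > 0, mean + bonus, 1e6)        # try unseen pairs first

    # Maximum weight matching on the optimistic utilities
    # (linear_sum_assignment minimizes cost, so negate the weights).
    rows, cols = linear_sum_assignment(-optimistic)

    # Observe noisy utilities for the matched pairs and update estimates.
    for i, j in zip(rows, cols):
        sum_reward[i, j] += true_utility[i, j] + 0.1 * rng.standard_normal()
        count[i, j] += 1
```

The optimism in the bonus term drives exploration of rarely matched pairs, while the matching step greedily maximizes estimated welfare; the paper's full algorithm additionally handles context transitions and function approximation.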
Related papers
- Putting Gale & Shapley to Work: Guaranteeing Stability Through Learning [14.448192914855674]
Two-sided matching markets describe a class of problems wherein participants from one side of the market must be matched to those from the other side according to their preferences.
We exploit the structure of stable solutions to devise algorithms that improve the likelihood of finding stable solutions (a sketch of the classic deferred acceptance algorithm follows this entry).
arXiv Detail & Related papers (2024-10-06T06:47:53Z)
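For background on the entry above: the Gale-Shapley deferred acceptance algorithm computes a proposer-optimal stable matching. Below is a standard textbook sketch; the variable names are illustrative, not taken from the paper.

```python
# Gale-Shapley deferred acceptance (proposer-optimal stable matching).
# Standard textbook algorithm; names are illustrative.
def gale_shapley(proposer_prefs, receiver_prefs):
    """proposer_prefs[p] is p's preference list over receivers (best first);
    receiver_prefs[r] likewise. Returns a dict mapping receiver -> proposer."""
    # Rank tables for O(1) preference comparisons on the receiver side.
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}   # next receiver index to propose to
    matched = {}                                   # receiver -> proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in matched:
            matched[r] = p                         # r tentatively accepts p
        elif rank[r][p] < rank[r][matched[r]]:
            free.append(matched[r])                # r trades up; old partner is free again
            matched[r] = p
        else:
            free.append(p)                         # r rejects p; p stays free
    return matched

# Example: two proposers, two receivers.
print(gale_shapley({"a": ["x", "y"], "b": ["x", "y"]},
                   {"x": ["b", "a"], "y": ["a", "b"]}))
# {'x': 'b', 'y': 'a'}
```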
- Independent Learning in Constrained Markov Potential Games [19.083595175045073]
Constrained Markov games offer a formal framework for modeling multi-agent reinforcement learning problems.
We propose an independent policy gradient algorithm for learning approximate constrained Nash equilibria.
arXiv Detail & Related papers (2024-02-27T20:57:35Z)
- Local Optimization Achieves Global Optimality in Multi-Agent Reinforcement Learning [139.53668999720605]
We present a multi-agent PPO algorithm in which the local policy of each agent is updated similarly to vanilla PPO.
We prove that with standard regularity conditions on the Markov game and problem-dependent quantities, our algorithm converges to the globally optimal policy at a sublinear rate.
arXiv Detail & Related papers (2023-05-08T16:20:03Z)
- Inducing Stackelberg Equilibrium through Spatio-Temporal Sequential Decision-Making in Multi-Agent Reinforcement Learning [17.101534531286298]
We construct a Nash-level policy model based on a conditional hypernetwork shared by all agents.
This approach allows for asymmetric training with symmetric execution, with each agent responding optimally conditioned on the decisions made by superior agents.
Experiments demonstrate that our method effectively converges to the Stackelberg equilibrium (SE) policies in repeated matrix game scenarios.
arXiv Detail & Related papers (2023-04-20T14:47:54Z)
- Towards Multi-Agent Reinforcement Learning driven Over-The-Counter Market Simulations [16.48389671789281]
We study a game between liquidity provider and liquidity taker agents interacting in an over-the-counter market.
By playing against each other, our deep-reinforcement-learning-driven agents learn emergent behaviors.
We show convergence rates for our multi-agent policy gradient algorithm under a transitivity assumption.
arXiv Detail & Related papers (2022-10-13T17:06:08Z)
- Faster Last-iterate Convergence of Policy Optimization in Zero-Sum Markov Games [63.60117916422867]
This paper focuses on the most basic setting of competitive multi-agent RL, namely two-player zero-sum Markov games.
We propose a single-loop policy optimization method with symmetric updates from both agents, where the policy is updated via the entropy-regularized optimistic multiplicative weights update (OMWU) method (a simplified sketch follows this entry).
Our convergence results improve upon the best known complexities, and lead to a better understanding of policy optimization in competitive Markov games.
arXiv Detail & Related papers (2022-10-03T16:05:43Z)
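For reference, here is a minimal sketch of optimistic multiplicative weights updates (OMWU) in a two-player zero-sum matrix game. The paper's method is the entropy-regularized variant in Markov games, so treat the plain (unregularized) update, step size, and payoff matrix below as simplifying assumptions.

```python
# OMWU in a zero-sum matrix game max_x min_y x^T A y. Simplified sketch:
# no entropy regularization, fixed step size, rock-paper-scissors payoffs.
import numpy as np

A = np.array([[0., 1., -1.],
              [-1., 0., 1.],
              [1., -1., 0.]])          # rock-paper-scissors payoff matrix
eta = 0.1                              # step size (illustrative)
x = np.array([0.6, 0.3, 0.1])          # start away from equilibrium
y = np.array([0.2, 0.5, 0.3])
gx_prev, gy_prev = A @ y, A.T @ x      # last round's gradients (optimistic term)

for _ in range(2000):
    gx, gy = A @ y, A.T @ x
    x = x * np.exp(eta * (2 * gx - gx_prev))    # max player ascends
    y = y * np.exp(-eta * (2 * gy - gy_prev))   # min player descends
    x, y = x / x.sum(), y / y.sum()
    gx_prev, gy_prev = gx, gy

print(np.round(x, 3), np.round(y, 3))  # last iterates should approach (1/3, 1/3, 1/3)
```

The extrapolated gradient 2*g_t - g_{t-1} is what distinguishes OMWU from vanilla multiplicative weights and is what drives the last-iterate convergence this entry discusses.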
- Efficient Model-based Multi-agent Reinforcement Learning via Optimistic Equilibrium Computation [93.52573037053449]
H-MARL (Hallucinated Multi-Agent Reinforcement Learning) learns successful equilibrium policies after a few interactions with the environment.
We demonstrate our approach experimentally on an autonomous driving simulation benchmark.
arXiv Detail & Related papers (2022-03-14T17:24:03Z)
- Learning Equilibria in Matching Markets from Bandit Feedback [139.29934476625488]
We develop a framework and algorithms for learning stable market outcomes under uncertainty.
Our work takes a first step toward elucidating when and how stable matchings arise in large, data-driven marketplaces.
arXiv Detail & Related papers (2021-08-19T17:59:28Z)
- Navigating to the Best Policy in Markov Decision Processes [68.8204255655161]
We investigate the active pure exploration problem in Markov Decision Processes.
The agent sequentially selects actions and, from the resulting system trajectory, aims to identify the best policy as fast as possible.
arXiv Detail & Related papers (2021-06-05T09:16:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.