Achieving Diverse Objectives with AI-driven Prices in Deep Reinforcement
Learning Multi-agent Markets
- URL: http://arxiv.org/abs/2106.06060v1
- Date: Thu, 10 Jun 2021 21:26:17 GMT
- Title: Achieving Diverse Objectives with AI-driven Prices in Deep Reinforcement
Learning Multi-agent Markets
- Authors: Panayiotis Danassis, Aris Filos-Ratsikas, Boi Faltings
- Abstract summary: We propose a practical approach to computing market prices and allocations via a deep reinforcement learning policymaker agent.
Our policymaker is much more flexible, allowing us to tune prices toward diverse objectives.
As a highlight of our findings, our policymaker is significantly more successful in maintaining resource sustainability.
- Score: 35.02265584959417
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a practical approach to computing market prices and allocations
via a deep reinforcement learning policymaker agent, operating in an
environment of other learning agents. Compared to the idealized market
equilibrium outcome -- which we use as a benchmark -- our policymaker is much
more flexible, allowing us to tune prices toward diverse objectives such as
sustainability, low resource waste, fairness, and buyers' and sellers'
welfare. To evaluate our approach, we design a realistic market
with multiple and diverse buyers and sellers. Additionally, the sellers, which
are deep learning agents themselves, compete for resources in a common-pool
appropriation environment based on bio-economic models of commercial fisheries.
We demonstrate that: (a) The introduced policymaker is able to achieve
comparable performance to the market equilibrium, showcasing the potential of
such approaches in markets where the equilibrium prices cannot be efficiently
computed. (b) Our policymaker can notably outperform the equilibrium solution
on certain metrics, while at the same time maintaining comparable performance
for the remaining ones. (c) As a highlight of our findings, our policymaker is
significantly more successful in maintaining resource sustainability, compared
to the market outcome, in scarce resource environments.
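To make the setup concrete, here is a minimal sketch of the interaction loop the abstract describes: a policymaker posts a price for harvesting rights, learning sellers respond with harvesting effort, and the common-pool stock follows logistic growth as in bio-economic fishery models. The bandit-style policy update, the seller response rule, and every parameter are illustrative assumptions, not the paper's deep RL agent or simulator.
```python
import numpy as np

rng = np.random.default_rng(0)
r, K = 0.3, 100.0                    # assumed intrinsic growth rate and carrying capacity
stock = K / 2                        # current fish stock
prices = np.linspace(0.5, 3.0, 6)    # candidate prices for harvesting rights (assumed)
q_values = np.zeros_like(prices)     # policymaker's running value estimate per price
counts = np.zeros_like(prices)

def harvest(price, stock):
    # Assumed seller response: harvesting effort falls as the posted price rises.
    effort = max(0.0, 2.0 - 0.5 * price + 0.1 * rng.standard_normal())
    return min(stock, 0.05 * effort * stock)

for t in range(5000):
    # epsilon-greedy price choice: a bandit stand-in for the deep RL policymaker
    i = rng.integers(len(prices)) if rng.random() < 0.1 else int(np.argmax(q_values))
    h = harvest(prices[i], stock)
    stock = max(1e-3, stock + r * stock * (1 - stock / K) - h)  # logistic growth
    # Reward mixes harvest value with a sustainability bonus on the remaining
    # stock, mirroring the tunable objectives the abstract mentions.
    reward = prices[i] * h + 0.05 * stock
    counts[i] += 1
    q_values[i] += (reward - q_values[i]) / counts[i]

print("learned price:", prices[int(np.argmax(q_values))], "final stock:", round(stock, 1))
```
Reweighting the two reward terms tilts the learned prices toward welfare or toward sustainability, which is the kind of objective tuning the abstract highlights.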
Related papers
- Evaluating the Impact of Multiple DER Aggregators on Wholesale Energy Markets: A Hybrid Mean Field Approach [2.0535683313855055]
The integration of distributed energy resources into wholesale energy markets can greatly enhance grid flexibility, improve market efficiency, and contribute to a more sustainable energy future.
We study a wholesale market model featuring multiple DER aggregators, each controlling a portfolio of DERs and bidding into the market on behalf of the asset owners.
We propose a reinforcement learning (RL)-based method to help each agent learn optimal strategies within the mean-field game (MFG) framework, enhancing its ability to adapt to market conditions and uncertainties.
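As a hedged illustration of what solving a problem "within the MFG framework" involves, the sketch below runs a damped best-response iteration against the population's mean bid; the linear price impact, quadratic cost, and closed-form best response (which the paper instead learns with RL) are toy assumptions, not the paper's market model.
```python
# Toy mean-field fixed point: each aggregator best-responds to the price
# implied by the population's mean bid, treating its own impact as negligible.
P0, beta, c, d, n = 10.0, 0.02, 2.0, 0.5, 100   # assumed demand/cost parameters

q_mean = 0.0
for _ in range(100):
    price = P0 - beta * n * q_mean           # price under the aggregate DER supply
    q_best = max(0.0, (price - c) / d)       # best response to the fixed mean field
    q_mean = 0.8 * q_mean + 0.2 * q_best     # damped update toward the fixed point

print(f"mean-field equilibrium bid per aggregator: {q_mean:.2f}")  # -> 3.20
```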
arXiv Detail & Related papers (2024-08-27T14:56:28Z)
- Large-Scale Contextual Market Equilibrium Computation through Deep Learning [10.286961524745966]
We introduce MarketFCNet, a deep learning-based method for approximating market equilibrium.
We show that MarketFCNet delivers competitive performance and significantly lower running times compared to existing methods.
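For context, equilibrium-learning methods of this kind train against a market-clearing signal. The sketch below uses a Cobb-Douglas market (where demand is differentiable and the equilibrium has a known closed form) and plain gradient descent on log-prices as a stand-in for MarketFCNet's network, whose actual architecture and loss are not described in this summary.
```python
import numpy as np

B = np.array([2.0, 1.0])                  # buyer budgets (assumed)
A = np.array([[0.7, 0.3], [0.2, 0.8]])    # Cobb-Douglas weights (rows sum to 1)
s = np.array([1.0, 1.0])                  # unit supply of each good

logp = np.zeros(2)                        # optimize log-prices to keep prices positive
for _ in range(3000):
    p = np.exp(logp)
    demand = (B[:, None] * A / p).sum(axis=0)   # aggregate Cobb-Douglas demand
    z = demand - s                              # excess demand: the clearing violation
    logp += 0.01 * z * demand                   # descend 0.5*||z||^2 in log-price space

print("learned prices:", np.exp(logp))
print("closed form   :", (B[:, None] * A).sum(axis=0) / s)  # known CD equilibrium
```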
arXiv Detail & Related papers (2024-06-11T03:36:00Z)
- Finding Regularized Competitive Equilibria of Heterogeneous Agent Macroeconomic Models with Reinforcement Learning [151.03738099494765]
We study a heterogeneous agent macroeconomic model with an infinite number of households and firms competing in a labor market.
We propose a data-driven reinforcement learning framework that finds the regularized competitive equilibrium of the model.
arXiv Detail & Related papers (2023-02-24T17:16:27Z)
- Parity in Markets -- Methods, Costs, and Consequences [109.5267969644294]
We show how market designers can use taxes or subsidies in Fisher markets to ensure that market equilibrium outcomes fall within certain constraints.
We adapt various types of fairness constraints proposed in existing literature to the market case and show who benefits and who loses from these constraints.
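As background for this entry, a linear Fisher market equilibrium can be computed from the Eisenberg-Gale convex program; the sketch below does so with scipy and then applies a hypothetical budget subsidy to buyer 0 to show how such interventions shift the equilibrium allocation. The valuations, budgets, and subsidy are illustrative, not the paper's.
```python
import numpy as np
from scipy.optimize import minimize

V = np.array([[2.0, 1.0],            # buyer valuations per good (assumed)
              [1.0, 3.0]])
budgets = np.array([1.0, 1.0])       # buyer budgets (assumed)

def equilibrium(B, V):
    """Eisenberg-Gale program: maximize sum_i B_i * log u_i(x) over allocations."""
    n, m = V.shape
    def neg_obj(x):
        u = (x.reshape(n, m) * V).sum(axis=1)          # linear utilities
        return -(B * np.log(np.maximum(u, 1e-9))).sum()
    # unit supply of each good: column sums of the allocation are at most 1
    cons = [{"type": "ineq", "fun": lambda x, j=j: 1.0 - x.reshape(n, m)[:, j].sum()}
            for j in range(m)]
    res = minimize(neg_obj, np.full(n * m, 1.0 / n),
                   bounds=[(0.0, 1.0)] * (n * m), constraints=cons)
    return res.x.reshape(n, m)

print("baseline allocation:\n", equilibrium(budgets, V).round(3))
# A hypothetical subsidy to buyer 0 tilts the equilibrium in their favour:
print("with subsidy:\n", equilibrium(np.array([1.5, 1.0]), V).round(3))
```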
arXiv Detail & Related papers (2022-10-05T22:27:44Z)
- Efficient Model-based Multi-agent Reinforcement Learning via Optimistic Equilibrium Computation [93.52573037053449]
H-MARL (Hallucinated Multi-Agent Reinforcement Learning) learns successful equilibrium policies after a few interactions with the environment.
We demonstrate our approach experimentally on an autonomous driving simulation benchmark.
arXiv Detail & Related papers (2022-03-14T17:24:03Z)
- Finding General Equilibria in Many-Agent Economic Simulations Using Deep Reinforcement Learning [72.23843557783533]
We show that deep reinforcement learning can discover stable solutions that are epsilon-Nash equilibria for a meta-game over agent types.
Our approach is more flexible and does not need unrealistic assumptions, e.g., market clearing.
We demonstrate our approach in real-business-cycle models, a representative family of DGE models, with 100 worker-consumers, 10 firms, and a government that taxes and redistributes.
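To unpack the epsilon-Nash criterion mentioned here: a strategy profile over agent types is an epsilon-Nash equilibrium if no single type can improve its payoff by more than epsilon through a unilateral deviation. The sketch below checks this on a toy two-type payoff table, which is an assumption for illustration, not the paper's real-business-cycle model.
```python
import itertools

strategies = ["save", "spend"]
# Toy payoff table over two agent types (an assumption, not the paper's model):
# payoff[(s1, s2)] = (payoff to type 1, payoff to type 2)
payoff = {("save", "save"): (3, 3), ("save", "spend"): (1, 4),
          ("spend", "save"): (4, 1), ("spend", "spend"): (2, 2)}

def profile_epsilon(profile):
    """Largest gain any single type obtains by deviating unilaterally."""
    eps = 0.0
    for i in range(len(profile)):
        base = payoff[profile][i]
        for s in strategies:
            dev = list(profile)
            dev[i] = s
            eps = max(eps, payoff[tuple(dev)][i] - base)
    return eps

for prof in itertools.product(strategies, repeat=2):
    print(prof, "epsilon =", profile_epsilon(prof))
# ('spend', 'spend') yields epsilon = 0: an exact Nash equilibrium of this toy game.
```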
arXiv Detail & Related papers (2022-01-03T17:00:17Z)
- Deep Q-Learning Market Makers in a Multi-Agent Simulated Stock Market [58.720142291102135]
This paper studies market makers' strategies from an agent-based perspective.
We propose the application of Reinforcement Learning (RL) for the creation of intelligent market makers in simulated stock markets.
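As a hedged, much-simplified stand-in for the paper's RL market makers (whose state, action, and simulator details are not given in this summary), the sketch below has a bandit-style Q-learner choose a quoted half-spread, trading off fill probability against profit per fill.
```python
import numpy as np

rng = np.random.default_rng(1)
spreads = np.array([0.01, 0.05, 0.10, 0.20])   # candidate half-spreads (assumed)
Q = np.zeros(len(spreads))                     # value estimate per quoted spread
N = np.zeros(len(spreads))

for t in range(20000):
    # epsilon-greedy quote selection
    a = rng.integers(len(spreads)) if rng.random() < 0.1 else int(np.argmax(Q))
    # assumed order-arrival model: tighter quotes fill more often
    fill_prob = np.exp(-20 * spreads[a])
    reward = spreads[a] if rng.random() < fill_prob else 0.0
    N[a] += 1
    Q[a] += (reward - Q[a]) / N[a]             # incremental mean update

print("learned half-spread:", spreads[int(np.argmax(Q))])
# Expected reward s * exp(-20 s) peaks at s = 0.05 among these candidates.
```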
arXiv Detail & Related papers (2021-12-08T14:55:21Z)
- Strategic bidding in freight transport using deep reinforcement learning [0.0]
This paper presents a multi-agent reinforcement learning algorithm to represent strategic bidding behavior in freight transport markets.
Using this algorithm, we investigate whether feasible market equilibria arise without any central control or communication between agents.
arXiv Detail & Related papers (2021-02-18T10:17:10Z)