Approximated Multi-Agent Fitted Q Iteration
- URL: http://arxiv.org/abs/2104.09343v5
- Date: Wed, 5 Apr 2023 00:07:27 GMT
- Title: Approximated Multi-Agent Fitted Q Iteration
- Authors: Antoine Lesage-Landry and Duncan S. Callaway
- Abstract summary: We formulate an efficient approximation for multi-agent batch reinforcement learning, the approximated multi-agent fitted Q iteration (AMAFQI).
We show that AMAFQI requires a number of computations that scales linearly with the number of agents, whereas the analogous number of computations increases exponentially for the fitted Q iteration (FQI).
- Score: 0.046877128185218706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We formulate an efficient approximation for multi-agent batch reinforcement
learning, the approximated multi-agent fitted Q iteration (AMAFQI). We present
a detailed derivation of our approach. We propose an iterative policy search
and show that it yields a greedy policy with respect to multiple approximations
of the centralized, learned Q-function. In each iteration and policy
evaluation, AMAFQI requires a number of computations that scales linearly with
the number of agents, whereas the analogous number of computations increases
exponentially for the fitted Q iteration (FQI), a commonly used approach in
batch reinforcement learning. This property of AMAFQI is fundamental for the
design of a tractable multi-agent approach. We evaluate the performance of
AMAFQI and compare it to FQI in numerical simulations. The simulations
illustrate the significant computation time reduction when using AMAFQI instead
of FQI in multi-agent problems and corroborate the similar performance of both
approaches.
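To make the scaling claim concrete, the following is a minimal, hypothetical Python sketch (not the authors' AMAFQI implementation): the greedy step of standard FQI must scan every joint action, of which there are $|A|^N$ for $N$ agents with local action set $A$, whereas a per-agent approximation in the spirit of AMAFQI only performs $N$ local maximizations over $|A|$ actions each.

```python
import itertools
import numpy as np

# Illustrative sketch only -- not the authors' AMAFQI implementation.
# It contrasts the maximization step of standard FQI (over the joint
# action space, |A|**N evaluations) with a per-agent approximation in
# the spirit of AMAFQI (N separate maximizations, N*|A| evaluations).
# q_joint and q_local are hypothetical stand-ins for fitted Q-functions.

N_AGENTS = 3          # number of agents N
ACTIONS = [0, 1, 2]   # local action set A of each agent

def fqi_greedy_value(q_joint, state):
    """Greedy value in an FQI target: max over all |A|**N joint actions."""
    joint_actions = itertools.product(ACTIONS, repeat=N_AGENTS)
    return max(q_joint(state, a) for a in joint_actions)   # 3**3 = 27 evaluations

def per_agent_greedy_values(q_local, state):
    """Per-agent approximation: one local maximization per agent."""
    return [max(q_local(i, state, a) for a in ACTIONS)     # 3 * 3 = 9 evaluations
            for i in range(N_AGENTS)]

# Hypothetical Q-functions, tabulated randomly for the example.
rng = np.random.default_rng(0)
joint_table = rng.normal(size=(len(ACTIONS),) * N_AGENTS)  # Q(s, a_1, ..., a_N)
q_joint = lambda s, a: joint_table[a]
q_local = lambda i, s, a: joint_table.max(
    axis=tuple(j for j in range(N_AGENTS) if j != i))[a]   # agent i's local view

print(fqi_greedy_value(q_joint, state=None))
print(per_agent_greedy_values(q_local, state=None))
```

With $N = 3$ agents and $|A| = 3$ local actions this is 27 versus 9 Q-evaluations; the gap widens exponentially as agents are added, which is the tractability argument the abstract makes.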
Related papers
- Dual Ensembled Multiagent Q-Learning with Hypernet Regularizer [62.01554688056335]
Overestimation in the multiagent setting has received comparatively little attention.
We propose a novel hypernet regularizer on hypernetwork weights and biases to constrain the optimization of the online global Q-network and prevent overestimation accumulation.
arXiv Detail & Related papers (2025-02-04T05:14:58Z) - Enhancing Question Answering Precision with Optimized Vector Retrieval and Instructions [1.2425910171551517]
Question-answering (QA) is an important application of Information Retrieval (IR) and language models.
We propose an innovative approach to improve QA task performance by integrating optimized vector retrievals and instruction methodologies.
arXiv Detail & Related papers (2024-11-01T21:14:04Z) - Q-learning for Quantile MDPs: A Decomposition, Performance, and Convergence Analysis [30.713243690224207]
In Markov decision processes (MDPs), quantile risk measures such as Value-at-Risk are a standard metric for modeling RL agents' preferences for certain outcomes.
This paper proposes a new Q-learning algorithm for quantile optimization in MDPs with strong convergence and performance guarantees.
arXiv Detail & Related papers (2024-10-31T16:53:20Z) - Enhancing Multi-Step Reasoning Abilities of Language Models through Direct Q-Function Optimization [50.485788083202124]
Reinforcement Learning (RL) plays a crucial role in aligning large language models with human preferences and improving their ability to perform complex tasks.
We introduce Direct Q-function Optimization (DQO), which formulates the response generation process as a Markov Decision Process (MDP) and utilizes the soft actor-critic (SAC) framework to optimize a Q-function directly parameterized by the language model.
Experimental results on two math problem-solving datasets, GSM8K and MATH, demonstrate that DQO outperforms previous methods, establishing it as a promising offline reinforcement learning approach for aligning language models.
arXiv Detail & Related papers (2024-10-11T23:29:20Z) - Fitted Q-Iteration via Max-Plus-Linear Approximation [0.0]
In particular, we incorporate these approximators to propose novel fitted Q-iteration (FQI) algorithms with provable convergence.
We show that the max-plus-linear regression within each iteration of the proposed FQI algorithm reduces to simple max-plus matrix-vector multiplications (a brief sketch of this operation appears after this list).
arXiv Detail & Related papers (2024-09-12T22:51:08Z) - DASA: Delay-Adaptive Multi-Agent Stochastic Approximation [64.32538247395627]
We consider a setting in which $N$ agents aim to speed up a common stochastic approximation problem by acting in parallel and communicating with a central server.
To mitigate the effect of delays and stragglers, we propose DASA, a Delay-Adaptive algorithm for multi-agent Stochastic Approximation.
arXiv Detail & Related papers (2024-03-25T22:49:56Z) - Pointer Networks with Q-Learning for Combinatorial Optimization [55.2480439325792]
We introduce the Pointer Q-Network (PQN), a hybrid neural architecture that integrates model-free Q-value policy approximation with Pointer Networks (Ptr-Nets).
Our empirical results demonstrate the efficacy of this approach, including tests of the model in unstable environments.
arXiv Detail & Related papers (2023-11-05T12:03:58Z) - Performance Prediction for Multi-hop Questions [7.388002745070808]
We propose multHP, a novel pre-retrieval method for predicting the performance of open-domain multi-hop questions.
Our evaluation shows that the proposed model is a strong predictor of the performance, outperforming traditional single-hop QPP models.
arXiv Detail & Related papers (2023-08-12T01:34:41Z) - An Empirical Comparison of LM-based Question and Answer Generation Methods [79.31199020420827]
Question and answer generation (QAG) consists of generating a set of question-answer pairs given a context.
In this paper, we establish baselines with three different QAG methodologies that leverage sequence-to-sequence language model (LM) fine-tuning.
Experiments show that an end-to-end QAG model, which is computationally light at both training and inference times, is generally robust and outperforms other more convoluted approaches.
arXiv Detail & Related papers (2023-05-26T14:59:53Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z) - Cross Learning in Deep Q-Networks [82.20059754270302]
We propose a novel cross Q-learning algorithm, aimed at alleviating the well-known overestimation problem in value-based reinforcement learning methods.
Our algorithm builds on double Q-learning, maintaining a set of parallel models and estimating the Q-value based on a randomly selected network.
arXiv Detail & Related papers (2020-09-29T04:58:17Z) - Multi-Agent Determinantal Q-Learning [39.79718674655209]
We propose multi-agent determinantal Q-learning (Q-DPP), which promotes agents to acquire diverse behavioral models.
We demonstrate that Q-DPP generalizes major solutions including VDN, QMIX, and QTRAN on decentralizable cooperative tasks.
arXiv Detail & Related papers (2020-06-02T09:32:48Z)
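The max-plus FQI entry above reduces its per-iteration regression to max-plus matrix-vector multiplications. As a point of reference only, here is a minimal sketch of that operation with arbitrary example values; it is not the paper's algorithm.

```python
import numpy as np

# Minimal sketch of a max-plus matrix-vector product: entry i of the result
# is max_j (A[i, j] + x[j]). It only illustrates the operation mentioned in
# the abstract above; the values of A and x below are arbitrary.

def max_plus_matvec(A: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Return the max-plus product of A and x: entry i is max_j (A[i, j] + x[j])."""
    return np.max(A + x[None, :], axis=1)

A = np.array([[0.0, -1.0],
              [2.0,  0.5]])
x = np.array([1.0, 3.0])
print(max_plus_matvec(A, x))   # [2.  3.5]
```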
This list is automatically generated from the titles and abstracts of the papers in this site.