Smoothing Policy Iteration for Zero-sum Markov Games
- URL: http://arxiv.org/abs/2212.01623v1
- Date: Sat, 3 Dec 2022 14:39:06 GMT
- Title: Smoothing Policy Iteration for Zero-sum Markov Games
- Authors: Yangang Ren, Yao Lyu, Wenxuan Wang, Shengbo Eben Li, Zeyang Li,
Jingliang Duan
- Abstract summary: We propose the smoothing policy iteration (SPI) algorithm to solve zero-sum MGs approximately.
Specifically, the adversarial policy serves as the weight function to enable efficient sampling over action spaces.
We also propose a model-based algorithm called Smooth adversarial Actor-critic (SaAC) by extending SPI with function approximation.
- Score: 9.158672246275348
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Zero-sum Markov Games (MGs) have been an efficient framework for multi-agent
systems and robust control, wherein a minimax problem is constructed to solve
for the equilibrium policies. At present, this formulation is well studied under
tabular settings, wherein the maximum operator is solved exactly
to calculate the worst-case value function. However, it is non-trivial to
extend such methods to handle complex tasks, as finding the maximum over
large-scale action spaces is usually cumbersome. In this paper, we propose the
smoothing policy iteration (SPI) algorithm to solve zero-sum MGs
approximately, where the maximum operator is replaced by the weighted LogSumExp
(WLSE) function to obtain nearly optimal equilibrium policies. Specifically,
the adversarial policy serves as the weight function to enable efficient
sampling over action spaces. We also prove the convergence of SPI and analyze
its approximation error in the $\infty$-norm based on the contraction mapping
theorem. Besides, we propose a model-based algorithm called Smooth adversarial
Actor-critic (SaAC) by extending SPI with function approximation. The
target value related to the WLSE function is evaluated from sampled trajectories,
a mean squared error loss is then constructed to optimize the value function, and
gradient-ascent-descent methods are adopted to optimize the protagonist and
adversarial policies jointly. In addition, we incorporate the
reparameterization technique in model-based gradient back-propagation to
prevent gradient vanishing due to sampling from the stochastic policies. We
verify our algorithm in both tabular and function approximation settings.
Results show that SPI can approximate the worst-case value function with high
accuracy and that SaAC can stabilize the training process and improve
adversarial robustness by a large margin.
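The abstract's key device is replacing the exact max over adversarial actions with a weighted LogSumExp (WLSE) so the worst-case value can be estimated by sampling from the adversarial policy. The NumPy sketch below is a minimal illustration of that idea under assumed conventions: the temperature `tau`, the helper names `q_fn` and `sample_adversary`, and the exact weighting form are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def weighted_logsumexp(q_values, weights, tau=10.0):
    """Smooth stand-in for the max over adversarial actions.

    Computes (1/tau) * log( sum_b w_b * exp(tau * q_b) ); as tau grows this
    approaches max_b q_b, while smaller tau gives a smoother target.
    q_values: Q(s, a, b) over a set of adversarial actions b.
    weights:  probabilities pi_adv(b | s) assigned by the adversarial policy.
    """
    q = np.asarray(q_values, dtype=float)
    w = np.asarray(weights, dtype=float)
    m = q.max()  # shift by the max for numerical stability
    return m + np.log(np.sum(w * np.exp(tau * (q - m)))) / tau

def wlse_from_samples(q_fn, s, a, sample_adversary, tau=10.0, n=64):
    """Monte-Carlo estimate for large adversarial action spaces: drawing
    b ~ pi_adv(.|s) makes the weights uniform over the sampled actions."""
    qs = np.array([q_fn(s, a, sample_adversary(s)) for _ in range(n)])
    m = qs.max()
    return m + np.log(np.mean(np.exp(tau * (qs - m)))) / tau

# Quick check: with a large temperature the WLSE approaches the true max.
q = np.array([1.0, 2.0, 5.0])
w = np.array([0.5, 0.3, 0.2])
print(weighted_logsumexp(q, w, tau=50.0))  # close to 5.0
```

With a small `tau` the same expression tends toward the adversary-weighted average, which is what makes the operator a smooth interpolation between expectation and worst case.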
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - Nonconvex Stochastic Bregman Proximal Gradient Method for Nonconvex Composite Problems [9.202586157819693]
Stochastic gradient methods for nonconvex composite objective functions typically rely on the Lipschitz smoothness of the differentiable part.
We propose a better approximation model that handles non-Lipschitz gradients in nonconvex objectives.
We show it achieves optimal robustness in terms of step-size selection sensitivity.
arXiv Detail & Related papers (2023-06-26T08:54:46Z) - Covariance Matrix Adaptation Evolutionary Strategy with Worst-Case
Ranking Approximation for Min--Max Optimization and its Application to
Berthing Control Tasks [19.263468901608785]
We consider a continuous min--max optimization problem $\min_{x \in \mathbb{X}} \max_{y \in \mathbb{Y}} f(x,y)$ whose objective function is a black-box.
We propose a novel approach to minimize the worst-case objective function $F(x) = \max_{y} f(x,y)$ directly.
arXiv Detail & Related papers (2023-03-28T15:50:56Z) - Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time
Guarantees [56.848265937921354]
Inverse reinforcement learning (IRL) aims to recover the reward function and the associated optimal policy.
Many algorithms for IRL have an inherently nested structure.
We develop a novel single-loop algorithm for IRL that does not compromise reward estimation accuracy.
arXiv Detail & Related papers (2022-10-04T17:13:45Z) - META-STORM: Generalized Fully-Adaptive Variance Reduced SGD for
Unbounded Functions [23.746620619512573]
Recent work overcomes the drawback of having to compute gradients of "megabatches".
The resulting algorithms are demonstrated on competitive deep learning tasks.
arXiv Detail & Related papers (2022-09-29T15:12:54Z) - Implicitly Regularized RL with Implicit Q-Values [42.87920755961722]
The $Q$-function is a central quantity in many Reinforcement Learning (RL) algorithms, based on which RL agents behave by following a (soft)-greedy policy.
We propose to parametrize the $Q$-function implicitly, as the sum of a log-policy and of a value function.
We derive a practical off-policy deep RL algorithm, suitable for large action spaces, that enforces the softmax relation between the policy and the $Q$-value (see the sketch after this list).
arXiv Detail & Related papers (2021-08-16T12:20:47Z) - Momentum Accelerates the Convergence of Stochastic AUPRC Maximization [80.8226518642952]
We study optimization of areas under precision-recall curves (AUPRC), which is widely used for imbalanced tasks.
We develop novel momentum methods with a better iteration complexity of $O(1/\epsilon^4)$ for finding an $\epsilon$-stationary solution.
We also design a novel family of adaptive methods with the same complexity of $O(1/\epsilon^4)$, which enjoy faster convergence in practice.
arXiv Detail & Related papers (2021-07-02T16:21:52Z) - Sparse Bayesian Learning via Stepwise Regression [1.2691047660244335]
We propose a coordinate ascent algorithm for SBL termed Relevance Matching Pursuit (RMP).
As its noise variance parameter goes to zero, RMP exhibits a surprising connection to Stepwise Regression.
We derive novel guarantees for Stepwise Regression algorithms, which also shed light on RMP.
arXiv Detail & Related papers (2021-06-11T00:20:27Z) - Stochastic Optimization of Areas Under Precision-Recall Curves with
Provable Convergence [66.83161885378192]
Area under ROC (AUROC) and precision-recall curves (AUPRC) are common metrics for evaluating classification performance for imbalanced problems.
We propose a technical method to optimize AUPRC for deep learning.
arXiv Detail & Related papers (2021-04-18T06:22:21Z) - Adaptive Sampling for Best Policy Identification in Markov Decision
Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - Provably Efficient Safe Exploration via Primal-Dual Policy Optimization [105.7510838453122]
We study the Safe Reinforcement Learning (SRL) problem using the Constrained Markov Decision Process (CMDP) formulation.
We present a provably efficient online policy optimization algorithm for CMDPs with safe exploration in the function approximation setting.
arXiv Detail & Related papers (2020-03-01T17:47:03Z)
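For the "Implicitly Regularized RL with Implicit Q-Values" entry above, the summary describes parametrizing the $Q$-function as the sum of a log-policy and a value function so that the softmax relation holds by construction. The PyTorch sketch below illustrates one such parametrization under assumed details (network sizes, a single temperature `tau`); it is an interpretation of the summary, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ImplicitQ(nn.Module):
    """Q(s, a) = V(s) + tau * log pi(a | s), so pi is exactly softmax(Q / tau)."""

    def __init__(self, state_dim, n_actions, tau=1.0, hidden=64):
        super().__init__()
        self.tau = tau
        self.policy_logits = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions))
        self.value = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, s):
        log_pi = torch.log_softmax(self.policy_logits(s), dim=-1)  # log-policy
        v = self.value(s)                                          # state value V(s)
        return v + self.tau * log_pi                               # implicit Q(s, .)
```

Because the policy head and the value head jointly define the $Q$-values, acting greedily with respect to this $Q$ is equivalent to sampling from the softmax policy at temperature `tau`.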