Randomised Optimism via Competitive Co-Evolution for Matrix Games with Bandit Feedback
- URL: http://arxiv.org/abs/2505.13562v1
- Date: Mon, 19 May 2025 10:05:55 GMT
- Title: Randomised Optimism via Competitive Co-Evolution for Matrix Games with Bandit Feedback
- Authors: Shishen Lin
- Abstract summary: This work investigates two-player zero-sum matrix games with an unknown payoff matrix and bandit feedback. We propose Competitive Co-evolutionary Bandit Learning (COEBL), a novel algorithm that integrates evolutionary algorithms into the bandit framework. We prove that COEBL achieves sublinear regret, matching the performance of deterministic optimism-based methods.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning in games is a fundamental problem in machine learning and artificial intelligence, with numerous applications~\citep{silver2016mastering,schrittwieser2020mastering}. This work investigates two-player zero-sum matrix games with an unknown payoff matrix and bandit feedback, where each player observes their actions and the corresponding noisy payoff. Prior studies have proposed algorithms for this setting~\citep{o2021matrix,maiti2023query,cai2024uncoupled}, with \citet{o2021matrix} demonstrating the effectiveness of deterministic optimism (e.g., UCB) in achieving sublinear regret. However, the potential of randomised optimism in matrix games remains theoretically unexplored. We propose Competitive Co-evolutionary Bandit Learning (COEBL), a novel algorithm that integrates evolutionary algorithms (EAs) into the bandit framework to implement randomised optimism through EA variation operators. We prove that COEBL achieves sublinear regret, matching the performance of deterministic optimism-based methods. To the best of our knowledge, this is the first theoretical regret analysis of an evolutionary bandit learning algorithm in matrix games. Empirical evaluations on diverse matrix game benchmarks demonstrate that COEBL not only achieves sublinear regret but also consistently outperforms classical bandit algorithms, including EXP3~\citep{auer2002nonstochastic}, the EXP3 variant of \citet{cai2024uncoupled}, and UCB~\citep{o2021matrix}. These results highlight the potential of evolutionary bandit learning, particularly the efficacy of randomised optimism via evolutionary algorithms in game-theoretic settings.
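To make the bandit matrix-game setting and the randomised-optimism idea concrete, here is a minimal sketch. It is not the authors' COEBL algorithm: the 3x3 payoff matrix, the noise level, the count-scaled Gaussian perturbation standing in for an EA variation operator, and the fixed-mixed-strategy opponent are all illustrative assumptions. The row player tracks empirical payoff means per action pair, perturbs them randomly instead of adding a deterministic bonus, and best-responds against the observed column frequencies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3x3 zero-sum payoff matrix (row player's expected payoff);
# it is unknown to the learner, which only sees noisy entries.
A = np.array([[0.0,  0.5, -0.5],
              [-0.5, 0.0,  0.5],
              [0.5, -0.5,  0.0]])
n, m = A.shape
T = 5000

counts = np.ones((n, m))   # visit counts per action pair (start at 1 to avoid /0)
means = np.zeros((n, m))   # running mean of observed payoffs
col_counts = np.ones(m)    # naive opponent model: empirical column frequencies

for t in range(1, T + 1):
    # Randomised optimism: perturb each empirical mean with a Gaussian
    # "mutation" whose scale shrinks with the visit count, instead of
    # adding a deterministic UCB exploration bonus.
    optimistic = means + rng.normal(size=(n, m)) / np.sqrt(counts)

    # Best-respond to the optimistic estimate against observed column frequencies.
    q = col_counts / col_counts.sum()
    i = int(np.argmax(optimistic @ q))

    # Stand-in opponent: a fixed mixed strategy (purely illustrative).
    j = rng.choice(m, p=[0.4, 0.3, 0.3])

    # Bandit feedback: only a noisy payoff for the chosen pair is revealed.
    r = A[i, j] + rng.normal(0.0, 0.1)
    counts[i, j] += 1
    means[i, j] += (r - means[i, j]) / counts[i, j]
    col_counts[j] += 1
```

Replacing the random perturbation with a deterministic bonus such as `np.sqrt(2 * np.log(t) / counts)` recovers the UCB-style optimism of \citet{o2021matrix} that the abstract contrasts against.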
Related papers
- Optimistic Online Learning in Symmetric Cone Games [3.124884279860061]
We introduce the Optimistic Symmetric Cone Multiplicative Weights Update algorithm and establish an iteration complexity of $\mathcal{O}(1/\epsilon)$ to reach an $\epsilon$-saddle point. A key technical contribution is a new proof of the strong convexity of the symmetric cone negative entropy with respect to the trace-one norm.
arXiv Detail & Related papers (2025-04-04T16:59:19Z) - Last-Iterate Convergence of Payoff-Based Independent Learning in Zero-Sum Stochastic Games [31.554420227087043]
We develop learning dynamics that are payoff-based, convergent, rational, and symmetric between the two players.
In the matrix game setting, the results imply a complexity of $O(\epsilon^{-1})$ to find the Nash distribution.
In the stochastic game setting, the results also imply a complexity of $O(\epsilon^{-8})$ to find a Nash equilibrium.
arXiv Detail & Related papers (2024-09-02T20:07:25Z) - Alternating Mirror Descent for Constrained Min-Max Games [44.46086335474311]
We study two-player bilinear zero-sum games with constrained strategy spaces.
We analyze the alternating mirror descent algorithm, in which each player takes turns to take action following the mirror descent algorithm for constrained optimization.
arXiv Detail & Related papers (2022-06-08T20:48:16Z) - A unified stochastic approximation framework for learning in games [82.74514886461257]
We develop a flexible stochastic approximation framework for analyzing the long-run behavior of learning in games (both continuous and finite).
The proposed analysis template incorporates a wide array of popular learning algorithms, including gradient-based methods, exponential/multiplicative weights for learning in finite games, optimistic and bandit variants of the above, etc.
arXiv Detail & Related papers (2022-06-08T14:30:38Z) - No-Regret Learning in Time-Varying Zero-Sum Games [99.86860277006318]
Learning from repeated play in a fixed zero-sum game is a classic problem in game theory and online learning.
We develop a single parameter-free algorithm that simultaneously enjoys favorable guarantees under three performance measures.
Our algorithm is based on a two-layer structure with a meta-algorithm learning over a group of black-box base-learners satisfying a certain property.
arXiv Detail & Related papers (2022-01-30T06:10:04Z) - Optimal Gradient-based Algorithms for Non-concave Bandit Optimization [76.57464214864756]
This work considers a large family of bandit problems where the unknown underlying reward function is non-concave.
Our algorithms are based on a unified zeroth-order optimization paradigm that applies in great generality.
We show that the standard optimistic algorithms are sub-optimal by dimension factors.
arXiv Detail & Related papers (2021-07-09T16:04:24Z) - Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player to play against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress, having no limit cycles, and to find an appropriate stationary point in a polynomial number of gradient ascent steps.
arXiv Detail & Related papers (2021-06-02T22:03:36Z) - A Scalable, Adaptive and Sound Nonconvex Regularizer for Low-rank Matrix Completion [60.52730146391456]
We propose a new scalable nonconvex low-rank regularizer, the "nuclear norm minus Frobenius norm" regularizer, which is adaptive and sound.
It bypasses the computation of singular values and allows fast optimization.
It obtains state-of-the-art recovery performance while being the fastest among existing matrix learning methods.
arXiv Detail & Related papers (2020-08-14T18:47:58Z) - Faster Game Solving via Predictive Blackwell Approachability: Connecting Regret Matching and Mirror Descent [119.5481797273995]
Follow-the-regularized-leader (FTRL) and online mirror descent (OMD) are the most prevalent regret minimizers in online convex optimization.
We show that RM and RM+ are the algorithms that result from running FTRL and OMD, respectively, to select the halfspace to force at all times in the underlying Blackwell approachability game.
In experiments across 18 common zero-sum extensive-form benchmark games, we show that predictive RM+ coupled with counterfactual regret minimization converges vastly faster than the fastest prior algorithms. (A minimal regret-matching sketch appears after this list.)
arXiv Detail & Related papers (2020-07-28T16:49:55Z) - Matrix games with bandit feedback [33.637621576707076]
We study a version of the classical zero-sum matrix game with unknown payoff matrix and bandit feedback.
We show that Thompson sampling fails catastrophically in this setting and provide empirical comparisons with existing algorithms.
arXiv Detail & Related papers (2020-06-09T09:36:21Z) - Learning Zero-Sum Simultaneous-Move Markov Games Using Function Approximation and Correlated Equilibrium [116.56359444619441]
We develop provably efficient reinforcement learning algorithms for two-player zero-sum finite-horizon Markov games.
In the offline setting, we control both players and aim to find the Nash Equilibrium by minimizing the duality gap.
In the online setting, we control a single player playing against an arbitrary opponent and aim to minimize the regret.
arXiv Detail & Related papers (2020-02-17T17:04:16Z)
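For the "Faster Game Solving via Predictive Blackwell Approachability" entry above, the following is a minimal sketch of plain regret matching in self-play on a small zero-sum matrix game. The 2x2 payoff matrix and iteration budget are illustrative assumptions, and this is ordinary RM rather than the paper's predictive RM+ or CFR.

```python
import numpy as np

# Illustrative 2x2 zero-sum game (row player maximises x^T A y, column player minimises it).
# Its unique Nash equilibrium is x* = y* = (0.4, 0.6) with game value 0.2.
A = np.array([[2.0, -1.0],
              [-1.0, 1.0]])
n, m = A.shape
T = 10000

def rm_strategy(regret):
    """Regret matching: play in proportion to positive cumulative regret."""
    pos = np.maximum(regret, 0.0)
    total = pos.sum()
    return pos / total if total > 0 else np.full(len(regret), 1.0 / len(regret))

row_regret, col_regret = np.zeros(n), np.zeros(m)
avg_x, avg_y = np.zeros(n), np.zeros(m)

for _ in range(T):
    x, y = rm_strategy(row_regret), rm_strategy(col_regret)
    avg_x += x
    avg_y += y

    # Instantaneous regrets against the opponent's current strategy.
    row_payoffs = A @ y          # expected payoff of each row action
    col_payoffs = -(x @ A)       # column player's payoff is the negation
    row_regret += row_payoffs - x @ row_payoffs
    col_regret += col_payoffs - y @ col_payoffs

# The average strategies converge to an approximate Nash equilibrium.
print(avg_x / T, avg_y / T)   # both should be close to (0.4, 0.6)
```

The averaged strategies (not the last iterates) carry the convergence guarantee; RM+ and predictive RM+ modify how negative cumulative regret is clipped and add an optimistic prediction of the next loss, which is what yields the faster convergence reported in the paper.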