NNCFR: Minimize Counterfactual Regret with Neural Networks
- URL: http://arxiv.org/abs/2105.12328v1
- Date: Wed, 26 May 2021 04:58:36 GMT
- Title: NNCFR: Minimize Counterfactual Regret with Neural Networks
- Authors: Huale Li, Xuan Wang, Zengyue Guo, Jiajia Zhang, Shuhan Qi
- Abstract summary: This paper introduces Neural Network Counterfactual Regret Minimization (NNCFR), an improved variant of Deep CFR.
NNCFR converges faster and performs more stably than Deep CFR, and outperforms Deep CFR with respect to exploitability and head-to-head performance on test games.
- Score: 4.418221583366099
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Counterfactual Regret Minimization (CFR) is a popular method for finding
approximate Nash equilibria in two-player zero-sum games with imperfect
information. CFR solves games by traversing the full game tree iteratively,
which limits its scalability to larger games. Previously, applying CFR to
large-scale games involved three steps: the large game is first abstracted
into a smaller game, CFR is then used to solve the abstract game, and finally
the resulting strategy is mapped back to the original large-scale
game. However, this process requires considerable expert knowledge, and the
accuracy of abstraction is closely related to expert knowledge. In addition,
the abstraction also loses certain information, which will eventually affect
the accuracy of the solution strategy. To address this problem, a recent
method, \textit{Deep CFR}, alleviates the need for abstraction and expert knowledge by
applying deep neural networks directly to CFR in full games. In this paper, we
introduce \textit{Neural Network Counterfactual Regret Minimization (NNCFR)},
an improved variant of \textit{Deep CFR} that converges faster by
constructing a dueling network as the value network. Moreover, an evaluation
module is designed by combining the value network and Monte Carlo, which
reduces the approximation error of the value network. In addition, a new loss
function is designed for training the policy network in the proposed
\textit{NNCFR}, which helps make the policy network more stable. Extensive
experiments show that \textit{NNCFR} converges faster and performs more stably
than \textit{Deep CFR}, and outperforms \textit{Deep CFR} with respect to
exploitability and head-to-head performance on test games.
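The abstract only names the dueling value network, so here is a minimal PyTorch sketch of what such a network can look like in a Deep CFR-style setting; the class name, layer sizes, and feature encoding are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class DuelingValueNet(nn.Module):
    """Hypothetical dueling-style value network for a Deep CFR-like setting:
    one stream estimates a scalar infoset value, the other estimates
    per-action advantages (used as counterfactual regret estimates)."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.value_head = nn.Linear(hidden, 1)               # V(I): infoset value
        self.advantage_head = nn.Linear(hidden, n_actions)   # A(I, a): per-action advantage

    def forward(self, infoset_features: torch.Tensor) -> torch.Tensor:
        h = self.trunk(infoset_features)
        v = self.value_head(h)                    # shape (batch, 1)
        a = self.advantage_head(h)                # shape (batch, n_actions)
        # Dueling combination: centre the advantages so V and A stay identifiable.
        return v + a - a.mean(dim=-1, keepdim=True)
```

The usual motivation for this split is that the network can learn how good an information set is overall separately from the relative merit of each action, which tends to stabilise the regression targets.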
Related papers
- Hierarchical Deep Counterfactual Regret Minimization [53.86223883060367]
In this paper, we introduce Hierarchical Deep CFR (HDCFR), the first hierarchical version of Deep CFR, an innovative method that boosts learning efficiency in tasks involving extensively large state spaces and deep game trees.
A notable advantage of HDCFR over previous works is its ability to facilitate learning with predefined (human) expertise and foster the acquisition of skills that can be transferred to similar tasks.
arXiv Detail & Related papers (2023-05-27T02:05:41Z)
- ESCHER: Eschewing Importance Sampling in Games by Computing a History Value Function to Estimate Regret [97.73233271730616]
Recent techniques for approximating Nash equilibria in very large games leverage neural networks to learn approximately optimal policies (strategies).
DREAM, the only current CFR-based neural method that is model-free and therefore scalable to very large games, trains a neural network on an estimated regret target that can have extremely high variance due to an importance sampling term inherited from Monte Carlo CFR (MCCFR).
We show that a deep learning version of ESCHER outperforms the prior state of the art -- DREAM and neural fictitious self play (NFSP) -- and the difference becomes dramatic as game size increases.
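ESCHER's motivation is the variance of importance-weighted regret targets in MCCFR; the following toy NumPy snippet (not from the paper, all probabilities and values invented) shows how an importance-sampled estimate can be unbiased yet extremely noisy when the behaviour policy rarely samples the action that matters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two actions with hypothetical values; we want the value of a target policy
# while sampling actions from a different behaviour policy.
values = np.array([1.0, 0.0])
pi_target = np.array([0.9, 0.1])      # policy whose value we want
pi_behave = np.array([0.05, 0.95])    # policy we actually sample from

true_value = float(pi_target @ values)

samples = rng.choice(2, size=100_000, p=pi_behave)
# Importance weight pi_target / pi_behave corrects the mismatch but can be huge.
is_estimates = (pi_target[samples] / pi_behave[samples]) * values[samples]

print(f"true value         : {true_value:.3f}")
print(f"IS estimate mean   : {is_estimates.mean():.3f}")
print(f"IS estimate stddev : {is_estimates.std():.3f}")  # several times the mean
```

The estimator's mean is close to the true value of 0.9, but its per-sample standard deviation is several times larger, which is the kind of noise a learned history value function is meant to avoid.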
arXiv Detail & Related papers (2022-06-08T18:43:45Z)
- Equivalence Analysis between Counterfactual Regret Minimization and Online Mirror Descent [67.60077332154853]
Counterfactual Regret Minimization (CFR) is a regret minimization algorithm that minimizes the total regret by minimizing the local counterfactual regrets.
Follow-the-Regularized-Leader (FTRL) and Online Mirror Descent (OMD) algorithms are regret minimization algorithms in Online Convex Optimization.
We provide a new way to analyze and extend CFRs, by proving that CFR with Regret Matching and CFR with Regret Matching+ are special forms of FTRL and OMD.
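For reference, the regret-matching rule covered by this equivalence chooses the next strategy at an information set $I$ by normalising the positive parts of the cumulative counterfactual regrets: $\sigma^{t+1}(I,a) = [R^{t}(I,a)]^{+} / \sum_{b \in A(I)} [R^{t}(I,b)]^{+}$ whenever the denominator is positive, and the uniform strategy over $A(I)$ otherwise. Regret Matching+ differs only in that the cumulative regrets are clipped at zero after every update.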
arXiv Detail & Related papers (2021-10-11T02:12:25Z)
- Model-Free Online Learning in Unknown Sequential Decision Making Problems and Games [114.90723492840499]
In large two-player zero-sum imperfect-information games, modern extensions of counterfactual regret minimization (CFR) are currently the practical state of the art for computing a Nash equilibrium.
We formalize an online learning setting in which the strategy space is not known to the agent.
We give an efficient algorithm that achieves $O(T^{3/4})$ regret with high probability for that setting, even when the agent faces an adversarial environment.
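For context, a total regret of $O(T^{3/4})$ corresponds to an average per-iteration regret of $O(T^{-1/4})$, which still vanishes as $T \to \infty$; it is simply a weaker rate than the $O(\sqrt{T})$ total regret that standard online learning algorithms obtain when the strategy space is known in advance.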
arXiv Detail & Related papers (2021-03-08T04:03:24Z)
- Model-free Neural Counterfactual Regret Minimization with Bootstrap Learning [10.816436463322237]
Current CFR algorithms have to approximate the cumulative regrets with neural networks.
A new CFR variant, Recursive CFR, is proposed, in which the cumulative regrets are recovered by Recursive Substitute Values (RSVs).
It is proved that the new Recursive CFR converges to a Nash equilibrium.
Experimental results show that the new algorithm can match the state-of-the-art neural CFR algorithms but with less training overhead.
arXiv Detail & Related papers (2020-12-03T12:26:50Z)
- Recurrent Feature Reasoning for Image Inpainting [110.24760191732905]
The Recurrent Feature Reasoning (RFR) network is mainly built from a plug-and-play Recurrent Feature Reasoning module and a Knowledge Consistent Attention (KCA) module.
The RFR module recurrently infers the hole boundaries of the convolutional feature maps and then uses them as clues for further inference.
To capture information from distant places in the feature map for RFR, we further develop KCA and incorporate it in RFR.
arXiv Detail & Related papers (2020-08-09T14:40:04Z)
- Faster Game Solving via Predictive Blackwell Approachability: Connecting Regret Matching and Mirror Descent [119.5481797273995]
Follow-the-regularized-leader (FTRL) and online mirror descent (OMD) are the most prevalent regret minimizers in online convex optimization.
We show that RM and RM+ are the algorithms that result from running FTRL and OMD, respectively, to select the halfspace to force at all times in the underlying Blackwell approachability game.
In experiments across 18 common zero-sum extensive-form benchmark games, we show that predictive RM+ coupled with counterfactual regret minimization converges vastly faster than the fastest prior algorithms.
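To make the RM+ update referenced here concrete, below is a minimal NumPy sketch of one Regret Matching+ step, with an optional prediction term in the spirit of the predictive variant; function and variable names are my own, not taken from the paper.

```python
import numpy as np

def rm_plus_step(q, instant_regret, prediction=None):
    """One Regret Matching+ update.

    q              -- clipped cumulative regrets carried between iterations
    instant_regret -- instantaneous regret vector observed this iteration
    prediction     -- optional guess of the next instantaneous regret
                      (the 'predictive' idea; often the last observed regret)
    Returns the updated cumulative regrets and the next strategy.
    """
    q = np.maximum(q + instant_regret, 0.0)   # RM+: clip cumulative regrets at zero
    scores = q if prediction is None else np.maximum(q + prediction, 0.0)
    total = scores.sum()
    n = len(q)
    strategy = scores / total if total > 0 else np.full(n, 1.0 / n)
    return q, strategy

# Toy usage at a 3-action decision point with made-up instantaneous regrets.
q = np.zeros(3)
q, sigma = rm_plus_step(q, np.array([0.5, -0.2, 0.1]))
print(sigma)  # plays actions in proportion to their positive clipped regrets
```

In full CFR this update runs independently at every information set, with the counterfactual values from each tree traversal supplying the instantaneous regret vectors.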
arXiv Detail & Related papers (2020-07-28T16:49:55Z)