Semi-Strongly solved: a New Definition Leading Computer to Perfect Gameplay
- URL: http://arxiv.org/abs/2411.01029v1
- Date: Fri, 01 Nov 2024 21:00:46 GMT
- Title: Semi-Strongly solved: a New Definition Leading Computer to Perfect Gameplay
- Authors: Hiroki Takizawa
- Abstract summary: Several definitions exist for `solving the game,' but they are markedly different regarding computational cost and the detail of insights derived.
We introduce a novel definition called `semi-strongly solved' and propose an algorithm to achieve this type of solution efficiently.
- Score: 0.0
- License:
- Abstract: Solving combinatorial games has been a classic research topic in artificial intelligence because solutions can offer essential information to improve gameplay. Several definitions exist for `solving the game,' but they are markedly different regarding computational cost and the detail of insights derived. In this study, we introduce a novel definition called `semi-strongly solved' and propose an algorithm to achieve this type of solution efficiently. This new definition addresses existing gaps because of its intermediate computational cost and the quality of the solution. To demonstrate the potential of our approach, we derive the theoretical computational complexity of our algorithm under a simple condition, and apply it to semi-strongly solve the game of 6x6 Othello. This study raises many new research goals in this research area.
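To make the computational notion of "solving" concrete, the sketch below shows what exhaustive evaluation of every position looks like for a toy game: a memoized negamax over a simple subtraction game. This is only an illustrative Python sketch, not the semi-strong solving algorithm proposed in the paper; the toy game, the function names, and the Win/Loss encoding are assumptions made for the example.

```python
# Illustrative sketch only: a generic memoized negamax that labels every
# reachable position of a toy subtraction game as a win (+1) or loss (-1)
# for the side to move. This is NOT the paper's semi-strong solving
# algorithm; it merely shows the kind of exhaustive evaluation that
# strongly solving a game implies.
from functools import lru_cache

MOVES = (1, 2, 3)  # in this toy game, a player removes 1, 2, or 3 tokens


@lru_cache(maxsize=None)
def value(tokens: int) -> int:
    """Game-theoretic value for the player to move: +1 = win, -1 = loss."""
    if tokens == 0:
        return -1  # no move available: the player to move has lost
    # A position is a win if at least one move leads to a position that is
    # a loss for the opponent (negamax formulation).
    return max(-value(tokens - m) for m in MOVES if m <= tokens)


if __name__ == "__main__":
    # Tabulate the value of every position up to 20 tokens.
    table = {n: value(n) for n in range(21)}
    print(table)  # positions with tokens % 4 == 0 are losses for the mover
```

Filling such a table for every reachable position is what a strong solution requires; the semi-strong notion introduced in the paper is motivated by sitting between this exhaustive treatment and cheaper forms of solving, in both computational cost and the detail of the resulting solution.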
Related papers
- Game Solving with Online Fine-Tuning [17.614045403579244]
This paper investigates applying online fine-tuning while searching and proposes two methods to learn tailor-designed computations for game solving.
Our experiments show that online fine-tuning can solve a series of challenging 7x7 Killall-Go problems using only 23.54% of the computation time required by the baseline.
arXiv Detail & Related papers (2023-11-13T09:09:52Z)
- Responsible AI (RAI) Games and Ensembles [30.110052769733247]
We provide a general framework for studying a family of responsible-AI problems, which we refer to as Responsible AI (RAI) games.
We provide two classes of algorithms for solving these games: (a) game-play based algorithms, and (b) greedy stagewise estimation algorithms.
We empirically demonstrate the applicability and competitive performance of our techniques for solving several RAI problems, particularly around subpopulation shift.
arXiv Detail & Related papers (2023-10-28T22:17:30Z)
- Space Net Optimization [1.0323063834827415]
Most metaheuristic algorithms rely on a few searched solutions to guide later searches during the convergence process.
We present a novel metaheuristic algorithm called space net optimization (SNO).
It is equipped with a new mechanism called the space net, which makes it possible for a metaheuristic algorithm to use most of the information provided by all searched solutions to depict the landscape of the solution space.
arXiv Detail & Related papers (2023-05-31T15:44:18Z)
- The Update-Equivalence Framework for Decision-Time Planning [78.44953498421854]
We introduce an alternative framework for decision-time planning that is not based on solving subgames, but rather on update equivalence.
We derive a provably sound search algorithm for fully cooperative games based on mirror descent and a search algorithm for adversarial games based on magnetic mirror descent.
arXiv Detail & Related papers (2023-04-25T20:28:55Z)
- Hardness of Independent Learning and Sparse Equilibrium Computation in Markov Games [70.19141208203227]
We consider the problem of decentralized multi-agent reinforcement learning in Markov games.
We show that no algorithm attains no-regret in general-sum games when executed independently by all players.
We show that our lower bounds hold even for the seemingly easier setting in which all agents are controlled by a centralized algorithm.
arXiv Detail & Related papers (2023-03-22T03:28:12Z)
- An AlphaZero-Inspired Approach to Solving Search Problems [63.24965775030674]
We adapt the methods and techniques used in AlphaZero for solving search problems.
We describe possible representations in terms of easy-instance solvers and self-reductions.
We also describe a version of Monte Carlo tree search adapted for search problems.
arXiv Detail & Related papers (2022-07-02T23:39:45Z)
- The Machine Learning for Combinatorial Optimization Competition (ML4CO): Results and Insights [59.93939636422896]
The ML4CO competition aims at improving state-of-the-art combinatorial optimization solvers by replacing key heuristic components with machine learning models.
The competition featured three challenging tasks: finding the best feasible solution, producing the tightest optimality certificate, and giving an appropriate routing configuration.
arXiv Detail & Related papers (2022-03-04T17:06:00Z)
- Near-Optimal Learning of Extensive-Form Games with Imperfect Information [54.55092907312749]
We present the first line of algorithms that require only $\widetilde{\mathcal{O}}((XA+YB)/\varepsilon^2)$ episodes of play to find an $\varepsilon$-approximate Nash equilibrium in two-player zero-sum games.
This improves upon the best known sample complexity of $\widetilde{\mathcal{O}}((X^2A+Y^2B)/\varepsilon^2)$ by a factor of $\widetilde{\mathcal{O}}(\max\{X, Y\})$.
arXiv Detail & Related papers (2022-02-03T18:18:28Z)
- End-to-End Constrained Optimization Learning: A Survey [69.22203885491534]
It focuses on surveying the work on integrating solvers and optimization methods with machine learning architectures.
These approaches hold the promise of developing new hybrid machine learning and optimization methods that predict fast, approximate, structural solutions to problems and enable logical inference.
arXiv Detail & Related papers (2021-03-30T14:19:30Z)
- SeaPearl: A Constraint Programming Solver guided by Reinforcement Learning [0.0]
This paper presents the proof of concept for SeaPearl, a new constraint programming solver implemented in Julia.
SeaPearl supports machine learning routines in order to learn branching decisions using reinforcement learning.
Although not yet competitive with industrial solvers, SeaPearl aims to provide a flexible and open-source framework.
arXiv Detail & Related papers (2021-02-18T07:34:38Z)
- Algorithms in Multi-Agent Systems: A Holistic Perspective from Reinforcement Learning and Game Theory [2.5147566619221515]
Deep reinforcement learning has achieved outstanding results in recent years.
Recent works are exploring learning beyond single-agent scenarios and considering multi-agent scenarios.
Traditional game-theoretic algorithms, in turn, show bright application prospects when combined with modern learning algorithms and growing computing power.
arXiv Detail & Related papers (2020-01-17T15:08:04Z)