Are AlphaZero-like Agents Robust to Adversarial Perturbations?
- URL: http://arxiv.org/abs/2211.03769v1
- Date: Mon, 7 Nov 2022 18:43:25 GMT
- Title: Are AlphaZero-like Agents Robust to Adversarial Perturbations?
- Authors: Li-Cheng Lan, Huan Zhang, Ti-Rong Wu, Meng-Yu Tsai, I-Chen Wu, Cho-Jui
Hsieh
- Abstract summary: AlphaZero (AZ) has demonstrated that neural-network-based Go AIs can surpass human performance by a large margin.
We ask whether adversarial states exist for Go AIs that may lead them to play surprisingly wrong actions.
We develop the first adversarial attack on Go AIs that can efficiently search for adversarial states by strategically reducing the search space.
- Score: 73.13944217915089
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The success of AlphaZero (AZ) has demonstrated that neural-network-based Go
AIs can surpass human performance by a large margin. Given that the state space
of Go is extremely large and a human player can play the game from any legal
state, we ask whether adversarial states exist for Go AIs that may lead them to
play surprisingly wrong actions. In this paper, we first extend the concept of
adversarial examples to the game of Go: we generate perturbed states that are
"semantically" equivalent to the original state by adding meaningless moves
to the game, and an adversarial state is a perturbed state leading to an
undoubtedly inferior action that is obvious even to Go beginners. However,
searching for such adversarial states is challenging due to the large, discrete, and
non-differentiable search space. To tackle this challenge, we develop the first
adversarial attack on Go AIs that can efficiently search for adversarial states
by strategically reducing the search space. This method can also be extended to
other board games such as NoGo. Experimentally, we show that the actions taken
by both Policy-Value neural network (PV-NN) and Monte Carlo tree search (MCTS)
can be misled by adding one or two meaningless stones; for example, in 58% of
the AlphaGo Zero self-play games, our method can make the widely used KataGo
agent with 50 MCTS simulations play a losing action by adding two
meaningless stones. We additionally evaluated the adversarial examples found by
our algorithm with amateur human Go players, and 90% of the examples indeed led
the Go agent to play an obviously inferior action. Our code is available at
https://PaperCode.cc/GoAttack.
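For intuition, below is a minimal brute-force sketch of the attack idea in Python. It is not the authors' method or released code: the victim-agent callables (best_move, win_rate), the apply_moves helper, and the pre-filtered list of "meaningless" candidate moves are all assumptions made for illustration.

```python
# Illustrative, brute-force sketch of the adversarial-state search described
# above (NOT the authors' released code). The victim agent is modeled by two
# hypothetical callables, and the "meaningless" candidate moves are assumed to
# be pre-filtered by some outside criterion.
from itertools import combinations
from typing import Callable, List, Tuple

State = Tuple[Tuple[int, ...], ...]   # board encoded as a tuple of rows
Move = Tuple[int, int]                # (row, col) of an added stone


def find_adversarial_states(
    state: State,
    candidate_moves: List[Move],                    # pre-filtered "meaningless" moves
    apply_moves: Callable[[State, List[Move]], State],
    best_move: Callable[[State], Move],             # victim's action (PV-NN or MCTS)
    win_rate: Callable[[State, Move], float],       # oracle value of playing a move
    max_added_stones: int = 2,
    losing_threshold: float = 0.1,
) -> List[Tuple[List[Move], Move]]:
    """Return perturbations that flip the victim to a clearly losing action."""
    adversarial = []
    original_action = best_move(state)
    for k in range(1, max_added_stones + 1):
        for added in combinations(candidate_moves, k):
            perturbed = apply_moves(state, list(added))
            action = best_move(perturbed)
            if action == original_action:
                continue  # the perturbation did not change the victim's choice
            # Only keep perturbations whose new action is obviously inferior,
            # e.g. its win rate under a strong oracle is close to zero.
            if win_rate(perturbed, action) < losing_threshold:
                adversarial.append((list(added), action))
    return adversarial
```

An exhaustive scan like this is infeasible on a 19x19 board with an MCTS victim, since every candidate perturbation costs a full search; the paper's approach instead prunes the search space strategically so that far fewer victim queries are needed.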
Related papers
- Strategy Game-Playing with Size-Constrained State Abstraction [44.99833362998488]
Playing strategy games is a challenging problem for artificial intelligence (AI)
One of the major challenges is the large search space due to a diverse set of game components.
State abstraction has been applied to search-based game AI and has brought significant performance improvements.
arXiv Detail & Related papers (2024-08-12T14:50:18Z)
- AlphaZero Gomoku [9.434566356382529]
We broaden the use of AlphaZero to Gomoku, an age-old tactical board game also referred to as "Five in a Row".
Our tests demonstrate AlphaZero's versatility in adapting to games other than Go.
arXiv Detail & Related papers (2023-09-04T00:20:06Z)
- Targeted Search Control in AlphaZero for Effective Policy Improvement [93.30151539224144]
We introduce Go-Exploit, a novel search control strategy for AlphaZero.
Go-Exploit samples the start state of its self-play trajectories from an archive of states of interest.
Go-Exploit learns with a greater sample efficiency than standard AlphaZero.
arXiv Detail & Related papers (2023-02-23T22:50:24Z)
- Adversarial Policies Beat Superhuman Go AIs [54.15639517188804]
We attack the state-of-the-art Go-playing AI system KataGo by training adversarial policies against it.
Our adversaries do not win by playing Go well. Instead, they trick KataGo into making serious blunders.
Our results demonstrate that even superhuman AI systems may harbor surprising failure modes.
arXiv Detail & Related papers (2022-11-01T03:13:20Z)
- DanZero: Mastering GuanDan Game with Reinforcement Learning [121.93690719186412]
Card game AI has always been a hot topic in the research of artificial intelligence.
In this paper, we develop an AI program for a more complex card game, GuanDan.
We propose DanZero, the first AI program for GuanDan, built with reinforcement learning techniques.
arXiv Detail & Related papers (2022-10-31T06:29:08Z)
- Mastering the Game of Stratego with Model-Free Multiagent Reinforcement Learning [86.37438204416435]
Stratego is one of the few iconic board games that Artificial Intelligence (AI) has not yet mastered.
Decisions in Stratego are made over a large number of discrete actions with no obvious link between action and outcome.
DeepNash beats existing state-of-the-art AI methods in Stratego and achieved a yearly (2022) and all-time top-3 rank on the Gravon games platform.
arXiv Detail & Related papers (2022-06-30T15:53:19Z)
- Mastering Terra Mystica: Applying Self-Play to Multi-agent Cooperative Board Games [0.0]
In this paper, we explore and compare multiple algorithms for solving the complex strategy game of Terra Mystica.
We apply these breakthroughs to a novel state-representation of TM with the goal of creating an AI that will rival human players.
In the end, we discuss the success and shortcomings of this method by comparing against multiple baselines and typical human scores.
arXiv Detail & Related papers (2021-02-21T07:53:34Z)
- Learning to Play Sequential Games versus Unknown Opponents [93.8672371143881]
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.
Our results include the algorithm's regret guarantees, which depend on the regularity of the opponent's responses.
arXiv Detail & Related papers (2020-07-10T09:33:05Z)