Towards Controllable Agent in MOBA Games with Generative Modeling
- URL: http://arxiv.org/abs/2112.08093v1
- Date: Wed, 15 Dec 2021 13:09:22 GMT
- Title: Towards Controllable Agent in MOBA Games with Generative Modeling
- Authors: Shubao Zhang
- Abstract summary: We propose novel methods to develop action controllable agent that behaves like a human.
We devise a deep latent alignment neural network model for training agent, and a corresponding sampling algorithm for controlling an agent's action.
Both simulated and online experiments in the game Honor of Kings demonstrate the efficacy of the proposed methods.
- Score: 0.45687771576879593
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose novel methods to develop an action-controllable agent that
behaves like a human and can align its play with human players in Multiplayer
Online Battle Arena (MOBA) games. By modeling the control problem as an action
generation process, we devise a deep latent alignment neural network model for
training the agent, together with a corresponding sampling algorithm for
controlling the agent's actions. In particular, we propose deterministic and
stochastic attention implementations of the core latent alignment model. Both
simulated and online experiments in the game Honor of Kings demonstrate the
efficacy of the proposed methods.
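The abstract gives no architectural details, so the following is only a minimal PyTorch sketch of what a latent alignment model with deterministic or stochastic attention might look like. It assumes a discrete set of high-level "commands" as the controllable latent; all module names, dimensions, and the Gumbel-softmax relaxation are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a latent alignment model for controllable action generation.
# Everything here (command vocabulary, sizes, Gumbel-softmax) is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentAlignmentPolicy(nn.Module):
    def __init__(self, state_dim=128, cmd_dim=16, n_commands=8, n_actions=32,
                 hidden=256, stochastic=False):
        super().__init__()
        self.stochastic = stochastic
        self.state_enc = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        # One embedding per high-level "command" (the controllable latent).
        self.cmd_embed = nn.Embedding(n_commands, cmd_dim)
        self.align = nn.Linear(hidden, n_commands)   # attention logits over commands
        self.decoder = nn.Sequential(
            nn.Linear(hidden + cmd_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state, tau=1.0):
        h = self.state_enc(state)                    # (B, hidden)
        logits = self.align(h)                       # (B, n_commands)
        if self.stochastic:
            # Stochastic attention: sample a relaxed one-hot alignment.
            attn = F.gumbel_softmax(logits, tau=tau, hard=False)
        else:
            # Deterministic attention: soft alignment over commands.
            attn = F.softmax(logits, dim=-1)
        z = attn @ self.cmd_embed.weight             # (B, cmd_dim) aligned latent
        action_logits = self.decoder(torch.cat([h, z], dim=-1))
        return action_logits, attn

    @torch.no_grad()
    def controlled_action(self, state, command_id):
        # One plausible control scheme: clamp the latent to a chosen command
        # instead of using the inferred alignment.
        h = self.state_enc(state)
        z = self.cmd_embed(torch.as_tensor([command_id]).expand(state.shape[0]))
        action_logits = self.decoder(torch.cat([h, z], dim=-1))
        return action_logits.argmax(dim=-1)

# Usage: infer an aligned action, or force the agent toward command 3.
model = LatentAlignmentPolicy(stochastic=True)
state = torch.randn(4, 128)
logits, attn = model(state)
forced = model.controlled_action(state, command_id=3)
```

Under these assumptions, controlling the agent reduces to clamping the latent to a chosen command at inference time, which is one plausible reading of the "corresponding sampling algorithm" mentioned in the abstract.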
Related papers
- Human-Agent Coordination in Games under Incomplete Information via Multi-Step Intent [21.170542003568674]
Strategic coordination between autonomous agents and human partners can be modeled as turn-based cooperative games.
We extend a turn-based game under incomplete information to allow players to take multiple actions per turn rather than a single action.
arXiv Detail & Related papers (2024-10-23T19:37:19Z)
- Games for AI Control: Models of Safety Evaluations of AI Deployment Protocols [52.40622903199512]
This paper introduces AI-Control Games, a formal decision-making model of the red-teaming exercise as a multi-objective, partially observable game.
We apply our formalism to model, evaluate and synthesise protocols for deploying untrusted language models as programming assistants.
arXiv Detail & Related papers (2024-09-12T12:30:07Z)
- Mastering the Game of Guandan with Deep Reinforcement Learning and Behavior Regulating [16.718186690675164]
We propose a framework named GuanZero for AI agents to master the game of Guandan.
The paper's main contribution is a carefully designed neural network encoding scheme that regulates agents' behavior.
arXiv Detail & Related papers (2024-02-21T07:26:06Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easy to integrate into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z)
- Decision-making with Speculative Opponent Models [10.594910251058087]
We introduce Distributional Opponent-aided Multi-agent Actor-Critic (DOMAC).
DOMAC is the first speculative opponent modelling algorithm that relies solely on local information (i.e., the controlled agent's observations, actions, and rewards).
arXiv Detail & Related papers (2022-11-22T01:29:47Z)
- Training and Evaluation of Deep Policies using Reinforcement Learning and Generative Models [67.78935378952146]
GenRL is a framework for solving sequential decision-making problems.
It exploits the combination of reinforcement learning and latent variable generative models.
We experimentally determine the characteristics of generative models that have the most influence on the performance of the final policy training.
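The summary only names the combination of RL and generative models; one common way to combine the two is to let the policy act in the latent space of a pretrained generative model whose decoder produces the low-level actions. The sketch below follows that reading; all class names, shapes, and the tanh-squashed decoder are assumptions rather than GenRL's actual design.

```python
# Rough sketch: an RL policy that outputs a latent code, decoded into actions by
# the (frozen) decoder of a pretrained generative model. Names and shapes are
# illustrative assumptions only.
import torch
import torch.nn as nn
from torch.distributions import Normal

class LatentDecoder(nn.Module):
    """Stand-in for the decoder of a pretrained generative model (e.g., a VAE)."""
    def __init__(self, latent_dim=8, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, action_dim))

    def forward(self, z):
        return torch.tanh(self.net(z))   # low-level actions in [-1, 1]

class LatentPolicy(nn.Module):
    """RL policy that outputs a latent code instead of a raw action."""
    def __init__(self, obs_dim=16, latent_dim=8):
        super().__init__()
        self.mean = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                  nn.Linear(64, latent_dim))
        self.log_std = nn.Parameter(torch.zeros(latent_dim))

    def forward(self, obs):
        return Normal(self.mean(obs), self.log_std.exp())

decoder = LatentDecoder()   # frozen after generative pretraining
policy = LatentPolicy()     # trained with any standard RL algorithm
obs = torch.randn(2, 16)
z = policy(obs).rsample()   # latent "action" sampled by the policy
action = decoder(z)         # decoded into the environment's action space
```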
arXiv Detail & Related papers (2022-04-18T22:02:32Z)
- Go-Blend behavior and affect [2.323282558557423]
This paper proposes a paradigm shift for affective computing by viewing the affect modeling task as a reinforcement learning process.
In this initial study, we test our framework in an arcade game by training Go-Explore agents to both play optimally and attempt to mimic human demonstrations of arousal.
arXiv Detail & Related papers (2021-09-24T17:04:30Z)
- Risk-Sensitive Sequential Action Control with Multi-Modal Human Trajectory Forecasting for Safe Crowd-Robot Interaction [55.569050872780224]
We present an online framework for safe crowd-robot interaction based on risk-sensitive optimal control, wherein the risk is modeled by the entropic risk measure.
Our modular approach decouples the crowd-robot interaction into learning-based prediction and model-based control.
A simulation study and a real-world experiment show that the proposed framework can accomplish safe and efficient navigation while avoiding collisions with more than 50 humans in the scene.
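For context, the entropic risk measure referenced above has the following standard form for a random cost $Z$ and risk-sensitivity parameter $\theta > 0$ (this is the textbook definition, not a detail taken from the paper):

$$ \rho_\theta(Z) \;=\; \frac{1}{\theta} \log \mathbb{E}\!\left[ e^{\theta Z} \right], $$

which reduces to the expected cost as $\theta \to 0$ and penalizes high-variance outcomes more strongly as $\theta$ increases.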
arXiv Detail & Related papers (2020-09-12T02:02:52Z)
- Moody Learners -- Explaining Competitive Behaviour of Reinforcement Learning Agents [65.2200847818153]
In a competitive scenario, the agent not only faces a dynamic environment but is also directly affected by the opponents' actions.
Observing the agent's Q-values is a common way of explaining its behavior; however, Q-values alone do not show the temporal relation between the selected actions.
arXiv Detail & Related papers (2020-07-30T11:30:42Z)
- Variational Autoencoders for Opponent Modeling in Multi-Agent Systems [9.405879323049659]
Multi-agent systems exhibit complex behaviors that emanate from the interactions of multiple agents in a shared environment.
In this work, we are interested in controlling one agent in a multi-agent system and learning to interact successfully with the other agents, which have fixed policies.
Modeling the behavior of other agents (opponents) is essential in understanding the interactions of the agents in the system.
arXiv Detail & Related papers (2020-01-29T13:38:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.