Agent Modelling under Partial Observability for Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2006.09447v4
- Date: Tue, 9 Nov 2021 10:37:18 GMT
- Title: Agent Modelling under Partial Observability for Deep Reinforcement Learning
- Authors: Georgios Papoudakis, Filippos Christianos, Stefano V. Albrecht
- Abstract summary: Existing methods for agent modelling assume knowledge of the local observations and chosen actions of the modelled agents during execution.
We learn to extract representations about the modelled agents conditioned only on the local observations of the controlled agent.
The representations are used to augment the controlled agent's decision policy which is trained via deep reinforcement learning.
- Score: 12.903487594031276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modelling the behaviours of other agents is essential for understanding how
agents interact and making effective decisions. Existing methods for agent
modelling commonly assume knowledge of the local observations and chosen
actions of the modelled agents during execution. To eliminate this assumption,
we extract representations from the local information of the controlled agent
using encoder-decoder architectures. Using the observations and actions of the
modelled agents during training, our models learn to extract representations
about the modelled agents conditioned only on the local observations of the
controlled agent. The representations are used to augment the controlled
agent's decision policy which is trained via deep reinforcement learning; thus,
during execution, the policy does not require access to other agents'
information. We provide a comprehensive evaluation and ablation studies in
cooperative, competitive and mixed multi-agent environments, showing that our
method achieves higher returns than baseline methods which do not use the
learned representations.
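The approach described in the abstract can be sketched as follows. This is an illustrative minimal example, not the paper's implementation: it uses plain numpy linear maps in place of the paper's learned encoder-decoder networks, all dimensions and weight names (`W_enc`, `W_dec`, `W_pi`) are assumptions, and gradient-based training of the reconstruction loss and RL policy is omitted. It shows only the key structural idea: the decoder target (other agents' observations and actions) is used during training, while the execution-time policy conditions solely on the controlled agent's local observation and its learned representation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper).
OBS_DIM, REP_DIM, MODELLED_DIM, N_ACTIONS = 8, 4, 6, 3

# Stand-ins for learned network parameters.
W_enc = rng.normal(size=(OBS_DIM, REP_DIM)) * 0.1
W_dec = rng.normal(size=(REP_DIM, MODELLED_DIM)) * 0.1
W_pi = rng.normal(size=(OBS_DIM + REP_DIM, N_ACTIONS)) * 0.1

def encode(local_obs):
    """Representation of the modelled agents, conditioned ONLY on the
    controlled agent's local observation."""
    return np.tanh(local_obs @ W_enc)

def reconstruction_loss(local_obs, modelled_info):
    """Training-time objective: reconstruct the modelled agents'
    observations and actions, which are available during training
    but not during execution."""
    pred = encode(local_obs) @ W_dec
    return float(np.mean((pred - modelled_info) ** 2))

def policy_logits(local_obs):
    """Execution-time policy input: local observation augmented with
    the learned representation; no access to other agents' information."""
    z = encode(local_obs)
    return np.concatenate([local_obs, z]) @ W_pi

# One forward pass with dummy data.
obs = rng.normal(size=OBS_DIM)
modelled = rng.normal(size=MODELLED_DIM)  # only used at training time
loss = reconstruction_loss(obs, modelled)
logits = policy_logits(obs)
```

Note that `policy_logits` never touches `modelled`, mirroring the paper's claim that execution requires no access to other agents' information.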