Deep active inference agents using Monte-Carlo methods
- URL: http://arxiv.org/abs/2006.04176v2
- Date: Thu, 22 Oct 2020 13:17:36 GMT
- Title: Deep active inference agents using Monte-Carlo methods
- Authors: Zafeirios Fountas, Noor Sajid, Pedro A.M. Mediano, Karl Friston
- Abstract summary: We present a neural architecture for building deep active inference agents in continuous state-spaces using Monte-Carlo sampling.
Our approach enables agents to learn environmental dynamics efficiently, while maintaining task performance.
Results show that deep active inference provides a flexible framework to develop biologically-inspired intelligent agents.
- Score: 3.8233569758620054
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active inference is a Bayesian framework for understanding biological
intelligence. The underlying theory brings together perception and action under
one single imperative: minimizing free energy. However, despite its theoretical
utility in explaining intelligence, computational implementations have been
restricted to low-dimensional and idealized situations. In this paper, we
present a neural architecture for building deep active inference agents
operating in complex, continuous state-spaces using multiple forms of
Monte-Carlo (MC) sampling. For this, we introduce a number of techniques, novel
to active inference. These include: i) selecting free-energy-optimal policies
via MC tree search, ii) approximating this optimal policy distribution via a
feed-forward 'habitual' network, iii) predicting future parameter belief
updates using MC dropouts and, finally, iv) optimizing state transition
precision (a high-end form of attention). Our approach enables agents to learn
environmental dynamics efficiently, while maintaining task performance, in
relation to reward-based counterparts. We illustrate this in a new toy
environment, based on the dSprites data-set, and demonstrate that active
inference agents automatically create disentangled representations that are apt
for modeling state transitions. In a more complex Animal-AI environment, our
agents (using the same neural architecture) are able to simulate future state
transitions and actions (i.e., plan), to evince reward-directed navigation -
despite temporary suspension of visual input. These results show that deep
active inference - equipped with MC methods - provides a flexible framework to
develop biologically-inspired intelligent agents, with applications in both
machine learning and cognitive science.
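To make techniques i)-iv) concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' code; the network shapes, the one-step simplification of G, and the variance-based epistemic term are all assumptions). Dropout is kept active at evaluation time so repeated forward passes sample approximate parameter beliefs (MC dropout); a Monte-Carlo estimate of a simplified expected free energy G scores each action; and the softmax over -G stands in for the policy distribution that the feed-forward 'habitual' network would be trained to imitate. The MC tree search over policies is omitted for brevity.

```python
# Hypothetical sketch: Monte-Carlo estimate of a simplified one-step
# expected free energy G, with MC dropout standing in for beliefs
# about model parameters. Shapes and names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, N_ACTIONS, N_MC = 8, 4, 16

class TransitionModel(nn.Module):
    """Predicts the next latent state from (state, action)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + N_ACTIONS, 64), nn.ReLU(),
            nn.Dropout(p=0.1),  # kept active at test time for MC dropout
            nn.Linear(64, STATE_DIM),
        )

    def forward(self, state, action_onehot):
        return self.net(torch.cat([state, action_onehot], dim=-1))

def expected_free_energy(model, state, action, preferred_state):
    """MC estimate of a toy one-step G: a pragmatic term (distance to
    preferred states) plus an epistemic term (negative variance across
    dropout samples, rewarding uncertainty-reducing transitions)."""
    model.train()  # keep dropout stochastic for MC sampling
    a = F.one_hot(torch.tensor(action), N_ACTIONS).float()
    with torch.no_grad():
        samples = torch.stack([model(state, a) for _ in range(N_MC)])
    pragmatic = (samples.mean(0) - preferred_state).pow(2).sum()
    epistemic = -samples.var(0).sum()
    return pragmatic + epistemic

model = TransitionModel()
state, goal = torch.zeros(STATE_DIM), torch.ones(STATE_DIM)
G = torch.stack([expected_free_energy(model, state, a, goal)
                 for a in range(N_ACTIONS)])
policy = F.softmax(-G, dim=0)  # target a 'habitual' network could imitate
print(policy)
```

In the full architecture, the habitual network amortizes the tree-search policy and the precision of the state-transition model is itself optimized; the sketch above only illustrates the Monte-Carlo estimation pattern.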
Related papers
- Demonstrating the Continual Learning Capabilities and Practical Application of Discrete-Time Active Inference [0.0]
Active inference is a mathematical framework for understanding how agents interact with their environments.
In this paper, we present a continual learning framework for agents operating in discrete time environments.
We demonstrate the agent's ability to relearn and refine its models efficiently, making it suitable for complex domains like finance and healthcare.
arXiv Detail & Related papers (2024-09-30T21:18:46Z)
- Efficient Adaptation in Mixed-Motive Environments via Hierarchical Opponent Modeling and Planning [51.52387511006586]
We propose Hierarchical Opponent Modeling and Planning (HOP), a novel multi-agent decision-making algorithm.
HOP is hierarchically composed of two modules: an opponent modeling module that infers others' goals and learns corresponding goal-conditioned policies, and a planning module that computes the agent's best response to the inferred goals.
HOP exhibits superior few-shot adaptation capabilities when interacting with various unseen agents, and excels in self-play scenarios.
arXiv Detail & Related papers (2024-06-12T08:48:06Z)
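A hypothetical, minimal sketch of the two-module idea in the HOP entry above: infer the opponent's goal from observed actions (a toy Bayesian update over a discrete goal set), then act with a policy conditioned on the inferred goal. The goal set, likelihood table, and responses are illustrative assumptions, not the paper's models.

```python
import numpy as np

GOALS = ["cooperate", "defect"]
# Toy likelihood table: P(observed opponent action | opponent goal).
LIKELIHOOD = {"share": np.array([0.8, 0.2]),
              "grab":  np.array([0.2, 0.8])}

def infer_goal(observations):
    """Opponent modeling: Bayesian belief update over the goal set."""
    belief = np.ones(len(GOALS)) / len(GOALS)
    for obs in observations:
        belief *= LIKELIHOOD[obs]
        belief /= belief.sum()
    return belief

def respond(belief):
    """Goal-conditioned response; a real planner (the 'Planning' in HOP)
    would search over futures here instead of a lookup."""
    return "share" if GOALS[int(belief.argmax())] == "cooperate" else "guard"

belief = infer_goal(["share", "share", "grab"])
print(belief, respond(belief))
```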
- Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z)
- Pessimism meets VCG: Learning Dynamic Mechanism Design via Offline Reinforcement Learning [114.36124979578896]
We design a dynamic mechanism using offline reinforcement learning algorithms.
Our algorithm is based on the pessimism principle and only requires a mild assumption on the coverage of the offline data set.
arXiv Detail & Related papers (2022-05-05T05:44:26Z)
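Under toy assumptions (tabular Q, count-based uncertainty), a minimal sketch of the pessimism principle from the entry above: penalize each value estimate by a bonus that shrinks with offline-data coverage, so the greedy policy avoids poorly-covered actions. The lower-confidence-bound form is illustrative, not the paper's construction.

```python
import numpy as np

def pessimistic_q(q_hat, counts, beta=1.0):
    """Lower confidence bound: subtract beta / sqrt(visit count) per
    state-action pair, trusting only well-covered estimates."""
    return q_hat - beta / np.sqrt(np.maximum(counts, 1))

q_hat  = np.array([[1.0, 1.2], [0.5, 3.0]])  # estimated Q, 2 states x 2 actions
counts = np.array([[100,   1], [50,  100]])  # visits in the offline data set
print(q_hat.argmax(axis=1))                  # naive greedy: [1 1]
print(pessimistic_q(q_hat, counts).argmax(axis=1))  # pessimistic: [0 1]
```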
- Stochastic Coherence Over Attention Trajectory For Continuous Learning In Video Streams [64.82800502603138]
This paper proposes a novel neural-network-based approach to progressively and autonomously develop pixel-wise representations in a video stream.
The proposed method is based on a human-like attention mechanism that allows the agent to learn by observing what is moving in the attended locations.
Our experiments leverage 3D virtual environments and show that the proposed agents can learn to distinguish objects just by observing the video stream.
arXiv Detail & Related papers (2022-04-26T09:52:31Z)
- Inference of Affordances and Active Motor Control in Simulated Agents [0.5161531917413706]
We introduce an output-probabilistic, temporally predictive, modular artificial neural network architecture.
We show that our architecture develops latent states that can be interpreted as affordance maps.
In combination with active inference, we show that flexible, goal-directed behavior can be invoked.
arXiv Detail & Related papers (2022-02-23T14:13:04Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
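The entry above motivates learning without backprop. As a loose illustration (this is a feedback-alignment-style scheme, a related backprop-free idea, not the paper's neural generative coding circuit): errors are relayed through a fixed random feedback matrix rather than the transposed forward weights, and each weight update uses only locally available activity and error.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8)) * 0.1  # forward weights, layer 1
W2 = rng.normal(size=(2, 4)) * 0.1  # forward weights, layer 2
B  = rng.normal(size=(4, 2)) * 0.1  # fixed random feedback, never trained
lr = 0.1

def local_step(x, target):
    """One update from purely local signals; no global backward pass."""
    global W1, W2
    z1 = np.tanh(W1 @ x)
    y = W2 @ z1
    e2 = target - y                # output-layer error
    e1 = (B @ e2) * (1 - z1 ** 2)  # error relayed via fixed random feedback
    W2 += lr * np.outer(e2, z1)    # local outer-product (Hebbian-like) updates
    W1 += lr * np.outer(e1, x)
    return float((e2 ** 2).sum())

x, target = rng.normal(size=8), np.array([1.0, -1.0])
for _ in range(300):
    err = local_step(x, target)
print(round(err, 6))  # the error shrinks without backpropagation
```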
- Online reinforcement learning with sparse rewards through an active inference capsule [62.997667081978825]
This paper introduces an active inference agent that minimizes a novel objective, the free energy of the expected future.
Our model is capable of solving sparse-reward problems with a very high sample efficiency.
We also introduce a novel method for approximating the prior model from the reward function, which simplifies the expression of complex objectives.
arXiv Detail & Related papers (2021-06-04T10:03:36Z)
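The entry above mentions approximating the prior model from the reward function. One common construction (a sketch assuming discrete outcomes; the paper's method may differ) treats scaled rewards as log prior preferences, p(o) proportional to exp(eta * r(o)), so reaching rewarding outcomes and minimizing free energy coincide.

```python
import numpy as np

def preference_prior(rewards, eta=1.0):
    """Softmax of scaled rewards: higher-reward outcomes become a priori
    more 'expected' under the generative model."""
    logits = eta * np.asarray(rewards, dtype=float)
    p = np.exp(logits - logits.max())  # subtract max for numerical stability
    return p / p.sum()

print(preference_prior([0.0, 1.0, 5.0], eta=0.5))
```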
- Realising Active Inference in Variational Message Passing: the Outcome-blind Certainty Seeker [3.5450828190071655]
This paper provides a complete mathematical treatment of the active inference framework -- in discrete time and state spaces.
We leverage the theoretical connection between active inference and variational message passing.
We show that using a fully factorized variational distribution simplifies the expected free energy.
arXiv Detail & Related papers (2021-04-23T19:40:55Z)
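In standard active-inference notation (a sketch, not an excerpt from the paper), the simplification in the entry above can be seen as follows: with a fully factorized variational posterior q(s_{1:T} | pi) = prod_tau q(s_tau | pi), the expected free energy decomposes into a sum of per-timestep terms, each evaluable locally:

```latex
% Mean-field factorization over hidden states decouples timesteps:
G(\pi) \;=\; \sum_{\tau} \mathbb{E}_{q(o_\tau, s_\tau \mid \pi)}
    \big[\, \ln q(s_\tau \mid \pi) \;-\; \ln p(o_\tau, s_\tau \mid \pi) \,\big]
```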
- Learning intuitive physics and one-shot imitation using state-action-prediction self-organizing maps [0.0]
Humans learn by exploration and imitation, build causal models of the world, and use both to flexibly solve new tasks.
We suggest a simple but effective unsupervised model which develops such characteristics.
We demonstrate its performance on a set of several related, but different one-shot imitation tasks, which the agent flexibly solves in an active inference style.
arXiv Detail & Related papers (2020-07-03T12:29:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.