Habits and goals in synergy: a variational Bayesian framework for
behavior
- URL: http://arxiv.org/abs/2304.05008v1
- Date: Tue, 11 Apr 2023 06:28:14 GMT
- Title: Habits and goals in synergy: a variational Bayesian framework for
behavior
- Authors: Dongqi Han, Kenji Doya, Dongsheng Li, Jun Tani
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: How to behave efficiently and flexibly is a central problem for understanding
biological agents and creating intelligent embodied AI. It is well known
that behavior can be classified into two types: reward-maximizing habitual
behavior, which is fast but inflexible, and goal-directed behavior, which is
flexible but slow. Conventionally, habitual and goal-directed behaviors are
considered to be handled by two distinct systems in the brain. Here, we propose to
bridge the gap between the two behaviors, drawing on the principles of
variational Bayesian theory. We incorporate both behaviors in one framework by
introducing a Bayesian latent variable called "intention". Habitual
behavior is generated from the prior distribution of intention, which is
goal-less, while goal-directed behavior is generated from the posterior
distribution of intention, conditioned on the goal. Building on this
idea, we present a novel Bayesian framework for modeling behaviors. Our
proposed framework enables skill sharing between the two kinds of behaviors,
and by leveraging the idea of predictive coding, it enables an agent to
seamlessly generalize from habitual to goal-directed behavior without requiring
additional training. The proposed framework suggests a fresh perspective for
cognitive science and embodied AI, highlighting the potential for greater
integration between habitual and goal-directed behaviors.
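As an illustration, the prior/posterior split over "intention" can be sketched with a toy one-dimensional Gaussian latent variable. This is a minimal sketch under assumed forms: `decode_action`, the variances, and the conjugate-Gaussian update are illustrative choices, not taken from the paper.

```python
import numpy as np

# Intention z is a Gaussian latent variable.
mu_prior, var_prior = 0.0, 1.0   # goal-less prior over intention
var_obs = 0.25                   # assumed noise linking intention to a goal

def decode_action(z):
    # Hypothetical policy head: maps an intention to a bounded action.
    return np.tanh(z)

# Habitual behavior: sample the intention from the prior (no goal involved).
rng = np.random.default_rng(0)
z_habit = rng.normal(mu_prior, np.sqrt(var_prior))
a_habit = decode_action(z_habit)

# Goal-directed behavior: condition the intention on a goal g via the
# conjugate Gaussian posterior p(z|g) ∝ N(g; z, var_obs) N(z; mu_prior, var_prior).
def posterior_intention(g):
    precision = 1.0 / var_prior + 1.0 / var_obs
    var_post = 1.0 / precision
    mu_post = var_post * (mu_prior / var_prior + g / var_obs)
    return mu_post, var_post

mu_post, var_post = posterior_intention(g=2.0)
a_goal = decode_action(mu_post)
```

Habitual action here needs no goal, while the goal-conditioned posterior shifts the intention toward the goal and shrinks its variance, mirroring the fast-versus-flexible distinction drawn in the abstract.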
Related papers
- Inverse Decision Modeling: Learning Interpretable Representations of
Behavior [72.80902932543474]
We develop an expressive, unifying perspective on inverse decision modeling.
We use this to formalize the inverse problem (as a descriptive model).
We illustrate how this structure enables learning (interpretable) representations of (bounded) rationality.
arXiv Detail & Related papers (2023-10-28T05:05:01Z)
- Interpretable Imitation Learning with Dynamic Causal Relations [65.18456572421702]
We propose to expose captured knowledge in the form of a directed acyclic causal graph.
We also design this causal discovery process to be state-dependent, enabling it to model the dynamics in latent causal graphs.
The proposed framework is composed of three parts: a dynamic causal discovery module, a causality encoding module, and a prediction module, and is trained in an end-to-end manner.
arXiv Detail & Related papers (2023-09-30T20:59:42Z)
- Inference of Affordances and Active Motor Control in Simulated Agents [0.5161531917413706]
We introduce an output-probabilistic, temporally predictive, modular artificial neural network architecture.
We show that our architecture develops latent states that can be interpreted as affordance maps.
In combination with active inference, we show that flexible, goal-directed behavior can be invoked.
arXiv Detail & Related papers (2022-02-23T14:13:04Z)
- Contrastive Active Inference [12.361539023886161]
We propose a contrastive objective for active inference that reduces the computational burden in learning the agent's generative model and planning future actions.
Our method performs notably better than likelihood-based active inference in image-based tasks, while also being computationally cheaper and easier to train.
arXiv Detail & Related papers (2021-10-19T16:20:49Z)
- Modelling Behaviour Change using Cognitive Agent Simulations [0.0]
This paper presents work-in-progress research to apply selected behaviour change theories to simulated agents.
The research focuses on the complex agent architectures required for self-determined goal achievement in adverse circumstances.
arXiv Detail & Related papers (2021-10-16T19:19:08Z)
- Goal-Directed Planning by Reinforcement Learning and Active Inference [16.694117274961016]
We propose a novel computational framework of decision making with Bayesian inference.
Goal-directed behavior is determined from the posterior distribution of $z$ by planning.
We demonstrate the effectiveness of the proposed framework by experiments in a sensorimotor navigation task with camera observations and continuous motor actions.
arXiv Detail & Related papers (2021-06-18T06:41:01Z)
- Behavior Priors for Efficient Reinforcement Learning [97.81587970962232]
We consider how information and architectural constraints can be combined with ideas from the probabilistic modeling literature to learn behavior priors.
We discuss how such latent variable formulations connect to related work on hierarchical reinforcement learning (HRL) and mutual information and curiosity based objectives.
We demonstrate the effectiveness of our framework by applying it to a range of simulated continuous control domains.
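A common formulation of such behavior priors in the KL-regularized reinforcement learning literature (a standard form from that literature, not quoted from this paper's abstract) augments the expected return with a KL penalty that keeps the task policy $\pi$ close to a learned prior policy $\pi_0$:

```latex
\max_{\pi}\; \mathbb{E}_{\pi}\!\left[\sum_{t} \gamma^{t}\Big( r(s_t, a_t)
  - \alpha\, \mathrm{KL}\big(\pi(a_t \mid s_t)\,\big\|\,\pi_0(a_t \mid s_t)\big) \Big)\right]
```

Here $\alpha$ trades off reward maximization against staying near the prior; the latent-variable variants mentioned in the summary place the prior over a latent rather than directly over actions.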
arXiv Detail & Related papers (2020-10-27T13:17:18Z)
- End-to-End Learning and Intervention in Games [60.41921763076017]
We provide a unified framework for learning and intervention in games.
We propose two approaches, respectively based on explicit and implicit differentiation.
The analytical results are validated using several real-world problems.
arXiv Detail & Related papers (2020-10-26T18:39:32Z)
- Tracking Emotions: Intrinsic Motivation Grounded on Multi-Level
Prediction Error Dynamics [68.8204255655161]
We discuss how emotions arise when differences between expected and actual rates of progress towards a goal are experienced.
We present an intrinsic motivation architecture that generates behaviors towards self-generated and dynamic goals.
arXiv Detail & Related papers (2020-07-29T06:53:13Z)
- Intrinsic Motivation for Encouraging Synergistic Behavior [55.10275467562764]
We study the role of intrinsic motivation as an exploration bias for reinforcement learning in sparse-reward synergistic tasks.
Our key idea is that a good guiding principle for intrinsic motivation in synergistic tasks is to take actions which affect the world in ways that would not be achieved if the agents were acting on their own.
arXiv Detail & Related papers (2020-02-12T19:34:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.