Bayesian Reinforcement Learning with Limited Cognitive Load
- URL: http://arxiv.org/abs/2305.03263v1
- Date: Fri, 5 May 2023 03:29:34 GMT
- Title: Bayesian Reinforcement Learning with Limited Cognitive Load
- Authors: Dilip Arumugam, Mark K. Ho, Noah D. Goodman, Benjamin Van Roy
- Abstract summary: A general theory of adaptive behavior should account for the complex interactions between an agent's learning history, decisions, and capacity constraints.
Recent work in computer science has begun to clarify the principles that shape these dynamics by bridging ideas from reinforcement learning, Bayesian decision-making, and rate-distortion theory.
- Score: 43.19983737333797
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: All biological and artificial agents must learn and make decisions given
limits on their ability to process information. As such, a general theory of
adaptive behavior should be able to account for the complex interactions
between an agent's learning history, decisions, and capacity constraints.
Recent work in computer science has begun to clarify the principles that shape
these dynamics by bridging ideas from reinforcement learning, Bayesian
decision-making, and rate-distortion theory. This body of work provides an
account of capacity-limited Bayesian reinforcement learning, a unifying
normative framework for modeling the effect of processing constraints on
learning and action selection. Here, we provide an accessible review of recent
algorithms and theoretical results in this setting, paying special attention to
how these ideas can be applied to studying questions in the cognitive and
behavioral sciences.
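The capacity-limited setting the abstract describes is often formalized as trading off expected value against an information cost. A minimal sketch, not taken from the paper: the standard solution to maximizing expected value minus a KL penalty toward a default policy, which yields a softmax policy whose sharpness is set by a capacity parameter. All names here (`q_values`, `prior`, `beta`) are illustrative assumptions.

```python
import numpy as np

def capacity_limited_policy(q_values, prior, beta):
    """Soft policy maximizing E[Q] - (1/beta) * KL(pi || prior).

    Small beta -> tight capacity constraint (policy stays near the prior);
    large beta -> policy concentrates on the highest-value action.
    """
    logits = np.log(prior) + beta * q_values
    logits -= logits.max()            # subtract max for numerical stability
    pi = np.exp(logits)
    return pi / pi.sum()

q = np.array([1.0, 2.0, 0.5])         # action values in one state (made up)
p0 = np.ones(3) / 3                   # uniform default policy

tight = capacity_limited_policy(q, p0, beta=0.1)   # near-uniform
loose = capacity_limited_policy(q, p0, beta=10.0)  # near-greedy
```

Sweeping `beta` traces out the trade-off curve between reward and information cost, which is the rate-distortion-style analysis the surveyed framework applies to learning and action selection.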
Related papers
- Demonstrating the Continual Learning Capabilities and Practical Application of Discrete-Time Active Inference [0.0]
Active inference is a mathematical framework for understanding how agents interact with their environments.
In this paper, we present a continual learning framework for agents operating in discrete time environments.
We demonstrate the agent's ability to relearn and refine its models efficiently, making it suitable for complex domains like finance and healthcare.
arXiv Detail & Related papers (2024-09-30T21:18:46Z)
- Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
arXiv Detail & Related papers (2023-06-04T18:14:18Z)
- On Rate-Distortion Theory in Capacity-Limited Cognition & Reinforcement Learning [43.19983737333797]
Decision-making agents in the real world must act under limited information-processing capabilities, without access to unbounded cognitive or computational resources.
We present a brief survey of information-theoretic models of capacity-limited decision making in biological and artificial agents.
arXiv Detail & Related papers (2022-10-30T16:39:40Z)
- Interpreting Neural Policies with Disentangled Tree Representations [58.769048492254555]
We study interpretability of compact neural policies through the lens of disentangled representation.
We leverage decision trees to obtain factors of variation for disentanglement in robot learning.
We introduce interpretability metrics that measure disentanglement of learned neural dynamics.
arXiv Detail & Related papers (2022-10-13T01:10:41Z)
- Active Inference in Robotics and Artificial Agents: Survey and Challenges [51.29077770446286]
We review the state-of-the-art theory and implementations of active inference for state-estimation, control, planning and learning.
We showcase relevant experiments that illustrate its potential in terms of adaptation, generalization and robustness.
arXiv Detail & Related papers (2021-12-03T12:10:26Z)
- Learning-Driven Decision Mechanisms in Physical Layer: Facts, Challenges, and Remedies [23.446736654473753]
This paper introduces the common assumptions in the physical layer to highlight their discrepancies with practical systems.
As a solution, learning algorithms are examined by considering implementation steps and challenges.
arXiv Detail & Related papers (2021-02-14T22:26:44Z)
- Interpretable Reinforcement Learning Inspired by Piaget's Theory of Cognitive Development [1.7778609937758327]
This paper entertains the idea that theories such as the language of thought hypothesis (LOTH), script theory, and Piaget's theory of cognitive development provide complementary approaches.
The proposed framework can be viewed as a step towards achieving human-like cognition in artificial intelligent systems.
arXiv Detail & Related papers (2021-02-01T00:29:01Z)
- Behavior Priors for Efficient Reinforcement Learning [97.81587970962232]
We consider how information and architectural constraints can be combined with ideas from the probabilistic modeling literature to learn behavior priors.
We discuss how such latent variable formulations connect to related work on hierarchical reinforcement learning (HRL) and mutual information and curiosity based objectives.
We demonstrate the effectiveness of our framework by applying it to a range of simulated continuous control domains.
arXiv Detail & Related papers (2020-10-27T13:17:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.