Deriving time-averaged active inference from control principles
- URL: http://arxiv.org/abs/2208.10601v1
- Date: Mon, 22 Aug 2022 21:20:04 GMT
- Title: Deriving time-averaged active inference from control principles
- Authors: Eli Sennesh, Jordan Theriault, Jan-Willem van de Meent, Lisa Feldman
Barrett, Karen Quigley
- Abstract summary: Active inference offers a principled account of behavior as minimizing average sensory surprise over time.
We derive an infinite-horizon, average-surprise formulation of active inference from optimal control principles.
- Score: 6.625391013374865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active inference offers a principled account of behavior as minimizing
average sensory surprise over time. Applications of active inference to control
problems have heretofore tended to focus on finite-horizon or
discounted-surprise problems, despite deriving from the infinite-horizon,
average-surprise imperative of the free-energy principle. Here we derive an
infinite-horizon, average-surprise formulation of active inference from optimal
control principles. Our formulation returns to the roots of active inference in
neuroanatomy and neurophysiology, formally reconnecting active inference to
optimal feedback control. Our formulation provides a unified objective
functional for sensorimotor control and allows for reference states to vary
over time.
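As a rough illustration (the notation and conditioning below are assumptions for exposition, not taken from the paper), an infinite-horizon average-surprise objective replaces a finite-horizon or discounted sum with a time average, in the spirit of the average-cost criterion in optimal control:

```latex
J \;=\; \lim_{T \to \infty} \frac{1}{T}\,
    \mathbb{E}\!\left[\sum_{t=1}^{T} -\ln p(o_t \mid m)\right]
```

where $-\ln p(o_t \mid m)$ is the surprise of observation $o_t$ under the agent's generative model $m$. A discounted formulation would instead weight each term by $\gamma^t$ for some $\gamma < 1$, which is the setting the abstract says prior applications have tended to use.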
Related papers
- Active Inference in Discrete State Spaces from First Principles [0.0]
We seek to clarify the concept of active inference by disentangling it from the Free Energy Principle.
We show how the optimizations that need to be carried out in order to implement active inference in discrete state spaces can be formulated as constrained divergence minimization problems.
arXiv Detail & Related papers (2025-11-25T13:54:10Z)
- Lyapunov Neural ODE Feedback Control Policies [6.165163123577486]
This paper presents a Lyapunov-NODE control (L-NODEC) approach to solving continuous-time optimal control problems.
We establish that L-NODEC ensures exponential stability of the controlled system, as well as its adversarial robustness to uncertain initial conditions.
arXiv Detail & Related papers (2024-08-31T08:59:18Z)
- Active Inference Meeting Energy-Efficient Control of Parallel and Identical Machines [1.693200946453174]
We investigate the application of active inference in developing energy-efficient control agents for manufacturing systems.
Our study explores deep active inference, an emerging field that combines deep learning with the active inference decision-making framework.
arXiv Detail & Related papers (2024-06-13T17:00:30Z)
- Growing Q-Networks: Solving Continuous Control Tasks with Adaptive Control Resolution [51.83951489847344]
In robotics applications, smooth control signals are commonly preferred to reduce system wear and improve energy efficiency.
In this work, we aim to bridge this performance gap by growing discrete action spaces from coarse to fine control resolution.
Our work indicates that adaptive control resolution combined with value decomposition yields simple critic-only algorithms with surprisingly strong performance on continuous control tasks.
arXiv Detail & Related papers (2024-04-05T17:58:37Z)
- ACE : Off-Policy Actor-Critic with Causality-Aware Entropy Regularization [52.5587113539404]
We introduce a causality-aware entropy term that effectively identifies and prioritizes actions with high potential impacts for efficient exploration.
Our proposed algorithm, ACE: Off-policy Actor-critic with Causality-aware Entropy regularization, demonstrates a substantial performance advantage across 29 diverse continuous control tasks.
arXiv Detail & Related papers (2024-02-22T13:22:06Z)
- Learning Control Policies of Hodgkin-Huxley Neuronal Dynamics [1.629803445577911]
We approximate the value function offline using a neural network to enable generating controls (stimuli) in real time via the feedback form.
Our numerical experiments illustrate the accuracy of our approach for out-of-distribution samples and the robustness to moderate shocks and disturbances in the system.
arXiv Detail & Related papers (2023-11-13T18:53:50Z)
- A Neural Active Inference Model of Perceptual-Motor Learning [62.39667564455059]
The active inference framework (AIF) is a promising new computational framework grounded in contemporary neuroscience.
In this study, we test the ability of the AIF to capture the role of anticipation in the visual guidance of action in humans.
We present a novel formulation of the prior function that maps a multi-dimensional world-state to a uni-dimensional distribution of free-energy.
arXiv Detail & Related papers (2022-11-16T20:00:38Z)
- Minimizing Control for Credit Assignment with Strong Feedback [65.59995261310529]
Current methods for gradient-based credit assignment in deep neural networks need infinitesimally small feedback signals.
We combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view on neural network optimization.
We show that the use of strong feedback in DFC allows learning forward and feedback connections simultaneously, using a learning rule fully local in space and time.
arXiv Detail & Related papers (2022-04-14T22:06:21Z)
- Inference of Affordances and Active Motor Control in Simulated Agents [0.5161531917413706]
We introduce an output-probabilistic, temporally predictive, modular artificial neural network architecture.
We show that our architecture develops latent states that can be interpreted as affordance maps.
In combination with active inference, we show that flexible, goal-directed behavior can be invoked.
arXiv Detail & Related papers (2022-02-23T14:13:04Z)
- Neural optimal feedback control with local learning rules [67.5926699124528]
A major problem in motor control is understanding how the brain plans and executes proper movements in the face of delayed and noisy stimuli.
We introduce a novel online algorithm which combines adaptive Kalman filtering with a model free control approach.
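The Kalman filtering mentioned in this summary can be illustrated with a generic scalar filter (a textbook sketch, not the paper's algorithm; the random-walk state model and the noise variances `q` and `r` are assumptions chosen for illustration):

```python
# Minimal scalar Kalman filter: one predict + update cycle per measurement.
def kalman_step(x_est, p_est, z, q=0.01, r=0.1):
    """x_est, p_est: prior state estimate and its variance.
    z: noisy measurement; q, r: process and measurement noise variances."""
    # Predict under a random-walk model x_t = x_{t-1} + w, w ~ N(0, q)
    p_pred = p_est + q
    # Update: blend prediction and measurement via the Kalman gain in [0, 1]
    k = p_pred / (p_pred + r)
    x_new = x_est + k * (z - x_est)   # innovation-weighted correction
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Filter noisy observations of a constant signal whose true value is 1.0
x, p = 0.0, 1.0
for z in [1.1, 0.9, 1.05, 0.98, 1.02]:
    x, p = kalman_step(x, p, z)
```

After a few measurements the estimate concentrates near the true value and its variance shrinks; an adaptive variant would additionally re-estimate `q` and `r` online.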
arXiv Detail & Related papers (2021-11-12T20:02:00Z)
- Active inference, Bayesian optimal design, and expected utility [1.433758865948252]
We describe how active inference combines Bayesian decision theory and optimal Bayesian design principles to minimize expected free energy.
It is this aspect of active inference that allows for the natural emergence of information-seeking behavior.
Our T-maze simulations show that optimizing expected free energy produces goal-directed information-seeking behavior, while optimizing expected utility induces purely exploitative behavior.
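Expected free energy is commonly decomposed into a risk term (divergence from preferred outcomes, driving goal-directed behavior) and an ambiguity term (expected observation entropy, driving information seeking). A minimal numerical sketch under an assumed two-state, two-observation generative model (all numbers here are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical likelihood P(o|s): rows are observations, columns are states.
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
q_s = np.array([0.5, 0.5])   # Q(s|pi): predicted states under a policy
c = np.array([0.99, 0.01])   # P(o): prior preferences over observations

q_o = A @ q_s                # predicted observations Q(o|pi)

# Risk: KL divergence from preferred outcomes (goal-seeking term)
risk = np.sum(q_o * np.log(q_o / c))

# Ambiguity: expected entropy of the likelihood (information-seeking term)
ambiguity = -np.sum(q_s * np.sum(A * np.log(A), axis=0))

expected_free_energy = risk + ambiguity
```

A policy whose predicted observations match the preferences `c` lowers risk, while a policy that visits states with a sharp likelihood mapping lowers ambiguity; minimizing their sum trades off the two, which is the mechanism this abstract credits for information-seeking behavior.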
arXiv Detail & Related papers (2021-09-21T20:56:32Z)
- Adaptive Rational Activations to Boost Deep Reinforcement Learning [68.10769262901003]
We motivate why rationals are suitable for adaptable activation functions and why their inclusion into neural networks is crucial.
We demonstrate that equipping popular algorithms with (recurrent-)rational activations leads to consistent improvements on Atari games.
arXiv Detail & Related papers (2021-02-18T14:53:12Z)
- Regularity and stability of feedback relaxed controls [4.48579723067867]
This paper proposes a relaxed control regularization with general exploration rewards to design robust feedback controls.
We show that both the value function and the feedback control of the regularized control problem are Lipschitz stable with respect to parameter perturbations.
arXiv Detail & Related papers (2020-01-09T18:24:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.