Emergence of Adaptive Circadian Rhythms in Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2307.12143v1
- Date: Sat, 22 Jul 2023 18:47:18 GMT
- Title: Emergence of Adaptive Circadian Rhythms in Deep Reinforcement Learning
- Authors: Aqeel Labash, Florian Fletzer, Daniel Majoral, Raul Vicente
- Abstract summary: Adapting to regularities of the environment is critical for biological organisms to anticipate events and plan.
We study the emergence of circadian-like rhythms in deep reinforcement learning agents.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Adapting to regularities of the environment is critical for biological
organisms to anticipate events and plan. A prominent example is the circadian
rhythm corresponding to the internalization by organisms of the 24-hour
period of the Earth's rotation. In this work, we study the emergence of
circadian-like rhythms in deep reinforcement learning agents. In particular, we
deployed agents in an environment with a reliable periodic variation while
solving a foraging task. We systematically characterize the agent's behavior
during learning and demonstrate the emergence of a rhythm that is endogenous
and entrainable. Interestingly, the internal rhythm adapts to shifts in the
phase of the environmental signal without any re-training. Furthermore, we show
via bifurcation and phase response curve analyses how artificial neurons
develop dynamics to support the internalization of the environmental rhythm.
From a dynamical systems view, we demonstrate that the adaptation proceeds by
the emergence of a stable periodic orbit in the neuron dynamics with a phase
response that allows an optimal phase synchronisation between the agent's
dynamics and the environmental rhythm.
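The entrainment mechanism described in the abstract can be illustrated with a minimal, hypothetical sketch (this is not the paper's agent or analysis code): a single phase oscillator with intrinsic frequency `omega`, coupled through a sinusoidal phase response to an environmental rhythm `phi`. All parameter names and values below are illustrative assumptions. The oscillator phase-locks to the environment at a constant lag and, after an abrupt shift in the environmental phase, re-entrains without any parameter change, mirroring the "adaptation without re-training" behaviour reported for the agents.

```python
import math

def simulate_entrainment(omega=1.0, omega_env=1.1, coupling=0.5,
                         steps=20000, dt=0.01,
                         shift_at=10000, shift=math.pi / 2):
    """Euler-integrate a phase oscillator
        dtheta/dt = omega + coupling * sin(phi - theta)
    driven by an environmental rhythm phi with frequency omega_env.
    Halfway through, the environmental phase jumps by `shift`
    (an abrupt "jet-lag" event); no model parameter changes.
    Returns the wrapped phase difference phi - theta over time."""
    theta, phi = 0.0, 0.0
    diffs = []
    for t in range(steps):
        if t == shift_at:
            phi += shift  # abrupt shift of the environmental phase
        theta += dt * (omega + coupling * math.sin(phi - theta))
        phi += dt * omega_env
        # wrap the phase difference into (-pi, pi]
        diffs.append(math.atan2(math.sin(phi - theta),
                                math.cos(phi - theta)))
    return diffs

diffs = simulate_entrainment()
# Phase locking is possible because |omega_env - omega| <= coupling;
# the locked lag d* satisfies sin(d*) = (omega_env - omega) / coupling.
d_lock = math.asin((1.1 - 1.0) / 0.5)
```

Before the shift, the phase difference settles at the lag `d_lock`; the shift knocks it away, and the same dynamics pull it back to the same locked lag, which is the toy analogue of the stable periodic orbit and phase response described in the abstract.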
Related papers
- Neuron: Learning Context-Aware Evolving Representations for Zero-Shot Skeleton Action Recognition [64.56321246196859]
We propose a novel dyNamically Evolving dUal skeleton-semantic syneRgistic framework.
We first construct the spatial-temporal evolving micro-prototypes and integrate dynamic context-aware side information.
We introduce the spatial compression and temporal memory mechanisms to guide the growth of spatial-temporal micro-prototypes.
arXiv Detail & Related papers (2024-11-18T05:16:11Z)
- A Simulation Environment for the Neuroevolution of Ant Colony Dynamics [0.0]
We introduce a simulation environment to facilitate research into emergent collective behaviour.
By leveraging real-world data, the environment simulates a target ant trail that a controllable agent must learn to replicate.
arXiv Detail & Related papers (2024-06-19T01:51:15Z)
- Exploring neural oscillations during speech perception via surrogate gradient spiking neural networks [59.38765771221084]
We present a physiologically inspired speech recognition architecture compatible and scalable with deep learning frameworks.
We show end-to-end gradient descent training leads to the emergence of neural oscillations in the central spiking neural network.
Our findings highlight the crucial inhibitory role of feedback mechanisms, such as spike frequency adaptation and recurrent connections, in regulating and synchronising neural activity to improve recognition performance.
arXiv Detail & Related papers (2024-04-22T09:40:07Z)
- Continuous Time Continuous Space Homeostatic Reinforcement Learning (CTCS-HRRL): Towards Biological Self-Autonomous Agent [0.12068041242343093]
Homeostasis is a process by which living beings maintain their internal balance.
The Homeostatic Regulated Reinforcement Learning (HRRL) framework attempts to explain this learned homeostatic behaviour.
In this work, we advance the HRRL framework to a continuous time-space environment and validate the CTCS-HRRL framework.
arXiv Detail & Related papers (2024-01-17T06:29:34Z)
- Persistent learning signals and working memory without continuous attractors [6.135577623169029]
We show that quasi-periodic attractors can support learning arbitrarily long temporal relationships.
Our theory has broad implications for the design of artificial learning systems.
arXiv Detail & Related papers (2023-08-24T06:12:41Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Limits of Entrainment of Circadian Neuronal Networks [0.0]
Circadian rhythmicity lies at the center of various important physiological and behavioral processes in mammals.
We study a modern computational neuroscience model to determine the limits of circadian synchronization to external light signals of different frequency and duty cycle.
arXiv Detail & Related papers (2022-08-23T17:57:21Z)
- Entanglement and correlations in fast collective neutrino flavor oscillations [68.8204255655161]
Collective neutrino oscillations play a crucial role in transporting lepton flavor in astrophysical settings.
We study the full out-of-equilibrium flavor dynamics in simple multi-angle geometries displaying fast oscillations.
We present evidence that these fast collective modes are generated by the same dynamical phase transition.
arXiv Detail & Related papers (2022-03-05T17:00:06Z)
- Continuous Homeostatic Reinforcement Learning for Self-Regulated Autonomous Agents [0.0]
We propose an extension of the homeostatic reinforcement learning theory to a continuous environment in space and time.
Inspired by the self-regulating mechanisms abundantly present in biology, we also introduce a model for the dynamics of the agent internal state.
arXiv Detail & Related papers (2021-09-14T11:03:58Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
- Ecological Reinforcement Learning [76.9893572776141]
We study the kinds of environment properties that can make learning under such conditions easier.
Understanding how properties of the environment impact the performance of reinforcement learning agents can help us structure our tasks in ways that make learning tractable.
arXiv Detail & Related papers (2020-06-22T17:55:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.